IT professionals continue to debate the benefits of standardization versus those of innovation, and whether standards inhibit the ability of engineers and software developers to develop creative solutions to business opportunities and challenges. At the Open Group Conference in San Diego last week (3–5 February), the topic of standards and innovation popped up not only in presentations, but also in sidebar conversations around the conference venue.
In his presentation “SOA4BT (Service-Oriented Architecture for Business Technology) – From Business Services to Realization,” Nikhil Kumar noted that with rigid standards there is “always a risk of service units creating barriers to business units.” The idea is that service and IT organizations must align their intended use of standards with the needs of the business units. Kumar further described a traditional cycle where:
- Enterprise drivers establish ->
- Business derived technical drivers, which encounter ->
- Legacy and traditional constraints, which result in ->
- “Business Required” technologies and technology (enabled) SOAs
Going through this cycle does not require a heavyweight process; it simply requires ensuring that the use of a standard, or of a standard business architecture framework, drives the business services group (IT) into the business unit circle. While IT is the source of many innovative ideas and deployments of emerging technologies, the business units are the ultimate beneficiaries of innovation, which allows them to address and respond to rapidly emerging opportunities or market requirements.
Standards come in a lot of shapes and sizes. One standard may be a national or international standard, such as ISO 20000 (service delivery), NIST 800-53 (security), or BICSI 002-2011 (data center design and operations). Standards may also be internal to an organization or industry, such as standardizing databases, applications, data formats, and virtual appliances within a cloud computing environment.
In his presentation “The Implications of EA in New Audit Guidelines (COBIT 5),” Robert Weisman noted there are now more than 36,500 TOGAF (The Open Group Architecture Framework) certified practitioners worldwide, with more than 60 certified training organizations providing TOGAF certifications. According to ITSMinfo.com, in 2012 alone there were more than 263,000 ITIL Foundation certifications granted (for service delivery), and ISACA notes there were more than 4,000 COBIT 5 certifications granted (for IT planning, implementation, and governance) in the same period.
With a growing number of organizations either requiring, or providing training in, enterprise architecture, service delivery, or governance disciplines, it is becoming clear that organizations need a more structured method of designing effective service-orientation within their IT systems, both for operational efficiency and for facilitating more effective decision support systems and performance reporting. The standards and frameworks attempt to provide greater structure to both business and IT when designing technology toolsets and solutions for business requirements.
The use of standards thus becomes very effective for providing structure and guidelines for developing IT toolsets and solutions. To address the issue of innovation, several ideas are important to consider, including:
- Developing an organizational culture of shared vision, values, and goals
- Developing a standardized toolkit of virtual appliances, interfaces, platforms, and applications
- Accepting the need for continual review of existing tools, improving tools to match business requirements, and allowing for further development when existing utilities and tools are not sufficient or adequate to the task
Once an aligned vision of business goals is achieved, a standard toolset published, and IT and business units better integrated as teams, additional benefits may become apparent:
- Duplication of effort is reduced with the availability of standardized IT tools
- Incompatible or non-interoperable organizational data is either reduced or eliminated
- More development effort is applied to developing new solutions, rather than developing basic or standardized components
- Investors will have much more confidence in management’s ability to not only make the best use of existing resources and budgets, but also the organization’s ability to exploit new business opportunities
- Focusing on a standard set of utilities and applications, such as database software, will not only improve interoperability, but also enhance the organization’s ability to influence vendor service-level agreements and support agreements, as well as reduce cost with volume purchasing
Rather than view standards as an inhibitor, or barrier to innovation, business units and other organizational stakeholders should view standards as a method of not only facilitating SOAs and interoperability, but also as a way of relieving developers from the burden of constantly recreating common sets and libraries of underlying IT utilities. If developers are free to focus their efforts on pure solutions development and responding to emerging opportunities, and rely on both technical and process standardization to guide their efforts, the result will greatly enhance an organization’s ability to be agile, while still ensuring a higher level of security, interoperability, systems portability, and innovation.
Modern data centers are very complex environments. Data center operators must have visibility into a wide range of integrated databases, applications, and performance indicators to effectively understand and manage their operations and activities.
While each data center is different, all Data Centers share some common systems and common characteristics, including:
- Facility inventories
- Provisioning and customer fulfillment processes
- Maintenance activities (including computerized maintenance management systems (CMMS))
- Customer management (including CRM, order management, etc.)
- Trouble management
- Customer portals
- Security Systems (physical access entry/control and logical systems management)
- Billing and Accounting Systems
- Service usage records (power, bandwidth, remote hands, etc.)
- Decision support system and performance management integration
- Standards for data and applications
- Staffing and activities-based management
- Scheduling /calendar
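A first step toward integrating the systems above is agreeing on standardized record types that every back office component reads and writes. A minimal sketch in Python of what two such shared structures might look like (the field names here are illustrative assumptions, not a published schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class InventoryItem:
    """A single unit of sellable data center inventory (rack, circuit, cross-connect)."""
    item_id: str
    item_type: str                      # e.g. "rack", "cross-connect", "power-circuit"
    location: str                       # facility / floor / row coordinates
    reserved_by: Optional[str] = None   # customer ID once provisioned

@dataclass
class UsageRecord:
    """One metered reading (power, bandwidth, remote hands) tied to a customer."""
    customer_id: str
    item_id: str
    metric: str                         # e.g. "power-kwh", "bandwidth-mbps"
    value: float
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

In a production OSS these definitions would live in a shared database schema rather than in application code, but the principle is the same: inventory, provisioning, billing, and monitoring all work from the same structures instead of their own spreadsheets.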
Unfortunately, in many cases the above systems are run manually, have no standards, and have no automation or integration interconnecting individual back office components. This includes many communications companies and telecommunications carriers which previously either adhered, or claimed to adhere, to Bellcore data and operations standards.
In some cases, the lack of integration is due to many mergers and acquisitions of companies which have unique or non-standard back office systems. The result is difficulty in cross-provisioning, billing, integrated customer management systems, and accounting – the day-to-day operations of a data center.
Modern data centers must have a high level of automation. In particular, if a data center operator owns multiple facilities, it is very difficult without automation to present a common look and feel, or the level of integration that allows the company to offer a standardized product to its markets and customers.
Operational support systems (OSS) traditionally have four main components:
- Support for process automation
- Collection and storage of a wide variety of operational data
- The use of standardized data structures and applications
- Supporting technologies
In most commercial or public colocation facilities and data centers, customer and tenant organizations represent many different industries, products, and services. Some large colocation centers may have several hundred individual customers. Others may host larger customers such as cloud service providers, content delivery networks, and other hosting companies. While single large customers may be few, their internally hosted or virtual customers may number in the hundreds, or even thousands.
To effectively support their customers, data centers must have comprehensive OSS capabilities. Given the large number of processes, data sources, and user requirements, the OSS should be designed and developed using a standard architecture and framework to ensure OSS integration and interoperability.
We have conducted numerous Interoperability Readiness surveys with both government and private sector (commercial) data center operators during the past five years. In more than 80% of surveys, processes such as inventory management had been built within simple spreadsheets. Provisioning of inventory items was normally a manual process conducted via e-mail or, in some cases, paper forms.
Provisioning as a manual process resulted in some cases in double-booked or double-sold inventory items, as well as inefficient ordering when adding customer-facing inventory or building out additional data center space.
The problem often compounded further into missed customer billing cycles, accounting shortfalls, and management or monitoring system errors.
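The double-booking problem is exactly what an atomic, single-source reservation step prevents. A minimal in-memory sketch (a real OSS would use a database transaction for this; the class and method names are hypothetical):

```python
import threading

class InventoryRegistry:
    """Single authoritative record of inventory ownership; reservation is atomic."""

    def __init__(self, item_ids):
        self._lock = threading.Lock()
        self._owners = {item_id: None for item_id in item_ids}

    def reserve(self, item_id, customer_id):
        """Reserve an item for a customer; False if unknown or already taken."""
        with self._lock:
            if item_id not in self._owners or self._owners[item_id] is not None:
                return False            # already sold -- the double booking is refused
            self._owners[item_id] = customer_id
            return True
```

Because every provisioning request goes through one check-and-set against one record, the second attempt to sell the same rack simply fails, which a spreadsheet plus e-mail workflow can never guarantee.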
The new data center, including virtual data centers within cloud service providers, must develop better OSS tools and systems to accommodate the rapidly changing need for elasticity and agility in ICT systems. This includes having a single window for all required items within the OSS.
Preparing an OSS architecture, based on a service-oriented architecture (SOA), should include use of ICT-friendly frameworks and guidance such as TOGAF and/or ITIL to ensure all visions and designs fully acknowledge and embrace the needs of each organization’s business owners and customers, and follow a comprehensive and structured development process to ensure those objectives are delivered.
Use of standard databases, APIs, service busses, security, and establishing a high level of governance to ensure a “standards and interoperability first” policy for all data center IT will allow all systems to communicate, share, reuse, and ultimately provide automated, single source data resources into all data center, management, accounting, and customer activities.
Any manual transfer of data between offices, applications, or systems should be eliminated in favor of integrating inventory, data collections and records, processes, and performance management indicators into a fully integrated and interoperable environment. A basic rule of thumb: if a human being has touched the data, then the data has likely been corrupted, or at least its integrity may be brought into question.
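One way to enforce that rule is to key data in exactly once and propagate it automatically. A toy publish/subscribe sketch, assuming a simple in-process bus (a production OSS would use a real service bus or message broker):

```python
class EventBus:
    """Minimal in-process service bus: one write fans out to every subscriber."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, record):
        for handler in self._subscribers.get(topic, []):
            handler(record)

# Billing and monitoring both consume the same provisioning record -- it is
# keyed in exactly once, so the two views can never drift apart.
bus = EventBus()
billing_ledger, monitoring_view = [], []
bus.subscribe("provisioned", billing_ledger.append)
bus.subscribe("provisioned", monitoring_view.append)
bus.publish("provisioned", {"customer": "CUST-9", "item": "rack-1"})
```

No human re-types the record into a second system, so there is no second version of the truth to reconcile.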
Looking ahead to the next generation of data center services, stepping a bit higher up the customer service maturity continuum requires much higher levels of internal process and customer process automation.
Similar to NIST’s definition of cloud computing, which lists essential characteristics including “self-service provisioning,” “rapid elasticity,” and “measured services,” in addition to resource pooling and broadband access, it can be assumed that data center users of the future will need to order and fulfill services such as network interconnections, power, virtual space (or physical space), and other services through self-service, on-demand ordering.
The OSS must strive to meet the following objectives:
- Reusable components and APIs
- Data sharing
Accomplishing this will require nearly all of the OSS characteristics mentioned above: inventories in databases (not spreadsheets), process automation, and standards in data structures, APIs, and application interoperability.
And as the ultimate key success factor, management decision support systems will finally have the potential to develop true dashboards for performance management, data analytics, and additional real-time tools for making effective organizational decisions.
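As a small illustration of what such a dashboard draws on, here is a sketch of rolling integrated usage records up into a single KPI (the record layout and figures are hypothetical):

```python
def utilization_pct(usage_records, capacity_kw):
    """Roll metered power draws up into a facility-level utilization percentage."""
    drawn_kw = sum(record["kw"] for record in usage_records)
    return round(100.0 * drawn_kw / capacity_kw, 1)

# Records pulled from the integrated usage database, not hand-copied spreadsheets:
records = [
    {"customer": "CUST-1", "kw": 120.0},
    {"customer": "CUST-2", "kw": 60.0},
]
```

With 180 kW drawn against a 400 kW facility, the dashboard shows 45.0% utilization; the number is only trustworthy because the underlying records were never manually re-keyed.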
Cloud Computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business. In theory cloud computing greatly enhances an organization’s ability to not only decommission inefficient data center resources, but even more importantly eases the process an organization needs to develop when moving to integration and service-orientation within supporting IT systems.
Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding the vision of cloud computing at a high level.
However, these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for data interoperability in a 2020 or 2025 IT world.
The key to interoperability, and subsequent portability, is a clear set of standards. The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service. The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.
While NIST, the Open GRID Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease most developers would need to accelerate higher levels of integration and interconnections of data and applications.
NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:
“…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.
Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”
This is easy to say; the reality, particularly with PaaS and SaaS libraries and services, is that few fully interchangeable components exist, and any information sharing involves a compromise in flexibility.
The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:
“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”
The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.
What Can We Do?
The first step is for the cloud consumer world to take a stand and demand that vendors produce services and applications based on interoperability and data portability standards. No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).
In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures. No IT infrastructure, platform, or application should be considered which does not allow and embrace portability. This includes NIST’s guidance stating:
“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”
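In practice, portability of this kind usually means an adapter layer that maps each vendor's workload descriptor onto a neutral interchange document. A sketch, assuming two hypothetical vendors (“alpha” and “beta”) with made-up field names, not any real provider's schema:

```python
import json

def to_neutral(descriptor, vendor):
    """Map a vendor-specific VM descriptor onto one neutral interchange document."""
    if vendor == "alpha":
        return {"cpus": descriptor["vcpu_count"],
                "ram_mb": descriptor["memory_mb"],
                "image": descriptor["disk_image"]}
    if vendor == "beta":
        return {"cpus": descriptor["cores"],
                "ram_mb": descriptor["ram_gb"] * 1024,
                "image": descriptor["template"]}
    raise ValueError(f"no adapter for vendor {vendor!r}")

# The same neutral document can be produced from either vendor's format,
# then imported by any cloud that understands the common schema.
portable = json.dumps(to_neutral(
    {"vcpu_count": 4, "memory_mb": 8192, "disk_image": "ubuntu-22"}, "alpha"))
```

The adapter is small; the hard part is the industry agreeing on the neutral schema, which is exactly what the standards bodies cited above are working toward.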
The bottom line for all CIOs, CTOs, and IT managers – accept the need for service-orientation within all existing or planned IT services and systems. Embrace service-oriented architectures and enterprise architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.
Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors. Do not accept anything which does not fully support the need for data interoperability.
2010 was a great year for cloud computing. The hype phase of cloud computing is closing in on maturity, as the message has finally reached nearly everyone in the Cxx tier. And for good reason. The diffusion of IT-everything into nearly every aspect of our lives needs a lot of compute, storage, and network horsepower.
And… we are finally getting to the point where cloud computing is no longer explained with exotic diagrams on a whiteboard or PowerPoint presentation, but is actually something we can start knitting together into a useful tool.
The National Institute of Standards and Technology (NIST) in the United States takes cloud computing seriously, and is well on the way to setting standards for cloud computing, at least in the US. The NIST definitions of cloud computing are already an international reference, and as that taxonomy continues to baseline vendor cloud solutions, it is a good sign we are on the way to product maturity.
Now is the Time to Build Confidence
Unless you are an IT manager in a bleeding-edge technology company, there is rarely any incentive to be in the first-mover quadrant of technology implementation. The intent of IT managers is to keep the company’s information secure, and provide the utilities needed to meet company objectives. Putting a company at risk by implementing “cool stuff” is not the best career choice.
However, as cloud computing continues to mature, and the cost of operating an internal data center continues to rise (due to the cost of electricity, real estate, and equipment maintenance), IT managers really have no choice – they have to at least learn the cloud computing technology and operations environment. If for no other reason than their Cxx team will eventually ask the question of “what does this mean to our company?”
An IT manager will need to prepare an educated response to the Cxx team, and be able to clearly articulate the following:
- Why cloud computing would bring operational or competitive advantage to the company
- Why it might not bring advantage to the company
- The cost of operating in a cloud environment versus a traditional data center environment
- The relationship between data center consolidation and cloud computing
- The advantage or disadvantage of data center outsourcing and consolidation
- The differences between enterprise clouds, public clouds, and hybrid clouds
- The OPEX/CAPEX comparisons of running individual servers versus virtualization, or virtualization within a cloud environment
- Graphically present and describe cloud computing models compared to traditional models, including the cost of capacity
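The OPEX/CAPEX comparison in the list above is straightforward arithmetic once the assumptions are pinned down. A sketch with entirely hypothetical figures:

```python
def three_year_cost(capex, monthly_opex):
    """Total cost of ownership over a 36-month planning horizon."""
    return capex + 36 * monthly_opex

# Entirely hypothetical figures for ten comparable workloads:
dedicated   = three_year_cost(capex=10 * 8_000, monthly_opex=10 * 350)  # ten physical servers
virtualized = three_year_cost(capex=2 * 12_000, monthly_opex=2 * 500)   # two virtualization hosts
cloud       = three_year_cost(capex=0,          monthly_opex=10 * 220)  # ten cloud instances
```

The point is not these numbers, which will differ for every organization, but that the comparison the Cxx team will ask for reduces to a few explicit assumptions an IT manager can defend.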
Wish List Priority 1 – Cloud Computing Interoperability
It is not just about vendor lock-in. It is not just about building a competitive environment. It is about having the opportunity to use local, national, and international cloud computing resources when it is in the interest of your organization.
Hybrid clouds are defined by NIST, but in reality are still simply a great idea. The idea of being able to overflow processing from an enterprise cloud to a public cloud is well-founded, and in fact represents one of the basic visions of cloud computing. Processing capacity on demand.
But let’s take this one step further. The cloud exchange. We’ve discussed this for a couple of years, and now the technology needs to catch up with the concept.
If we can have an Internet Exchange, a Carrier Ethernet Exchange, and a telephone exchange – why can’t we have a Cloud Exchange? A single one-stop shop where cloud compute consumers can access a spot market for on-demand cloud compute resources?
Here is one idea. Take your average Internet Exchange Point, like Amsterdam (AMS-IX), Frankfurt (DE-CIX), Any2, or London (LINX) where hundreds of Internet networks, content delivery networks, and enterprise networks come together to interconnect at a single point. This is the place where the only restriction you have for interconnection of networks and resources is the capacity of your port/s connecting you to the exchange point.
Most Internet Exchange Points are colocated with large data centers, or are in very close proximity to large data centers (with a lot of dark fiber connecting the facilities). The data centers manage most of the large content delivery networks (CDNs) facing the Internet. Many of those CDNs have irregular capacity requirements based on event-driven, seasonal, or other activities.
The CDN can either build their colocation capacity to meet the maximum forecast requirements of their product, or they could potentially interconnect with a colocated cloud computing company for overflow capacity – at the point of Internet exchange.
The cloud computing companies (with the exception of the “Big 3”) are also – yes – in the same data centers as the CDNs. Ditto for the enterprise networks choosing either to outsource their operations into a data center, or to outsource into a public cloud provider.
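The overflow model described above can be sketched as a simple placement routine: fill private capacity first, then buy the remainder on the exchange spot market, cheapest offer first (the provider names, capacities, and prices are hypothetical):

```python
def place_workload(demand, private_capacity, spot_offers):
    """Fill the enterprise cloud first, then buy overflow capacity on the
    exchange spot market, cheapest offer first.

    spot_offers: list of (provider, capacity_units, price_per_unit) tuples.
    """
    placements = [("private", min(demand, private_capacity))]
    overflow = max(0, demand - private_capacity)
    for provider, capacity, unit_price in sorted(spot_offers, key=lambda o: o[2]):
        if overflow == 0:
            break
        taken = min(overflow, capacity)
        placements.append((provider, taken))
        overflow -= taken
    return placements, overflow     # overflow > 0 means demand went unmet
```

This is the CDN's seasonal-peak scenario in miniature: build colocation for the baseline, and let the exchange absorb the event-driven spikes.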
Wish List – Develop a cloud computing exchange colocated, or part of large Internet Exchange Points.
Wish List Extra Credit – Switch vendors develop high capacity SSDs that fit into switch slots, making storage part of the switch back plane.
Simple and Secure Disaster Recovery Models
Along with the idea of distributed cloud processing, interoperability, and on-demand resources comes the most simple of all cloud visions – disaster recovery.
One of the reasons we all talk cloud computing is the potential for data center consolidation and recovery of CAPEX/OPEX for reallocation into development and revenue-producing activities.
However, with data center consolidation comes the equally important task of developing strong disaster recovery and business continuity models. Whether it be through producing hot standby images of applications and data, simply backing up data into a remote (secure) location, or both, disaster recovery still takes on a high priority for 2011.
You might state “disaster recovery has been around since the beginning of computing, with 9 track tapes copies and punch cards – what’s new?”
What’s new is that most companies and organizations still have no meaningful disaster recovery plan. There may be a weekly backup to tape or disk; there may even be the odd company or organization with a standby capability that limits recovery time and recovery point objectives to a day or two. But let’s be honest – those are the exceptions.
Having surveyed enterprise and government users over the past two years, we have noticed that very, very few organizations with paper disaster recovery plans actually implement their plans in practice. This includes many local and state governments within the US (check out some of the reports published by the National Association of State CIOs/NASCIO if you don’t believe this statement!).
Wish List Item 2 – Develop a simple, really simple and cost effective disaster recovery model within the cloud computing industry. Make it an inherent part of all cloud computing products and services. Make it so simple no IT manager can ever again come up with an excuse why their recovery point and time objectives are not ZERO.
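A recovery point objective of zero implies synchronous replication: no write is acknowledged until it also exists at the recovery site. A toy sketch of the idea (a real implementation lives in the storage or database layer, not application code):

```python
class ReplicatedStore:
    """Every write lands on the primary and the recovery-site replica before
    it is acknowledged, so the recovery point objective is effectively zero."""

    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key, value):
        self.primary[key] = value
        self.replica[key] = value   # synchronous copy to the recovery site
        return True                 # acknowledged only after both copies exist

    def failover(self):
        """After losing the primary, the replica already holds every
        acknowledged write -- there is no backup window to lose."""
        return self.replica
```

Contrast this with a nightly tape backup, where the worst-case data loss is the full interval between backups; making replication an inherent part of the cloud product is what removes the IT manager's excuse.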
Moving Towards the Virtual Desktop
Makes sense. If cloud computing brings applications back to the SaaS model, and communications capacity and bandwidth are reducing delays, even on long-distance connections, to the point where we humans cannot tell if we are on a LAN or a WAN, then let’s start dumping high-cost workstations.
Sure, that 1% of the IT world using CAD, graphics design, and other funky stuff will still need the most powerful computer available on the market, but the rest of us can certainly live with hosted email, other unified communications, and office automation applications. You start your dumb terminal with the 30” screen at 0800, and log off at 1730.
If you really need to check email at night or on the road, your 3G->4G smart phone or netbook connection will provide more than adequate bandwidth to connect to your host email application or files.
This supports disaster recovery objectives, lowers the cost of expensive workstations, and allows organizations to regain control of their intellectual property.
With applications portability, at this point it makes no difference if you are using Google Apps, Microsoft 365, or some other emerging hosted environment.
Wish List Item 3 – IT Managers, please consider dumping the high end desktop workstation, gain control over your intellectual property, recover the cost of IT equipment, and standardize your organizational environment.
More Wish List Items
Yes, there are many more. But those start edging towards “cool.” We want to concentrate on those items really needed to continue pushing the global IT community towards virtualization.