I have a client who is concerned that some of their departments are bypassing the organization’s traditional IT resource acquisition process and going directly to cloud vendors for their IT resource needs. This is not really unique, as the cloud computing industry has thoroughly disrupted IT provisioning processes, not to mention caused a near complete loss of control over configuration management databases and inventories.
IT service disintermediation occurs when end users cut out the middleman when procuring ICT services and go directly to the service provider with an independent account. Disintermediation normally occurs when one of the following conditions exists:
- The end user desires to remain independent, for reasons of control, use of decentralized budgets, or simply individual pride.
- The organizational service provider does not have a suitable resource available to meet the end user’s needs.
- The end user does not have confidence in the organizational service provider.
- The organizational service provider has a suitable service, but is not able or willing to provision the service to meet the end user’s demands for timing, capacity, or other requirements. This is often the result of a lengthy, bureaucratic process which is neither agile nor flexible, and does not promote a “sense of urgency” in completing provisioning tasks.
- The organizational service provider is not able, or is unwilling, to accommodate “special” orders which fall outside the service provider’s portfolio.
- The organizational service provider does not respond to rapidly changing market, technology, and usage opportunities, with the result of creating barriers for the business units to compete or respond to external conditions.
The result is pretty bad for any organization. Some of the highlights of this failure may include:
- Loss of control over IT budgets – decentralized IT budgets which do not fall within a strategic plan or policy cannot be controlled.
- Inability to develop and maintain organizational relationships with select or approved vendors. Vendors relish the potential of disrupting single points of contact within large organizations, as it allows them to develop and sustain multiple high value contracts with individual agencies, rather than falling within volume purchasing agreements, audits, standards, security, SLAs, training, and so on.
- Individual applications will normally result in incompatible information silos. While interoperability within an organization is a high priority, particularly when looking at service-orientation and organizational decision support systems, systems disintermediation will result in failure, or extreme difficulty, in developing data sharing structures.
- Poor Continuity of Operations and Disaster Management. Non-standard systems are normally not fully documented, and often are not made known to the organization’s IT management or support operations. Thus, there is a high risk of complete data loss in a disaster, or inability to quickly restore full services to the organization, customers, and general user base.
- There is also difficulty in data/systems portability. If or when a service provider fails to meet the expectations of the end user, goes out of business, or for some reason decides not to continue supporting the user, the existing data and systems should be portable to another service provider (this is also within the NIST standard).
While there are certainly other considerations, this covers the main pain points disintermediation might present.
The next obvious question is how best to mitigate the condition. This is a more difficult issue than in the past, as it is now so easy to establish an account and resources with a cloud company using a simple credit card, or through an aggressive salesperson. In addition, the organizational service provider must follow standard architectural and governance processes, which include continual review and improvement cycles.
As technology and organization priorities change, so must the policies change to be aware of, and accommodate reasonable change. The end users must be fully aware of the products and services IT departments have to offer, and of course IT departments must have an aggressive sense of urgency in trying to respond and fulfill those requirements.
Responsibility falls in two areas: 1) ensuring the organizational service provider is able to meet the needs of end users in a timely manner, and 2) developing policies and processes which not only facilitate end user acquisition of resources, but also establish accountability when those policies are not followed.
Information Technology is a great field. With technology advancing at the speed of sound, there is never a period when IT becomes boring, or hits an intellectual wall. New devices, new software, more network bandwidth, and new opportunities to make all this technology do great things for our professional and private lives.
Or, it becomes a frightening professional and intellectual cyclone which threatens to make our jobs obsolete, or diluted due to business units accessing IT resources via a web page and credit card, bypassing the IT department entirely.
One of the biggest challenges IT managers have traditionally encountered is the need for providing both process, as well as utility to end users and supported departments or divisions within the organization. It is easy to get tied down in a virtual mountain of spreadsheets, trouble tickets, and unhappy users while innovation races past.
The Role of IT in Future Organizations
In reality, the technology component of IT is the easy part. If, for example, I decide that it is cost-effective to transition the entire organization to a Software as a Service (SaaS) application such as MS 365, it is a pretty easy business case to bring to management.
But more questions arise, such as: does MS 365 give business users within the organization sufficient utility, and creative tools, to help solve business challenges and opportunities, or is it simply a new and cool application that the IT guys find interesting?
Bridging the gap between old IT and the new world does not have to be too daunting. The first step is simply understanding and accepting the fact that internal data centers are going away in favor of virtualized, cloud-enabled infrastructure. In the long term, Software as a Service and Platform as a Service-enabled information, communication, and service utilities will begin to eliminate even the most compelling justifications for physical or virtual servers.
End user devices become mobile, with the only real requirements being a high definition display, an input device, and a high speed network connection (note that this does not rely solely on “Internet” connections). Applications and other information and decision support resources are accessed someplace in the “cloud,” relieving the user from the burden of device applications and storage.
The IT department is no longer responsible for physical infrastructure
If we consider disciplines such as TOGAF (The Open Group Architecture Framework), ITIL (Service Delivery and Management Framework), or COBIT (Governance and Holistic Organizational Enablement), a common theme emerges for IT groups:
IT organizations must become full members of an organization’s business team
If we consider the potential of systems integration, interoperability, and exploitation of large data (or “big data”) within organizations, and externally among trading partners, governments, and others, the need for IT managers and professionals to graduate from the device world to the true information management world becomes a great career and future opportunity.
But this requires IT professionals to reconsider the skills and training needed to fully become a business team member and contributor to an organization’s strategic vision for the future. Those skills include enterprise architecture, governance modeling, data analytics, and a view of standards and interoperability of data. The value of a network routing certification, a data center facility management role, or a software installation role will edge towards zero within a few short years.
Harsh, but true. Think of the engineers who specialized in digital telephone switches in the 1990s and early 2000s. They are all gone. Either retrained, repurposed, or unemployed. The same future is hovering on the IT manager’s horizon.
So the call to action is simple. If you are a mid-career IT professional, or new IT professional just entering the job market, prepare yourself for a new age of IT. Try to distance yourself from being stuck in a device-driven career path, and look at engaging and preparing yourself for contributing to the organization’s ability to fully exploit information from a business perspective, an architectural perspective, and fully indulge in a rapidly evolving and changing information services world.
A friend of mine’s son recently returned from an extended absence which removed him from nearly all aspects of technology, including the Internet, for a bit more than 5 years. Upon his return, observing him restore his awareness of technology and absorb everything new developed over the past 5 years was both exciting and moving.
To be fair, the guy grew up in an Internet world, with access to online resources including Facebook, Twitter, and other social applications.
The interesting part of his re-introduction to the “wired” world was watching the comprehension flashes he went through when absorbing the much higher levels of application and data integration, and speed of network access.
As much as all of us continue to complain about terrible access speeds, it is remarkable to see how excited he became when learning he could get 60Mbps downloads from just a cable modem, download HD movies to a PC in just a few moments, or stream HD video through a local device.
Not to mention that there is now almost no need for CATV at all to continue enjoying nearly any network or alternative programming desired.
Continuing to observe the transformation, it took him about 2 minutes to nail up a multipoint video call with 4 of his friends, take a stroll through my eBook library, and prepare a strategy for his own digital move into cloud-based applications, storage, and collaboration.
Looking back to my personal technical point of reference at the point this kid dropped out, I dug up blog articles I’ve posted with titles such as:
- “Flattening the American Internet” (discussing the need for more Internet Exchange Points in the US)
- “IXPs and Disaster Recovery” (the role IXPs could and should play in global disasters)
- “2009 – The Year of IPv6 and Internet Virtualization”
- “The Law of Plentitude and Chaos Theory”
- “Why I Hate Kayaks” (the hypocrisy of some environmentalists)
- “Contributing to a Cause with Technology – The World Community GRID” (the cloud before the cloud)
- “Blackberrys, PDA Phones, and Frog Soup”
And so on…
We have come a long way technically over those years, but the amazing thing is the near immediacy of the young man absorbing those changes. I was almost afraid with all the right brain flashes that he would have a breakdown, but the enjoyment he showed diving into the new world of “apps” and anytime, anywhere computing appears to only be accelerating.
Now the questions are starting to pop up. “Can we do this now?” “It would be nice if this was possible.”
Maybe because he grew up in a gaming world, or maybe because he was dunked into the wired world about the same time he learned to stand on his own feet. Maybe the synaptic connections in his brain are just much better wired than those of my generation.
Perhaps the final, and most important revelation for me, is that young people have a tremendous capacity to exploit the technology resources developed in just a few short years. Collaboration tools which astound my generation are slow and boring to the new crew. Internet is expected, it is a utility, and it is demanded at broadband speeds which, again, to somebody whose first commercial modem was a large card capable of 300 baud (do you even know what baud means?) is still mind boggling.
The new generations are going to have a lot more fun than we did, on a global scale.
I am jealous.
IT professionals continue to debate the benefits of standardization versus the benefits of innovation, and the potential of standards inhibiting engineers’ and software developers’ ability to develop creative solutions to business opportunities and challenges. At the Open Group Conference in San Diego last week (3~5 February) the topic of standards and innovation popped up not only in presentations, but also in sidebar conversations surrounding the conference venue.
In his presentation “SOA4BT (Service-Oriented Architecture for Business Technology) – From Business Services to Realization,” Nikhil Kumar noted that with rigid standards there is “always a risk of service units creating barriers to business units.” The idea is that service and IT organizations must align their intended use of standards with the needs of the business units. Kumar further described a traditional cycle where:
- Enterprise drivers establish ->
- Business derived technical drivers, which encounter ->
- Legacy and traditional constraints, which result in ->
- “Business Required” technologies and technology (enabled) SOAs
Going through this cycle does not require a process with too much overhead; it is simply a requirement for ensuring that the use of a standard, or standard business architecture framework, drives the business services group (IT) into the business unit circle. While IT is the source of many innovative ideas and deployments of emerging technologies, the business units are the ultimate beneficiaries of innovation, which allows the unit to address and respond to rapidly emerging opportunities or market requirements.
Standards come in many shapes and sizes. One standard may be a national or international standard, such as ISO 20000 (service delivery), NIST 800-53 (security), or BICSI 002-2011 (data center design and operations). Standards may also be internal within an organization or industry, such as standardizing databases, applications, data formats, and virtual appliances within a cloud computing environment.
In his presentation “The Implications of EA in New Audit Guidelines (COBIT5),” Robert Weisman noted there are now more than 36,500 TOGAF (The Open Group Architecture Framework) certified practitioners worldwide, with more than 60 certified training organizations providing TOGAF certifications. According to ITSMinfo.com, just in 2012 there were more than 263,000 ITIL Foundation certifications granted (for service delivery), and ISACA notes there were more than 4,000 COBIT 5 certifications granted (for IT planning, implementation, and governance) in the same period.
With a growing number of organizations either requiring, or providing training in enterprise architecture, service delivery, or governance disciplines, it is becoming clear that organizations need to have a more structured method of designing more effective service-orientation within their IT systems, both for operational efficiency, and also for facilitating more effective decision support systems and performance reporting. The standards and frameworks attempt to provide greater structure to both business and IT when designing technology toolsets and solutions for business requirements.
So the use of standards is very effective for providing structure and guidelines for IT toolset and solution development. To address the issue of innovation, several ideas are important to consider, including:
- Developing an organizational culture of shared vision, values, and goals
- Developing a standardized toolkit of virtual appliances, interfaces, platforms, and applications
- Accepting the need for continual review of existing tools, improvement of tools to match business requirements, and allowing for further development and consideration when existing utilities and tools are not sufficient or adequate to the task
Once an aligned vision of business goals is achieved, a standard toolset published, and IT and business units better integrated as teams, additional benefits become apparent:
- Duplication of effort is reduced with the availability of standardized IT tools
- Incompatible or non-interoperable organizational data is either reduced or eliminated
- More development effort is applied to developing new solutions, rather than developing basic or standardized components
- Investors will have much more confidence in management’s ability to not only make the best use of existing resources and budgets, but also the organization’s ability to exploit new business opportunities
- Focusing on a standard set of utilities and applications, such as database software, will not only improve interoperability, but also enhance the organization’s ability to influence vendor service-level agreements and support agreements, as well as reduce cost with volume purchasing
Rather than view standards as an inhibitor, or barrier to innovation, business units and other organizational stakeholders should view standards as a method of not only facilitating SOAs and interoperability, but also as a way of relieving developers from the burden of constantly recreating common sets and libraries of underlying IT utilities. If developers are free to focus their efforts on pure solutions development and responding to emerging opportunities, and rely on both technical and process standardization to guide their efforts, the result will greatly enhance an organization’s ability to be agile, while still ensuring a higher level of security, interoperability, systems portability, and innovation.
Carrier hotels are an integral part of global communications infrastructure. The carrier hotel serves a vital function, specifically the role of a common point of interconnection between facility-based (physical cable in either terrestrial, submarine, or satellite networks) carriers, networks, content delivery networks (CDNs), Internet Service Providers (ISPs), and even private or government networks and hosting companies.
In some locations, such as the One Wilshire Building in Los Angeles, or 60 Hudson in New York, several hundred carriers and service providers may interconnect physically within a main distribution frame (MDF), or virtually through interconnections at Internet Exchange Points (IXPs) or Ethernet Exchange points.
Carrier hotel operators understand that technology is starting to overcome many of the traditional forms of interconnection. With 100Gbps wavelengths and port speeds, network providers are able to push many individual virtual connections through a single interface, reducing the need for individual cross connections or interconnections to establish customer or inter-network circuits.
While interconnections, including Internet peering and VLANs, have been available for many years through IXPs and the use of circuit multiplexing, software defined networks (SDNs) are poised to provide a new model of interconnection at the carrier hotel, forcing not only an upgrade of supporting technologies, but also reconsideration of the entire model and concept of how the carrier hotel operates.
Several telecom companies have announced their own internal deployments of order fulfillment platforms based on SDN, including PacNet’s PEN and Level 3’s (originally Time Warner) pilot test at DukeNet, proving that circuit design and provisioning can be easily accomplished through SDN-enabled orchestration engines.
However, inter-carrier circuit or service orchestration is not yet in common use at the main carrier hotels and interconnection points.
Taking a closer look at the carrier hotel environment, we see an opportunity: if the carrier hotel operator provides an orchestration platform which allows individual carriers, networks, cloud service providers, CDNs, and other networks to connect at a common point, with standard APIs to allow communication between different participant network or service resources, then interconnection fulfillment may be completed in a matter of minutes, rather than days or weeks as in the current environment.
This capability goes even a step deeper. Let’s say Carrier “A” has an enterprise customer connected to their network. The customer has an on-demand provisioning arrangement with Carrier “A,” allowing the customer to establish communications not only within Carrier “A’s” network resources, but also to flow through the carrier hotel’s interconnection broker into, say, a cloud service provider’s network. The customer should be able to design and provision their own solutions, based on the availability of internal and interconnection resources available through the carrier.
Participants will announce their available resources to the carrier hotel’s orchestration engine (network access broker), and those available resources can then be provisioned on-demand by any other participant (assuming the participants have a service agreement or financial accounting agreement, either based on the carrier hotel’s standard or on individual service agreements established between individual participants).
If we use NIST’s characteristics of cloud computing as a potential model, then the carrier hotel’s interconnection orchestration engine should ultimately provide participants:
- On-demand self-service provisioning
- Elasticity, meaning short term usage agreements, possibly even down to the minute or hour
- Resource pooling, or a model similar to a spot market (in competing markets where multiple carriers or service providers may be able to provide the same service)
- Measured service (usage based or usage-sensitive billing for service use)
- And of course broad network access – currently using either 100Gbps ports or multiples of 100Gbps (until 1Tbps ports become available)
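The model described above (participants announce resources to the carrier hotel’s orchestration engine, and any participant with an agreement in place provisions them on demand, with metered billing) can be sketched in a few lines. Every class, name, and figure here is a hypothetical illustration, not a real carrier hotel API:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """A provisionable resource announced by a participant (hypothetical)."""
    owner: str
    kind: str
    capacity_gbps: int
    rate_per_hour: float  # measured service: usage-based billing

class InterconnectBroker:
    """Sketch of a carrier hotel orchestration engine (illustration only)."""
    def __init__(self):
        self.catalog = []        # resources announced by participants
        self.agreements = set()  # (consumer, provider) pairs with contracts
        self.ledger = []         # usage records for metered billing

    def announce(self, resource):
        # Resource pooling: participants publish what others may provision.
        self.catalog.append(resource)

    def provision(self, consumer, kind, gbps, hours):
        # On-demand self-service: find a pooled resource that fits, offered
        # by a provider the consumer has an agreement with.
        for r in self.catalog:
            if (r.kind == kind and r.capacity_gbps >= gbps
                    and (consumer, r.owner) in self.agreements):
                cost = r.rate_per_hour * hours  # elastic, short-term usage
                self.ledger.append((consumer, r.owner, gbps, hours, cost))
                return cost
        raise LookupError("no matching resource or agreement")

broker = InterconnectBroker()
broker.agreements.add(("Carrier A", "Cloud Provider X"))
broker.announce(Resource("Cloud Provider X", "cloud-onramp", 100, 12.0))
print(broker.provision("Carrier A", "cloud-onramp", 10, 2))  # prints 24.0
```

A production broker would also need authentication, spot-market pricing across competing providers, and circuit teardown for elastic short term use, but the announce-and-provision flow is the core of the concept.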
While layer 1 (physical) interconnection of network resources will always be required – the bits need to flow on fiber or wireless at some point – the future of carrier and service resource intercommunications must evolve to accept and acknowledge the need for user-driven, near real time provisioning of network and other service resources, on a global scale.
The carrier hotel will continue to play an integral role in bringing this capability to the community, and the future is likely to be based on software-driven, on-demand meet-me-rooms.
Risk management has been around for a long time. Financial managers run risk assessments for nearly all business models, and the idea of risk carries nearly as many definitions as the Internet. However, for IT managers and IT professionals, risk management still frequently takes a far lower priority than other operations and support activities.
For IT managers, a good, simple definition of risk comes from the Open FAIR model, which states:
“Risk is defined as the probable frequency and magnitude of future loss”
Risk management should follow a structured process acknowledging many aspects of the IT operations process, with special considerations for security and systems availability.
Frameworks, such as Open FAIR, distill risk into a structure of probabilities, frequencies, and values. Each critical system or process is considered independently, with a probability of disruption or loss event paired with a probable value.
It would not be uncommon for an organization to perform numerous risk assessments based on critical systems, identifying and correcting shortfalls as needed to mitigate the probability or magnitude of a potential event or loss. Much like other frameworks used in enterprise architecture, service delivery (such as ITIL), or governance, the objective is to produce a structured risk assessment and analysis approach without becoming overwhelming.
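As a minimal numeric sketch of the Open FAIR definition quoted above, the annualized loss exposure for each scenario is simply the probable loss event frequency multiplied by the probable loss magnitude. The scenario names and figures below are invented for illustration only:

```python
# Open FAIR-style sketch: risk = probable frequency x probable magnitude.
# Scenario names and numbers are invented for illustration only.
scenarios = {
    # name: (probable loss events per year, probable loss per event in USD)
    "ransomware outage":      (0.25,  400_000),
    "cloud provider failure": (0.5,    80_000),
    "data center flood":      (0.125, 480_000),
}

def annualized_loss_exposure(freq_per_year, magnitude):
    """Probable frequency times probable magnitude of future loss."""
    return freq_per_year * magnitude

for name, (freq, magnitude) in sorted(scenarios.items()):
    print(f"{name}: ${annualized_loss_exposure(freq, magnitude):,.0f}/year")

total = sum(annualized_loss_exposure(f, m) for f, m in scenarios.values())
print(f"total exposure: ${total:,.0f}/year")  # prints total exposure: $200,000/year
```

A fuller Open FAIR analysis would model each factor as a range or distribution and run simulations, but even this simple multiplication lets an organization rank scenarios and prioritize mitigation spending.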
IT risk management has been neglected in many organizations, possibly due to the rapid evolution of IT systems, including cloud computing and the implementation of broadband networks. When service disruptions or security events occur, those organizations find themselves unprepared to deal with the magnitude of the loss, and a lack of preparation or mitigation may result in the organization never fully recovering from the event.
Fortunately, processes and frameworks guiding risk management are becoming far more mature, and attainable by nearly all organizations. The Open Group’s Open FAIR standard and taxonomy provide a very robust framework, as does ISACA’s COBIT 5 risk guidance.
In addition, the US Government’s National Institute of Standards and Technology (NIST) provides open risk assessment and management guidance for both government and non-government users within the NIST Special Publication Series, including SP 800-30 (Risk Assessment), SP 800-37 (System Risk Management Framework), and SP 800-39 (Enterprise-Wide Risk Management).
ENISA also publishes a risk management process which is compliant with the ISO 13335 standard and builds on ISO 27005.
What is the objective of going through the risk assessment and analysis process? Of course, it is to build mitigation controls, or resistance, to potential disruptions, threats, and events that would result in a loss to the company or other direct and secondary stakeholders.
However, many organizations, particularly small to medium enterprises, either do not believe they have the resources to go through risk assessments, have no formal governance process, have no formal security management process, or simply believe that time spent on activities which do not directly support rapid growth and development of the company is not worthwhile. Those organizations continue to be at risk.
As managers, leaders, investors, and customers, we have an obligation to ensure our own internal risk is assessed and understood, as well as, from the viewpoint of customers or consumers, that our suppliers and vendors are following formal risk management processes. In a fast, agile, global, and unforgiving market, the alternative is not pretty.
Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.
SDNs, or more specifically automated provisioning platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.
While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands and is scrambling to address the need, and is very serious in their acknowledgement of a pending industry “paradigm shift” in service delivery models.
Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value added marketplace.
Steve Alexander from Ciena explained some of the challenges which the industry must address, such as the development of cross-industry SDN-enabled service delivery and provisioning standards. In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:
- How to differentiate their service offering
- How to differentiate their operations environment
- How to ensure industry-acceptable delivery and provisioning time cycles
- How to deal with legacy deployments
Alexander also emphasized that as an industry we need to get away from physical wiring when possible. With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), there is a lot of virtual or logical provisioning that can be accomplished without the need for dozens or hundreds of physical cross connections.
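The abstraction Alexander describes, carving many logical circuits out of one 100Gbps port, can be sketched as a simple capacity pool. The class and names are illustrative only, not any vendor’s API:

```python
class PortPool:
    """Sketch of one 100Gbps physical port carved into logical circuits.
    Illustration only, not any vendor's API."""

    def __init__(self, capacity_gbps=100):
        self.capacity_gbps = capacity_gbps
        self.circuits = {}  # circuit id -> reserved Gbps

    def free_gbps(self):
        return self.capacity_gbps - sum(self.circuits.values())

    def provision(self, circuit_id, gbps):
        # A logical circuit is just a software reservation against the
        # shared physical interface; no new cross connect is required.
        if gbps > self.free_gbps():
            raise ValueError("insufficient capacity on this port")
        self.circuits[circuit_id] = gbps

    def release(self, circuit_id):
        # Elastic: tearing down a circuit returns capacity to the pool.
        self.circuits.pop(circuit_id, None)

pool = PortPool()
pool.provision("carrier-A-to-cdn", 40)
pool.provision("carrier-B-to-cloud", 25)
print(pool.free_gbps())  # prints 35
```

Provisioning and releasing circuits becomes a software operation against the shared port, which is exactly what removes the need for a new physical cross connect per circuit.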
The result of this effort should be an automated provisioning environment within both a single service provider, as well as in a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire). Some examples of actual and required deployments included:
- A bandwidth on-demand marketplace
- Data center interconnections, including within data center operators which have multiple interconnected meet-me-points spread across a geographic area
- Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, Softlayer, etc), content delivery networks, SaaS, and disaster recovery capacity and services
Robust discussions on standards also spawned debate. With SDNs, much like any other emerging use of technologies or business models, there are both competing and complementary standards. Even terms such as Network Function Virtualization (NFV), while good, do not have much depth within standard taxonomies or definitions.
During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks” a long listing of current standards and products supporting the “concept” of SDNs was presented, including:
- OpenContrail
- OpenDaylight
- OpenStack
- OpenFlow
- Project Floodlight
- Open Networking
- and on and on….
For consumers and small network operators this is a very good development, and will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short term service contracts, even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing a much better use of OPEX budgets.
For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally would like to see long term, set (or fixed) contracts or wholesale capacity sales.
The connection and integration of cloud services with telecom or network services is quite clear. At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.
While most operators get the message, and are either in the process of developing and deploying their first iteration solution, others simply still have a bit of homework to do. In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”
In an informal survey of words used during seminars and discussions, two main themes are emerging at the Pacific Telecommunications Council’s 2015 annual conference. The first, as expected, is development of more submarine cable capacity both within the Pacific and to end points in ANZ, Asia, and North America. The second, software defined networking (SDN), as envisioned, could quickly begin to re-engineer the gateway and carrier hotel interconnection business.
New cable developments, including Arctic Fiber, Trident, SEA-US, and APX-E, have sparked a lot of interest. One discussion at Sunday morning’s Submarine Cable Workshop highlighted the need for Asian (and other) regions to find ways to bypass the United States, not just for performance reasons, but also to prevent US government agencies from intercepting and potentially exploiting data hitting US networks and data systems.
The bottom line in all submarine cable discussions is the need for more, and more, and more cable capacity. Applications using international communications capacity, notably video, are consuming bandwidth at rates which are driving fears that cable operators won’t be able to keep up with capacity demands.
However, perhaps the most interesting, and frankly surprising, development is with SDNs in the meet me room (MMR). Products such as PacNet’s PEN (PacNet Enabled Network) are finally making on-demand, self-service circuit provisioning a reality, with cloud computing capacity provisioning within the MMR soon to follow. Demonstrations showed how a network, or user, can provision a point-to-point circuit from 1Mbps to 10Gbps within a minute.
In the past, on-demand provisioning of interconnections was limited to Internet Exchange Points, fiber cross connects, VLANs, and point-to-point Ethernet connections. Now, as carrier hotels and MMRs acknowledge the need for rapid provisioning of elastic (rapid addition and deletion of bandwidth or capacity) resources, physical cross connect and IXP peering tools alone will not be adequate for future market demands.
SDN models, such as PacNet’s PEN, are a very innovative step toward this vision. The underlying physical interconnection infrastructure simply becomes a software abstraction for end users (including carriers and networks), allowing circuit provisioning in a matter of minutes rather than days.
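To make the abstraction concrete, here is a minimal sketch of the idea in Python. PacNet’s actual PEN interface is not documented here, so the class and method names are purely illustrative; the point is that once the fabric is software-defined, provisioning and releasing a circuit become simple, near-instant API calls rather than multi-day cross connect orders.

```python
from dataclasses import dataclass

@dataclass
class Circuit:
    endpoint_a: str
    endpoint_b: str
    mbps: int

class SDNFabric:
    """Hypothetical software abstraction over an MMR's physical interconnects."""
    MIN_MBPS, MAX_MBPS = 1, 10_000  # 1 Mbps to 10 Gbps, as in the demonstration

    def __init__(self):
        self.circuits = []

    def provision(self, a: str, b: str, mbps: int) -> Circuit:
        if not (self.MIN_MBPS <= mbps <= self.MAX_MBPS):
            raise ValueError("requested bandwidth outside supported range")
        circuit = Circuit(a, b, mbps)
        self.circuits.append(circuit)  # in reality: program the MMR gear in seconds
        return circuit

    def release(self, circuit: Circuit) -> None:
        self.circuits.remove(circuit)  # elastic: drop capacity when no longer needed

fabric = SDNFabric()
link = fabric.provision("carrier-A", "cloud-provider-B", 500)
```

The elasticity described above falls out of the model naturally: a caller can hold a 500 Mbps point-to-point link for an hour, release it, and pay only for what was used.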
The main requirement for full deployment is to “sell” carriers and networks on the concept, as key success factors will revolve around the network effect of participant communities. Simply, the more connecting and participating networks within the SDN “community,” the more value the SDN MMR brings to a facility or market.
A great start to PTC 2015. More PTC 2015 “sidebars” on Tuesday.
Modern data centers are very complex environments. Data center operators must have visibility into a wide range of integrated databases, applications, and performance indicators to effectively understand and manage their operations and activities.
While each data center is different, all data centers share some common systems and characteristics, including:
- Facility inventories
- Provisioning and customer fulfillment processes
- Maintenance activities (including computerized maintenance management systems, or CMMS)
- Customer management (including CRM, order management, etc.)
- Trouble management
- Customer portals
- Security Systems (physical access entry/control and logical systems management)
- Billing and Accounting Systems
- Service usage records (power, bandwidth, remote hands, etc.)
- Decision support system and performance management integration
- Standards for data and applications
- Staffing and activities-based management
- Scheduling /calendar
Unfortunately, in many cases, the above systems are managed manually, lack standards, or have no automation or integration interconnecting individual back office components. This includes many communication companies and telecommunications carriers which previously either adhered, or claimed to adhere, to Bellcore data and operations standards.
In some cases, the lack of integration is due to mergers and acquisitions of companies which have unique, or non-standard, back office systems. The result is difficulty in cross provisioning, billing, integrated customer management, and accounting – the day-to-day operations of a data center.
Modern data centers must have a high level of automation. In particular, if a data center operator owns multiple facilities, it becomes very difficult without automation to present a common look and feel, or the level of integration needed to offer a standardized product to their markets and customers.
Operational support systems (OSS) traditionally have four main components:
- Support for process automation
- Collection and storage for a wide variety of operational data
- The use of standardized data structures and applications
- Supporting technologies
In most commercial or public colocation facilities and data centers, customers and tenants represent many different industries, products, and services. Some large colocation centers may have several hundred individual customers. Other data centers may have larger customers such as cloud service providers, content delivery networks, and other hosting companies. While single large customers may be few, their internal hosted or virtual customers may be at the scale of hundreds, or even thousands, of individual customers.
To effectively support their customers, data centers must have comprehensive OSS capabilities. Given the large number of processes, data sources, and user requirements, the OSS should be designed and developed using a standard architecture and framework which will ensure OSS integration and interoperability.
We have conducted numerous interoperability readiness surveys with both government and private sector (commercial) data center operators during the past five years. In more than 80% of the surveys, processes such as inventory management were maintained in simple spreadsheets. Provisioning of inventory items was normally a manual process conducted via e-mail or, in some cases, paper forms.
Manual provisioning resulted, in some cases, in double-booked or double-sold inventory items, as well as inefficient ordering of additional customer-facing inventory or build-out of additional data center space.
The problem often compounded into further failures, such as missed customer billing cycles, accounting shortfalls, and management or monitoring system errors.
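The double-booking problem above is exactly what a database-backed inventory eliminates. A minimal sketch (table and column names are illustrative, using SQLite for brevity): the reservation is a single conditional UPDATE, so the database, not a human reading a spreadsheet, enforces that an item cannot be sold twice.

```python
import sqlite3

# One shared inventory store replacing spreadsheet + e-mail provisioning.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE racks (id TEXT PRIMARY KEY, customer TEXT)")
db.executemany("INSERT INTO racks (id) VALUES (?)", [("R-101",), ("R-102",)])

def reserve(rack_id: str, customer: str) -> bool:
    # The UPDATE only succeeds if the rack is still unassigned, so two
    # provisioning requests can never double-sell the same item.
    cur = db.execute(
        "UPDATE racks SET customer = ? WHERE id = ? AND customer IS NULL",
        (customer, rack_id),
    )
    db.commit()
    return cur.rowcount == 1

first = reserve("R-101", "tenant-a")   # succeeds
second = reserve("R-101", "tenant-b")  # rejected: already booked
```

The same single-source record then feeds billing and monitoring directly, which addresses the downstream billing-cycle and accounting errors as well.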
The new data center, including virtual data centers within cloud service providers, must develop better OSS tools and systems to accommodate the rapidly changing need for elasticity and agility in ICT systems. This includes having a single window for all required items within the OSS.
Preparing an OSS architecture, based on a service-oriented architecture (SOA), should include use of ICT-friendly frameworks and guidance such as TOGAF and/or ITIL to ensure all visions and designs fully acknowledge and embrace the needs of each organization’s business owners and customers, and follow a comprehensive and structured development process to ensure those objectives are delivered.
Use of standard databases, APIs, service buses, and security, along with a high level of governance to enforce a “standards and interoperability first” policy for all data center IT, will allow all systems to communicate, share, and reuse data, ultimately providing automated, single-source data resources for all data center management, accounting, and customer activities.
Manual transfer of data between offices, applications, or systems should be eliminated in favor of integrating inventory, data collections and records, processes, and performance management indicators into a fully interoperable environment. A basic rule of thumb: if a human being has touched data, then the data has likely been corrupted, or at least its integrity may be brought into question.
Looking ahead to the next generation of data center services, stepping a bit higher up the customer service maturity continuum requires much higher levels of internal process and customer process automation.
Similar to NIST’s definition of cloud computing, whose essential characteristics include “on-demand self-service,” “rapid elasticity,” and “measured service,” in addition to resource pooling and broad network access, it can be assumed that data center users of the future will need to order and fulfill services such as network interconnections, power, and virtual (or physical) space through self-service, on-demand ordering.
The OSS must strive to meet the following objectives:
- Reusable components and APIs
- Data sharing
Accomplishing this will require nearly all of the above-mentioned characteristics of the OSS: inventories in databases (not spreadsheets), process automation, and standards for data structures, APIs, and application interoperability.
And as the ultimate key success factor, management decision support systems (DSS) will finally have the potential to deliver true dashboards for performance management, data analytics, and additional real-time tools for making effective organizational decisions.
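The dashboard payoff can be sketched briefly. In this illustration (the data and field names are invented for the example), once usage records live in one integrated store rather than scattered spreadsheets, a management KPI is just an aggregation over a single data source:

```python
from statistics import mean

# Illustrative usage records as they might arrive from an integrated OSS store.
usage_records = [
    {"customer": "tenant-a", "kwh": 1200, "bandwidth_gb": 340},
    {"customer": "tenant-b", "kwh": 800,  "bandwidth_gb": 510},
]

def kpi_summary(records: list) -> dict:
    """Aggregate metered usage into dashboard-ready figures."""
    return {
        "total_kwh": sum(r["kwh"] for r in records),
        "avg_bandwidth_gb": mean(r["bandwidth_gb"] for r in records),
    }

summary = kpi_summary(usage_records)
```

The same query can run in real time against live data, which is what turns the DSS from a monthly spreadsheet exercise into an actual decision-making tool.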
A couple years ago I attended several “fast pitch” competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to “pitch” their ideas in about 60 seconds to a panel of representatives from the local investment community. Similar to television’s “Shark Tank,” most of the pitches were harshly critiqued, with the real intent of helping participating entrepreneurs develop a better story for approaching investors and markets.
While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe. The product was related to biotech, and the panel provided a very strong, positive response to the pitch.
Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.
“$5 million,” he responded, with a resounding wave of nods from the panel. “I’d use around $3 million for staffing, getting the office started, and product development.” Another round of positive expressions. “And then we’d spend around $2 million setting up in a data center with servers, telecoms, and storage systems.”
This time the panel looked as if they’d just taken a crisp slap to the face. After a moment of collection, the panel spokesman launched into a dressing down of the entrepreneur, stating “I really like the product, and think your vision is solid. However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potentially contract liabilities, once you shut down your data center. You’ve got to use your head and look at going to Amazon for your data center capacity and forget this data center idea.”
Now it was the entire audience’s turn to take a pause.
In the past, IT managers placed buying and controlling their own hardware, in their own facility, as a high priority – with no room for compromise. Whether for perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.
At some point a need to have proximity to Internet or communication exchange points, or simple limitations on local facility capacity started forcing a migration of enterprise data centers into commercial colocation. For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that in general colocation facilities offered higher uptime, fewer service disruptions, and good performance, in particular for eCommerce sites.
Now we are at a new IT architecture crossroads. Is there really any good reason for a startup, medium, or even large enterprise to continue operating their own data center, or even their own hardware within a colocation facility? Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible. The CAPEX investment, carrying hardware on the books for years of depreciation, lack of business agility, and dangers of business continuity and disaster recovery costs force the question of “why don’t we just rent IT capacity from a cloud service provider?”
Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.
Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate those changes). If we look at the US Government’s FedRAMP certification program as an example, security, compliance, and management controls are now a standard – open for all organizations to study and adopt as appropriate.
So we get back to the original question: what would justify a company continuing to develop data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, performance, and lower cost than operating local or colocated physical IT infrastructure? Sure, exceptions exist, including specialized hardware interfaces to support mining, health care, or other very specialized activities. However, if you’re not in the computer or switch manufacturing business – can you really continue justifying CAPEX expenditures on IT?
IT is quickly becoming a utility. As a business we do not plan to build our own roads, water distribution systems, or power generation plants. Compute, telecom, and storage resources are likewise becoming utilities, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass them by.