A good indication that any new technology or business model is starting to mature is the number of certifications popping up around that product, framework, or service. Cloud computing is certainly no exception, with vendors such as Microsoft, Google, VMware, and IBM offering certification training for their own products, and organizations such as CompTIA and Architura competing for industry-neutral certifications.
Is this all hype, or an essential part of the emerging cloud computing ecosystem? Remember the days when entry-level Cisco, Microsoft, or other vendor certifications were almost mocked by industry elitists?
Much like the early Internet days of e-everything, cloud computing is at the point where most have heard the term, few understand the concepts, and marketing folk are exploiting every possible combination of the words to place their products in a favorable, forward-leaning light.
So, what if executive management takes a basic course in cloud computing principles, or sales and customer service people take a Cloud 101 course? Is that bad?
Of course not. Cloud computing has the potential to be transformational to businesses, governments, organizations, and even individuals. Business leaders need to understand the potential impact of a service-oriented cloud computing infrastructure on their organization, the game-changing potential of integration and interoperability, the freedom of mobility, and the practical execution of basic cloud computing characteristics within their ICT environment.
A certification is not just about passing the test and collecting the certificate. As an instructor for the CompTIA course, I manage classes of 20 or more students ranging from engineers, to network operations center staff, to customer service and sales, to mid-level executives. We have yet to encounter an individual who claims to have learned nothing from attending the course, and most leave with a very different viewpoint of cloud computing than they held prior to the class.
As with most technology-driven topics, cloud computing breaks into different branches, including technical, operations, and business utility.
The underlying technologies of cloud computing are probably the easiest part of the challenge, as ultimately skills will develop based on time, experience, and operation of cloud-related technologies.
The more difficult challenge is understanding what cloud computing may mean to an organization, both internally and on a global scale. No business-related discussion of cloud computing is complete without consideration of service-oriented architectures, enterprise architectures, interoperability, big data, disaster management, and continuity of operations.
Business decisions on data center consolidation, ICT outsourcing, and other aspects of the current technology refresh or financial consideration will be more effective and structured when accompanied by a basic business and high-level understanding of cloud computing's underlying technologies. As an approach to business transformation, additional complementary capabilities in enterprise architecture, service-oriented architectures, and IT service management will certainly help senior decision makers best understand the relationship between cloud computing and their organizational planning.
Reading the news, clipping stories, and self-study may help decision makers understand the basic components of cloud computing and other supporting technologies, but taking an introductory cloud computing course, whether vendor-specific or neutral, will give enough background knowledge to at least engage in the conversation. Given the hype surrounding cloud computing, and the potential long-term consequences of making an uninformed decision, the investment in cloud computing training must be considered valuable at all levels of the organization, from technical staff to senior management.
Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times either to enlist attendees in contributing to Gartner research or simply to provide conference content directed at promoting conference sponsors.
For example, the sessions “To the Point: When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys, apparently the same ones presented in the same sessions for several years. Thus the speaker could immediately reference this year's results against results from the same survey questions of the past two years. This would lead a casual attendee to believe nothing radically new was being presented in the above topics, and that attendees were generally contributing to further trend analysis research that will eventually show up in a commercial Gartner Research Note.
Gartner analyst Aneel Lakhani, speaking on the topic of “When Open Meets Cloud,” did make a couple of useful, if somewhat obvious, points in his presentation.
- We cannot secure complete freedom from vendors, regardless of how much open source we adopt
- Open source can actually be more expensive than commercial products
- Interoperability is easy to say, but a heck of a lot more complicated to implement
- Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
- If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed
However, analyst Dave Russell, speaker on the topic of “Backup/Recovery,” was a bit more cut-and-paste in his approach: lots of questions to match against last year's conference, and a strong emphasis on using tape as a continuing, if not growing, medium for disaster recovery.
The problem with this presentation was that the discussion centered on backing up data, with very little on business continuity. In fact, in one slide he referenced a recovery point objective (RPO) of one day for backups. What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?
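To put a one-day RPO in perspective, a quick back-of-envelope sketch helps; the 50-transactions-per-second workload below is an assumed, illustrative figure, not from any Gartner material:

```python
def transactions_at_risk(rpo_hours, tx_per_second):
    """Worst-case number of transactions lost when the most recent
    backup is rpo_hours old at the moment of failure."""
    return int(rpo_hours * 3600 * tx_per_second)

# Assumed workload: a modest 50 transactions per second.
one_day_rpo = transactions_at_risk(24, 50)   # 4,320,000 transactions exposed
one_hour_rpo = transactions_at_risk(1, 50)   # 180,000 transactions exposed
```

Even at this modest rate, a one-day RPO leaves millions of transactions unrecoverable, which is why the objection above matters for any organization operating in Internet time.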
In addition, there was no discussion of the need for compatible hardware at a disaster recovery site that would allow immediate or rapid restart of applications. Having data on tape is fine. Having mainframe archival data is fine. But without a business continuity capability, any organization will likely suffer significant damage to its ability to function in its marketplace. Very few organizations today can absorb an extended outage of their global presence or marketplace.
The conference continues until Thursday, and we will look for more positive approaches to data center and cloud computing.
Federal, state, and local government agencies gathered in Washington D.C. on 16 February to participate in Cloud/Gov 2012, held at the Westin Washington D.C. With keynotes by David L. McClure, US General Services Administration, and Dawn Leaf, NIST, vendors and government agencies were brought up to date on federal cloud policies and initiatives.
Of special note were updates on the FedRAMP program (a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services) and NIST's progress on standards. “The FedRAMP process chart looks complicated,” noted McClure, “however we are trying to provide the support needed to accelerate the (FedRAMP vendor) approval process.”
McClure also provided a roadmap for FedRAMP implementation, with FY13/Q2 targeted for full operation and FY14 planned for sustaining operations.
In a panel focusing on government case studies, David Terry from the Department of Education commented that “mobile phones are rapidly becoming the access point (to applications and data) for young people.” Applications (SaaS) should be written to accommodate mobile devices, and “auto-adjust to user access devices.”
Tim Matson from DISA highlighted the US Department of Defense’s Forge.Mil initiative providing an open collaboration community for both the military and development community to work together in rapidly developing new applications to better support DoD activities. While Forge.Mil has tighter controls than standard GSA (US General Services Administration) standards, Matson emphasized “DISA wants to force the concept of change into the behavior of vendors.” Matson continued explaining that Forge.Mil will reinforce “a pipeline to support continuous delivery” of new applications.
While technology and process change topics, mostly enthusiastic, provided a majority of discussion points, David Mihalchik from Google advised “we still do not know the long term impact of global collaboration. The culture is changing, forced on by the idea of global collaboration.”
Other areas of discussion among panel members throughout the day included the need for establishing and defining service level agreements (SLAs) for cloud services. Daniel Burton from Salesforce.com explained their SLAs are broken into two categories: SLAs based on subscription services, and those based on specific negotiations with government customers. Other vendors took a stab at explaining their SLAs without giving specific examples, leaving the audience without a solid answer.
NIST Takes the Leadership Role
The highlight of the day was provided by Dawn Leaf, Senior Executive for Cloud Computing with NIST. Leaf provided very logical guidance for all cloud computing stakeholders, including vendors and users.
“US industry requires an international standard to ensure (global) competitiveness,” explained Leaf. In the past, US vendors and service providers have developed standards which were not compatible with European and other standards, notably in wireless telephony, and one of NIST's objectives is to participate in developing a global standard for cloud computing to prevent a repeat of that history.
Cloud infrastructure and SaaS portability is also a high interest item for NIST. Leaf advises that “we can force vendors into demonstrating their portability. There are a lot of new entries in the business, and we need to force the vendors into proving their portability and interoperability.”
Leaf also reinforced the idea that standards are developed in the private sector; NIST provides guidance and an architectural framework for vendors and the private sector to use as reference when developing specific technical standards. However, Leaf also had one caution for private industry: “industry should try to map their products to NIST references, as the government is not in a position to wait” for extended debates on the development of specific items, when the need for cloud computing development and implementation is immediate.
Further information on the conference, with agendas and participants, is available at www.sia.net.
With dozens of public cloud service providers on the market, offering a wide variety of services, standards, SLAs, and options, how does an IT manager make an informed decision on which provider to use? Is it time in business? Location? Cost? Performance?
Pacific-Tier Communications met up with Jason Read, owner of CloudHarmony, a company specializing in benchmarking the cloud, at Studio City, California, on 25 October. Read understands how confusing and difficult it is to evaluate different service providers without an industry-standard benchmark. In fact, Read started CloudHarmony based on his own frustrations as a consultant helping a client choose a public cloud service provider, while attempting to sort through vague cloud resource and service terms used by industry vendors.
“Cloud is so different. Vendors describe resources using vague terminology like 1 virtual CPU, 50 GB storage. I think cloud makes it much easier for providers to mislead. Not all virtual CPUs and 50 GB storage volumes are equal, not by a long shot, but providers often talk and compare as if they are. It was this frustration that led me to create CloudHarmony” explained Read.
So Read went to work creating a platform, for not only his client but also other consultants and IT managers, that would give a single point for testing public cloud services not only within the US but around the world. Input to the testing platform came from aggregating more than 100 testing benchmarks and methodologies available to the public. However, CloudHarmony standardized on CentOS/RHEL Linux, an operating system which all cloud vendors support, “to provide as close to an apples-to-apples comparison as possible,” said Read.
Customizing a CloudHarmony Benchmark Test
Setting up a test is simple: go to the CloudHarmony Benchmarks page and select the benchmarks you would like to run, the service providers you would like to test, the configurations of virtual options within those providers, the geographic location, and the format of your report.
Figure 1. Benchmark Configuration shows a sample report setup.
“CloudHarmony is a starting point for narrowing the search for a public cloud provider,” advised Read. “We provide data that can facilitate and narrow the selection process. We don’t have all of the data necessary to make a decision related to vendor selection, but I think it is a really good starting point.”
Read continued: “for example, if a company is considering cloud for a very CPU intensive application, using the CPU performance metrics we provide, they’d quickly be able to eliminate vendors that utilize homogenous infrastructure with very little CPU scaling capability from small to larger sized instances.”
Cloud vendors listed in the benchmark directory are surprisingly open to CloudHarmony testing. “We don’t require or accept payment from vendors to be listed on the site and included in the performance analysis,” mentioned Read. “We do, however, ask that vendors provide resources to allow us to conduct periodic compute benchmarking, continual uptime monitoring, and network testing.”
When asked if cloud service providers contest or object to CloudHarmony’s methodology or reports, Read replied “not frequently. We try to be open and fair about the performance analysis. We don’t recommend one vendor over another. I’d like CloudHarmony to simply be a source of reliable, objective data. The CloudHarmony performance analysis is just a piece of the puzzle, users should also consider other factors such as pricing, support, scalability, etc.”
During an independent trial of CloudHarmony’s testing tool, Pacific-Tier Communications selected the following parameters to complete a sample CPU benchmark:
- CPU Benchmark (Single Threaded CPU)
- GMPbench math library
- Cloud Vendor – AirVM (MO/USA)
- Cloud Vendor – Amazon EC2 (CA/USA)
- Cloud Vendor – Bit Refinery Cloud Hosting (CO/USA)
- 1/2/4 CPUs
- Small/Medium/Large configs
- Bar Chart and Sortable Table report
The result, shown in Figure 2, includes performance measured against each of the above parameters. Individual tests for each parameter are available, allowing a deeper look into the resources used and the test results based on those resources.
In addition, as shown in Figure 3, CloudHarmony provides a view of uptime statistics for dozens of cloud service providers over a period of one year. Uptime statistics showed a range (at the time of this article) from 98.678% to 100% availability, with 100% current uptime (27 October).
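Availability percentages are easier to judge once converted into downtime. A minimal Python sketch of that conversion, using a 365-day year:

```python
def downtime_minutes_per_year(availability_pct):
    """Convert an availability percentage into minutes of downtime
    over a 365-day year (365 * 24 * 60 = 525,600 minutes)."""
    return (1 - availability_pct / 100) * 365 * 24 * 60

low_end = downtime_minutes_per_year(98.678)   # roughly 6,948 minutes, nearly 5 days
high_end = downtime_minutes_per_year(100)     # 0 minutes
```

The low end of the observed range, 98.678%, works out to almost five days of downtime per year, a difference most production workloads would feel.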
Who Uses CloudHarmony Benchmark Testing?
While the average user today may be in the cloud computing industry, likely vendors eager to see how their product compares against competitors, Read targets CloudHarmony’s product to “persons responsible for making decisions related to cloud adoption.” Although he admits that today most users of the site lean towards the technical side of the cloud service provider industry.
Running test reports on CloudHarmony is based on a system of purchasing credits. Read explained: “we have a system in place now where the data we provide is accessible via the website or web services – both of which rely on web service credits to provide the data. Currently, the system is set up to allow 5 free requests daily. For additional requests, we sell web service credits where we provide a token that authorizes you to access the data in addition to the 5 free daily requests.”
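A token-authorized web-service call of the kind Read describes might be assembled roughly as follows. This is a hedged sketch only: the endpoint, parameter names, and header format below are assumptions for illustration, not CloudHarmony's documented API.

```python
from urllib.parse import urlencode

def build_benchmark_request(benchmark, providers, token=None):
    """Assemble a hypothetical token-authorized web-service request.
    Endpoint and parameter names are illustrative, not a real API."""
    query = urlencode({"benchmark": benchmark,
                       "providers": ",".join(providers)})
    url = "https://api.example.com/v1/benchmarks?" + query
    headers = {}
    if token:
        # Purchased web-service credits would be redeemed via a token;
        # unauthenticated callers would be limited to the free daily quota.
        headers["Authorization"] = "Token " + token
    return url, headers

url, headers = build_benchmark_request("cpu", ["ec2", "airvm"], token="abc123")
```

The design point is simply that the same data is reachable two ways: anonymously within the free daily allowance, or with a credit-backed token for heavier use.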
The Bottom Line
“Cloud is in many ways a black box” noted Read. “Vendors describe the resources they sell using sometimes similar and sometimes very different terminology. It is very difficult to compare providers and to determine performance expectations. Virtualization and multi-tenancy further complicates this issue by introducing performance variability. I decided to build CloudHarmony to provide greater transparency to the cloud.”
And, to both vendors and potential cloud service customers, CloudHarmony provides an objective, honest, transparent analysis of commercially available public cloud services.
Check out CloudHarmony and their directory of services at cloudharmony.com.
In an online “blogger” press conference on 5 August, Erik Bansleben, Ph.D., Program Development Director, Academic Programs at the University of Washington, outlined a new certificate program in cloud computing offered by the university. The program is directed towards “college level and career professionals,” said Bansleben, adding “all courses are practical in approach.”
Using a combination of classroom and online instruction, the certificate program will allow flexibility accommodating remote students in a virtual extension of the residence program. While not offering formal academic credit for the program, the certificates are “well respected locally by employers, and really tend to help students a fair amount in getting internships, getting new jobs, or advancing in their current jobs.”
The Certificate in Cloud Computing is broken into three courses, including:
- Introduction to Cloud Computing
- Cloud Computing in Action
- Scalable & Data-Intensive Computing in the Cloud
The courses are taught by instructors from both the business community and the University’s Department of Computer Science & Engineering. Topics within each course are designed to provide not only an overview of the concepts and value of cloud computing in a business sense, but also project work and assignments.
To bring more relevance to students, Bansleben noted “part of the courses will be based on student backgrounds and student interests.” Dr. Bill Howe, instructor for the “Scalable & Data-Intensive Computing in the Cloud” course, added “nobody is starting a company without being in the clouds.” The program covers topical areas such as:
- Cloud computing models: software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS) and database as a service
- Market overview of cloud providers
- Strategic technology choices and development tools for basic cloud application building
- Web-scale analytics and frameworks for processing large data sets
- Database query optimization
- Fault tolerance and disaster recovery
Students will walk away with a solid background of cloud computing and how it will impact future planning for IT infrastructure. In addition, each course will invite guest speakers from cloud computing vendors and industry leaders to present actual case studies to further apply context to course theory. Bansleben reinforced the plan to provide students with specific “use cases for or against using cloud services vs. using your own hosted services.”
Not designed as a simple high level overview of cloud computing concepts, the program does require students to have a background in IT networks and protocols, as well as familiarity with file manipulation in system environments such as Linux. Bansleben stated that “some level of programming experience is required” as a prerequisite to participate in the certificate program.
The Certificate in Cloud Computing program starts on 10 October, and will cost students around $2,577 for the entire program. The program is limited to 40 students, including both resident and online. For more information on University of Washington certificate programs or the Certificate in Cloud Computing contact:
Erik Bansleben, Program Development Director
Every week a new data center hits the news with claims of greater than 100,000 square feet at >300 watts/square foot, and levels of security rivaling that of the NSA. Hot and cold aisle containment, marketing people slinging terms such as PUE (Power Usage Effectiveness), modular data centers, containers, computational fluid dynamics, and outsourcing with such smoothness and velocity that even used car salesmen regard them in complete awe.
Don’t get me wrong: outsourcing your enterprise data center or Internet site into a commercial data center (colocation), or a cloud computing-supported virtual data center, is not a bad thing. As interconnections between cities are reinforced, and sufficient levels of broadband access continue to find their way to both businesses and residences throughout the country, not to mention all the economic drivers such as OPEX, CAPEX, and flexibility in cloud environments, the need to maintain an internal data center or server closet makes little sense.
Small Data Centers Feel Pain
In the late 1990s, data center colocation started to develop roots. The Internet was maturing, and eCommerce, entertainment, business-to-business, academic, and government IT operations found proximity to networks a necessity; the colocation industry formed to meet the opportunity stimulated by Internet adoption.
Many of these data centers were built in “mixed use” buildings, or existing properties in city centers which were close to existing telecommunication infrastructure. In cities such as Los Angeles, the commercial property absorption in city centers was at a low, providing very available and affordable space for the emerging colocation industry.
The power densities in those early days were minimal, averaging somewhere around 70 watts/square foot. Thus, equipment installed in colocation space carved out of office buildings was manageable by over-subscribing air conditioning within the space. The main limitation in the early colocation days was floor loading within an office space, as batteries and equipment cabinets within colocation areas would stretch building structures to their limits.
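The jump from that era's densities to today's is easier to appreciate with the arithmetic written out. A minimal sketch (the 10,000-square-foot floor is an assumed example, not a specific facility):

```python
def total_load_kw(area_sqft, watts_per_sqft):
    """Total power draw, in kilowatts, for a floor area at a given density."""
    return area_sqft * watts_per_sqft / 1000

legacy = total_load_kw(10_000, 70)    # 700 kW: office-building colocation era
modern = total_load_kw(10_000, 300)   # 3,000 kW: purpose-built facility claims
```

Supporting more than four times the power (and the matching cooling and generator capacity) in the same footprint is precisely what office buildings converted to colocation could not do.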
As the data center industry and Internet content hosting continued to grow, the amount of equipment being placed in mixed-use building colocation centers finally reached a breaking point around 2005. The buildings simply could not support the requirements for the additional power, cooling, and backup generators needed to support the rapidly developing data center market.
Around that time a new generation of custom-built data center properties began construction, with very little limitation on either weight, power consumption, cooling requirements, or creativity in custom designs of space to gain greatest PUE factors and move towards “green” designs.
The “boom town” inner-city data centers then began experiencing difficulty attracting new customers and retaining their existing customer base. Many of the “dot com” customers ran out of steam during this period, going bankrupt or abandoning their cabinets and cages, while new data center customers fell into a few categories:
- High end hosting and content delivery networks (CDNs), including cloud computing
- Enterprise outsourcing
- Telecom companies, Internet Service Providers, Network Service Providers
With few exceptions, these customers demanded much higher power densities, physical security, redundancy, reliability, and access to large numbers of communication providers. Small data centers operating out of office building space found it very difficult to meet the demands of high-end users, and thus the colocation community began a migration to the larger data centers. In addition, the loss of cash flow from “dot com” churn forced many data centers to shut down, leaving much of the small data center industry in ruins.
Data Center Consolidation and Cloud Computing Compounds the Problem
New companies are finding it very difficult to justify spending money on physical servers and basic software licenses. If you are able to spool up servers and storage on demand through a cloud service provider, why waste the time and money trying to build your own infrastructure, even infrastructure outsourced or colocated in a small data center? It is simply a bad investment for most companies to build data centers, particularly if the cloud service provider has inherent disaster recovery and backup utility.
Even existing small eCommerce sites hitting refresh cycles for their hardware and software find it difficult to justify one- or two-cabinet installations within small data centers when they can accomplish the same thing, for a lower cost and with higher performance, by refreshing into a cloud service provider.
Even the US Government, the world’s largest IT user, has turned its back on small data center installations throughout federal government agencies.
The goals of the Federal Data Center Consolidation Initiative are to assist agencies in identifying their existing data center assets and to formulate consolidation plans that include a technical roadmap and consolidation targets. The Initiative aims to address the growth of data centers and assist agencies in leveraging best practices from the public and private sector to:
- Promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers;
- Reduce the cost of data center hardware, software and operations;
- Increase the overall IT security posture of the government; and,
- Shift IT investments to more efficient computing platforms and technologies.
To harness the benefits of cloud computing, we have instituted a Cloud First policy. This policy is intended to accelerate the pace at which the government will realize the value of cloud computing by requiring agencies to evaluate safe, secure cloud computing options before making any new investments. (Federal Cloud Computing Strategy)
Add similar initiatives in the UK, Australia, Japan, Canada, and other countries to eliminate inefficient data center programs, plus the level of attention being given to these initiatives in the private sector, and the message is clear: inefficient data center installations may become the exception.
Hope for Small Data Centers?
Absolutely! There will always be a compelling argument for proximity of data and applications to end users. Whether this be enterprise data, entertainment, or disaster recovery and business continuity, there is a need for well built and managed data centers outside of the “Tier 1” data center industry.
Internet and applications/data access is no longer a value-added service; it is critical infrastructure. Even the most “shoestring” budget facility will need to meet basic standards published by BICSI (e.g., BICSI 2010-002), the Telecommunications Industry Association (TIA-942), or even private organizations such as the Uptime Institute.
With the integration of network-enabled everything into business and social activities, investors and insurance companies are demanding audits of data centers, using audit standards such as SAS 70, to gain confidence that their investments are protected by satisfactory operational processes and construction.
Even if a data center cannot provide 100,000 square feet of 300 watt/square foot space, there will be a market as long as it can provide local customers with adequate space and quality to meet their needs.
This is particularly true for customers who require flexibility in service agreements, custom support, a large selection of telecommunications companies available within the site, and have a need for local business continuity options. Hosting a local Internet exchange point or carrier Ethernet exchange within the facility would also make the space much more attractive.
The Road Ahead
Large data centers and cloud service providers are continuing to expand, developing their options and services to meet the growing data center consolidation and virtualization trend within both the enterprise and the global Internet-facing community. This makes sense, and will provide a very valuable service for a large percentage of the industry.
Small data centers in Tier 1 cities (in the US that would include Los Angeles, the Northern California Bay Area, New York, Northern Virginia/DC/MD) are likely to find difficulty competing with extremely large data centers – unless they are able to provide a very compelling service such as hosting a large carrier hotel (network interconnection point), Internet Exchange Point, or Cloud Exchange.
However, there will always be a need for local content delivery, application (and storage) hosting, disaster recovery, and network interconnection. Small data centers will need to bring their facilities up to international standards to remain competitive, as their competition is not local, but large data centers in Tier 1 cities.