Yield to Cloud

A couple of years ago I attended several “fast pitch” competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to “pitch” their ideas in about 60 seconds to a panel of representatives from the local investment community.  Similar to television’s “Shark Tank,” most of the pitches were harshly critiqued, with the real intent of helping participating entrepreneurs develop a better story for approaching investors and markets.

While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe.  The product was related to biotech, and the panel responded to the pitch with unusual enthusiasm.

Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.

“$5 million,” he responded, drawing a resounding wave of nods from the panel.  “I’d use around $3 million for staffing, getting the office started, and product development.”  Another round of positive expressions.  “And then we’d spend around $2 million setting up in a data center with servers, telecoms, and storage systems.”

This time the panel looked as if they’d just taken a crisp slap to the face.  After a moment of collection, the panel spokesman launched into a dressing-down of the entrepreneur, stating “I really like the product, and think your vision is solid.  However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potentially contract liabilities, once you shut down your data center.  You’ve got to use your head and look at going to Amazon for your data center capacity and forget this data center idea.”

Now it was the entire audience’s turn to take a pause.

In the past, IT managers placed buying and controlling their own hardware, in their own facility, as a high priority – with no room for compromise.  Whether for perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.

At some point, the need for proximity to Internet or communication exchange points, or simple limitations on local facility capacity, started forcing a migration of enterprise data centers into commercial colocation.  For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that colocation facilities generally offered higher uptime, fewer service disruptions, and good performance, in particular for eCommerce sites.

Now we are at a new IT architecture crossroads.  Is there really any good reason for a startup, medium, or even large enterprise to continue operating their own data center, or even their own hardware within a colocation facility?  Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible.  The CAPEX investment, carrying hardware on the books for years of depreciation, lack of business agility, and the costs of business continuity and disaster recovery all force the question: “why don’t we just rent IT capacity from a cloud service provider?”
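To put rough numbers behind the CFO’s instinct, here is a minimal back-of-envelope sketch.  Every figure in it is an invented assumption for illustration, not a quote from any provider; the point is only that cloud converts a large, depreciating upfront liability into a monthly expense that can be cancelled.

```python
# Back-of-envelope CAPEX vs. OPEX comparison over five years.
# All figures are hypothetical assumptions, not vendor quotes.

YEARS = 5

def owned_tco(hardware_capex=2_000_000, annual_opex=400_000):
    """Owning: upfront hardware CAPEX plus yearly power, space,
    staffing, and maintenance costs."""
    return hardware_capex + annual_opex * YEARS

def cloud_tco(monthly_subscription=25_000):
    """Renting: equivalent capacity as a pure OPEX subscription."""
    return monthly_subscription * 12 * YEARS

print(f"Owned data center: ${owned_tco():,}")   # Owned data center: $4,000,000
print(f"Cloud subscription: ${cloud_tco():,}")  # Cloud subscription: $1,500,000
```

Even if the subscription figure were doubled, the owned facility still carries the depreciation, refresh, and decommissioning risk that the subscription does not.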

Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.

Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate it).  If we look at the US Government’s FedRAMP certification program as an example, security, compliance, and management controls are now a standard – open for all organizations to study and adopt as appropriate.

So we get back to the original question: what would justify a company in continuing to develop data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, and performance, at lower cost, than operating local or colocated physical IT infrastructure?  Sure, exceptions exist, including specialized hardware interfaces to support mining, health care, or other very specialized activities.  However, if you’re not in the computer or switch manufacturing business – can you really continue justifying CAPEX expenditures on IT?

IT is quickly becoming a utility.  As businesses we do not plan to build roads, water distribution, or our own power generation plants.  Compute, telecom, and storage resources are becoming a utility as well, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass them by.

Cloud computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business.  In theory, cloud computing greatly enhances an organization’s ability to decommission inefficient data center resources, and, even more importantly, eases the process an organization must go through when moving toward integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding, at a high level, a vision of cloud computing.

However these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the interoperability needs of a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease with which most developers could accelerate higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say; however the reality, in particular with PaaS and SaaS libraries and services, is that few fully interchangeable components exist, and any information sharing involves a compromise in flexibility.

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.
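To make the lock-in point concrete, here is a minimal sketch of the decoupling the Open Group statement implies.  All class and method names are hypothetical, invented for illustration; the idea is simply that application code targets a neutral contract, so switching providers means rewriting one adapter rather than the application.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral storage contract the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class VendorAStore(ObjectStore):
    """Adapter wrapping one provider's proprietary API (hypothetical)."""
    def put(self, key: str, data: bytes) -> None:
        ...  # translate to vendor A's SDK calls here
    def get(self, key: str) -> bytes:
        ...  # and translate the response back

class VendorBStore(ObjectStore):
    """Adapter for a competing provider; only this layer changes."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # Application logic never touches a vendor SDK directly,
    # so the cost of switching is confined to the adapter layer.
    store.put(f"reports/{report_id}", body)
```

The cheaper that adapter layer is to write, by the Open Group’s measure, the closer we are getting to real interoperability.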

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand vendors produce services and applications based on interoperability and data portability standards.  No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers: accept the need for service-orientation within all existing or planned IT services and systems.  Embrace service-oriented architectures and enterprise architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.

The NexGen Cloud Computing Conference kicked off in San Diego on Thursday with a fair amount of hype and a lot of sales people.  Granted, the intent of the conference is for cloud computing vendors to find and develop either sales channels or business development opportunities within the market.

For an engineer, the conference will probably result in a fair amount of frustration, but it will at least provide a level of awareness of how an organization’s sales, marketing, and business teams are approaching their vision of a cloud computing product or service delivery.

However, one presentation stood out.  Terry Hedden, from Marketopia, made some very good points.  His presentation was entitled “How to Build a Successful Cloud Practice.”  While the actual presentation is not so important, he made several points, which I’ll refer to as “Heddenisms,” that struck me as important enough, or amusing enough, to record.

Heddenisms for the Cloud Age:

  • Entire software companies are transitioning to SaaS development.  Lose the idea of licensed software – think of subscription software.
  • Integrators and consultants have a really good future – prepare yourself.
  • The younger generation does not attend tech conferences.  Only old people who think they can sell things, get new jobs, or are trying to attach some knowledge to the junk they are selling (the last couple of points are mine).
  • Companies selling hosted SaaS products and services are going to kill those who still hang out at the premises.
  • If you do not introduce cloud services to your customers, your competitor will introduce cloud to your customers.
  • If you are not aspiring to be a leader in cloud, you are not relevant.
  • There is little reason to go into the IaaS business yourself.  Let the big guys build infrastructure – you can make higher margins selling their stuff.  In general, IaaS companies are really bad sales organizations (also mine…).
  • Budgets for security at companies like Microsoft are much higher than for smaller companies.  Thus, it is likely Microsoft’s ability to design, deploy, monitor, and manage secure infrastructure is much higher than the average organization.
  • Selling cloud is easy – you are able to relieve your customers of most up front costs (like buying hardware, constructing data centers, etc.).
  • If you simply direct your customer to Microsoft or Google’s website for a solution, then you are adding no value to your customer.
  • If you hear the word “APP” come up in a conversation, just turn around and run away.
  • If you assist a company in a large SaaS implementation (successfully), they will likely be your customer for life.
  • Don’t do free work or consulting – never (this really hurt me to hear – guilty as charged…).
  • Customers have one concern, and one concern only – Peace of Mind.  Make their pains go away, and you will be successful.  Don’t give them more problems.
  • Customers don’t care what is behind the curtain (such as what kind of computers or routers you are using).  They only care about you taking the pain of stuff that doesn’t make them money away from their lives.
  • Don’t try to sell to IT guys and engineers.  Never.  Never. Never.
  • The best time to work with a company is when they are planning for their technology refresh cycles.

Hedden was great.  While he may have a bit of contempt for engineers (I have thick skin, I can live with the wounds), he provided a very logical and realistic view of how to approach selling and deploying cloud computing.

Now about missing the point.  Perhaps the biggest shortfall of the conference, in my opinion, was that most presentations and even vendor efforts addressed only single silos of issues.  Nobody provided an integrated viewpoint of how cloud computing is actually just one tool an organization can use within a larger, planned architecture.

No doubt I have become bigoted myself after several years of plodding through TOGAF, ITIL, COBIT, Risk Assessments, and many other formal IT-supporting frameworks.  Maybe a career in the military forced me into systems thinking and structured problem solving.  Maybe I lack a higher level of innovative thinking or creativity – but I crave a structured, holistic approach to IT.

Sadly, I got no joy at the NexGen Cloud Computing Conference.  But I would have driven from LA to San Diego just for Hedden’s presentation and training session – that alone made the cost of the conference and time a valuable investment.

Just finished another ICT-related technical assistance visit with a developing country government. Even in mid-2014, I spend a large amount of time teaching basic principles of enterprise architecture, and the need for adding form and structure to ICT strategies.

Service-oriented architectures (SOA) have been around for quite a long time, with some references going back to the 1980s. ITIL, COBIT, TOGAF, and other ICT standards or recommendations have been around for quite a long time as well, with training and certifications part of nearly every professional development program.

So why is the idea of architecting ICT infrastructure still an abstraction to so many in government and even private industry? It cannot be the lack of training opportunities or publicly available reference materials. It cannot be the lack of technology, or a lack of consultants readily willing to assist in deploying EA, SOA, or interoperability within any organization or industry cluster.

During the past two years we have run several Interoperability Readiness Assessments within governments. The assessment initially takes the form of a survey, and is distributed to a sample of 100 or more participants, with positions ranging from administrative task-based workers, to Cxx or senior leaders within ministries and government agencies.

Questions range from basic ICT knowledge to data sharing, security, and decision support systems.

While the idea of information silos is well-documented and understood, it is still quite surprising to see “siloed” attitudes prevalent in modern organizations.  Take the following question:

Question on Information Sharing

This question did not refer to sharing data outside of the government, but rather within the government.  The responses indicate a deep lack of trust when interacting with other government agencies, which will of course prevent any chance of developing a SOA or facilitating information sharing among agencies.  The end result is a lower level of both integrity and value in national decision support capability.

The Impact of Technology and Standardization

Most governments are considering or implementing data center consolidation initiatives.  There are several good reasons for this, including:

  • Cost of real estate, power, staffing, maintenance, and support systems
  • Transition from CAPEX-based ICT infrastructure to OPEX-based
  • Potential for virtualization of server and storage resources
  • Standardized cloud computing resources

While all those justifications for data center consolidation are valid, their value potentially pales in comparison to the potential of more intelligent use of data across organizations, and even externally with outside agencies.  On this point, one senior government official stated:

“Government staff are not necessarily the most technically proficient.  This results in reliance on vendors for support, thought leadership, and in some cases contractual commitments.  Formal project management training and certification are typically not part of the capacity building of government employees.

Scientific approaches to project management, especially ones that lend themselves to institutionalization and adoption across different agencies will ensure a more time-bound and intelligent implementation of projects. Subsequently, overall knowledge and technical capabilities are low in government departments and agencies, and when employees do gain technical proficiency they will leave to join private industry.”

There is also an issue with a variety of international organizations going into developing countries or developing economies and offering free or low-cost single-use ICT infrastructure, such as for health-related agencies, which is not compatible with any other government-owned or operated applications or data sets.

And of course the more this occurs, the more difficult it becomes for government organizations to enable interoperability or data sharing, and thus the idea of an architecture or data sharing becomes either impossible or extremely difficult to implement.

The Road to EA, SOAs, and Decision Support

There are several actions to take on the road to meeting our ICT objectives.

  1. Include EA, service delivery (ITIL), governance (COBIT), and SOA training in all university and professional ICT education programs.  It is not all about writing code or configuring switches, we need to ensure a holistic understanding of ICT value in all ICT education, producing a higher level of qualified graduates entering the work force.
  2. Ensure government and private organizations develop or adopt standards or regulations which drive enterprise architecture, information exchange models, and SOAs as a basic requirement of ICT planning and operations.
  3. Ensure executive awareness and support, preferably through a formal position such as the Chief Information Officer (CIO).  Principles developed and published via the CIO must be adopted and governed by all organizations.

Nobody expects large organizations, in particular government organizations, to change their cultures of information independence overnight.  This is a long-term evolution as the world continues to better understand the value locked within existing data sets, and begins creating new categories of data.  Big data, data analytics, and exploitation of both structured and unstructured data will empower those who are prepared, and leave those who are not prepared far behind.

For a government, not having the ability to access, identify, share, and analyze data created across agencies will inhibit effective decision support, with potential impact on disaster response, security, economic growth, and overall national quality of life.

If there is a call to action in this message, it is for governments to take a close look at how their national ICT policies, strategies, human capacity, and operations are meeting national objectives.  Prioritizing the use of EA and supporting frameworks or standards will provide better guidance across government, and all steps taken within the framework will add value to the overall ICT capability.


A good indication any new technology or business model is starting to mature is the number of certifications popping up related to that product, framework, or service.   Cloud computing is certainly no exception, with vendors such as Microsoft, Google, VMware, and IBM offering certification training for their own products, as well as organizations such as CompTIA and Architura competing for industry-neutral certifications.

Is this all hype, or is it an essential part of the emerging cloud computing ecosystem?  Can we remember the days when entry level Cisco, Microsoft, or other vendor certifications were almost mocked by industry elitists?

Much like the early Internet days of eEverything, cloud computing is at the point where most have heard the term, few understand the concepts, and marketing folk are exploiting every possible combination of the words to place their products in a favorable, forward leaning light.

So, what if executive management takes a basic course in cloud computing principles, or sales and customer service people take a Cloud 101 course?  Is that bad?

Of course not.  Cloud computing has the potential to be transformational to businesses, governments, organizations, and even individuals.  Business leaders need to understand the potential and impact of what a service-oriented cloud computing infrastructure might mean to their organization, the game-changing potential of integration and interoperability, the freedom of mobility, and the practical execution of basic cloud computing characteristics within their ICT environment.

A certification is not all about passing the test and getting the certificate.  As an instructor for the CompTIA course, I manage classes of 20 or more students ranging from engineers, to network operations center staff, to customer service and sales, to mid-level executives.  We’ve yet to encounter an individual who claims to have learned nothing from attending the course, and most leave with a very different viewpoint of cloud computing than they held prior to the class.

As with most technology driven topics, cloud computing does break into different branches – including technical, operations, and business utility.

The underlying technologies of cloud computing are probably the easiest part of the challenge, as ultimately skills will develop based on time, experience, and operation of cloud-related technologies.

The more difficult challenge is understanding what cloud computing may mean to an organization, both internally as well as on a global scale.  No business-related discussion of cloud computing is complete without consideration of service-oriented architectures, enterprise architectures, interoperability, big data, disaster management, and continuity of operations.

Business decisions on data center consolidation, ICT outsourcing, and other aspects of the current technology refresh or financial consideration will be more effective and structured when accompanied by a basic, high-level business understanding of the technologies underlying cloud computing.  As an approach to business transformation, complementary capabilities in enterprise architecture, service-oriented architectures, and IT service management will certainly help senior decision makers understand the relationship between cloud computing and their organizational planning.

Reading the news, clipping stories, and self-study may help decision makers understand the basic components of cloud computing and other supporting technologies. Taking an introductory cloud computing course, whether vendor-specific or vendor-neutral, will give enough background knowledge to at least engage in the conversation. Given the hype surrounding cloud computing, and the potential long-term consequences of making an uninformed decision, the investment in cloud computing training must be considered valuable at all levels of the organization, from technical staff to senior management.

Throughout 2012, large organizations and governments around the world continued to struggle with the idea of consolidating inefficient data centers, server closets, and individual “rogue” servers scattered around their enterprises or government agencies.  The issues included the cost of operating data centers, disaster management of information technology resources, and of course human factors centered on control, power, or retention of jobs in a rapidly evolving IT industry.

Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.

Our involvement in projects ranging from local, state, and national government levels in both the United States and other countries indicates a consistent need for answering the following concerns:

  • Existing IT infrastructure, including both IT and facility, is reaching the end of its operational life
  • Collaboration requirements between internal and external users are expanding quickly, driving an architectural need for interoperability
  • Decision support systems require access to both raw data, and “big data/archival data”

We would like to see an effort within the IT community to move in the following directions:

  1. Real effort at decommissioning and eliminating inefficient data centers
  2. All data and applications should fit into an enterprise architecture framework – regardless of the size of the organization or data
  3. Aggressive development of standards supporting interoperability, portability, and reuse of objects and data

Regardless of the very public failures experienced by cloud service providers over the past year, the reality is cloud computing as an IT architecture and model is gaining traction, and is not likely to go away any time soon.  As with any emerging service or technology, cloud services will continue to develop and mature, reducing the impact and frequency of failures.

Why would an organization continue to buy individual high-powered workstations, individual software licenses, and device-bound storage when the same application can be delivered to a simple display, or a wide variety of displays, with standardized web-enabled cloud (SaaS) applications that store mission-critical data and images on a secure storage system at a secure site?  Why not facilitate the transition from CAPEX to OPEX, license to subscription, infrastructure to product and service development?

In reality, unless an organization is in the hardware or software development business, there is very little technical justification for building and managing a data center.  This includes secure facilities supporting military or other sensitive sites.

The cost of building and maintaining a data center, compared with either outsourcing into a commercial colocation site or virtualizing data, applications, and network access requirements, has gained the attention of CFOs and CEOs, requiring IT managers to more explicitly justify the cost of building internal infrastructure vs. outsourcing.  This is quickly becoming a very difficult task.

Money spent on data center infrastructure is lost to the organization.  The cost of labor is high, and the cost of energy, space, and maintenance is high.  That is money that could be better applied to product and service development, customer service capacity, or other revenue- and customer-facing activities.

The Bandwidth Factor

The one major limitation the IT community will need to overcome, as data center consolidation continues and cloud services become the norm, is bandwidth.  Applications such as streaming video, unified communications, and data-intensive applications will need more bandwidth.  The telecom companies are making progress, having deployed 100Gbps backbone capacity in many markets.  However, this capacity will need to continue growing quickly to meet the needs of organizations accessing data and applications stored or hosted within a virtual or cloud computing environment.

Consider a national government’s IT requirements.  The government, like most, is based within a metro area.  The agencies and departments consolidate their individual data centers and server closets into a central or reduced number of facilities.  Government interoperability frameworks begin to make small steps allowing cross-agency data sharing, and individual users need access to a variety of applications and data sources to fulfill their decision support requirements.

Consider, for example, a GIS (Geospatial/Geographic Information System) with multiple demographic or other overlays.  Individual users will need to display data that may be drawn from several data sources, through GIS applications, rendering a large amount of complex data on individual display screens.  Without broadband access between both the user and the application, as well as between the application and its data sources, the result will be a very poor user experience.
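A rough, purely illustrative calculation shows why.  Both inputs below are assumptions picked for easy arithmetic, not measurements from any GIS deployment:

```python
# Illustrative bandwidth estimate for a single GIS user.
# Both inputs are assumed figures, chosen only to show the arithmetic.

composite_view_mb = 10   # assumed size of one multi-overlay map view
acceptable_wait_s = 2    # assumed tolerable refresh time

required_mbps = composite_view_mb * 8 / acceptable_wait_s
print(f"~{required_mbps:.0f} Mbps per user")                    # ~40 Mbps
print(f"~{required_mbps * 100 / 1000:.0f} Gbps for 100 users")  # ~4 Gbps
```

One analyst refreshing complex overlays needs tens of megabits; a hundred concurrent users across consolidated agencies quickly consume multi-gigabit links.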

Another example is using the capabilities of video conferencing, desktop sharing, and interactive persistent-state application sharing.  Without adequate bandwidth this is simply not possible.

Revisiting the “4th Utility” for 2013

The final vision on the 2013 “wishlist” is that we, as an IT industry, continue to acknowledge the need for developing the 4th Utility.  This is the idea that broadband communications, processing capacity (including SaaS applications), and storage are the right of all citizens.  Much like the first three utilities – roads, water, and electricity – the 4th Utility must be a basic part of all discussions of national, state, or local infrastructure.  As we move further into the millennium, Internet-enabled, or something like Internet-enabled, communications will be an essential part of all our lives.

The 4th Utility requires high-capacity fiber optic infrastructure and broadband wireless be delivered to any location within the country which supports a community, or an individual connected to a community.   We’ll have to pay a fee to access the utility (same as the other utilities), but it is our right and obligation to deliver it.

2013 will be a lot of fun for us in the IT industry.  Cloud computing is going to impact everybody – one way or the other.  Individual data centers will continue to close.  Service-oriented architectures, enterprise architecture, process modeling, and design efficiency will drive a lot of innovation.  We’ll lose some players, gain some players, and we’ll be in a better position at the end of 2013 than today.

Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times designed either to enlist attendees in contributing to Gartner research, or simply to provide conference content promoting conference sponsors.

For example, the sessions “To the Point:  When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys.  Those surveys were apparently the same as presented, in the same sessions, for several years, and the speaker immediately referenced this year’s results against results from the same survey questions from the past two years.  This would lead a casual attendee to believe nothing radically new is being presented on the above topics, and that attendees are mostly contributing to further trend analysis research that will eventually show up in a commercial Gartner research note.

Gartner analyst and speaker on the topic of “When Open Meets Cloud,” Aneel Lakhani, did make a couple of useful, if somewhat obvious, points in his presentation:

  • You cannot secure complete freedom from vendors, regardless of how much open source you adopt
  • Open source can actually be more expensive than commercial products
  • Interoperability is easy to say, but a heck of a lot more complicated to implement
  • Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
  • If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed

However, analyst Dave Russell, speaker on the topic of “Backup/Recovery,” was a bit more cut-and-paste in his approach: lots of questions to match against last year’s conference, and a strong emphasis on tape as a continuing, if not growing, medium for disaster recovery.

The problem with this presentation was that the discussion centered on backing up data, with very little on business continuity.  In fact, in one slide he referenced a recovery point objective (RPO) of one day for backups.   What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?
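To put a one-day RPO in perspective, a quick illustrative calculation (the transaction rate is an assumed example figure, not data from the presentation):

```python
# What a one-day recovery point objective (RPO) actually concedes.
# The transaction rate is an assumed example, not a measured figure.

rpo_hours = 24                 # backups taken once per day
transactions_per_hour = 5_000  # assumed load, mid-size online business

worst_case_lost = rpo_hours * transactions_per_hour
print(f"Up to {worst_case_lost:,} transactions unrecoverable")  # 120,000
```

A business at even that modest rate could lose 120,000 transactions in the worst case – every order, payment, and record since the last nightly tape.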

In addition, there was no discussion on the need for compatible hardware in a disaster recovery site that would allow immediate or rapid restart of applications.  Having data on tape is fine.  Having mainframe archival data is fine.  But without a business continuity capability, it is likely any organization will suffer significant damage in their ability to function in their marketplace.  Very few organizations today can absorb an extended global presence outage or marketplace outage.

The conference continues until Thursday, and we will look for more positive approaches to data center and cloud computing topics.

Gartner’s 2012 Data Center Conference in Las Vegas is notable for not yielding any major surprises.  While drawing an uncanny number of attendees (the stats are not available, but it is clear Gartner is having a very good conference), most of the sessions appear to simply reaffirm what everybody really knows already, serving to reinforce the reality that data center consolidation, cloud computing, big data, and the move to an interoperable framework will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing.  Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and focus their CAPEX on cheaper “skinless” servers.  Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world with a comparison of the value DevOps brings versus ITIL.  He described ITIL as a cumbersome burden on organizational agility, while DevOps is a culture-changer that allows small groups to quickly respond to challenges.  Haight emphasized the frequently stressful relationship between development organizations and operations organizations, where operations demands stability and quality, while development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as being “ITIL minus CRAP.”  Of course, most of his supporting slides for moving to DevOps looked eerily like an ITIL process….

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All were valuable introductions for those who are considering making a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large box data centers.  While it is certainly possible to build a data center up to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to either natural, human, or equipment failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which if disrupted could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are hitting up to 8kW per cabinet.  This is also scary, if true in extended deployments, as it could result in serious operational problems if cooling systems were disrupted: the heat generated in those cabinets will quickly become extreme.  It would also be interesting if companies like Raging Wire and other colocation companies considered developing a real-time CFD monitor for their data center floors, allowing better monitoring and predictability than simple zone monitoring solutions.
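The arithmetic behind that concern is straightforward.  The conversion factors below are standard engineering constants; the scenario itself is illustrative:

```python
# Cooling load implied by an 8 kW cabinet. Conversion factors are
# standard engineering constants; the scenario itself is illustrative.

KW_TO_BTU_HR = 3412       # 1 kW of IT load ≈ 3,412 BTU/hr of heat
BTU_HR_PER_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr

cabinet_kw = 8
btu_hr = cabinet_kw * KW_TO_BTU_HR
print(f"{btu_hr:,} BTU/hr ≈ {btu_hr / BTU_HR_PER_TON:.1f} tons per cabinet")
# 27,296 BTU/hr ≈ 2.3 tons of cooling for every cabinet in the row
```

At roughly 2.3 tons of cooling per cabinet, even a short cooling outage in a row of such cabinets leaves very little thermal headroom.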

The best presentation of the day came at the end: “Big Data is Coming to Your Data Center.”  Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring.  Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the session ended at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.

CloudGov 2012 Highlights Government Cloud Initiatives


Federal, state, and local government agencies gathered in Washington D.C. on 16 February to participate in Cloud/Gov 2012, held at the Westin Washington D.C.  With keynotes by David L. McLure, US General Services Administration, and Dawn Leaf, NIST, vendors and government agencies were brought up to date on federal cloud policies and initiatives.

Of special note were updates on the FedRAMP program (a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services) and NIST’s progress on standards.  “The FedRAMP process chart looks complicated,” noted McLure, “however we are trying to provide the support needed to accelerate the (FedRAMP vendor) approval process.”

McLure also provided a roadmap for FedRAMP implementation, with FY13/Q2 targeted for full operation and FY14 planned for sustaining operations.

In a panel focusing on government case studies, David Terry from the Department of Education commented that “mobile phones are rapidly becoming the access point (to applications and data) for young people.”  Applications (SaaS) should be written to accommodate mobile devices, and “auto-adjust to user access devices.”

Tim Matson from DISA highlighted the US Department of Defense’s Forge.Mil initiative providing an open collaboration community for both the military and development community to work together in rapidly developing new applications to better support DoD activities.  While Forge.Mil has tighter controls than GSA (US General Services Administration) standards, Matson emphasized “DISA wants to force the concept of change into the behavior of vendors.” Matson continued explaining that Forge.Mil will reinforce “a pipeline to support continuous delivery” of new applications.

While technology and process change topics provided the majority of discussion points, most of them enthusiastic, David Mihalchik from Google advised “we still do not know the long term impact of global collaboration.  The culture is changing, forced on by the idea of global collaboration.”

Other areas of discussion among panel members throughout the day included the need for establishing and defining service level agreements (SLAs) for cloud services.  Daniel Burton from SalesForce.Com explained their SLAs are broken into two categories: SLAs based on subscription services, and those based on specific negotiations with government customers.   Other vendors took a stab at explaining their SLAs without giving specific examples, leaving the audience without a solid answer.

NIST Takes the Leadership Role

The highlight of the day was provided by Dawn Leaf, Senior Executive for Cloud Computing with NIST.  Leaf provided very logical guidance for all cloud computing stakeholders, including vendors and users.

“US industry requires an international standard to ensure (global) competitiveness,” explained Leaf.  In the past, US vendors and service providers have developed standards which were not compatible with European and other standards, notably in wireless telephony, and one of NIST’s objectives is to participate in developing a global standard for cloud computing to prevent a repeat of that history.

Cloud infrastructure and SaaS portability is also a high interest item for NIST.  Leaf advises that “we can force vendors into demonstrating their portability.  There are a lot of new entries in the business, and we need to force the vendors into proving their portability and interoperability.”

Leaf also reinforced the idea that standards are developed in the private sector.  NIST provides guidance and an architectural framework for vendors and the private sector to use as reference when developing those specific technical standards.  However, Leaf also had one caution for private industry: “industry should try to map their products to NIST references, as the government is not in a position to wait” for extended debates on the development of specific items, when the need for cloud computing development and implementation is immediate.

Further information on the conference, with agendas and participants is available at www.sia.net


2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts/sq ft metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Plan to Reform Federal IT Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even internet content providers.  The economics of operating a data center or server closet – the costs of real estate, power, and hardware, in addition to service and licensing agreements – are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service busses, and consolidated data.  Not so much an issue of losing control, but in many cases because it brings transparency to their operation.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well it might make you uncomfortable…

A lesson learned while attending a  fast pitch contest during late 2009 in Irvine, CA…  An enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives.  He stated he was looking for a $5 million investment in his startup company.

A panelist asked what the money was for, and the entrepreneur stated “.. and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within 2 years – why would he want to be stuck with the liability of a data center and hardware if the company failed? The panelist further stated, “don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From applications as simple as Google Docs, to Microsoft 365 and other desktop-replacement application suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television monitor and access all the applications normally used at a desktop.  You can give a presentation off your phone, update company documents, or perform nearly any other IT function, with the only limitation being the requirement for broadband Internet access (see #5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is files will be maintained on servers, much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers. It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most small enterprises – and large enterprises – are either consolidating their IT into professionally managed data centers, or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction providing best practices, quality standards, design standards, and even standards for evaluation.  Power efficiency using metrics such as the PUE/DCiE provide additional guidance on power management, data center management, and design.
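For readers new to those metrics, here is a minimal sketch of how the two are defined (PUE and DCiE come from The Green Grid; the sample inputs are illustrative, not from any real facility):

```python
# Power Usage Effectiveness (PUE) and Data Center infrastructure
# Efficiency (DCiE), as popularized by The Green Grid.
# Sample inputs below are illustrative, not from any real facility.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Total facility power / IT equipment power.
    1.0 is the theoretical ideal; legacy rooms often run 2.0 or worse."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """The reciprocal of PUE, expressed as a percentage."""
    return it_equipment_kw / total_facility_kw * 100

print(pue(1500, 1000))   # 1.5  -> 500 kW spent on cooling, conversion losses
print(dcie(1500, 1000))  # ~66.7% of facility power reaches IT equipment
```

Every point of PUE improvement is electricity that reaches servers instead of chillers, which is exactly why the metric now drives data center design.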

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, discipline, and security.  Electricity is as valuable as platinum, and cooling and heat are managed more closely than inmates at San Quentin.  With every standards organization now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D. in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance, and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, over-socialization of cultures and thoughts, and a huge intellectual time sink that sucks us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not weigh the 1000 good things about the Internet against each 1 negative aspect, it might seem a pretty scary place in which to have all future generations exposed and indoctrinated.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will produce a global generation of intellectual migrant workers, intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive, and if they prove true, will leave the United States and other countries with stronger capacities to improve their national quality of life, and bring us all another step closer together.

Happy New Year!