Cloud computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business. In theory, cloud computing not only greatly enhances an organization's ability to decommission inefficient data center resources, but, even more importantly, eases an organization's move toward integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding the vision of cloud computing at a high level.

However, these definitions, while good for articulating the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential for enabling data and systems resources to meet the need for data interoperability in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards. The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service. The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and web hosting services, was thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, the DMTF, and other organizations have developed some level of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure remain largely tightly coupled, restricting the ease with which developers can accelerate higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

“…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say; the reality, however – particularly with PaaS and SaaS libraries and services – is that few fully interchangeable components exist, and any information sharing involves a compromise in flexibility.
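To make the tradeoff concrete, here is a minimal sketch (all names are illustrative, not drawn from any standard) of the loose coupling these standards aim to enable: the application codes against a small vendor-neutral interface, and provider-specific adapters sit behind it. This is straightforward for something like object storage; the point above is that equivalent seams rarely exist yet for PaaS and SaaS services.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Vendor-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; a real one would wrap S3, Azure Blob, etc."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def migrate(source: ObjectStore, target: ObjectStore, keys):
    """Portability in practice: copy data between providers
    without touching application logic."""
    for key in keys:
        target.put(key, source.get(key))

# Swapping providers is a data copy, not a rewrite.
src, dst = InMemoryStore(), InMemoryStore()
src.put("report.csv", b"q1,q2\n10,12\n")
migrate(src, dst, ["report.csv"])
assert dst.get("report.csv") == src.get("report.csv")
```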

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand vendors produce services and applications based on interoperability and data portability standards. No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers: accept the need for service-orientation within all existing or planned IT services and systems. Embrace Service-Oriented Architectures and Enterprise Architecture, and avoid, at all costs, the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to keep forcing adoption of, and compliance with, standards by all vendors. Do not accept anything which does not fully support the need for data interoperability.

Wireless Mesh Networking (WMN) has been around for quite a few years. However, not until recently, when protesters in Cairo and Hong Kong used utilities such as FireChat to bypass the mobile phone systems and communicate directly with each other, did mesh networking become well known.

A WMN establishes an ad hoc communications network, using the wireless radios (802.11 WiFi, and related standards such as 802.15/16) in participants' mobile phones and laptops to connect with each other and extend the connectable portion of the network to any device with WMN software. Some devices may act as clients, some as mesh routers, and some as gateways. There are of course more technical issues to fully understand with mesh networks, but the bottom line is that if you have an Android or iOS device, or a software-enabled laptop, you can join, extend, and participate in a WMN.
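For a feel of how that relaying works, below is a toy, self-contained sketch of controlled flooding, the basic mechanism many mesh protocols build on. It is a simplification for illustration, not any specific WMN protocol; real implementations add routing metrics, link-quality awareness, and radio-level details.

```python
import uuid

class MeshNode:
    """Toy WMN node using controlled flooding: relay every message
    you have not seen before, until its hop limit (TTL) expires."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []   # nodes currently within radio range
        self.seen = set()     # message IDs already handled
        self.inbox = []       # messages delivered to this node

    def send(self, text, ttl=5):
        self._relay({"id": uuid.uuid4().hex, "src": self.name,
                     "text": text, "ttl": ttl})

    def _relay(self, msg):
        if msg["id"] in self.seen or msg["ttl"] <= 0:
            return            # drop duplicates and expired messages
        self.seen.add(msg["id"])
        self.inbox.append(msg)
        for peer in self.neighbors:
            peer._relay({**msg, "ttl": msg["ttl"] - 1})

# Three phones in a line: A cannot reach C directly, but B relays.
a, b, c = MeshNode("A"), MeshNode("B"), MeshNode("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.send("Is anyone out there?")
print([m["text"] for m in c.inbox])   # ['Is anyone out there?']
```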

In locations highly vulnerable to natural disasters such as hurricanes, tornadoes, earthquakes, or wildfires, access to communications can most certainly mean the difference between surviving and not surviving. However, during disasters, communications networks are likely to fail.

The same concept used to allow protesters in Cairo and Hong Kong to communicate outside of the mobile and fixed telephone networks could, and possibly should, have a role to play in responding to disasters.

An interesting use of this type of network was highlighted in a recent novel by Matthew Mather, entitled "CyberStorm." Following a cyber attack on the US Internet and connected infrastructures, much of the fixed communications infrastructure was rendered inoperable, and utilities depending on networks also failed. An ad hoc WMN was built by some enterprising technicians using the wireless radios available in most smartphones. This primarily supported messaging, but it did allow citizens to communicate with each other – and with the police – by interconnecting their smartphones into the mesh.

We have already embraced mobile phones, with SMS instant messaging, in many of our country's emergency notification systems. In California we can receive instant notifications from emergency services via SMS and Twitter, in addition to reverse 911. This actually works very well – up to the point of a disaster.

WMN may provide a model for ensuring communications following a disaster. As nearly every American now has a mobile phone with a WiFi radio, the basic requirements for a mesh network are already in our hands. The main barrier today with WMN is the distance limitation between participating access devices. With luck, WiFi radios will continue to increase in power and range with each new generation, reducing distance barriers.

There are quite a few WMN clients available for smartphones, tablets, and WiFi-enabled devices today. While many are currently used as instant messaging and social platforms, just as with other social communications applications such as Twitter, the underlying technology can be put to many different uses – including, of course, disaster communications.

Again, the main limitations on using WMNs in disaster planning today are the limited number of participating nodes (devices with a WiFi radio), the distance limitations of existing wireless radios and protocols, and the fact that very few people are even aware of the concept of WMNs and their potential deployments or uses. The more participants in a WMN, the more robust it becomes, the better performance it will support, and the better the chance your voice will be heard in the event of a disaster.

Here are a couple of WMN disaster support ideas I'd like to either develop or see others develop:

  • Much like the existing 911 network, a WMN standard could and should be developed for all mobile phone devices, tablets, and laptops with a wireless radio
  • Each mobile device should include an "app" for disaster communications
  • Cities should attempt to install WMN-compatible routers and access points, particularly in areas at high risk for natural disasters, which could be expected to survive the disaster
  • Citizens in disaster-prone areas should be encouraged to add a solar charging device to their earthquake, wildfire, and other disaster-readiness kits to allow battery charging following an anticipated utility power loss
  • Survivable mesh-to-Internet gateways should be the responsibility of city government, while allowing citizen or volunteer gateways (including ham radio) to facilitate communications out of the disaster area
  • Emergency applications should include the ability to easily submit disaster status reports, including photos and video, to local, state, or FEMA Incident Management Centers (see the sketch below)
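As a sketch of that last idea, a disaster status report could be a small, transport-agnostic payload that any mesh client can queue and forward whenever a gateway becomes reachable. The field names here are purely illustrative assumptions; a real schema would be defined by the receiving agency.

```python
import json
from datetime import datetime, timezone

def status_report(reporter, lat, lon, severity, text, media_urls=()):
    """Assemble a minimal, transport-agnostic disaster status report.
    Field names are illustrative; a real schema would come from the
    receiving agency (e.g., a FEMA incident management system)."""
    return json.dumps({
        "type": "disaster_status_report",
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter,
        "location": {"lat": lat, "lon": lon},
        "severity": severity,       # e.g., 1 (minor) .. 5 (critical)
        "text": text,
        "media": list(media_urls),  # photos/video, sent once a gateway is reachable
    })

print(status_report("resident-0412", 34.05, -118.24, 4,
                    "Gas odor near 3rd and Main, two houses collapsed"))
```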

That is a start.

Take a look at Wireless Mesh Networks. Wikipedia has a great high-level explanation, and a Google search yields hundreds of entries. WMNs are nothing new but, as in the early days of the Internet, they are not getting a lot of attention. Maybe someday a WMN could save your life.

The scenario: a data center, late on a Saturday evening. A telecom distribution system fails, and operations staff are called in from their weekend to find the problem and restore operations as quickly as possible.

As time goes on, many customers begin to call in and open trouble tickets, upset at system outages and escalating customer disruptions.

The team spends hours trying to fix a rectifier providing DC power to a main telecommunications distribution switch, starting by replacing each system component one by one, hoping to find the guilty part. The team grows very frustrated due not only to fatigue, but also to their failure to solve the problem. After many hours the team finally realizes there is no issue with either the telecom switch or the rectifier supplying DC power to the switch. What could the problem be?

Finally, after many hours of troubleshooting, chasing symptoms, and hit-or-miss component replacements, an electrician discovers a panel circuit that has failed after many years of misuse (for the electrical engineers: a circuit that oxidized and shorted due to "over-amping," without preventive maintenance or routine checks).

The incident highlighted a reality: the organization working on the problem had very little critical thinking or problem-solving skill. They chased each obvious symptom, but never really addressed or successfully identified the underlying problem. Great technicians, poor critical thinkers. And a true story.

While this incident was a data center troubleshooting failure, we frequently fail to use good critical thinking not only in troubleshooting, but also in developing opportunities and solutions for our business users and customers.

A few years ago I took a break from the job and spent some time working on personal development. In addition to collecting certifications in TOGAF, ITIL, and other architecture-related subjects, I added a couple of additional classes, including the Kepner-Tregoe (K-T) and Kepner-Fourie (K-F) critical thinking and problem-solving courses.

Not bad schools of thought, and a good refresher course reminding me of those long since forgotten systems management skills learned in graduate school – heck, nearly 30 years ago.

Here is the problem: IT systems and business use of technology have developed rapidly over the past 10 years, and the rate of change appears to be accelerating. Processes and standards developed 10, 15, or 20 years ago are woefully inadequate to support much of our technology and business-related design, development, and operations. Tacit knowledge, tacit skills, and gut feelings cannot be relied on to correctly identify and solve the problems we encounter in our fast-paced IT world.

Keep in mind, this discussion is not only related to problem solving, but also works just as well when considering new product or solution development for new and emerging business opportunities or challenges.

Critical Thinking forces us to know what a problem (or opportunity) is, know and apply the differences between inductive and deductive reasoning, identify premises and conclusions, good and bad arguments, and acknowledge issue descriptions and explanations (Erlandson).

Critical thinking "religions" such as Kepner-Fourie provide a process and model for solving problems – not bad if you have the time to create and follow heavy processes, or, even better, can automate much of the process. However, even studying extensive systems like K-T and K-F will continue to drive the need to establish an appropriate system for responding to events.
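For a flavor of the discipline such a process imposes, here is a loose, illustrative paraphrase of a K-T-style "IS / IS NOT" problem specification applied to the rectifier story above – my sketch of the idea, not the licensed method.

```python
# A loose paraphrase of a Kepner-Tregoe-style "IS / IS NOT" problem
# specification -- not the licensed method, just the discipline it
# imposes: state what the problem is, what it is not, and ask what
# distinguishes the two before touching a single component.
problem_spec = {
    "what":   {"is": "DC output lost at telecom distribution switch",
               "is_not": "AC utility feed failure (building power is up)"},
    "where":  {"is": "a single distribution panel circuit",
               "is_not": "the rectifier or switch components (both test good)"},
    "when":   {"is": "Saturday evening, after years without maintenance",
               "is_not": "immediately after any recent change"},
    "extent": {"is": "one DC feed, many downstream customers",
               "is_not": "all feeds in the facility"},
}

def distinctions(spec):
    """Force the question: what is true of IS that is not true of IS NOT?"""
    for dim, facts in spec.items():
        print(f"{dim.upper():6} IS:     {facts['is']}")
        print(f"{'':6} IS NOT: {facts['is_not']}")

distinctions(problem_spec)
```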

Regardless of the approach you consider, repeated exposure to critical thinking concepts and practice will force us to intellectually step away from chasing symptoms or over-relying on tacit knowledge (automatic thinking) when responding to problems and challenges.

For IT managers, think of it as an intellectual ITIL Continual Service Improvement cycle – we always need to exercise our brains and thought processes. The status quo, or relying on time-honored solutions to problems, will probably not be sufficient to bring our IT organizations into the future. We need to keep ensuring our assumptions are based on facts, avoid undue influence – in particular from vendors – ensure our stakeholders have confidence in our problem and solution development process, and maintain a good awareness of the business and technology transformations impacting our actions.

In addition to the courses and critical thinking approaches listed above, exposure to and study of any of the following can only help ensure we continue to exercise and hone our critical thinking skills:

  • A3 Management
  • Toyota Kata
  • PDSA (Plan-Do-Study-Act)

And lots of other university or related courseware. For myself, I keep my interest alive by reading an occasional eBook (such as "How to Think Clearly: A Guide to Critical Thinking" by Doug Erlandson – great reading during long flights) and watching YouTube videos.

What do you “think?”

As IT professionals we have been overwhelmed with different standards for each component of architecture, service delivery, governance, security, and operations. Not only does IT need to ensure technical training and certification, but it is also expected to pursue certifications in ITIL, TOGAF, COBIT, PMP, and a variety of other frameworks – at a high cost in both time and money.

Wouldn’t it be nice to have an IT framework or reference architecture which brings all the important components of each standard or recommendation into a single model which focuses on the most important aspect of each existing model?

The Open Group is well known for publishing TOGAF (The Open Group Architecture Framework), in addition to a variety of other standards and frameworks related to Service-Oriented Architectures (SOA), security, risk, and cloud computing. In the past few years, recognizing the impact of broadband, cloud computing, SOAs, and the need for a holistic enterprise architecture approach to business and IT, the group has published many common-sense but powerful recommendations, such as:

  • TOGAF 9.1
  • Open FAIR (Risk Analysis and Assessment)
  • SOCCI (Service-Oriented Cloud Computing Infrastructure)
  • Cloud Computing
  • Open Enterprise Security Architecture
  • Document Interchange Reference Model (for interoperability)
  • and others.

The Open Group's latest project intended to streamline and focus IT systems development is the "IT4IT" Reference Architecture. While still in the development, or "snapshot," phase, IT4IT is surprisingly easy to read, understandable, and, most importantly, logical.

“The IT Value Chain and IT4IT Reference Architecture represent the IT service lifecycle in a new and powerful way. They provide the missing link between industry standard best practice guides and the technology framework and tools that power the service management ecosystem. The IT Value Chain and IT4IT Reference Architecture are a new foundation on which to base your IT operating model. Together, they deliver a welcome blueprint for the CIO to accelerate IT’s transition to becoming a service broker to the business.” (Open Group’s IT4IT Reference Architecture, v 1.3)

The IT4IT Reference Architecture acknowledges changes in both technology and business resulting from the incredible impact the Internet and automation have had on both enterprise and government use of information and data. However, the document also makes a compelling case that IT systems, theory, and operations have kept up with neither existing IT support technologies nor the business visions and objectives IT is meant to serve.

IT4IT's development team is a large, global collaborative effort including vendors, enterprises, telecommunications companies, academia, and consulting firms. This helps drive a vendor- and technology-neutral framework, focused more on running IT as a business than on conforming to a single vendor's product or service. Eventually, like all developing standards, IT4IT may push vendors and systems developers to provide a solid model and framework for developing business solutions, supporting greater interoperability and data sharing between both internal and external organizations.

The vision and objectives for IT4IT include two major components: the IT Value Chain and the IT4IT Reference Architecture. Within the IT4IT Core are sections providing guidance, including:

  • IT4IT Abstractions and Class Structures
  • The Strategy to Portfolio Value Stream
  • The Requirement to Deploy Value Stream
  • The Request to Fulfill Value Stream
  • The Detect to Correct Value Stream

Each of the above main sections has borrowed from, or further developed, ideas and activities from ITIL, COBIT, and TOGAF, but takes a giant leap by incorporating cloud computing, SOAs, and enterprise architecture into the product.
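As a rough mental model, the four value streams can be read as stages every IT service passes through. The activity names in this sketch are my paraphrase, not normative text from the snapshot:

```python
# The four IT4IT value streams named in the snapshot, with illustrative
# (not normative) examples of the activities each covers.
value_streams = {
    "Strategy to Portfolio": ["capture demand", "plan investment",
                              "manage the service portfolio"],
    "Requirement to Deploy": ["define requirements", "build and test",
                              "release into production"],
    "Request to Fulfill":    ["publish the service catalog", "take requests",
                              "provision and charge back"],
    "Detect to Correct":     ["monitor events", "diagnose incidents",
                              "drive problems to root cause"],
}

def trace(service):
    """Walk a service through the lifecycle, one value stream at a time."""
    for stream, activities in value_streams.items():
        print(f"{service} -> {stream}: {', '.join(activities)}")

trace("payroll-service")
```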

As the IT4IT Reference Architecture is completed and supporting roadmaps are developed, the IT4IT concept will no doubt find a large legion of supporters, as many, if not most, businesses and IT professionals find the certification and knowledge path for ITIL, COBIT, TOGAF, and other supporting frameworks either too expensive or too time consuming (both in training and implementation).

Take a look at IT4IT at the Open Group’s website, and let us know what you think.  Too light?  Not needed?  A great idea or concept?  Let us know.

The NexGen Cloud Computing Conference kicked off in San Diego on Thursday with a fair amount of hype and a lot of sales people. Granted, the intent of the conference is for cloud computing vendors to find and develop either sales channels or business development opportunities within the market.

For an engineer, the conference will probably result in a fair amount of frustration, but it will at least provide a level of awareness of how an organization's sales, marketing, and business teams are approaching their vision of cloud computing product or service delivery.

However, one presentation stood out. Terry Hedden, from Marketopia, made some very good points. His presentation was entitled "How to Build a Successful Cloud Practice." While the actual presentation is not so important, he made several points, which I'll refer to as "Heddenisms," that struck me as important enough, or amusing enough, to record.

Heddenisms for the Cloud Age:

  • Entire software companies are transitioning to SaaS development.  Lose the idea of licensed software – think of subscription software.
  • Integrators and consultants have a really good future – prepare yourself.
  • The younger generation does not attend tech conferences. Only old people attend – people who think they can sell things, want new jobs, or are trying to attach some knowledge to the junk they are selling (the last couple of points are mine).
  • Companies selling hosted SaaS products and services are going to kill those who still hang out on-premises.
  • If you do not introduce cloud services to your customers, your competitor will introduce cloud to your customers.
  • If you are not aspiring to be a leader in cloud, you are not relevant.
  • There is little reason to go into the IaaS business yourself.  Let the big guys build infrastructure – you can make higher margins selling their stuff.  In general, IaaS companies are really bad sales organizations (also mine…).
  • Budgets for security at companies like Microsoft are much higher than for smaller companies.  Thus, it is likely Microsoft’s ability to design, deploy, monitor, and manage secure infrastructure is much higher than the average organization.
  • Selling cloud is easy – you are able to relieve your customers of most up front costs (like buying hardware, constructing data centers, etc.).
  • If you simply direct your customer to Microsoft or Google's website for a solution, then you are adding no value to your customer.
  • If you hear the word “APP” come up in a conversation, just turn around and run away.
  • If you assist a company in a large SaaS implementation (successfully), they will likely be your customer for life.
  • Don't do free work or consulting – ever (this really hurt me to hear – guilty as charged…).
  • Customers have one concern, and one concern only – Peace of Mind.  Make their pains go away, and you will be successful.  Don’t give them more problems.
  • Customers don't care what is behind the curtain (such as what kind of computers or routers you are using). They only care that you take away the pain of the stuff that doesn't make them money.
  • Don’t try to sell to IT guys and engineers.  Never.  Never. Never.
  • The best time to work with a company is when they are planning for their technology refresh cycles.

Hedden was great. While he may have a bit of contempt for engineers (I have thick skin, I can live with the wounds), he provided a very logical and realistic view of how to approach selling and deploying cloud computing.

Now, about missing the point. Perhaps the biggest shortfall of the conference, in my opinion, was that most presentations, and even vendor efforts, addressed only single silos of issues. Nobody provided an integrated viewpoint of how cloud computing is actually just one tool an organization can use within a larger, planned architecture.

No doubt I have become bigoted myself after several years of plodding through TOGAF, ITIL, COBIT, Risk Assessments, and many other formal IT-supporting frameworks.  Maybe a career in the military forced me into systems thinking and structured problem solving.  Maybe I lack a higher level of innovative thinking or creativity – but I crave a structured, holistic approach to IT.

Sadly, I got no joy at the NexGen Cloud Computing Conference. But I would have driven from LA to San Diego just for Hedden's presentation and training session – that alone made the cost of the conference and the time a valuable investment.

In 2009 we began consulting jobs with governments in developing countries, with the primary objective of consolidating data centers across government ministries and agencies into centralized, high-capacity, high-quality data centers. At the time, nearly all individual ministry or agency data infrastructure was built into small computer rooms or server closets with some added "brute force" air conditioning, no backup generators, no data backup, superficial security, and lots of other ailments.

The vision and strategy was that if we consolidated inefficient, end-of-life, and high-risk IT infrastructure into a standardized and professionally managed facility, national information infrastructure would not only be more secure, but, through standardization, volume purchasing agreements, some server virtualization, and development of broadband infrastructure, most of the IT needs of government would be easily fulfilled.

Then of course cloud computing began to mature, and the underlying technologies of Infrastructure as a Service (IaaS) became feasible. Now, not only were governments able to decommission inefficient and high-risk IS environments, they could also build virtual data centers with levels of on-demand compute, storage, and network resources. Basic data center replacement.

Even the remaining committed "server hugger" IT managers and fiercely independent governmental organizations could hardly argue with the benefits of having access to disaster recovery storage capacity through the centralized data center.

As the years passed and we entered 2014, not only did cloud computing mature as a business model, but senior management began to increase their awareness of various aspects of cloud computing, including the financial benefits, standardization of IT resources, the essential characteristics of cloud computing, and the potential for Platform and Software as a Service (PaaS/SaaS) to improve both business agility and internal decision support systems.

At the same time, information and organizational architecture, governance, and service delivery frameworks such as TOGAF, COBIT, ITIL, and risk analysis training reinforced the value of both data and information within an organization, and the need for IT systems to support higher-level architectures supporting decision support systems and market interactions (including government-to-government, business, and citizen interactions for the public sector).

2015 will bring cloud computing and architecture together at levels just becoming comprehensible to much of the business and IT world. The Open Group has taken a good first stab at building a standard for this marriage with its Service-Oriented Cloud Computing Infrastructure (SOCCI). According to the SOCCI standard:

“Infrastructure is a foundational element for enterprise architecture. Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service.

Service-orientation principles originated in the business and application architecture arena. After repeated, successful application of these principles to application architecture, IT has evolved to extending these principles to the infrastructure.”

At first glance the SOCCI standard appears to be a document which creates a mapping between enterprise architecture (TOGAF) and cloud computing. At second glance, it really steps toward tightening the loose coupling of standard service-oriented architectures through the use of cloud computing tools included with all service models (IaaS/PaaS/SaaS).

The result is an architectural vision which is easily capable of absorbing existing IT requirements, as well as incorporating emerging big data analytics models, interoperability, and enterprise architecture.

Since the early days of 2009, discussion topics with government and enterprise customers have shown a marked transition from simply justifying the decommissioning of high-risk data centers to managing data sharing and interoperability, and to the potential for over-standardization and other service delivery barriers which might inhibit innovation – or the ability of business units to respond quickly to rapidly changing market opportunities.

2015 will be an exciting year for information and communications technologies.  For those of us in the consulting and training business, the new year is already shaping up to be the busiest we have seen.

Now that We Have Adopted IaaS…

Providing guidance or consulting to organizations on cloud computing topics can be really easy, or really tough. In the past, most of the initial engagement was dedicated to training and building awareness with your customer. The next step was finding a high-value, low-risk application or service that could be moved to Infrastructure as a Service (IaaS) to solve an immediate problem, normally associated with disaster recovery or data backups.

As the years have passed, the dynamics have changed. On one hand, IT professionals and CIOs began to establish better knowledge of what virtualization, cloud computing, and outsourcing could do for their organization. CFOs became aware of the financial potential of virtualization and cloud computing, and a healthy dialog developed between IT, operations, business units, and the CFO.

The “Internet Age” has also driven global competition down to the local level, forcing nearly all organizations to respond more rapidly to business opportunities.  If a business unit cannot rapidly respond to the opportunity, which may require product and service development, the opportunity can be lost far more quickly than in the past.

In the old days, procurement of IT resources could require a fairly lengthy cycle. In the Internet Age, if an IT procurement cycle takes more than six months, there is little chance of matching the greatly shortened development cycles that competitors on other continents – or across the street – may be able to achieve.

With IaaS, the procurement cycle for IT resources can shrink to minutes, allowing business units to spend far more time developing products, services, and solutions rather than dealing with the frustration of being powerless to respond to short-window opportunities. This addresses the essential cloud characteristics of rapid elasticity and on-demand self-service.
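As a sketch of what a minutes-long procurement cycle looks like in practice, here is an illustrative call against a hypothetical IaaS REST API. The endpoint, fields, and token handling are assumptions for illustration; every major provider exposes an equivalent, provider-specific call.

```python
import requests  # calls a generic, *hypothetical* IaaS REST endpoint

IAAS_API = "https://iaas.example.com/v1"  # placeholder, not a real provider
TOKEN = "..."                             # credential obtained out of band

def provision_server(name, cpus=2, ram_gb=8):
    """On-demand self-service: what used to be a months-long procurement
    cycle becomes a single authenticated API call."""
    resp = requests.post(
        f"{IAAS_API}/servers",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "cpus": cpus, "ram_gb": ram_gb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["server_id"]

# Rapid elasticity: scale out for a short-window opportunity...
ids = [provision_server(f"web-{i}") for i in range(4)]
# ...and release the capacity (and the cost) when the window closes.
for server_id in ids:
    requests.delete(f"{IAAS_API}/servers/{server_id}",
                    headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
```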

In addition to on-demand and elastic resources, IaaS has offered nearly all organizations the option of moving IT resources into either public or private cloud infrastructure. This has the benefit of allowing data center decommissioning and re-commissioning into a virtual environment. The cost of operating, maintaining, and staffing data centers versus outsourcing that infrastructure into a cloud is very interesting to CFOs, and a major justification for replacing physical data centers with virtual data centers.

The second dynamic, in addition to greater professional knowledge and awareness of cloud computing, is the fact that we are starting to recruit cloud-aware employees graduating from universities and making their first steps into careers and the workforce. With these "cloud savvy" young people comes deep experience with interoperable data, social media, big data, data analytics, and an intellectual separation between access devices and underlying IT infrastructure.

The Next Step in Cloud Evolution

OK, so we are all generally aware of the components of IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). Let's have a quick review of some standout features supported or enabled by cloud:

  • Increased standardization of applications
  • Increased standardization of databases
  • Federation of security systems (Authentication and Authorization)
  • Service buses (see the sketch after this list)
  • Development of other common applications (GIS, collaboration, etc.)
  • Transparency of underlying hardware
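The service bus item deserves a concrete picture. Below is a toy, in-process sketch of the publish/subscribe decoupling a service bus provides; a production bus (a message broker, ESB, etc.) adds persistent queues, transports, and security, but the decoupling principle is the same.

```python
from collections import defaultdict

class ServiceBus:
    """Toy in-process service bus: publishers and subscribers are
    decoupled by topic, which is what lets departmental silos share
    events without point-to-point integration."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

bus = ServiceBus()
# A downstream consumer (e.g., a decision support system) subscribes to
# events from any business unit without knowing which application
# produced them.
bus.subscribe("orders.created", lambda e: print("DSS ingests:", e))
bus.publish("orders.created", {"order_id": 1017, "amount": 249.00})
```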

Now let's consider the need for better, real-time, accurate decision support systems (DSS). Within any organization, the value of a DSS depends on data integrity, data access (open data within and outside the organization), and single-source data.

Frameworks for developing an effective DSS are certainly available, whether TOGAF, the US Federal Enterprise Architecture Framework (FEAF), interoperability frameworks, or service-oriented architectures (SOA). All are fully compatible with the tools made available within the basic cloud service delivery models (IaaS, PaaS, SaaS).

The Open Group (the same organization which developed TOGAF) has responded with its Service-Oriented Cloud Computing Infrastructure (SOCCI) framework. The SOCCI is identified as the marriage of a service-oriented infrastructure and cloud computing. It also incorporates aspects of TOGAF into the framework, which may lend more credibility to a SOCCI architectural development process.

The expected result of this effort, for existing organizations dealing with departmental "silos" of IT infrastructure, data, and applications, is a level of interoperability and DSS development based on service-orientation, using a well-designed underlying cloud infrastructure. This data sharing can be extended beyond the (virtual) firewall to others in an organization's trading or governmental community, resulting in DSS which come closer and closer to an architecture vision based on the true value of data produced by, or made available to, an organization.

While we most certainly need IaaS, and the value of moving to virtual data centers is justified by itself, we will not truly benefit from the potential of cloud computing until we understand the potential of the data produced by, and made available to, decision makers.

The opportunity will need a broad spectrum of contributors and participants with awareness and training in disciplines ranging from technical capabilities to enterprise architecture, service delivery, and governance acceptable to a cloud-enabled IT world.

For those who are eagerly consuming training and knowledge in the above areas, the future is anything but cloudy. For those who believe in the status quo, let's hope you are close to pension and retirement, as this is your future.

Wiring the Sierras

Inyo County, the second-largest county in California, is ready to jumpstart the process of delivering true broadband infrastructure to businesses and residences within the Owens Valley. The plan, called the 21st Century Obsidian Project, envisions delivering fiber infrastructure to all residents of Inyo County and surrounding areas along the Eastern Sierra and parts of Death Valley.

According to the project RFP, the goal is "an operating, economically sustainable, Open Access, Fiber-to-the-Premise, gigabit network serving the Owens Valley and select neighboring communities. The project is driven by the expectation that Inyo County's economy will improve as a result of successfully attaining the goal."

Many cities are finding ways to bypass the nonsense surrounding the "Net Neutrality" debate. Rather than worry about Comcast, AT&T, Verizon, and other carriers and ISPs feuding over the rights and responsibilities of delivering Internet content to the premise, many governments understand the need for high-speed broadband as a critical economic, social, and academic tool, and are developing alternatives to the traditional carriers.

Whether it is the Inyo County project, Burbank One (a product of Burbank Water and Power), Glendale Fiber Optic Solutions (Glendale Water and Power), Pasadena’s City Fiber Services, or Los Angeles Department of Water and Power’s (LADWP) Fiber Optic Enterprise, the fiber utility is becoming available in spite of carrier reluctance to develop fiber infrastructure.

Much of the infrastructure is being built to support intelligent grids (power metering and control) and city schools or emergency services – with the awareness that fiber optics are fiber optics, and the incremental cost of adding additional fiber cores to each distribution route is low. So why not build it out for citizens and businesses?

The important aspect of municipal or city infrastructure development is the acknowledgement that this is a utility. While some government agencies will provide "lit" services, in general the product is "dark" fiber, available for lease or use by commercial service providers. Many city networks are interconnected (such as the Los Angeles County utility fiber from Glendale, Burbank, and LADWP), as well as having a presence at major network interconnection points. This allows fiber networks to carry signal to locations such as One Wilshire's meet-me room, with additional access to major Internet Exchange Points and direct interconnections allowing further bypass and peering to other national and global service providers.

In the case of Inyo County, planners fully understand they do not have the expertise necessary to become a telecommunications carrier, and plan to outsource maintenance and some operations of the 21st Century Obsidian Project to a third-party commercial operator – within the guidelines established by the RFP, of course. The intent is to make it easy and cost-effective for all businesses, public facilities, schools, and residences to take advantage of and exploit the broadband infrastructure.

However, the fiber will be considered a utility, with no prejudice or limitations applied to commercial service providers desiring to take advantage of the infrastructure and deliver services to the county.

We hope more communities will look at innovative visions such as the one published by Inyo County, and consider investing in fiber optics as a utility, diluting the potential impact of carrier restrictions on Internet access, content, or applications (including cloud computing Software as a Service subscriptions, e.g., MS 365, Adobe Creative Cloud, Google Apps).

Congratulations to Inyo County for your vision, and best of luck.