Now that We Have Adopted IaaS…

On November 25, 2014, in Internet and Telecom, by Administrator

Providing guidance or consulting to organizations on cloud computing topics can be really easy, or really tough.  In the past, most of the initial engagement was dedicated to training and building awareness with your customer.  The next step was finding a high-value, low-risk application or service that could be moved to Infrastructure as a Service (IaaS) to solve an immediate problem, normally associated with disaster recovery or data backups.

As the years have passed, dynamics have changed.  On one hand, IT professionals and CIOs began to establish better knowledge of what virtualization, cloud computing, and outsourcing could do for their organizations.  CFOs became aware of the financial potential of virtualization and cloud computing, and a healthy dialog developed between IT, operations, business units, and the CFO.

The “Internet Age” has also driven global competition down to the local level, forcing nearly all organizations to respond more rapidly to business opportunities.  If a business unit cannot rapidly respond to the opportunity, which may require product and service development, the opportunity can be lost far more quickly than in the past.

In the old days, procurement of IT resources could require a fairly lengthy cycle.  In the Internet Age, if an IT procurement cycle takes more than six months, there is probably little chance of effectively matching the greatly shortened development cycles that competitors on other continents – or across the street – may be able to achieve.

With IaaS the procurement cycle of IT resources can be within minutes, allowing business units to spend far more time developing products, services, and solutions, rather than dealing with the frustration of being powerless to respond to short-window opportunities.  This of course addresses the essential cloud characteristics of Rapid Elasticity and On-Demand Self-Service.
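The minutes-scale provisioning cycle can be sketched as a toy model.  This is an illustrative simulation only – `CloudPool` is a hypothetical stand-in for a real IaaS API; real providers expose similar operations through SDKs or REST endpoints:

```python
# Toy model of on-demand self-service and rapid elasticity.
# "CloudPool" is hypothetical; it mimics the shape of a real IaaS API.

class CloudPool:
    """Simulates an IaaS resource pool a business unit can draw on."""

    def __init__(self):
        self.instances = []

    def provision(self, count):
        # On-demand self-service: resources appear in minutes, not months.
        new = [f"vm-{len(self.instances) + i}" for i in range(count)]
        self.instances.extend(new)
        return new

    def release(self, count):
        # Rapid elasticity: scale back down when the opportunity passes.
        released = self.instances[-count:]
        del self.instances[-count:]
        return released

pool = CloudPool()
pool.provision(3)           # spin up capacity for a short-window opportunity
pool.release(2)             # shrink once demand drops
print(len(pool.instances))  # one instance remains
```

The point of the sketch is the interface, not the implementation: the business unit calls an API and capacity exists moments later, with no procurement cycle in between.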

In addition to on-demand and elastic resources, IaaS has offered nearly all organizations the option of moving IT resources into either public or private cloud infrastructure.  This has the benefit of allowing data center decommissioning, and re-commissioning into a virtual environment.  The cost of operating, maintaining, and staffing data centers and IT equipment versus outsourcing that infrastructure into a cloud is very interesting to CFOs, and a major justification for replacing physical data centers with virtual data centers.

The second dynamic, in addition to greater professional knowledge and awareness of cloud computing, is the fact we are starting to recruit cloud-aware employees graduating from universities and taking their first steps into careers and the workforce.  With these “cloud savvy” young people comes deep experience with interoperable data, social media, big data, data analytics, and an intellectual separation between access devices and underlying IT infrastructure.

The Next Step in Cloud Evolution

OK, so we are all generally aware of the components of IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS).  Let’s have a quick review of some standout features supported or enabled by cloud:

  • Increased standardization of applications
  • Increased standardization of databases
  • Federation of security systems (Authentication and Authorization)
  • Service buses
  • Development of other common applications (GIS, collaboration, etc.)
  • Transparency of underlying hardware
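The federation item in the list above can be illustrated with a minimal sketch: services trust tokens issued by a shared identity provider rather than each keeping its own user store.  All names and the shared key here are hypothetical, and production federations use standards such as SAML or OAuth 2.0 rather than hand-rolled tokens:

```python
# Minimal sketch of federated authentication and authorization:
# one identity provider signs a token; any federated service verifies it.
import hashlib
import hmac

SHARED_SECRET = b"demo-federation-key"  # assumption: pre-shared for this demo

def issue_token(user, roles):
    """Identity provider signs the user's identity and roles."""
    payload = f"{user}|{','.join(sorted(roles))}"
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token, required_role):
    """Any federated service can authenticate the token and authorize a role."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # authentication failed: token was not issued by the IdP
    user, _, roles = payload.partition("|")
    return required_role in roles.split(",")  # authorization check

token = issue_token("jdoe", {"finance", "gis"})
print(verify_token(token, "gis"))    # True: authenticated and authorized
print(verify_token(token, "admin"))  # False: authenticated, not authorized
```

The design choice federation makes is exactly this separation: each service verifies a signature instead of maintaining its own password database, which is what lets authentication and authorization span organizational silos.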

Now let’s consider the need for better, real-time, accurate decision support systems (DSS).  Within any organization the value of a DSS is dependent on data integrity, data access (open data within/without an organization), and single-source data.
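A minimal sketch of why single-source data matters: when two departmental silos each hold a copy of the same record, a DSS must first detect where they disagree before any decision can be trusted.  The department and field names here are hypothetical:

```python
# Sketch: reconciling records from two departmental silos for a DSS.
# A conflict on any shared field is a data-integrity problem the DSS
# must surface before the data can support a decision.

def reconcile(*sources):
    """Merge records keyed by id; flag any field where silos disagree."""
    merged, conflicts = {}, []
    for source in sources:
        for rec_id, fields in source.items():
            entry = merged.setdefault(rec_id, {})
            for key, value in fields.items():
                if key in entry and entry[key] != value:
                    conflicts.append((rec_id, key))  # silos disagree
                entry[key] = value
    return merged, conflicts

finance = {"P100": {"budget": 50000, "owner": "Ops"}}
planning = {"P100": {"budget": 62000, "region": "North"}}

merged, conflicts = reconcile(finance, planning)
print(conflicts)  # [('P100', 'budget')] – two silos disagree on budget
```

With a true single-source architecture the `conflicts` list is empty by construction; with silos, every conflict it surfaces is a question the decision maker cannot currently answer.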

Frameworks for developing an effective DSS are certainly available, whether TOGAF, the US Federal Enterprise Architecture Framework (FEAF), interoperability frameworks, or service-oriented architectures (SOA).  All are fully compatible with the tools made available within the basic cloud service delivery models (IaaS, PaaS, SaaS).

The Open Group (the same organization which developed TOGAF) has responded with its Service-Oriented Cloud Computing Infrastructure (SOCCI) Framework.  The SOCCI is described as the marriage of a service-oriented infrastructure and cloud computing.  The SOCCI also incorporates aspects of TOGAF into the framework, which may lend more credibility to a SOCCI architectural development process.

The expected result of this effort is that organizations dealing with departmental “silos” of IT infrastructure, data, and applications will achieve a level of interoperability and DSS development based on service orientation, using a well-designed underlying cloud infrastructure.  This data sharing can be extended beyond the (virtual) firewall to others in an organization’s trading or governmental community, resulting in a DSS which comes closer and closer to an architecture vision based on the true value of data produced by, or made available to, an organization.

While we most certainly need IaaS, and the value of moving to virtual data centers is justified by itself, we will not truly benefit from the potential of cloud computing until we understand the potential of data produced and available to decision makers.

Realizing this opportunity will need a broad spectrum of contributors and participants with awareness and training in disciplines ranging from technical capabilities, to enterprise architecture, to service delivery, and governance acceptable to a cloud-enabled IT world.

For those who are eagerly consuming training and knowledge in the skills above, the future is anything but cloudy.  For those who believe in the status quo, let’s hope you are close to pension and retirement, as this is your future.

 

Wiring the Sierras

On November 20, 2014, in Burbank, Internet and Telecom, net neutrality, by Administrator

Inyo County, the second largest county in California, is ready to jumpstart the process of delivering a true broadband infrastructure to businesses and residences within the Owens Valley.  The plan, called the 21st Century Obsidian Project, envisions delivering a fiber infrastructure to all residents of Inyo County and other surrounding areas along the Eastern Sierras and parts of Death Valley.

According to the project RFP, the project goal is “an operating, economically sustainable, Open Access, Fiber-to-the-Premise, gigabit network serving the Owens Valley and select neighboring communities. The project is driven by the expectation that Inyo County’s economy will improve as a result of successfully attaining the goal.”

Many cities are finding ways to bypass the nonsense surrounding the “Net Neutrality” debate.  Rather than worry about Comcast, AT&T, Verizon, or other carriers and ISPs feuding over the rights and responsibilities of delivering Internet content to the premise, many governments understand the need for high speed broadband as a critical economic, social, and academic tool, and are developing alternatives to traditional carriers.

Whether it is the Inyo County project, Burbank One (a product of Burbank Water and Power), Glendale Fiber Optic Solutions (Glendale Water and Power), Pasadena’s City Fiber Services, or Los Angeles Department of Water and Power’s (LADWP) Fiber Optic Enterprise, the fiber utility is becoming available in spite of carrier reluctance to develop fiber infrastructure.

Much of the infrastructure is being built to support intelligent grids (power metering and control), and city schools or emergency services – with the awareness that fiber optics are fiber optics, and the incremental cost of adding additional fiber cores to each distribution route is low.  So why not build it out for the citizens and businesses?

The important aspect of municipal or city infrastructure development is the acknowledgement this is a utility.  While some government agencies will provide “lit” services, in general the product is “dark” fiber, available for lease or use by commercial service providers.  Many city networks are interconnected (such as in Los Angeles County utility fiber from Glendale, Burbank, and LADWP), as well as having a presence at major network interconnection points.  This allows fiber networks to carry signal to locations such as One Wilshire’s meet-me-room, with additional access to major Internet Exchange Points and direct interconnections allowing further bypass and peering to other national and global service providers.

In the case of Inyo County, planners fully understand they do not have the expertise necessary to become a telecommunications carrier, and plan to outsource maintenance and some operations of the 21st Century Obsidian Project to a third party commercial operator – of course within the guidelines established by the RFP.  The intent is to make it easy and cost effective for all businesses, public facilities, schools, and residences to take advantage of and exploit broadband infrastructure.

However, the fiber will be considered a utility, with no prejudice or limitations toward commercial service providers desiring to take advantage of the infrastructure and deliver services to the county.

We hope more communities will look at innovative visions such as the one published by Inyo County, and consider investing in fiber optics as a utility, diluting the potential impact of carrier sanctions against internet access, content, or applications (including cloud computing Software as a Service subscriptions, e.g., MS 365, Adobe Creative Cloud, Google Apps, etc.).

Congratulations to Inyo County for your vision, and best of luck.

Just finished another ICT-related technical assistance visit with a developing country government. Even in mid-2014, I spend a large amount of time teaching basic principles of enterprise architecture, and the need for adding form and structure to ICT strategies.

Service-oriented architectures (SOA) have been around for quite a long time, with some references going back to the 1980s. ITIL, COBIT, TOGAF, and other ICT standards or recommendations have been around for quite a long time as well, with training and certifications part of nearly every professional development program.

So why is the idea of architecting ICT infrastructure still an abstraction to so many in government and even private industry? It cannot be the lack of training opportunities, or publicly available reference materials. It cannot be the lack of technology, or the lack of consultants readily willing to assist in deploying EA, SOA, or interoperability within any organization or industry cluster.

During the past two years we have run several Interoperability Readiness Assessments within governments. The assessment initially takes the form of a survey, and is distributed to a sample of 100 or more participants, with positions ranging from administrative task-based workers to C-level and senior leaders within ministries and government agencies.

Questions range from basic ICT knowledge to data sharing, security, and decision support systems.

While the idea of information silos is well-documented and understood, it is still quite surprising to see “siloed” attitudes prevail in modern organizations.  Take the following question:

Question on Information Sharing

This question did not refer to sharing data outside of the government, but rather within it.  The responses indicate a significant lack of trust when interacting with other government agencies, which will of course prevent any chance of developing an SOA or facilitating information sharing among agencies.  The end result is a lower level of both integrity and value in national decision support capability.

The Impact of Technology and Standardization

Most governments are considering or implementing data center consolidation initiatives.  There are several good reasons for this, including:

  • Cost of real estate, power, staffing, maintenance, and support systems
  • Transition from CAPEX-based ICT infrastructure to OPEX-based
  • Potential for virtualization of server and storage resources
  • Standardized cloud computing resources

While all of those justifications for data center consolidation are valid, their value potentially pales in comparison to the potential of more intelligent use of data across organizations, and even externally with outside agencies.  On this point, one senior government official stated:

“Government staff are not necessarily the most technically proficient.  This results in reliance on vendors for support, thought leadership, and in some cases contractual commitments.  Formal project management training and certification are typically not part of the capacity building of government employees.

Scientific approaches to project management, especially ones that lend themselves to institutionalization and adoption across different agencies will ensure a more time-bound and intelligent implementation of projects. Subsequently, overall knowledge and technical capabilities are low in government departments and agencies, and when employees do gain technical proficiency they will leave to join private industry.”

There is also an issue with a variety of international organizations going into developing countries or developing economies and offering no- or low-cost single-use ICT infrastructure, such as for health-related agencies, which is not compatible with any other government-owned or -operated applications or data sets.

And of course the more this occurs, the more difficult it becomes for government organizations to enable interoperability or data sharing, and thus the idea of an architecture or of data sharing becomes either impossible or extremely difficult to implement.

The Road to EA, SOAs, and Decision Support

There are several actions to take on the road to meeting our ICT objectives.

  1. Include EA, service delivery (ITIL), governance (COBIT), and SOA training in all university and professional ICT education programs.  It is not all about writing code or configuring switches; we need to ensure a holistic understanding of ICT value in all ICT education, producing a higher level of qualified graduates entering the work force.
  2. Ensure government and private organizations develop or adopt standards or regulations which drive enterprise architecture, information exchange models, and SOAs as a basic requirement of ICT planning and operations.
  3. Ensure executive awareness and support, preferably through a formal position such as the Chief Information Officer (CIO).  Principles developed and published via the CIO must be adopted and governed by all organizations.

Nobody expects large organizations, in particular government organizations, to change their cultures of information independence overnight.  This is a long-term evolution as the world continues to better understand the value and extent of value within existing data sets, and begins creating new categories of data.  Big data, data analytics, and exploitation of both structured and unstructured data will empower those who are prepared, and leave those who are not prepared far behind.

For a government, not having the ability to access, identify, share, analyze, and address data created across agencies will inhibit effective decision support, with potential impact on disaster response, security, economic growth, and overall national quality of life.

If there is a call to action in this message, it is for governments to take a close look at how their national ICT policies, strategies, human capacity, and operations are meeting national objectives.  Prioritizing use of EA and supporting frameworks or standards will provide better guidance across government, and all steps taken within the framework will add value to the overall ICT capability.

Pacific-Tier Communications LLC provides consulting to governments and commercial organizations on topics related to data center consolidation, enterprise architecture, risk management, and cloud computing.

ICT Training Development Survey

On August 4, 2014, in Internet and Telecom, by Administrator

Pacific-Tier Communications LLC is preparing our ICT courseware development plan for 2015.  We would be very grateful if you took a minute and filled out the following linked survey.

We are not collecting any personal or location information – just interested in what your organization would find useful for professional ICT training and courseware.

Thanks for your support!

If you have any questions, please send us a note at info@pacific-tier.com


The 2013 ACC kicked off on Tuesday morning with an acknowledgement by Philippine Long Distance Telephone (PLDT) CEO Napoleon L. Nazareno that “we’re going through a profound and painful transformation to digital technologies.”  He continued to explain that in addition to making the move to a digital corporate culture and architecture, for traditional telcos to succeed they will need to “master new skills, including new partnership skills.”

That direction drives a line straight down the middle of attendees at the conference.  Surprisingly, many companies attending and advertising their products still focus on “minutes termination,” and traditional voice-centric relationships with other carriers and “voice” wholesalers.

Matthew Howett, Regulation and Policy Practice Leader at Ovum Research, noted that “while fixed and mobile minutes are continuing to grow, traditional voice revenue is on the decline.”  He backed the statement up with figures on “Over the Top” (OTT) services – when a service provider sends all types of communications, including video, voice, and other connections, over an Internet protocol network, most commonly the public Internet.

Howett informed the ACC’s plenary session attendees that Ovum Research believes up to US$52 billion will be lost in traditional voice revenues to OTT providers by 2016, and an additional US$32.6 billion to instant messaging providers in the same period.

The message to traditional communications carriers was simple – adapt or become irrelevant.  National carriers may work with government regulators to erect legal barriers preventing OTTs from operating in a country; however, that is only a temporary step to stem the flow of “technology-enabled” competition and retain revenues.

As noted by Nazareno, the carriers must wake up to the reality that we are in a global technology refresh cycle, and construct business visions, expectations, and business plans that will not only allow the company to survive, but also meet the needs of their users and national objectives.

Martin Geddes, owner of Martin Geddes Consulting, introduced the idea of “Task Substitution.”  Task Substitution occurs when an individual or organization is able to use a substitute technology or process to accomplish tasks that were previously only available from a single source.  One example is the traditional telephone call.  In the past you would dial a number, and the telephone company would go through a series of connections, switches, and processes that would both connect two end devices, as well as provide accounting for the call.

The telephone user now has many alternatives to the traditional phone call – all task substitutions.  You can use Skype, WebEx, GoToMeeting, instant messaging – any one of a multitude of utilities allowing an individual or group to participate in one-to-one or many-to-many communications.  When a strong list of alternative methods to complete a task exists, the original method may become obsolete, or have to rapidly adapt to avoid being discarded by users.

A strong message, which made many attendees visibly uncomfortable.

Ivan Landen, Managing Director, Asia-Pacific, at Expereo, described the telecom revolution in terms all attendees could easily visualize: “Today around 80% of the world’s population have access to the electrical grid/s, while more than 85% of the population has access to Wifi.”

He also provided an additional bit of information which did not surprise attendees, but made some of the telecom representatives a bit uneasy.  In a survey Geddes conducted, more than half of the business executives polled admitted their Internet access was better at home than in their offices.  This information can be analyzed in several ways, from poor IT planning within the company, to poor capacity management within the communication provider, to the reality that traffic on consumer networks is simply lower during the business day than during other periods.

However, the main message was clear: “there is a huge opportunity for communication companies to fix business communications.”

The conference continues until Friday.  Many more sessions, many more perimeter discussions, and a lot of space remain for the telecom community to confront the reality that “we need to come to grips with the digital world.”


Normally, when we think of technical training, images of rooms loaded with switches, routers, and servers might come to mind.  Cloud computing is different.  In reality, cloud computing is not a technology, but rather a framework employing a variety of technologies – most notably virtualization – to solve business problems or enable opportunities.

From our own practice, the majority of cloud training students represent non-technical careers and positions.  Our training follows the CompTIA Cloud Essentials course criteria and is not a technical course, so the non-technical student trend should not come as any big surprise.

What does come as a surprise is how enthusiastically our students dig into the topic.  Whether business unit managers, accounting and finance, sales staff, or executives, all students come into class convinced they need to know about cloud computing as an essential part of their future career progression, or even at times to ensure their career survival.

Our local training methodology is based on establishing an in-depth knowledge of the NIST Cloud Definitions and Cloud Reference Architecture.  Once students get beyond the perception that such documents are too complex, and accept that we will refer nearly all aspects of training back to both documents, we easily establish the core cloud computing knowledge base needed to explore both the technical aspects and, more importantly, the practical aspects of how cloud computing is used in our daily lives, and likely our future lives.

This is not significantly different than when we trained business users on how to use, employ, and exploit the Internet in the 1990s.  Those of us in engineering or technical operations roles viewed this type of training with either amusement or contempt, at times mocking those who did not share our knowledge and experience of internetworking, or our ability to navigate the Internet universe.

We are in the same phase of absorbing and developing tacit knowledge of compute and storage access on demand, service-oriented architectures, Software as a Service, and the move to a subscription-based application world.

Students who attend cloud computing training leave the class better able to engage in decision-making related to both personal and organizational information and communication technology, and less exposed to the spectrum of cloud washing – the marketing use of “cloud” and “XXX as a Service” language overwhelming nearly all media, on subjects ranging from hamster food to SpaceX and hyperloops.

Even the most hard-core engineers who have deigned to join a non-technical, business-oriented cloud course walk away with a better view of how their tools support organizational agility (good jargon, no?), in addition to the potential financial impacts, reduced application development cycles, disaster recovery, business continuity, and all the other potential benefits to an organization adopting cloud computing.

Some even walk away from the course planning a breakup with some of their favorite physical servers.

The Bottom Line

No student has walked away from a cloud computing course knowing less about the role, impact, and potential of implementing cloud in nearly any organization.  While the first few hours of class include a lot of great debates on the value of cloud computing, by the end of the course most students agree they are better prepared to consider, envision, evaluate, and address the potential or shortfalls of cloud computing.

Cloud computing has, and will continue to have, influence on many aspects of our lives.  It is not going away anytime soon.  The more we can learn, whether through self-study or resident training, the better positioned we will be to make intelligent decisions regarding the use and value of cloud in our lives and organizations.

International telecommunication carriers all share one thing in common – the need to connect with other carriers and networks.  We want to make a call to China, hold a video conference with Moldova, or send an email message for delivery within 5 seconds to Australia – all possible with our current state of global communications.  Magic?  Of course not.  While an abstraction to most, the reality is that telecommunications physical infrastructure extends to nearly every corner of the world, and communications carriers bring this global infrastructure together at a small number of strategically placed facilities informally called “carrier hotels.”

Pacific-Tier had the opportunity to visit the Westin Building Exchange (commonly known as the WBX), one of the world’s busiest carrier hotels, in early August.   Located in the heart of Seattle’s bustling business district, the WBX stands tall at 34 stories.  The building also acts as a crossroads of the Northwest US long distance terrestrial cable infrastructure, and is adjacent to trans-Pacific submarine cable landing points.

The world’s telecommunications community needs carrier hotels to interconnect their physical and value-added networks, and the WBX is doing a great job of facilitating physical interconnections among its more than 150 carrier tenants.

“We understand the needs of our carrier and network tenants” explained Mike Rushing, Business Development Manager at the Westin Building.  “In the Internet economy things happen at the speed of light.  Carriers at the WBX are under constant pressure to deliver services to their customers, and we simply want to make this part of the process (facilitating interconnections) as easy as possible for them.”

The WBX community is not limited to carriers.  The community has evolved to support Internet Service Providers, Content Delivery Networks (CDNs), cloud computing companies, academic and research networks, enterprise customers, public colocation and data center operators, the NorthWest GigaPOP, and even the Seattle Internet Exchange Point (SIX), one of the largest Internet exchanges in the world.

“Westin is a large community system,” continued Rushing.  “As new carriers establish a point of presence within the building, and begin connecting to others within the tenant and accessible community, then the value of the WBX community just continues to grow.”

The core of the WBX is the 19th floor meet-me-room (MMR).  The MMR is a large, neutral interconnection point for networks and carriers representing both US and international companies.  For example, if China Telecom needs to connect a customer’s headquarters in Beijing to an office in Boise served by AT&T, the actual circuit must transfer at a physical demarcation point from China Telecom to AT&T.  There is a good chance that physical connection will occur at the WBX.

According to Kyle Peters, General Manager of the Westin Building, “we are supporting a wide range of international and US communications providers and carriers.  We fully understand the role our facility plays in supporting not only our customers’ business requirements, but also the role we play in supporting global communications infrastructure.”

You would be correct in assuming the WBX plays an important role in that critical US and global communications infrastructure.  Thus you would further expect the WBX to be constructed and operated in a manner providing a high level of confidence to the community their installed systems will not fail.

Lance Forgey, Director of Operations at the WBX, manages not only the MMR, but also the massive mechanical (air conditioning) and electrical distribution systems within the building.  A former submarine engineer, Forgey runs the Westin Building much like he operated critical systems within Navy ships.  Assisted by an experienced team of former US Navy engineers and US Marines, the facility presents an image of security, order, cleanliness, and operational attention to detail.

“Our operations and facility staff bring the discipline of many years in the military, adding the innovation needed to keep up with our customers’ industries” said Forgey.  “Once you have developed a culture of no compromise on quality, then it is easy to keep things running.”

That is very apparent when you walk through the site – everything is in its place, it is remarkably clean, and it is very obvious the entire site is the product of a well-prepared plan.

One area which stands out at the WBX is the cooling and electrical distribution infrastructure.  Using space within adjacent parking structures and additional areas outside of the building, most heavy equipment is located outside the building itself, providing an additional layer of physical security and allowing the WBX to recover as much space within the building as possible for customer use.

“Power is not an issue for us” noted Forgey.  “It is a limiting factor for much of our industry, however at the Westin Building we have plenty, and can add additional power anytime the need arises.”

That is another attraction of the WBX versus some of the other carrier hotels on the US West Coast.  Power in Washington State averages around $0.04/kWh, while power in California may be nearly three times as expensive.

“In addition to having all the interconnection benefits similar operations have on the West Coast, the WBX can also significantly lower operating costs for tenants” added Rushing.  As the cost of power is a major factor in data center operations, reducing operating costs through a significant reduction in the cost of power is a big draw.
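As a back-of-the-envelope illustration of why that rate difference matters, the sketch below compares annual electricity costs for a constant load at the two rates.  The 1 MW load figure and the $0.12/kWh California rate are assumptions for the example, not figures from the WBX.

```python
# Rough annual power-cost comparison at the rates discussed above.
# The 1 MW (1,000 kW) load and the $0.12/kWh California rate are
# illustrative assumptions ("nearly three times" the $0.04/kWh rate).

HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

def annual_power_cost(load_kw: float, rate_per_kwh: float) -> float:
    """Annual electricity cost (USD) for a constant load."""
    return load_kw * HOURS_PER_YEAR * rate_per_kwh

washington = annual_power_cost(1_000, 0.04)
california = annual_power_cost(1_000, 0.12)

print(f"Washington: ${washington:,.0f}/year")   # about $350,400
print(f"California: ${california:,.0f}/year")   # about $1,051,200
print(f"Difference: ${california - washington:,.0f}/year")
```

At data-center scale, a three-fold rate difference compounds into hundreds of thousands of dollars per year for every megawatt of load.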

The final area carrier hotels need to address is the ever-changing nature of communications, including interconnections between members of the WBX community.  Nothing is static, and the WBX team is constantly communicating with tenants, evaluating changes in supporting technologies, and looking for ways to ensure they have the tools available to meet their tenants’ rapidly changing environments.

Cloud computing, software-defined networking, carrier Ethernet – all topics which require frequent communication with tenants to gain insight into their visions, concerns, and plans.  The WBX staff showed great interest in cooperating with tenants to ensure the WBX will not impede the development or implementation of new technologies, and to stay ahead of customer deployments.

“If a customer comes to us and tells us they need a new support infrastructure or framework with very little lead time, then we may not be able to respond quickly enough to meet their requirements” concluded Rushing.  “Much better to keep an open dialog with customers and become part of their team.”

Pacific-Tier has visited and evaluated dozens of data centers during the past four years.  Some have been very good, some have been very bad.  Some have gone over the edge in data center deployments, chasing the “grail” of a Tier IV data center certification, while some have been little more than a server closet.

The Westin Building / WBX is unique in the industry.  Owned by Clise Properties of Seattle and Digital Realty Trust, the Westin Building brings the best of both the real estate world and the data center world into a single operation.  The quality of the mechanical and electrical infrastructure, the people maintaining it, and the vision of the company give a visitor the impression that not only is the WBX a world-class facility, but that its staff and management know their business, enjoy the business, and put their customers first.

As Clise Properties owns much of the surrounding land, the WBX has plenty of opportunity to grow as the business expands and changes.  “We know cloud computing companies will need to locate close to the interconnection points, so we better be prepared to deliver additional high-density infrastructure as their needs arise” said Peters.  And in fact Clise has already started planning for their second colocation building.  This building, like its predecessor, will be fully interconnected with the Westin Building, including virtualizing the MMR distribution frames in each building into a single cross interconnection environment.

WBX offers the global telecom industry an alternative to the carrier hotels in Los Angeles and San Francisco.  One shortfall in the global telecom industry is the “single threaded” links many carriers have with others in the global community.  California hosts the majority of North America / Asia carrier interconnections today, but it is also one of the world’s higher-risk locations for critical infrastructure; it is more a matter of “when” than “if” a catastrophic event such as an earthquake will seriously disrupt international communications passing through one of the region’s MMRs.

The telecom industry needs to have the option of alternate paths of communications and interconnection points.  While the WBX stands tall on its own as a carrier hotel and interconnection site, it is also the best alternative and diverse landing point for trans-Pacific submarine cable capacity – and subsequent interconnections.

The WBX offers a wide range of customer services, including:

  • Engineering support
  • 24×7 remote hands
  • Fast turnaround for interconnections
  • Colocation
  • Power circuit monitoring and management
  • Private suites and lease space for larger companies
  • 24×7 security monitoring and access control

Check out the Westin Building and WBX the next time you are in Seattle, or if you want to learn more about the telecom community revolving and evolving in the Seattle area.  Contact Mike Rushing at mrushing@westinbldg.com for more information.

 


Just finished another frustrating day of consulting with an organization that is convinced technology is going to solve their problems.  Have an opportunity?  Throw money and computers at it.  Have a technology answer to your process problems?  Really?

The business world is changing.  With cloud computing potentially eliminating the need for some current IT roles (physical server huggers, for example), information technology professionals – or more appropriately, information and communications technology (ICT) professionals – need to rethink their roles within organizations.

Is it acceptable to simply be a technology specialist, or do ICT professionals also need to be an inherent part of the business process?  Yes, a rhetorical question, and any negative answer is wrong.  ICT professionals are rapidly being relieved of the burden of data centers, servers (physical servers), and a need to focus on ensuring local copies of MS Office are correctly installed, configured, and have the latest service packs or security patches installed.

You can fight the idea, argue the concept, but in reality cloud computing is here to stay, and will only become more important in both the business and financial planning of future organizations.

Now those copies of MS Office are hosted on Office 365 or Google Docs, and your business users are telling you to either quickly meet their needs or they will simply bypass the IT organization and use an external or hosted Software as a Service (SaaS) application – in spite of your existing mature organization and policies.

So what is this TOGAF stuff?  Why do we care?

Well…

As it should be, ICT is firmly being set in the organization as a tool to meet business objectives.  We no longer have to consider the limitations or “needs” of IT when developing business strategies and opportunities.  SaaS and Platform as a Service (PaaS) tools are becoming mature, plentiful, and powerful.

Argue the point, fight the concept, but if an organization isn’t at least considering a requirement for data and systems interoperability, the use of large data sets, and implementation of a service-oriented architecture (SOA) they will not be competitive or effective in the next generation of business.

TOGAF, which is “The Open Group Architecture Framework,” brings structure to the development of ICT as a tool for meeting business requirements.  TOGAF is a tool which will force each stakeholder, including senior management and business unit management, to work with ICT professionals to apply technology in a structured framework that follows this basic cycle:

  • Develop a business vision
  • Determine your “AS-IS” environment
  • Determine your target environment
  • Perform a gap analysis
  • Develop solutions to meet the business requirements and vision, and fill the “gaps” between “AS-IS” and “Target”
  • Implement
  • Measure
  • Improve
  • Re-iterate
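The “AS-IS” / “Target” / gap analysis steps above can be sketched as a trivial set comparison.  This is an illustrative toy, not a TOGAF artifact; the capability names and the `gap_analysis` helper are hypothetical.

```python
# Illustrative sketch of the AS-IS / Target gap analysis step above.
# The capability sets below are hypothetical examples, not TOGAF artifacts.

as_is = {"on-premise email", "physical servers", "manual provisioning"}
target = {"hosted email (SaaS)", "IaaS virtual machines",
          "self-service provisioning", "manual provisioning"}

def gap_analysis(as_is: set[str], target: set[str]) -> dict[str, set[str]]:
    """Compare current and target capability sets."""
    return {
        "build":  target - as_is,   # gaps: capabilities to develop
        "retire": as_is - target,   # capabilities to decommission
        "keep":   as_is & target,   # capabilities carried forward
    }

gaps = gap_analysis(as_is, target)
print("Build: ", sorted(gaps["build"]))
print("Retire:", sorted(gaps["retire"]))
print("Keep:  ", sorted(gaps["keep"]))
```

In real TOGAF work the “gaps” drive the solution development, implementation, and measurement steps, and the whole cycle repeats as the business vision evolves.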
Of course TOGAF is a complex architecture framework, with a lot more involved than the above bullets.  However, the point is that ICT must now participate in the business planning process – and really become part of the business, rather than a vendor to the business.

As a life-long ICT professional, it is easy for me to fall into indulging in tech things.  I enjoy networking, enjoy new gadgets, and enjoy anything related to new technology.  But it was not until about 10 years ago, when I started taking a formal, structured approach to understanding enterprise architecture and fully appreciating the value of service-oriented architectures, that I felt as if my efforts were really contributing to the success of an organization.

TOGAF was one course of study that really benefitted my understanding of the value and role IT plays in companies and government organizations.  TOGAF provides both a process and a structure for business planning.

You may have a few committed DevOps evangelists who disagree with the structure of TOGAF, but in reality, once the “guardrails” are in place, even DevOps can fit into the process.  TOGAF and other frameworks are not intended to stifle innovation – just to encourage that innovation to meet the goals of the organization, not the goals of the innovators.

While just one of several candidate enterprise architecture frameworks (including the US Federal Enterprise Architecture Framework/FEAF and the Department of Defense Architecture Framework/DoDAF), TOGAF is now widely accepted, and the accompanying certifications are well understood within government and enterprise.

What’s an IT Guy to Do?

Now we can send the “iterative” process back to the ICT guy’s viewpoint.  Much like the telecom engineers who operated DMS 250s, 300s, and 500s, the existing IT and ICT professional corps will need to either accept the concept of cloud computing, or hope they are close to retirement.  Who needs a DMS 250 engineer in a world of soft switches?  Who needs a server manager in a world of Infrastructure as a Service?  Unless, of course, you work as an infrastructure technician at a cloud service provider…

Ditto for those who specialize in maintaining copies of MS Office and a local MS Exchange server.  Sadly, your time is limited, and quickly running out.  Either become a cloud computing expert in some field within cloud computing’s broad umbrella of components, or plan to be part of the business process.  To be effective as a member of the organization’s business team, you will need skills beyond IT – you will need to understand how ICT is used to meet business needs, and the impact of a rapidly evolving toolkit offered by all strata of the cloud stack.

Even better, become a leader in the business process.  If you can navigate your way through a TOGAF course and certification, you will acquire a much deeper appreciation for how ICT tools and resources could, and likely should, be planned and employed within an organization to contribute to the success of any individual project, or the re-engineering of ICTs within the entire organization.


John Savageau is TOGAF 9.1 Certified
