2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts-per-square-foot metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Implementation Plan to Reform Federal Information Technology Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even Internet content providers.  The economics of operating a data center or server closet – the costs of real estate, power, and hardware, in addition to service and licensing agreements – are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service buses, and consolidated data.  It is not so much an issue of losing control as it is that these technologies bring transparency to their operations.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well, it might make you uncomfortable…

A lesson learned while attending a fast pitch contest during late 2009 in Irvine, CA…  An enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives.  He stated he was looking for a $5 million investment in his startup company.

A panelist asked what the money was for, and the entrepreneur stated “… and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within 2 years.  Why would he want to be stuck with the liability of a data center and hardware if the company failed?  The panelist further stated, “don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From applications as simple as Google Docs to Microsoft Office 365 and other desktop-replacement application suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television monitor and access all the applications normally used at a desktop.  You can give a presentation from your phone, update company documents, or perform nearly any other IT function, with the only limitation being the requirement for a broadband Internet connection (see #5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is that files will be maintained on servers, where they are much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers.  It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most small enterprises – and large enterprises – are either consolidating their IT into professionally managed data centers, or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction providing best practices, quality standards, design standards, and even standards for evaluation.  Power efficiency metrics such as PUE and DCiE provide additional guidance on power management, data center management, and design.
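
For readers who have not worked with these metrics, the math is trivial.  PUE is total facility power divided by the power actually delivered to the IT equipment, and DCiE is simply the inverse expressed as a percentage.  A quick sketch, using made-up numbers for a hypothetical 1 MW facility:

# Hypothetical illustration of the PUE / DCiE calculations referenced above.
# PUE = total facility power / IT equipment power; DCiE = 1 / PUE, as a percentage.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: lower is better, 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: share of power that reaches the IT gear."""
    return 100.0 * it_equipment_kw / total_facility_kw

# A hypothetical 1 MW facility delivering 625 kW to its IT loads:
print(pue(1000, 625))   # 1.6
print(dcie(1000, 625))  # 62.5 (percent)

A PUE of 1.6 means that for every watt reaching the servers, another 0.6 watts is spent on cooling, power conversion, and lighting – exactly the kind of number these new guidelines push operators to drive down.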

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, discipline, and security.  Electricity is as valuable as platinum, and cooling and heat are managed more closely than inmates at San Quentin.  With standards organizations now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D. in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance, and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, over-socialization of cultures and thoughts, and a huge intellectual time sink that sucks us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not weigh the thousand good things about the Internet against each negative aspect, it might be a pretty scary place in which to imagine all future generations being exposed and indoctrinated.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will become a global generation of intellectual migrant workers – intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive, and if proved true, will leave the United States and other countries with stronger capacities to improve their national quality of life, and bring us all another step closer together.

Happy New Year!

Fire season is here. Southern California fire departments and forestry services are urging residents to cut back brush on their properties and create “defensible space” between the dry chaparral and their homes. Local news stations have spooled up their resources to bring fire-related journalism to the population. And we have already seen extreme technology such as DC-10s and 747s dumping insane amounts of Phos-Chek and water to quickly knock down fires which have popped up early in the season.

Southern California has fires, just as Kansas has tornadoes and Florida has hurricanes. Disasters are a part of nature and life. How we deal with natural disasters, our ability to survive and overcome challenges, and how we restore our communities define our society.

Technology tools in place or being developed are having a major impact on our ability to react, respond, and recover from disaster. In the early stages of any disaster, communication is key to both survival and response. As nearly every person in the world is now tethered to a wireless device, the communication part is becoming much easier, as even the simplest handset will support basic features such as text messaging and voice communications.

Getting the Message Out

Over the past 25 years the world has adopted Internet-enabled communications in a wide variety of formats for everything from email to citizen journalism. It is hard to find an event occurring anyplace in the world that is not recorded by a phone camera, YouTube video, blog, or real time broadcast.

In the 2008 Santa Barbara Tea Fire, students from UC Santa Barbara used Twitter to warn fellow students and local residents to get out of the fire’s path as it raced through 2,000 acres and more than 210 houses within the city limits. While it is not possible to put a statistic on the value of Twitter to evacuations and emergency notification, interviews with students following the fire revealed many received their initial notification through Twitter lists, and indicated they were able to get out of areas consumed by the fire (while screaming their heads off to warn others in the neighborhood to get out) before public safety officials were able to respond.

NOTE: I was driving through Santa Barbara (along the ‘101) during the initial phase of the fire, and can personally verify the fire moved really, really fast through the city. It looked like lava streaming out of a volcano, and you could see houses literally exploding as the fire hit them and moved through… I wasted no time myself getting through the city and on the way to LA.

This article will not review all the potential technologies or software becoming available for emergency notifications; instead, we will look at the basic utility enabling all the great work being done to keep our citizens safe: the Internet.

Internet’s Utility is Now Bigger than Individuals and Companies

We all remember the infamous interview with Ed Whitacre, former CEO at AT&T.

Q: How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?

A: How do you think they’re going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes?

The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo or Vonage or anybody to expect to use these pipes [for] free is nuts!

This statement clearly indicates many in the Internet network and service provider business do not yet get the big picture of what this “4th Utility” represents. The Internet is not funny cat videos, porn, corporate web sites, or Flickr. Those features and applications exist on the Internet, but they are not the Internet.

Internet, broadband, and applications are a basic right of every person on the planet. The idea that two network administrators might have an argument at a bar, and subsequently consider the possibility of “de-peering” a network based on personalities or manageable financial considerations borders on being as irresponsible as a fire department going on strike during a California wildfire.

As a utility, the Internet has value, just as electricity, water, or roads. The utility must be paid for either before or after use, but the utility cannot be denied to those who need the service. When a city grows and attracts more traffic, residents, and commerce, the intent is normally not to restrict or control the process; you build better roads and better infrastructure, and the people will eventually pay the price of that growth through taxes and utility bills. The 4th Utility is no different. When it gets oversubscribed, it is the carrier’s responsibility to build better infrastructure.

Disputes between network administrators, CFOs, or colocation landlords should never present a risk that SMS, Twitter, email, or other citizen journalism could be blocked, resulting in potential loss of life, property, and quality of life.

Communicating in the Dangerous Season

Fire season is upon us. So are riots, traffic congestion, government crackdowns, takedowns, and other bad things people need to know about so they can react and respond. The Internet delivers CalTrans traffic information to smart phones, SMS, and web browsers to help us avoid gridlock and improve our quality of life. Twitter and YouTube help us understand the realities of a Tehran government crackdown, and Google Maps helps guide us through the maze of city streets while traveling to a new location.

We have definitely gone well past the “gee whiz” phase of the Internet, and must be ready to deal with the future of the Internet as a basic right, a basic utility, and an essential component of our lives.

Net neutrality is an important topic – learn more about network neutrality, and weigh in on how you believe this utility should be envisioned.

In the early 1990s TWICS, a commercial bulletin board service provider in Tokyo, jumped on the Internet. Access was very poor by modern Internet standards, but at the time 128kbps over frame relay (provided by Sprint International) was unique, and in fact represented the first truly commercial Internet access point in Japan.

The good old boys of the Japanese academic community were appalled, and did everything in their power to intimidate TWICS into disconnecting their connection, to the point of sending envelopes filled with razor blades to TWICS staff and the late Roger Boisvert (*), who through Intercon International KK acted as their project manager. The traditional academic community did not believe anybody outside of the academic community should ever have the right to access the Internet, and were determined to never let that happen in Japan.

Since the beginning, the Internet has been a dichotomy of those who wish to control or profit from the Internet, and those who envision the potential and future of the Internet. Internet “peering” originally came about when academic networks needed to interconnect their own “Internets” to allow interchange of traffic and information between separately operated and managed networks. In the Internet academic “stone age” of the NSFNet, peering was a normal and required method of participating in the community. But if you planned to send any level of public or commercial traffic through the network, you would violate the NSFNet’s acceptable use policy (AUP), which prevented the use of publicly-funded networks for anything other than academic or government use.

Commercial Internet exchange points such as the CIX, and eventually the NSF-supported network access points (NAPs), popped up to accommodate the growing interest in public access and the commercial Internet. Face it, if you went through university or the military with access to the Internet or MILNET, and then jumped into the commercial world, it would be pretty difficult to give up the obvious power of interconnected networks bringing you close to nearly every point on the globe.

The Tier 1 Subsidy

To help privatize the untenable growth of the NSFNet (due to “utility” academic network access), the US Government helped pump up American telecom carriers such as Sprint, AT&T, and MCI by handing out contracts to take over control and management of the world’s largest Internet networks, including the NSFNet and the NSF’s International Connections Manager program, which brought the international community into the NSFNet backbone.

This allowed Sprint, AT&T, and MCI to gain visibility into the entire Internet community of the day, as well as take advantage of their own national fiber/transmission networks to continue building up the NSFNet community on long term contracts. With that infrastructure in place, those networks were clear leaders in the development of large commercial Internet networks. The Tier 1 Internet provider community was born.

Interconnection and Peering in the Rest of the World

In the Internet world Tier 1 networks are required (today…), as they “see” and connect with all other available routes to individual networks and content providers scattered around the world – millions and millions of them. The Tier 1 networks are also generally facility-based network providers (they own and operate metro and long distance fiber optic infrastructure) which, in addition to offering a global directory for users and content to find each other, also allow traffic to transit their networks on a global or continental scale.

Thus a web hosting company based in San Diego can eventually provide content to a user located in Jakarta, with a larger network maintaining the Internet “directory” and long distance transmission capacity to make the connection either directly or with another interconnected network located in the “distant end” country.

Of course, if you are a content provider, local Internet access provider, regional network, or global second tier network, this makes you somewhat dependent on one or more “Tier 1s” to make the connection. That, as in all supply/demand relationships, may get expensive depending on the nature of your business relationship with the “transit” network provider.

Thus, content providers and smaller networks (something less than a Tier 1 network) try to find places to interconnect that will allow them to “peer” with other networks and content providers, and wherever possible avoid the expense of relying on a larger network to make the connection. Internet “Peering.”

Peering Defined (Wikipedia)

Peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the customers of each network. The pure definition of peering is settlement-free or “sender keeps all,” meaning that neither party pays the other for the exchanged traffic; instead, each derives revenue from its own customers. Marketing and commercial pressures have led to the word peering routinely being used when there is some settlement involved, even though that is not the accurate technical use of the word. The phrase “settlement-free peering” is sometimes used to reflect this reality and unambiguously describe the pure cost-free peering situation.

That is a very “friendly” definition of peering. In reality, peering has become a very complicated process, with a constant struggle between the need to increase efficiency and performance on networks and the desire to gain business advantage over the competition.

Bill Norton, long time Internet personality and evangelist, has a web site called “DrPeering,” which is dedicated to helping Internet engineers and managers sift through the maze of relationships and complications surrounding Internet peering – not only the business of peering, but also, in many cases, the psychology of peering.

Peering Realities

In a perfect world peering allows networks to interconnect, reducing the number of transit “hops” along the route from points “A” to “B,” where either side may represent users, networks, applications, content, telephony, or anything else that can be chopped up into packets, 1s and 0s, and sent over a network, giving those end points the best possible performance.
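
In routing terms, that preference is usually nothing more exotic than giving routes learned from settlement-free peers a higher local preference than routes learned from paid transit, so the peer route wins whenever both can reach a destination. A minimal sketch of that selection logic, using hypothetical routes and documentation-range AS numbers rather than anything from a real routing table:

# Minimal sketch of peer-vs-transit route selection (hypothetical data).
# Higher local preference wins; a shorter AS path breaks ties.

def best_route(routes):
    """Return the preferred route from a list of candidate routes."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

# Two hypothetical routes to the same destination prefix: one learned from a
# paid transit provider, one learned from a settlement-free peer at an exchange.
routes = [
    {"next_hop": "paid-transit-provider", "local_pref": 100, "as_path": [64500, 64510]},
    {"next_hop": "settlement-free-peer",  "local_pref": 200, "as_path": [64510]},
]

print(best_route(routes)["next_hop"])   # settlement-free-peer

The peer route is both cheaper and shorter, which is the whole point: fewer hops, better performance, and no transit invoice for that traffic.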

DrPeering provides an “Intro to Peering 101~204,” reference materials, blogs, and even advice columns on the topic of peering. Bill helps “newbies” understand the best ways to peer, the finances and business of peering, and the difficulties they will encounter on the route to a better environment for their customers.

And once you have navigated the peering scene, you realize we are back to the world of who wants to control, and who wants to provide vision. While on one level peering is determined by which vendor provides the best booze and most exciting party at a NANOG “Beer and Gear” or after party, there is another level you have to deal with as the Tier 1s, Tier 1 “wanna-be networks,” and global content providers jockey for dominance in their defined environment.

At that point it becomes a game, where personalities often take precedence over business requirements, and the ultimate loser will be the end user.

Another reality: large networks would like to eliminate smaller networks wherever possible, as well as control content within their networks. Understandably so – it is a natural business objective to gain advantage in your market and increase profits by rubbing out your competition. In the Internet world that means a small access network, or content provider, will budget their cost of global “eyeball or content” access based on the availability of peering within their community.

The greater the peering opportunity, the greater the potential of reducing operational expenses. Less peering, more power to the larger Tier 1 or regional networks, and eventually the law of supply and demand will result in the big networks increasing their pricing, diluting the supply of peers, and increasing operational expenses. Today transit pricing for small networks and content providers is on a downswing, but only because competition is fierce in the network and peering community supported by exchanges such as PAIX, LINX, AMS-IX, Equinix, DE-CIX, and Any2.
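
The budget math behind that statement is simple enough to sketch on a napkin. Using purely hypothetical prices (no real carrier or exchange is being quoted here), the question is whether the exchange port and cross-connect cost less than the transit bill for the traffic you can shift onto peers:

# Back-of-the-envelope comparison of transit-only vs. transit-plus-peering.
# All figures are illustrative assumptions, not quotes from any real provider.

transit_price_per_mbps = 4.00    # $/Mbps/month, hypothetical commit pricing
traffic_mbps = 5000              # average traffic the network hands off
peerable_fraction = 0.6          # share of traffic reachable via peers at the exchange

exchange_port_monthly = 2500     # hypothetical 10G exchange port fee
cross_connect_monthly = 300      # hypothetical colo cross-connect fee

transit_only = transit_price_per_mbps * traffic_mbps
with_peering = (transit_price_per_mbps * traffic_mbps * (1 - peerable_fraction)
                + exchange_port_monthly + cross_connect_monthly)

print(f"Transit only: ${transit_only:,.0f}/month")   # $20,000/month
print(f"With peering: ${with_peering:,.0f}/month")   # $10,800/month

Shift the peerable fraction or the transit price and the answer changes, which is exactly why peering policy is as much a business decision as an engineering one.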

At the most basic level, eyeballs (users) need content, and content has no value without users. As the Internet becomes an essential component of the life of everybody on the planet, and in fact becomes (as the US Government has stated) a “basic right of every citizen,” the existing struggle for Internet control and dominance among individual players becomes a hindrance or roadblock in the development of network access and compute/storage capacity as a utility.

The large networks want to act as a value-added service, rather than a basic utility, forcing network-enabled content into a tiered, premium, or controlled commodity. Thus the network neutrality debates and controversy surrounding freedom of access to applications and content.

This Does Not Help the Right to Broadband and Content

There are analogies provided for just about everything. Carr builds a great analogy between cloud computing and the electrical grid in his book “The Big Switch.” The Internet itself is often referred to as the “Information Highway.” The marriage of cloud computing and broadband access can be referred to as the “4th Utility.”

Internet protocols and technologies have become, and will continue to be reinforced as, part of the future every person on our planet will engage with over the coming generations. This is the time we should be laying serious infrastructure pipe, not worrying about whose content should be preferred, settlements between networks, or who serves the best beer at a NANOG party.

At this point in the global development of Internet infrastructure, much of the debate surrounding peering – paid or unpaid – amounts to noise. It is simply retarding the development of global Internet infrastructure, and may eventually slow the velocity of innovation in all things Internet that the world craves to bring us into a new generation of many-to-many and individual communications.

The Road Ahead

All is not lost. There are visionaries such as Hunter Newby aggressively pushing development of infrastructure to “address America’s need to eliminate obstacles for broadband access, wireless backhaul and lower latency through new, next generation long haul dark fiber construction with sound principles and an open access philosophy.”

Oddly, as a lifelong “anti-establishment” evangelist, I tend to think we need better controls by government over the future of Internet and Internet vision. Not by the extreme right wing nuts who want to ensure the Internet is monitored, regulated, and restricted to those who meet their niche religions or political cults, but rather on the level of pushing an agenda to build infrastructure as a utility with sufficient capacity to meet all future needs.

The government should subsidize research and development, and push deployment of infrastructure much as it did with the Interstate Highway System and the electrical and water utilities. You will have to pay for the utility, but you will – as a user – not be held hostage to the utility, and you will have competition for utility access.

In the Internet world, we will only meet our objectives if peering is made a necessary requirement, and is a planned utility at each potential geographic or logical interconnection point. In some countries, such as Mongolia, an ISP must connect to the Mongolia Internet Exchange as a requirement of receiving an ISP license. Why? Mongolia needs both high performance access to the global Internet and high performance access to national resources. It makes a lot of sense. Why give an American, Chinese, or Singaporean carrier money to send an email from one Mongolian user to another Mongolian user (while in the same country)? Peering is an essential component of a healthy Internet.

The same applies to Los Angeles, Chicago, Omaha, or any other location where there is proximity between the content and user, or user and user. Peering as close to the end users as technically possible provides all the performance and economic benefits needed to support a schoolhouse in Baudette (Minnesota), without placing an undue financial burden on the local access provider based on predatory network or peering policies mandated by regional or Tier 1 networks.

We’ve come a long way, but are still taking baby steps in the evolution of the Internet. Let’s move ahead with a passion and vision.

(*)  Roger Boisvert was a friend for many years, both during my tenure as a US Air Force officer and telecom manager with Sprint based in Tokyo (I met him while he was still with McKinsey and a leader in the Tokyo PC User’s Group), and afterwards through different companies, groups, functions, and conferences in Japan and the US.  Roger was murdered in Los Angeles nine years ago; his death was a true loss to the Internet community, not only in Japan but throughout the world.


A new telecom paradigm is on the verge of becoming reality. Not a disruptive technology, not the right brain flash of a new radical idea – rather it is a logical development of existing infrastructure using better operational execution. It is an acknowledgement of fiber optic infrastructure as an inherent requirement in the development of the 4th utility – broadband Internet, compute capacity, and storage as a basic right for all Americans.

The “utility” label has merit. Just as we need roads, water, and electricity to function in the modern world, we need communications. Much like the roads, electrical distribution, and water distribution systems crossing North America, the communications infrastructure follows a similar matrix of hubs, spokes, loops, and major exchange points interconnecting every square mile of the continent. The matrix includes a well-interconnected mixture of fiber optic cable, wireless, cable TV, copper telephone lines, and even satellite connections.

However, the arteries of this telecom circulatory system remain fiber optic cable. Fiber optic cable allows tremendous densities of communication, information, and data to travel across the street, or across the continent. Fiber runs north and south, east and west, connecting wireless towers, satellite earth stations, collocation and hosting centers, communication carriers, Internet service providers, and end users to each other on a global scale.

Geography of the 4th Utility

Let’s take a deeper look at this circulatory system in geographic terms. On a US map, latitude lines run horizontally, parallel to each other, based on degrees north or south of the equator. The northern 40th parallel runs from Northern California to New Jersey, hitting parts of 12 states along its path. If we look at the US Interstate Highway System, we see that some of the longer “arteries” stretch from the West Coast to the East Coast, such as Interstate 10, running 2,460 miles through 8 states from California to Florida and 35 major cities.

In addition, I-10 intersects with 45 other interstate highway junctions, and has several thousand entry and exit points serving both major cities and rural locations along the route. If you dig into the electrical grid you will find a similar mesh of interconnections, nodes, and relationships originating at power plants, and ending at the utility outlet in a bedroom or office.

The fiber optic system follows a similar model. The east-west and north-south routes follow the interstate highway system, rail system, and electrical grid – taking advantage of rights-of-way and interconnect nodes all along the route. The routes are generally shared by several different fiber optic providers and carriers, who further extend their reach by collocating fiber at major carrier hotels along the coasts, such as 60 Hudson in New York, the Westin Building in Seattle, the NAP (network access point) of the Americas in Miami, and One Wilshire in Los Angeles, where they splice their fiber into major intercontinental submarine fiber optic systems.

Within North America, further domestic interconnections are provided at major junction points throughout the country, reinforcing the mesh of fiber networks in cities such as Salt Lake City, Atlanta, Chicago, Las Vegas, Washington DC, Dallas, Omaha, and Minneapolis.

The Local Value of a Global Fiber Optic Circulatory System

All this fiber is of little value if its utility does not reach every potential end user in America, or around the world. Much like the interstate highway system sporting several thousand access points and exits, the new fiber optic backbone will support fiber optic connections to every end user in the country, or push wireless broadband to every other addressable mobile and rural user. In the new world, the utility does not end at a wall outlet, but ends wherever the user is located. And that mobility is a local challenge.

Hunter Newby, CEO of Allied Fiber, an emerging fiber utility provider in the United States, advises that “It’s all about fiber…to the tower. For that component the long haul (fiber routes) is just how we get out there and back.” So while we can analogize fiber routes and interconnection points to a road system running from the driveway of a house to the East Los Angeles Interchange on I-10, the wireless towers give the telecom grid an end point that is undefined and unique.

The main difference separating the road system and electrical grid from the fiber grid is that in the telecom industry each route has many competing commercial providers. By definition, competition is not neutral. And if it is not neutral, it is not a utility, and cannot be expected to provide service in a location (or market) that offers no financial advantage to the service provider – resulting in locations potentially stranded from the infrastructure.

Is this Really Different than the Existing Telecom Infrastructure?

Newby continues “The truth is that it’s the fiber that binds. Our route and its design is unique to today’s needs, unlike the design and needs of the cables from 10+ years ago. There are no neutral colos on those cables every 60 miles. There are also no FTTT (fiber to the tower) ducts (supporting) a separate fiber cable with handholes every 3000 ft on those systems.”

Following telecom deregulation in the United States, companies such as AT&T are no longer monopolies, and infrastructure development is based on economic factors. If Carp, Minnesota (population ~100) does not offer sufficient economic incentive for AT&T to build broadband infrastructure, then it is unlikely to happen – unless broadband is available through wireless networks connecting to a broadband fiber backbone, and the rest of the world.

With companies such as Allied Fiber entering the market, access to the east-west and north-south routes will include a truly neutral alternative to the private road system of the existing telecom carriers. The long haul fiber routes will connect to regional neutral fiber routes, such as those provided by FiberLight in the eastern United States, and even more importantly will provide both access to towers and interconnections at least every 60 miles (or more often) along the route.

That is because the long haul utility cable system will need to regenerate its signals at roughly 60-mile intervals, offering towers and regional fiber providers additional local access points to supplement the carrier hotels and collocation facilities located at major junction or interconnection points. And financial incentives are available to companies through programs such as the Rural Development Telecommunications Program (RDTA) supporting the US government’s 4th utility Broadband Initiatives Program (BIP).
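
The 60-mile figure is not arbitrary; it falls out of basic optical loss arithmetic. Standard single-mode fiber loses roughly 0.2 dB per kilometer at 1550 nm, and once splices and connectors are added, a 60-mile span consumes most of the loss budget a typical amplifier or regeneration site is designed around. A rough sketch using typical planning figures (these are generic assumptions, not Allied Fiber’s actual design values):

# Rough optical loss budget for a ~60-mile long haul fiber span.
# Typical planning figures only; not the design values of any specific network.

span_miles = 60
span_km = span_miles * 1.609

fiber_loss_db_per_km = 0.22             # standard single-mode fiber near 1550 nm, with margin
splice_loss_db = 0.05 * (span_km / 5)   # assume a fusion splice roughly every 5 km of cable
connector_loss_db = 1.0                 # patch panels at each end of the span

total_loss_db = span_km * fiber_loss_db_per_km + splice_loss_db + connector_loss_db
print(f"Span loss: about {total_loss_db:.1f} dB over {span_km:.0f} km")
# Roughly 23 dB, close to the working budget of a typical line amplifier site,
# which is why huts (and the colocation opportunities they create) appear at
# roughly this interval along a long haul route.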

Hunter Newby brings evangelism to his vision.   

“Add to that the neutral colos allow the rural wireline and wireless carriers to colocate locally – in their county, or closeby by using the short haul duct to get to the closest AF colo – and in those locations they can buy high capacity transport and transit at wholesale rates from the large US and international carriers coming through. Right there! Wholesale! The rural carriers don’t even have to lease dark from us to get to the big cities/carrier hotels if they don’t want to or can’t afford to yet.

The ability to gain access to the power of the major US carrier hotels, but not have to actually get to them is the next frontier in the US.”

The 4th Utility is an American Entitlement

Newby concludes “The fiber laterals will all be built to us (the long haul neutral fiber providers). The tower companies won’t build them, but there are several transport providers that will. The mobile operators want their Ethernet over fiber.” Fiber that connects them to the content and people available on a global network-connected community. Broadband access that allows Americans to function in a global community.

Those wireless companies, whether mobile operators offering LTE/4G services, or WiFi providers offering a local competitive service, will pay the same tariff to connect to the neutral towers and fiber systems without prejudice. Just like an electrical utility doesn’t care if the outlet is supporting a private individual’s television set, a small storefront business’s display case, or an aircraft assembly plant, the only discriminating issue is in volume and required capacity.

A utility. Broadband access is now an expected utility – not a value-added service available to some, but an entitlement of living in America, available to all.

One of the greatest moments a cloud evangelist indulges in occurs when a listener experiences an intuitive leap of understanding following your explanation of cloud computing. There is no greater joy or intrinsic sense of accomplishment.

Government IT managers, particularly those in developing countries, view information and communications technology (ICT) as almost a “black” art, unlike the US, Europe, Korea, Japan, and other countries where the Internet and network-enabled everything have diffused into the core of Generation Y-ers, Millennials, and Gen Z-ers. The black art gives IT managers in some legacy organizations the power they need to control the efforts of people and groups needing support, as their limited understanding of ICT still sets them slightly above the abilities of their peers.

But when the “users” suddenly have that right brain flash of comprehension in a complex topic such as cloud computing, traditional IT control suddenly becomes a barrier which must be explained and justified. Suddenly everybody from the CFO down to supervisors can become “virtual” data center operators – at the touch of a keyboard. Suddenly cloud computing and ICT become a standard tool for work – a utility.

The Changing Role of IT Managers

IT managers normally make marginal business planners. While none of us like to admit it, we usually start an IT refresh project with thoughts like, “what kind of computers should we request budget to buy?” or “that new ‘FuzzPort 2000’ is a fantastic switch, we need to buy some of those…” And then we spend the next fiscal year making excuses why the IT division cannot meet the needs and requests of users.

Times are changing. The IT manager can no longer think about control, but rather must think about capacity and standards – setting parameters and process, not limitations.

IT managers should think about topics such as cloud computing, and how they can build an infrastructure which meets the creativity, processing, management, scaling, and disaster recovery needs of the organization. Think of gaining greater business efficiencies and agility through data center consolidation, education, and breaking down ICT barriers.

The IT manager of the future is not only concerned with the basic ICT food groups of concrete, power, air conditioning, and communications, but also with capacity planning and thought leadership.

The Changing Role of Users

There is an old story of the astronomer and the programmer. Both are pursuing graduate degrees at a prestigious university, but from different tracks. By the end of their studies (this is a very old story), the computer science major focusing on software development found his FORTRAN skills were actually below the FORTRAN skills of the astronomer.

“How can this be?” cried the programmer. “I have been studying software development for years, and you have been studying the stars!”

The astronomer replied, “You have been studying FORTRAN as a major for the past three years. I have had to learn FORTRAN and apply it to my major, studying the solar system, and needed to learn to code better than you just to do my job.”

There will be a point when the Millennials, with their deep-rooted appreciation for all things network and computer, will be able to take our Infrastructure as a Service (IaaS) and use it as their tool for developing great applications, driving their business into a globally wired economy and community. Loading a Linux image and a suite of standard applications will give the average person no more intellectual stress than a “Boomer” sending a fax.
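
To see how little stress that really is, consider that provisioning a server on an IaaS platform is already just a short script against the provider’s API. The sketch below assumes a hypothetical endpoint, token, and parameter names – every real provider has its own equivalents – but the shape of the request is representative:

# Minimal sketch of IaaS provisioning: ask a provider's API for a Linux server.
# The endpoint, token, and parameters below are hypothetical placeholders.

import requests

API = "https://api.example-cloud.com/v1"    # hypothetical IaaS endpoint
TOKEN = "replace-with-your-api-token"

def launch_linux_server(name: str, image: str = "ubuntu-lts", size: str = "small"):
    """Request a new virtual server running a stock Linux image."""
    response = requests.post(
        f"{API}/servers",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "image": image, "size": size},
    )
    response.raise_for_status()
    return response.json()    # typically an ID and an IP address to connect to

if __name__ == "__main__":
    print(launch_linux_server("dept-file-server"))

A few lines of script, a credit card, and a browser replace what used to be a purchase order, a rack, and a weekend of installation work.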

Revisiting the “4th” Utility

Yes, it is possible IT managers may be the road construction and maintenance crews of the Internet age, but that is not a bad thing. We have given the Gen Y-ers the tools they need to be great, and we should be proud of our accomplishments. Now is the time to build better tools to make them even more capable. Tools like the 4th utility which marries broadband communications with on-demand compute and storage utility.

The cloud computing epiphany awakens both IT managers and users. It stimulates an intellectual and organizational freedom that lets creative and productive people explore more possibilities, with more resources, with little risk of failure (keep in mind that with cloud computing you are potentially just renting your space).

If we look at other utilities as tools – such as a road, water, or electricity – there are far more possibilities for using those utilities than the original intent. While a road may be considered a place to drive a car from point “A” to point “B,” it can also be used for motorcycles, trucks, bicycles, walking, a temporary hard stand, a temporary runway for airplanes, a stick ball field, or a street hockey rink – at the end of the day it is a slab of concrete or asphalt that serves an open-ended scope of use, with only structural limitations.

Cloud computing and the 4th utility are the same. Once we have reached that cloud computing epiphany, our next generations of tremendously smart people will find those creative uses for the utility, and we will continue to develop and grow closer as a global community.


A lot has been said over the past couple of months about broadband as the fourth utility – the same status as roads, water, and electricity. For Americans, the next generation will have broadband network access as an entitlement. But is it enough?

Carr, in “The Big Switch,” discusses cloud computing as analogous to the power grid. The only difference is that for cloud computing to be really useful, it has to be connected – connected to networks, homes, businesses, SaaS, and people. So the next logical extension of the fourth utility, beyond simply referring to broadband network access as a basic right for Americans (and others around the world – it just happens that, as an American, I refer to my own country’s situation for the purposes of this article), should include additional resources beyond simply delivering bits.

The “New” 4th Utility

So the next logical step is to marry cloud computing resources, including processing capacity, storage, and software as a service, to the broadband infrastructure. SaaS doesn’t mean you are owned by Google; it simply means you have access to those applications and resources needed to fulfill your personal or community objectives, such as delivering centralized e-Learning resources to the classroom, the home, or your favorite coffee shop. The network should simply be there, as should the applications needed to run your life in a wired world.

The data center and network industry will need to develop a joint vision that allows this environment to develop. Data centers house compute utility, networks deliver the bits to and from the compute utility and users. The data center should also be the interconnection point between networks, which at some point in the future, if following the idea of contributing to the 4th utility, will finally focus their construction and investments in delivering big pipes to users and applications.

Relieving the User from the Burden of Big Processing Power

As we continue to look at new home and laptop computers with quad-core processors, more than 8 gigs of memory, and terabyte hard drives, it is hard to believe we actually need that much compute power resting on our knees to accomplish the day-to-day activities we perform online. Do we need a quad core computer to check Gmail or our presentation on Microsoft Live Office?

In reality, very few users have applications that require the amounts of processing and storage we find in our personal computers. Yes, there are some applications such as gaming and very high end rendering which burn processing calories, but for most of the world all we really need is a keyboard and screen. This is what the 4th utility may bring us in the future. All we’ll really need is an interface device connecting to the network, and the processing “magic” will take place in a cloud computing center with processing done on a SaaS application.

The interface device is a desktop terminal, smart phone (such as an Android, iPhone, or other connected PDA device), laptop, or anything else that can display and input data.

We won’t really care where the actual storage or processing of our application occurs, as long as the application’s latency is near zero.

The “Network is the Computer” Edges Closer to Reality

Since John Gage coined those famous words while working at Sun Microsystems, we’ve been edging closer to that reality. Through the early days of grid computing, software as a service, and virtualization – added to the rapid development of the Internet over the past 20 years – technology has finally moved compute resources into the network.

If we are honest with ourselves, we will admit that for 95% of computer users, a server-based application meets nearly all our daily office automation, social media, and entertainment needs. Twitter is not a computer-based application; it is a network-enabled, server-based application. Ditto for Facebook, MySpace, LinkedIn, and most other services.

Now the “Network is the Computer” has finally matured into a utility, and at least in the United States, will soon be an entitlement for every resident. It is also another step in the globalization of our communities, as in time no person, country, or point on the earth will be beyond the reach of our terminal or input device.

That is good.

Broadband communications access is rapidly gaining traction as a “4th Utility” in countries around the world. Recently, at Digital Africa 2010 in Kampala, several ministry-level delegates referenced their national initiatives building the “4th Utility” as among their highest priorities. On March 16th, FCC Chairman Genachowski stated “…broadband is essential for opportunity in America – for all Americans, from all communities and backgrounds, living in rural towns, inner cities, or in between.”

This means that broadband communications should be considered a basic right for all Americans, and persons from all countries, at the same level as other utilities including:

  1. Heating
  2. Water
  3. Electricity

None of the above utilities are free, all require major infrastructure development, and all are basic requirements for survival in the 21st century.

Genachowski went on to set some ambitious goals for the United States in the “National Broadband Plan,” including:

  • 1 gigabit to every community
  • affordable 100 megabits to 100 million households
  • raising adoption (of broadband access) from 65% to 90%, heading to 100%

Not a Bad Start

FCC Commissioner Mignon Clyburn stated in a March 10th release that 93 million Americans still do not access broadband communications at home. 36% of those not using broadband cite the high cost of access, or unattractive terms of broadband service, as their major reason for not gaining access.

While it would be easy for us to say Internet and broadband providers should be regulated on pricing and terms of service, we should also, if we want to consider broadband a 4th utility, compare the terms of access with other utilities provided to citizens of the United States. The cost of broadband will no doubt vary based on:

  • Location – rural vs. urban
  • Number of providers in a community or market – including wireless
  • Distance from Internet interconnection and exchange points
  • Subscriber density in a specific geography (sparsely populated areas will have a higher cost of service)

The National Broadband Plan adds additional goals and action items that further reinforce the idea of broadband as a 4th utility, including:

  • Goal No. 1: At least 100 million U.S. homes should have affordable access to actual download speeds of at least 100 megabits per second and actual upload speeds of at least 50 megabits per second
  • Goal No. 2: The United States should lead the world in mobile innovation, with the fastest and most extensive wireless networks of any nation
  • Goal No. 3: Every American should have affordable access to robust broadband service, and the means and skills to subscribe if they so choose
  • Goal No. 4: Every community should have affordable access to at least 1 gigabit per second broadband service to anchor institutions such as schools, hospitals, and government buildings
  • Goal No. 5: To ensure the safety of American communities, every first responder should have access to a nationwide, wireless, interoperable broadband public safety network
  • Goal No. 6: To ensure that America leads in the clean energy economy, every American should be able to use broadband to track and manage their real-time energy consumption.

This is a pretty comprehensive framework, adding additional forward thinking such as using broadband to support the “intelligent grid” and wireless communications. And there is still a lot of work to accomplish. The broadband.gov website now includes several utilities that both give consumers an idea of their current broadband performance and show a very good map of the best – and worst – places in the United States for accessing Internet services.
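
The principle behind a consumer broadband test is simple: download a known payload, time it, and divide. The sketch below shows the idea with a placeholder test URL; real tests such as the FCC’s use well-connected, geographically distributed servers and multiple parallel streams to produce a fairer number:

# Bare-bones download speed estimate: fetch a test file and time the transfer.
# The URL is a hypothetical placeholder, not a real test server.

import time
import urllib.request

TEST_URL = "http://speedtest.example.com/10MB.bin"   # hypothetical test file

def measure_download_mbps(url: str) -> float:
    """Return an approximate download throughput in megabits per second."""
    start = time.time()
    data = urllib.request.urlopen(url).read()
    elapsed = time.time() - start
    return (len(data) * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"Approximate download speed: {measure_download_mbps(TEST_URL):.1f} Mbps")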

The best states, which give an average data download speed of greater than 10Mbps, include:

  • Massachusetts
  • Delaware
  • New Jersey
  • Maryland
  • Virginia

And the worst, averaging less than 2Mbps downloads, include:

  • Alaska
  • Idaho
  • Montana
  • Wyoming
  • New Mexico

Even the best locations in the United States offer a fraction of the average Internet and broadband access speeds enjoyed in countries like South Korea, where average home access throughout the country nears 50Mbps today, with plans to increase that to 1Gbps by 2012 (Brookings Institution).

The Overall Framework

The National Broadband Plan correctly looks at more than just home access to the Internet. As a utility, the broadband plan must cover all aspects of society and life that require communications, and includes reference to broadband categories such as:

  • Broadband and US economic opportunity (global economy)
  • Education
  • Health Care
  • Energy
  • Environment
  • eGovernment
  • Civic Engagement
  • Public Safety
  • Entertainment

Next Steps in Broadband

PowerPoint slides and MS Word documents are fine; however, we need to focus on tangible results that are measured by meeting our goals. Those goals start with digging holes in the ground, constructing towers, and pulling cable into houses and offices. Everything else is cute, but noise.

“This plan is in beta, and always will be. Like the Internet itself, this plan will always be changing – adjusting to new developments in technologies and markets, reflecting new realities and evolving to realize previously unforeseen opportunities.” (From the National Broadband Plan)

The National Broadband Plan was delivered to the American people on 17 March, 2010. The goals (as above) are mandated to be in place by 2020. It is an aggressive plan, however Chairman Genachowski appears to have the sense of urgency needed to get it done – unless of course American politics create barriers preventing success.

Americans, and people of all nations, should take a close look at the US National Broadband Plan, and those of other nations. If the US and other nations around the world truly consider broadband access a 4th utility, those who do not have that utility will not be functional in the mid-21st century.

The US plan and strategy is available to all at broadband.gov.

Dr. Gilbert Balibaseka Bukenya, Vice President of Uganda, told a story during the opening session of Digital Africa 2010. While traveling within the country, he paid special attention to small schools. While lacking nearly every normal school resource, each school had one common denominator – they all had blackboards and chalk.

The question started nagging him. As the VP, he was in pretty good touch with imports, exports, and manufacturing within Uganda. But chalk, as a ubiquitous tool, was almost completely imported from China. Something as simple as chalk, a tool used by nearly everybody in the country, was not being produced in the domestic business sector.

Dr. Bukenya changed that. The chalk problem was quickly rectified, and a new program of “can we make it in Uganda” started. The basic idea is that if a product is capable of being made in-country, then Uganda should not pay another country for the product.

Reward local innovation, but don’t forget we are part of a global community

It is very easy to slap a flag on a cardboard box identifying the origin of contents with a “Made with Pride in ____.” And it is a good idea. If the materials and labor force are available, those things should not be imported, and the product may actually be robust enough for export. In the US we are nearly militant in our enthusiasm for “Made in America” campaigns, almost to the point of accusing those who buy foreign materials of a shortfall in patriotism.

But let’s keep in mind we are part of a global economy. Innovation and entrepreneurship occurs in every nation of the world, and although it is difficult to admit, some ideas are better than ours. And at some point we like variety. And we can call this world trade.

Be a Hunter, not a Gatherer

Dr. Bukenya further challenged the delegates to move our mindset (as a society) away from accepting handouts from others, buying everything we use from others, and being dependent on donors for our livelihoods. Take control of our own destiny, and start producing. Nurture entrepreneurs, nurture innovation.

This includes innovation in the ICT sector. Dr. Aggrey Awori, Uganda’s Minister of ICT, stated “broadband (communications) and ICT are now the greatest enablers of modern society.” He went on to make an even stronger statement: “access to ICT is a basic human entitlement.”

Evidence indicates this is not idle rhetoric, but actual policy. The OpenNet Initiative (ONI) does not find any evidence of government filtering or censoring within the country. The major obstacle in Uganda’s efforts to bring the Internet to the people is a lack of basic infrastructure, including both telecom and electricity.

The eLearning Component

Ugandans enjoy government mandated education up to secondary school. However, while the basic literacy rate is high (66.8%), there is little widespread access to advanced education tools such as the Internet. Thus students complete their education at a great disadvantage compared to students in other countries with much greater access to network applications and technology.

Chalk is easy, producing software or manufacturing consumer and industrial goods for export is not. While Dr. Bukenya’s “can we make it in Uganda” idea is worthy, to make it work will require considerably more attention to building basic infrastructure needed to prepare workers for the global marketplace.

As we’ve discussed in previous articles, ICT is the 4th utility. Roads, power, and water are now joined by information and communications technology. Without ICT infrastructure as a basic requirement, a country cannot compete in the global marketplace, and will be restricted to depending on global donors for its existence – not to mention the vulnerability such a country has to political upheaval and violence.

Uganda gets it, and the delegates of Digital Africa 2010 get it. Now it is our job to make sure the rest of the world gets it.

Previous article in this series:

Digital Africa 2010 and Cloud Computing in Developing Countries