Net Neutrality – Sophistry Spawned From Greed…5June10
The FCC has historically classified the Internet as an “information service”, resulting in “light regulation” of the global data flow phenomenon.
I would call your attention to the fact, however, that what we call the “Internet” grew out of a “wholesale” consumer revolt on college campuses in the 80s against expensive monopoly telecom services. The Defense Advanced Research Projects Agency (DARPA) leased private circuits between university campus LANs so its research fellows could collaborate more easily with one another. Resourceful researchers exploited the ‘free bandwidth’ by taking advantage of everything in the public domain – Transmission Control Protocol / Internet Protocol, Post Office Protocol, and later Hypertext Markup Language – to manage message, file, and data exchange for little or no cost. Not one to waste a good idea, the National Science Foundation expanded this government-subsidized ‘common carrier connectivity’ to its own research fellows.
The student populations of all these government-connected university campuses quickly jumped on the bandwagon – e-mail, file exchange, and ultimately the World Wide Web ensued.
One part of the U.S. government had inadvertently broken another part of the U.S. government’s regulated monopoly (and mishandling) of telecom connectivity with a virtually “free-for-all” Internet. The Left Hand didn’t know what the Right Hand was doing – as usual, Left couldn’t imagine there might be a Right in its world.
It’s easy for Americans to appreciate consumer revolts. Believe me, if you were used to paying New York Telephone $.50 for a voice call just to connect your 9600-baud modem to someone else’s ‘monosyllabic’ modem so you could send a 3-megabyte file across town ‘quickly’, you would appreciate the Internet. And if you made a $2.00-per-minute long-distance voice call every time you sent a file from New York to LA – at 1200 bytes per second – the cost could more than rival that of a FedEx package.
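The arithmetic behind that comparison is easy to check. A back-of-envelope sketch, using only the figures quoted above:

```python
# Back-of-envelope cost of a long-distance modem file transfer,
# using the figures from the text: a 3 MB file at 1200 bytes/sec
# over a $2.00-per-minute NY-to-LA voice call.

FILE_BYTES = 3 * 1_000_000        # 3-megabyte file
MODEM_RATE = 1200                 # bytes per second
COST_PER_MINUTE = 2.00            # long-distance voice tariff, $/min

seconds = FILE_BYTES / MODEM_RATE         # 2500 s
minutes = seconds / 60                    # ~41.7 min
call_cost = minutes * COST_PER_MINUTE     # ~$83

print(f"Transfer time: {minutes:.1f} minutes, call cost: ${call_cost:.2f}")
```

Over $80 to move one file across the country – the FedEx comparison is not hyperbole.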
The “Internet” was formed to fling packets of data asynchronously across the ‘ether’, using a ‘Dewey Decimal’ library-system type of address to tell each router encountered along the way which router node each packet must be sent to next, until every packet reaches the unique site its IP address specifies. ‘Keeping It Simple, Stupid’ could not be better…for data.
The “Internet” has also spawned additional consumer revolts –
(a) Peer-to-peer applications ushered in the retail destruction of the “$29.95 one-song-album” music business, as millions of copyrighted songs change hands monthly without generating revenue for the copyright holders.
(b) Solitary, pajama-clad bloggers toil in their bathrooms and basements (while reading and thinking independently), tearing down Big Media shibboleths (“…common sayings or beliefs with little current meaning or truth…”) — Dan Rather’s objective reporting, the IPCC’s settled science, and the nationally renowned editorial respect for the NY Times’ point of view.
The “Internet’s” creative destruction has re-invented retail music as a song-for-a-buck, brought back reading big-time, and spawned Google, the ultimate “Internet Success Story”. Ironically, it is Google leading the charge to force net neutrality rules and regulations on telecommunications carriers to prevent them from managing their Internet capacity. Their position: all Internet content should have equal sway on the “Internet.”
Those of us interested in the net neutrality controversy will also have to account for the underlying facts. Faster-than-real-time file segment transfers and P2P applications (such as BitTorrent) are very “greedy” for network resources. They use various techniques, such as opening very large numbers of concurrent sessions and masquerading as other applications, which result in their taking far more than their fair share of bandwidth in the network.
They open multiple ‘flows’. A flow is a single meaningful end-to-end activity over the network, and is defined by the IPv4 header 5-tuple of source and destination address, source and destination port, and protocol. Examples of flows: a video download, a voice call, or an image transfer. Unfortunately, IP networks, using TCP, provide equal capacity per flow, not per subscriber.
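A flow, as defined above, can be sketched as a simple grouping key for packets. A minimal illustration, assuming nothing about any particular router implementation (the field and function names here are mine):

```python
from collections import namedtuple, defaultdict

# The IPv4 5-tuple that defines a flow, per the definition above.
Flow = namedtuple("Flow", ["src_addr", "dst_addr", "src_port", "dst_port", "protocol"])

def group_packets_by_flow(packets):
    """Group packets (dicts carrying 5-tuple fields plus a byte count) into flows."""
    flows = defaultdict(int)
    for p in packets:
        key = Flow(p["src_addr"], p["dst_addr"],
                   p["src_port"], p["dst_port"], p["protocol"])
        flows[key] += p["bytes"]
    return flows

packets = [
    {"src_addr": "10.0.0.1", "dst_addr": "93.184.216.34",
     "src_port": 51000, "dst_port": 80, "protocol": "TCP", "bytes": 1500},
    {"src_addr": "10.0.0.1", "dst_addr": "93.184.216.34",
     "src_port": 51000, "dst_port": 80, "protocol": "TCP", "bytes": 1500},
    {"src_addr": "10.0.0.2", "dst_addr": "8.8.8.8",
     "src_port": 40000, "dst_port": 53, "protocol": "UDP", "bytes": 64},
]

print(len(group_packets_by_flow(packets)))  # two distinct flows
```

Two packets sharing the same 5-tuple belong to one flow; change any one field and the network sees a brand-new flow – which is exactly what multi-flow applications exploit.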
In this fashion, BitTorrent, a P2P multi-flow application, may open 100 flows and thus consume 100 times the capacity available to the normal user running one flow. Thus, if even 1% of the users are using such P2P or “faster-than-real-time” applications that open many flows, they will consume 60% to 90% of the shared capacity of the Digital Subscriber Line Access Multiplexer (DSLAM) …or cable channel. This reduces the capacity available to the other 99% of normal users to a small fraction of what they paid for. That presents a problem to everybody.
Service providers must adjust their networks’ sharing process such that when congestion occurs, capacity is allocated so that it is “equal capacity for equal pay”, not “equal capacity per flow” which is today’s standard for network capacity allocation.
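The difference between the two sharing rules can be seen in a few lines. A toy model, not any vendor’s scheduler: 99 single-flow users plus one 100-flow P2P user sharing a 100 Mbps link (the link size is an illustrative assumption of mine):

```python
def per_flow_shares(user_flows, capacity):
    """Today's TCP-style rule: capacity divided equally per flow."""
    total_flows = sum(user_flows.values())
    return {u: capacity * n / total_flows for u, n in user_flows.items()}

def per_subscriber_shares(user_flows, capacity):
    """'Equal capacity for equal pay': capacity divided equally per subscriber."""
    return {u: capacity / len(user_flows) for u in user_flows}

# 99 normal users with 1 flow each, one P2P user with 100 flows,
# all sharing a 100 Mbps DSLAM port (illustrative numbers).
users = {f"user{i}": 1 for i in range(99)}
users["p2p_user"] = 100

flow_based = per_flow_shares(users, 100.0)
fair_based = per_subscriber_shares(users, 100.0)

print(f"P2P user, per-flow rule:       {flow_based['p2p_user']:.1f} Mbps")
print(f"P2P user, per-subscriber rule: {fair_based['p2p_user']:.1f} Mbps")
```

Under the per-flow rule, the single P2P user captures roughly half the DSLAM (about 50.3 Mbps, leaving each of the other 99 users about 0.5 Mbps); under the per-subscriber rule, everyone gets 1 Mbps.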
P2P happens to be the main multi-flow application today. However, cloud computing appears to be the next big multi-flow application, and there will be others. For the Internet to keep working smoothly, it must be fair, even under overload. Equal capacity for equal pay accomplishes this where today there is serious unfairness.
Whether the content provider uses video-streaming or faster-than-real-time file segment transfers, congestion is inevitable within any IP network. Most networks suffer performance problems during congestion, resulting in fatal delays, lost connections, and disappointed users.
The result is that many networks are configured with enormous amounts of excess capacity in terms of both transmission facilities and router ports. This excess build-out is essentially a crude attempt to allay the threat of congestion by building more than enough “buffer space” to handle the inevitable activity peaks and usage spikes that can render even the most aggressively resourced networks inadequate. Over-provisioning is used in Internet Service Provider and large private network cores – it takes more than 2:1, and often 3:1, overcapacity to avoid overload. Over-provisioning is expensive and typically not feasible at the ISP or corporate edge of the Internet.
This “pay and pray” approach of purchasing more and more transmission is doomed to failure over the long run, since the overall volume of network traffic is now accelerating at a rate that far outpaces Moore’s Law – network traffic is doubling every 12 months while overall packet processing power doubles roughly every 18 months. The numbers don’t line up.
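The mismatch compounds quickly. A simple projection of the two doubling rates quoted above (a sketch, not a forecast):

```python
# Traffic doubles every 12 months; packet-processing power doubles
# every 18 months (the rates quoted in the text).
def gap(years):
    """Ratio of traffic growth to processing-power growth after `years` years."""
    traffic = 2 ** (years / 1.0)       # doubling period: 12 months
    processing = 2 ** (years / 1.5)    # doubling period: 18 months
    return traffic / processing

for y in (1, 3, 5):
    print(f"After {y} year(s), traffic outgrows processing by {gap(y):.2f}x")
```

After three years the shortfall is already a factor of two, and it keeps widening – the numbers really don’t line up.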
Internet traffic is not only rapidly expanding, but also becoming more varied and complex. Voice and video – millions regularly use Skype to place calls and go to YouTube (510 Kbps) to share videos (which, by the way, are being upgraded to High Definition — 9 to 12 Mbps at 1080p resolution). Services like Hulu and Netflix, devices such as iPhones, Androids, and BlackBerrys, gaming consoles like Xbox and PS3 — all are shifting communications and entertainment to the Internet.
Prices charged for connectivity and content are only going down, as competition in every area increases.
One of my favorite telecom research experts, Dr. Andrew Odlyzko, now resident at the University of Minnesota, and late of AT&T Research, raises some challenges re: net neutrality:
“…Even if we allow video the dominant role in shaping the future of the Internet, we have to cope with the delusion that movies should be streamed. This is an extremely widely shared assumption, even among networking researchers. However, there is an argument that except for a very small fraction of traffic (primarily phone calls and videoconferencing), multimedia should be delivered as faster-than-real-time progressive downloads (transfer of segments of files, each segment sent faster-than-real-time, with potential pauses between segments). That is what is used by many P2P services, as well as YouTube. This approach leads to far simpler and less expensive networks than real-time streaming…” But Dr. Odlyzko is begging the question here.
There are two finite asset inventories that are consumed by content transmitted over any network – even the “Internet”.
The first inventory is bandwidth. If you have 100 seats in an airplane, each of them costs the same to fly from Los Angeles to New York. If you only sell 10 of those seats, the price has to cover the cost of getting all 100 seats from Los Angeles to New York. If my Chess Club needs 50 seats on the plane to get to a match in New York, then we will have to pay more than a couple who want to occupy only 2 seats on the same plane for our LA–NY trip. Bandwidth costs rise with the amount of bandwidth used, in a similar fashion. “Unlimited” pricing for broadband usage by Internet Service Providers ignores this cost reality. This contradiction, however, is about to be confronted by reality, as we will see.
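The seat analogy reduces to simple cost allocation. A toy calculation – the flight cost here is an illustrative figure of my own, not from the text:

```python
# If a 100-seat flight costs $50,000 to operate (illustrative figure),
# every seat-slot costs $500 whether or not it is sold. Bigger groups
# therefore pay proportionally more, and unsold capacity still costs money.
FLIGHT_COST = 50_000
SEATS = 100

cost_per_seat = FLIGHT_COST / SEATS       # $500 per seat
chess_club_fare = 50 * cost_per_seat      # 50 seats -> $25,000
couple_fare = 2 * cost_per_seat           # 2 seats  -> $1,000

print(cost_per_seat, chess_club_fare, couple_fare)
```

Replace seats with megabits and the same logic explains why “unlimited” pricing cannot survive heavy per-user consumption.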
The vast majority of new Internet traffic is coming from mobile broadband. The “Internet” must now include a wireless access increment to its capacity / reliability sphere of reference as well. If the IETF persists in its insistence on keeping the “intelligence” at the edge, communicating across “dumb” pipes, then this wireless “edge intelligence” has to be taken into account.
Considering that the sheer volume of Apple iPhones on the U.S. AT&T and UK O2 networks was the root cause of wireless capacity problems, some are wondering what will happen when Apple’s operating-system update adds multitasking to the mix. Multitasking means users will be able to run more than one application on the phone at a time, something Apple fans have long clamored for. Apple said in April ’10 that multitasking in iPhone OS 4.0 would be available for the 3G smartphone in the Summer of ’10 and for the iPad tablet in the Fall of ’10.
Fabricio Martinez of Aircom Intl. Ltd., has said, “It’s just scary to think of multitasking on the iPhone. People are going to be running two or three applications in the background all the time on networks that just aren’t designed to do that.”
Congestion and interference in densely populated cities are caused by a large number of smartphones “shouting” at the radio access networks (RAN). Having smartphone applications constantly pinging the radio network controller (RNC) simply leads to a traffic jam. Multitasking could compound the signaling problems that operators have already seen. “The bandwidth and session count per user will skyrocket as mobile devices become more capable of multitasking and cloud-based applications take hold,” according to Procera Networks. Their public statement on why the NYC and SF networks were “under-performing”, and how they would fix them, indicates that there were problems with backhaul, signaling, and older RAN equipment too.
Email puts a much heavier load on 3G networks than web surfing or peer-to-peer applications, according to Alcatel-Lucent’s research wing, Bell Laboratories.
Mike Schabel, a research director at Bell Labs, has said that P2P and web surfing account for much of the volume of data carried by mobile broadband networks, but ‘inefficiently’ managed applications, such as email, are the biggest resource hogs. When it comes to mobile broadband, “…there is a false belief that high-volume users will use a lot of wireless resources, and low-volume users will use less,” according to Schabel. “Every wireless application uses resources with different efficiencies. Operators can’t just focus on how much traffic is sent — they have to consider how the traffic is sent. We need to be much smarter about how we deliver the bits and bytes and handle the transactions.” According to Schabel, email hits networks hard because phones constantly poll the server to check for new messages. Mobile email consumes around 69 percent of a wireless data network’s signaling resources, despite accounting for only around four percent of the volume of data carried by the network, he said.
Web surfing, on the other hand, accounts for around 70 percent of wireless network data volume, but uses only around 12 percent of the signaling resources, Schabel said.
From the viewpoint of the network’s finite signaling-capacity inventory, “…email is more resource-intensive,” Schabel said, adding that many new smartphone applications — such as location-based services, weather updates, stock tickers, and secure transactions — look to the network “like email” because they also constantly come in and out of the network. “This is a very challenging problem…”
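Schabel’s figures imply a striking per-byte disparity. Computed directly from the percentages quoted above:

```python
# Signaling intensity = share of signaling resources / share of data volume,
# using the Bell Labs percentages quoted in the text.
apps = {
    "email":       {"signaling_pct": 69, "volume_pct": 4},
    "web_surfing": {"signaling_pct": 12, "volume_pct": 70},
}

for name, a in apps.items():
    intensity = a["signaling_pct"] / a["volume_pct"]
    print(f"{name}: {intensity:.2f} units of signaling per unit of data volume")
```

Per byte carried, email turns out to be roughly 100 times more signaling-intensive than web surfing – which is why chatty, constantly-polling applications worry operators far more than raw volume does.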
Airvana engineers comparing data-use profiles found that for a given volume of data transmitted, one smartphone typically generates eight times the network signaling load of a USB modem-equipped laptop. Although smartphones may account for only a minority of all devices on operator networks today, they are always on, moving between cell sites and continually ‘polling’ the network. As a result, smartphones are already responsible for most of the total signaling activity — two to three times as much as laptops.
In the first week of June ’10, AT&T changed its pricing for data from “unlimited” to usage-based. By cutting the price for most people, it incentivizes the use of smartphones. The company has explained that 3% of its smartphone users generate 40% of its wireless-data traffic. This clogs the network, taints AT&T’s image, and cancels some of the benefits of being the exclusive Apple iPhone carrier.
AT&T’s DataPlus is $15/mo. for 200 MB of data traffic; DataPro is $25/mo. for 2 GB. AT&T says 98% of its customers use less than 2 GB, and 65% use less than 200 MB. The 2 GB plan replaces the original $30/mo. unlimited offering. AT&T will provide free tools and alerts to help customers manage data usage.
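A quick check of the effective per-megabyte price at each plan’s cap (the text gives no overage rates, so this sketch compares base prices only, treating 2 GB as 2048 MB):

```python
# AT&T's June '10 data plans, from the figures above: (monthly price, cap in MB).
plans = {"DataPlus": (15.00, 200), "DataPro": (25.00, 2048)}

for name, (price, cap_mb) in plans.items():
    print(f"{name}: ${price / cap_mb:.4f} per MB at the cap")
```

At the cap, DataPro data costs roughly a sixth as much per megabyte as DataPlus – heavier users get a volume discount, but they now actually pay for their volume.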
Usage-based pricing presents us with rational network economics – better quality of service costs more (just as Virgin’s Upper Class seating does), and larger bandwidth users should pay more.
By the way, when it really costs money to provide capacity – as in South Africa, which is thousands of miles away from “the Internet” – such capacity-based pricing is universal, well understood, and accepted…. Of course, that isn’t stopping the networks from investing in five new fiber-optic cables to increase capacity to the Internet. Usage-based pricing has actually encouraged capacity expansion, which in turn is REDUCING usage-based pricing.
How about that?
Internet as Free-For-All Identity Loss…6Dec07
The Internet appeared to the communications consuming marketplace as an apparent “for-free, always-on” communications alternative to dial-up modems and leased lines: The ultimate consumer revolt as business and individual users jumped on the DARPA-then-NSF-funded campus-to-campus Internet Protocol network.
The Internet is not safe for business. The Internet is not reliable, not secure, and full of opportunities for disaster for your enterprise. The Internet is not broadband, although telecom service providers have tried to position it as such.
The normal Internet applications (such as e-mail or web browsing) do not require broadband (2 Mbit/s of continuous bandwidth or more). The larger the file you download (such as a motion picture), the more bandwidth is desired.
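Whether a link “feels” like broadband is mostly a question of file size over sustained rate. A quick comparison at the 2 Mbit/s threshold above (the file sizes are illustrative assumptions of mine):

```python
def transfer_seconds(file_megabytes, rate_mbits):
    """Seconds to move a file at a sustained rate (1 byte = 8 bits)."""
    return file_megabytes * 8 / rate_mbits

# A short e-mail vs. a feature film, both at the 2 Mbit/s "broadband" floor.
print(f"50 KB e-mail: {transfer_seconds(0.05, 2.0):.1f} s")
print(f"1.5 GB movie: {transfer_seconds(1500, 2.0) / 60:.0f} min")
```

An e-mail is done in a fraction of a second at 2 Mbit/s, while a movie takes over an hour and a half – which is why e-mail never needed broadband and video drives the demand for it.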
“Broadband” refers to the high contiguous bandwidth flow certain applications require from the network in order to function. “Broadband” has traditionally been acquired on a dedicated basis, through leased lines, ATM and Frame Relay services.
Your Internet access charge may seem low. What the Internet really offers is a “Do-It-Yourself” network. Users must determine and manufacture their own connectivity and create and manage their own security – dumb capacity with Internet Protocol is what’s delivered.
However, if you run broadband applications over the Internet, you must pay for and manage security, network address translations, network access devices, and the requisite software. Belatedly, you will find that instead of cheap Internet access, your communications / security / management budget will be stunning. Despite the significant investment enterprises must make in their Internet infrastructure, there are many applications critical to their business that they simply cannot run over the Internet.