Working on the next-gen Internet
By A. Asohan June 14, 2012
- The Internet has to be re-engineered to face the data explosion
- It may be a wireless world, but the backbone is still fiber
OPTICAL Space-Division Multiplexing, said the slide facing a group of journalists in Kuala Lumpur recently. One journalist immediately “twitpic-ed” it, drawing hilarious responses on Twitter, while I whispered to another seated next to me, “I remember watching this episode on The Big Bang Theory.”
But the gobbledygook addressed a pressing problem facing the Internet – and us, its users: A data explosion that would see costs skyrocket unless some innovative moves are made. And it is not just about the amount of data, but how it is being used.
“When it started out 30 to 40 years ago, the Internet was about connecting points on a map together, and having peer-to-peer type of sessions or discussions,” says Dr Randy Giles, executive director and president of Bell Labs Seoul. Bell Labs is the research arm of global telecommunications vendor Alcatel-Lucent.
“Today, with so much video content, we’re not so concerned about geographical locations or where we are accessing our YouTube video from, or where that Google search engine is physically located,” he adds. “When you look at that model of the Internet, a number of things have to change.”
And change is needed to make the Internet more effective – and more cost-effective – in accommodating the data explosion already underway: the amount of data accessible on the Internet grew by 20 exabytes per month in 2011.
“That’s ‘20’ followed by 18 zeros; 20 billion gigabytes. This is really changing the way you should think about the Internet,” says Giles.
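The arithmetic is easy to check: 20 exabytes is 20 × 10¹⁸ bytes, which does indeed work out to 20 billion gigabytes.

```python
# Sanity check on the figures quoted above, using decimal (SI) units:
# 1 exabyte = 10**18 bytes, 1 gigabyte = 10**9 bytes.
EXABYTE = 10**18
GIGABYTE = 10**9

monthly_growth_bytes = 20 * EXABYTE
monthly_growth_gb = monthly_growth_bytes // GIGABYTE

print(monthly_growth_gb == 20 * 10**9)  # True: 20 billion gigabytes per month
```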
So Bell Labs, other researchers and Internet standards bodies are focusing on information-centric or content-centric networks (CCNs), rather than the location-centric networks being built today.
Shifting the Internet to work on content rather than geographic location means addressing several areas.
According to Giles, research outfits such as Bell Labs, and standards bodies like the Internet Engineering Task Force, which works on the technical aspects, are looking at the naming conventions for the information packets that travel on the Internet.
“Instead of having a string of numbers to tell you the location of the host server, you could have a more naturalistic name to identify the information,” he says.
If you do a search now, for example, Google has its web crawlers visit all these pages looking for keywords and indexing them, building up a large table of associations, he explains.
“By having a convention with natural names, you can already access some information on the nature of the content before you start trawling,” he says.
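A rough sketch of the idea, with content names invented purely for illustration: a hierarchical natural name lets a node learn something about a piece of content before fetching or indexing anything, which a numeric locator cannot.

```python
# Toy illustration of hierarchical, "natural" content names.
# The names below are made up for this sketch.

def matches_prefix(content_name: str, prefix: str) -> bool:
    """Check whether a content name falls under a hierarchical prefix."""
    name_parts = [p for p in content_name.split("/") if p]
    prefix_parts = [p for p in prefix.split("/") if p]
    return name_parts[:len(prefix_parts)] == prefix_parts

names = [
    "/news/technology/fiber-backbone/article",
    "/news/sports/final-match/video",
    "/music/jazz/live-set/audio",
]

# Select all technology news by name alone -- no crawling or keyword indexing.
tech_news = [n for n in names if matches_prefix(n, "/news/technology")]
print(tech_news)  # ['/news/technology/fiber-backbone/article']
```

An IP address, by contrast, tells you only where a host is, not what it serves.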
Another way to improve the Internet’s performance would be to give routers caches, so they can readily tell when two people are requesting the same natural name – the router then needs to send out only one interest packet to fetch that content.
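That aggregation can be sketched roughly as follows. The class and names here are hypothetical, and real content-centric routers (with their pending-interest tables) are far more involved; this only shows the one-interest-for-many-requests idea.

```python
# Hypothetical sketch: a router that aggregates requests ("interest
# packets") for the same content name, sending only one upstream.

class Router:
    def __init__(self):
        self.pending = {}          # content name -> requesters waiting on it
        self.upstream_fetches = 0  # interest packets actually sent upstream

    def request(self, name, requester):
        """Record an interest; only the first one for a name goes upstream."""
        if name not in self.pending:
            self.pending[name] = []
            self.upstream_fetches += 1  # one interest packet sent upstream
        self.pending[name].append(requester)

    def deliver(self, name, data):
        """When the data comes back, satisfy every waiting requester at once."""
        waiting = self.pending.pop(name, [])
        return [(r, data) for r in waiting]

router = Router()
router.request("/news/video/launch-clip", "alice")
router.request("/news/video/launch-clip", "bob")  # aggregated, not re-sent
print(router.upstream_fetches)  # 1 -- a single fetch serves both requests
```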
This brings us to in-network caching or storage – instead of going all the way to data centers for content, routers themselves will hold data temporarily.
“In the event there is popular content or videos, routers which are closer to you than the original source can hold that content and serve it to you,” says Giles.
Such developments can bring about “big saving in terms of how the network will be run and how much of network assets will have to be used to get a hold of information,” he adds.
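A toy illustration of that in-network caching, assuming a simple least-recently-used eviction policy (the article does not specify one); the class and names are invented for the sketch.

```python
# Hypothetical sketch: a router that caches recently served content so
# popular items are answered locally instead of from the distant origin.

from collections import OrderedDict

class CachingRouter:
    def __init__(self, capacity=2):
        self.cache = OrderedDict()   # content name -> data, in LRU order
        self.capacity = capacity
        self.origin_fetches = 0      # trips back to the original source

    def get(self, name, origin):
        if name in self.cache:               # cache hit: served nearby
            self.cache.move_to_end(name)
            return self.cache[name]
        data = origin[name]                  # cache miss: go to the source
        self.origin_fetches += 1
        self.cache[name] = data
        if len(self.cache) > self.capacity:  # evict least recently used
            self.cache.popitem(last=False)
        return data

origin = {"/video/popular-clip": b"...bytes..."}
router = CachingRouter()
router.get("/video/popular-clip", origin)  # fetched from the origin
router.get("/video/popular-clip", origin)  # served from the router's cache
print(router.origin_fetches)  # 1 -- the second viewer never hit the origin
```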
It may be an increasingly wireless world out there, but the backbone of the Internet is still the core optical network.
“It may be the age of WiFi and iPhones, but I can guarantee you that your data moves through some fiber somewhere along the way,” says Giles.
The wireless world has however taught the fixed-line fellers a thing or two, which they are bringing to fiber.
“From wireless systems, we’ve learned advanced modulation formats which allow us to encode a higher density of information within a given bandwidth of fiber. So we can send much more data within the spectrum available,” says Giles.
Bell Labs is also engineering fiber with more “cores” – multicore fiber, in which information travels along multiple parallel conduits within a single strand – and this will see the era of photonic integrated circuits, acting in much the same way for photons, or light particles, as electronic integrated circuit chips do today for electrons.
It would be like having multiple pipes within a single fiber.
“Seven cores in one fiber may cost twice as much, but not seven times as much,” says Giles. “We’ve had wavelength division multiplexing, now it’s spatial division multiplexing.”
And, of course, that non-existent episode of The Big Bang Theory.