The Internet in its present form has problems. Most users have experienced slow download times, difficulty in accessing Web sites and occasions when email messages have failed to arrive. There have also been some well-publicised blackouts of major Internet connectivity suppliers, such as AOL and Netcom.
The Internet has been growing by 100 per cent a year for many years, and has no co-ordinating body to oversee the constant upgrading of its infrastructure. In light of this, it does work remarkably well. Moreover, one of the most basic features of its design - its decentralised, non-hierarchical structure - means it is ideally adapted for further expansion.
Nevertheless, the Net is suffering growing pains. To understand why and how these can be addressed, you must consider its origins 25 years ago as a military and university research network.
At that stage, even the wildest forecasts predicted no more than a few hundred thousand computers residing on perhaps a few thousand interconnected networks. Today, there are about 13 million computers on the Net, and nearly 500,000 networks (see w.genmagic.com). So it is little wonder the strain is beginning to show.
The Internet derives its name from the fundamental Internet Protocol (IP) which implements many of the early design decisions. A protocol is simply a set of rules which, in this case, describe how packets of information are moved between the Internet's constituent networks.
At the heart of IP is an addressing system. Each point or node on the Internet must have its own unique address that can be used to route data from one computer to another. This address is a 32-bit number, usually divided into four 8-bit numbers, each of which is represented in ordinary decimal. The basic form of the IP address is therefore a.b.c.d, where a, b, c and d are all between 0 and 255.
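The split between a single 32-bit number and its a.b.c.d form is just bit arithmetic. A minimal sketch in Python (the function names and the example address are illustrative, not part of any standard library):

```python
def to_dotted_quad(addr):
    """Split a 32-bit IP address into four 8-bit fields, each shown in decimal."""
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted_quad(text):
    """Reassemble the single 32-bit number from its a.b.c.d form."""
    a, b, c, d = (int(part) for part in text.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(to_dotted_quad(3232235521))        # 192.168.0.1
print(from_dotted_quad("192.168.0.1"))   # 3232235521
```

Each of the four fields holds 8 bits, which is why every component of a.b.c.d lies between 0 and 255.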
It is not as easy for people to remember four such numbers as it is for computers, so an additional layer allows words to be used instead.
This is the Domain Name System (DNS). It allows an address such as www.vnu.co.uk to be used alongside the real IP address of 22.214.171.124.
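Conceptually, DNS is a lookup from names to numeric addresses. A toy sketch of that lookup step, using a small in-memory table in place of the real distributed database (the table contents are illustrative only):

```python
# Toy name-to-address table standing in for the real, distributed DNS.
# Entries here are illustrative, not live records.
dns_table = {
    "www.vnu.co.uk": "22.214.171.124",
}

def resolve(name):
    """Return the IP address registered for a name, if one exists."""
    try:
        return dns_table[name]
    except KeyError:
        raise LookupError("no DNS record for " + name)

print(resolve("www.vnu.co.uk"))
```

The real system distributes this table across a hierarchy of name servers, but the essential operation - name in, address out - is the same.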
The total number of possible IP addresses is 4,294,967,296. In theory, this is the maximum number of nodes - or computers - that can exist on the current Internet. Although this might seem more than enough, IP addresses are not handed out one at a time in strict sequence. For example, if a company has a network of machines, it is obviously easier for it to have a block of adjacent IP addresses.
These blocks come in certain basic sizes: 256, 65,536 and 16,777,216, generally called Class C, B and A addresses respectively. In the past, to allow for expansion, most companies have asked for considerably more addresses than they needed.
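The three class sizes follow directly from how many of the 32 bits are left over for individual machines once the network portion of the address is fixed. A short check of the arithmetic:

```python
# A Class C network fixes 24 bits and leaves 8 for hosts; Class B leaves 16;
# Class A leaves 24. Each free bit doubles the number of addresses in a block.
host_bits = {"A": 24, "B": 16, "C": 8}
block_size = {cls: 2 ** bits for cls, bits in host_bits.items()}

for cls in "ABC":
    print(f"Class {cls}: {block_size[cls]:,} addresses per block")
```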
Clearly, with this kind of block allocation and its generous slack, far fewer than the theoretical four billion machines can be accommodated. As a result, it is becoming more difficult to obtain IP addresses in contiguous blocks.
There is a danger that, in the near future, there will be no IP addresses left to give out.
The obvious solution is to increase the address space, which is exactly what the latest version of the IP standard does. It is known as IPng - IP Next Generation - or IPv6, for version 6. The basic address length has been increased from 32 bits to 128 bits. Because each extra bit doubles the address space, this does far more than simply multiply the number of addresses by four (see playground.sun.com). In fact, the new IP address space has no less than 340,282,366,920,938,463,463,374,607,431,768,211,456 nodes.
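The scale of that jump is easy to verify: moving from 32 to 128 bits multiplies the space by 2 to the power 96, not by four.

```python
# Each extra address bit doubles the space, so 96 extra bits multiply
# the number of possible nodes by 2**96.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"{ipv4_space:,}")                       # 4,294,967,296
print(f"{ipv6_space:,}")
print(ipv6_space == ipv4_space * 2 ** 96)      # True
```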
It is possible that some of these new IP addresses will eventually be used to network not just computers, but fax machines, photocopiers and telephones.
IPv6 is backwards-compatible with IPv4 - the old addresses will still work, forming a special subset of the new ones. IPv6 functionality can be added to products in a completely transparent manner.
IPv4's limitations are most obviously shown by the shortage of consecutive blocks of IP addresses. But there is another problem hidden in its implementation.
This is to do with something called routeing tables. As explained, IP is about the way data packets are routed across the Internet. This is done using tables stored on special nodes - the Internet's main routers - which decide how to forward a packet based on its destination (expressed as an IP address).
Clearly, the bigger the Internet gets, the more complex these routeing tables become. Worse still, the increasing speed at which new addresses and new routes must be added means that routeing tables need to be updated more frequently. This means there is more chance of errors leading to local shutdowns, as happened at Netcom in June, for example.
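The forwarding decision itself can be sketched in a few lines. This toy model picks the table entry whose network prefix matches the most leading bits of the destination - the prefixes and hop names here are invented for illustration, and a real router holds many thousands of such entries:

```python
# A toy routeing table: (network prefix, prefix length in bits, next hop).
# All values are invented for illustration.
routes = [
    ("192.168.0.0", 16, "hop-A"),
    ("192.168.5.0", 24, "hop-B"),
    ("0.0.0.0", 0, "default-hop"),
]

def to_int(dotted):
    a, b, c, d = (int(p) for p in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def next_hop(dest):
    """Forward on the entry matching the most leading bits of the destination."""
    dest_int = to_int(dest)
    best = None
    for prefix, bits, hop in routes:
        mask = ((1 << bits) - 1) << (32 - bits)
        if (dest_int & mask) == (to_int(prefix) & mask):
            if best is None or bits > best[0]:
                best = (bits, hop)
    return best[1]

print(next_hop("192.168.5.7"))   # hop-B: the 24-bit prefix beats the 16-bit one
print(next_hop("10.0.0.1"))      # default-hop: only the catch-all entry matches
```

As the table grows, keeping every router's copy consistent during frequent updates becomes the hard part - which is where the errors creep in.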
IPv6 solves this problem by introducing new structures into the IP addresses to simplify the routeing tables. This makes them easier and safer to update.
The full benefits will become apparent only when IPv6 is widely implemented. But as routeing table errors become more common, the advantages will no doubt encourage the Internet world to upgrade sooner rather than later.
As well as the unforeseen growth in the number of computers on the Internet, the expansion in the volume of data poses another threat. When the Internet was designed, the traffic consisted mainly of simple email, along with the intermittent transmission of data files.
Today, traffic has increased noticeably because Web pages use large images, plug-ins and Java applets. Streaming audio and Net telephony products are even more demanding, because they saturate Internet connections completely with continuous flows of data.
IPv6 supports a new protocol called RSVP (the Resource Reservation Protocol - see www.isi.edu). With this, datastreams can be given priorities that guarantee a certain quality of service, perhaps for a higher price. Differential pricing mechanisms will encourage suppliers to upgrade their infrastructures even faster, and persuade users to employ the Internet's resources rather more frugally. All this should ensure that the next generation Internet, based on IPv6, will be considerably faster and more robust than the current one.
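The core idea behind such prioritisation can be illustrated with a simple priority queue: packets from flows that have reserved a higher class of service are transmitted first. A toy sketch, with flow names and priority values invented for illustration:

```python
import heapq

# Each arriving packet carries the priority its flow reserved.
# Lower number = higher priority; the sequence number keeps ordering stable.
queue = []
arrivals = [
    ("bulk-ftp", 3),      # best-effort background transfer
    ("phone-call", 1),    # reserved, delay-sensitive stream
    ("web-page", 2),      # interactive but tolerant of small delays
]
for seq, (flow, priority) in enumerate(arrivals):
    heapq.heappush(queue, (priority, seq, flow))

# Packets leave the router in priority order, not arrival order.
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)   # ['phone-call', 'web-page', 'bulk-ftp']
```

RSVP itself is concerned with signalling these reservations along the whole path; the queueing shown here is just the local effect at each router.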
- Glyn Moody is an Internet consultant, journalist and author.