When IBM developed the Personal Computer in 1980, it made a terrible mistake. Unlike Dr Douglas Engelbart, who in the sixties saw computing as a means of enabling people to communicate and work together through a shared mainframe, the company saw the PC as a purely "personal" tool. Engelbart's vision of networked users working on personal computers was not pursued; mainframes were seen as inflexible and slow to respond to business needs, and users opted for the seductive freedom of standalone PCs and personal productivity tools.
The industry, especially Novell, eventually won these users back by connecting PCs together through networks. Networked PCs, however, retained the freedom of their standalone status: users managed their own applications and hard disks, and even kept vital corporate information on them. The cost of supporting these autonomous desktop clients became prohibitive as the number and complexity of applications increased.
The only way to cut costs was to revert to mainframe-style centralised computing. But users' expectations had moved on since the mainframe days: they had become accustomed to the responsiveness of PCs.
Administrators, on the other hand, demanded greater control over the PC population. These differing views led to the network computing model, a compromise between centralised administration and freedom for users.
However, in spite of all the current hype, network computing is not new. Engelbart had put forward the idea many years before, but it was not until 1983 that the concept gained any favour. In that year the Massachusetts Institute of Technology (MIT) set up Project Athena. The internal initiative ran for eight years, with Digital and IBM acting as sponsors. Its objective was to bring mainframe-class management to a distributed, multi-vendor environment at a time when distributed computing was still only a vision.
The project was designed to allow users to log onto any machine on the network and access their own data and the applications they needed. Everything was downloaded from the server when required, including the client software.
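The principle is simple enough to sketch in code. In the toy Python fragment below, every name and data structure is invented for illustration (this is not MIT's actual software): the workstation holds nothing, and a user's entire environment is assembled from the server at login.

```python
# Hypothetical central store: per-user profiles plus a shared pool of applications.
SERVER = {
    "profiles": {"alice": {"apps": ["editor", "mail"], "home": "/srv/home/alice"}},
    "apps": {"editor": "<editor software>", "mail": "<mail software>"},
}

def login(username):
    """Assemble a session entirely from server state; any workstation will do."""
    profile = SERVER["profiles"][username]
    return {
        "home": profile["home"],  # the user's data stays on the server
        "apps": {name: SERVER["apps"][name] for name in profile["apps"]},  # fetched on demand
    }

session = login("alice")
print(sorted(session["apps"]))  # the identical result from any machine on the network
```

Because the session is built entirely from server state, the machine in front of the user carries nothing worth protecting, which is precisely what makes it disposable.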
The project originally relied on a single Unix hardware and operating system platform, but this proved restrictive, so it was extended to support other clients, such as PCs and Macs.
The project defined the need for fast response times, scalability, reliability, security, heterogeneity, portability and low cost. It required the workstations connected to it to be disposable, holding nothing of value such as data or application licences, thus obviating the need for on-site systems support engineers.
The Athena environment at MIT now provides computing resources to nearly 14,000 users across the MIT campus through a system of 1,300 computers in more than 40 clusters, private offices, and machine rooms.
In the UK, Rover Group has implemented the Athena architecture (see box). Although Rover's implementation supports only heterogeneous Unix workstations, the principles apply equally to PCs and other clients.
Microsoft has announced a solution similar to MIT's Athena architecture, called Zero Administration for Windows (ZAW), which will be built into the next versions of Windows 95 and Windows NT Workstation and Server.
Initially, ZAW was seen as a defensive move to protect Microsoft's investment in Windows operating systems and applications against the network computer.
But it is now being accepted as a genuine attempt by Microsoft and Intel to address the problems of maintaining PC networks.
"The Gartner Group says that administration costs could be reduced by 12% to 15% immediately, just by administering the present Windows environment more effectively, for example exploiting user profiles and systems policies," says Anne Mitchard, personal systems marketing manager at Microsoft. "People are not aware of the full functionality of 32-bit Windows today. With the Zero Administration initiative, when you have all operating systems, applications and data on the server, with hundreds or thousands of users, you can reduce the network load by caching."
"There is a lot of common sense in the Zero Administration initiative," says Lalit Nathwani, director of network services at Unisys' global customer services division. "IT managers realise they have to cut down the cost of administering PCs and will provide a better service on the server.
"Some of the tools needed have been around for a long time and some of the new ones will go a long way to meet the new configuration. In the past, the mentality was wrong and users wouldn't let the IT department control their PCs and take away their freedom. Now, the excuse will go away and remote management will help reduce administration costs."
Under ZAW, the desktop operating system will be updated automatically when the PC is booted, loading the latest drivers and patches from the server. Windows will be able to deactivate the floppy drive, CD-ROM drive and hard disk to prevent users from controlling and reconfiguring the machine. The system controls applications centrally, allowing users access to their own desktop configuration from any machine.
The system controls the user's hard disk and uses persistent caching to keep a copy of frequently used applications on the local hard disk.
This helps to alleviate some of the problems, such as bandwidth issues, associated with loading huge applications directly from a network server.
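The caching logic can be sketched in a few lines. The Python fragment below is an illustration of the principle only, with invented paths, not Microsoft's implementation: an application is copied from the server on first use, and thereafter the cached copy is reused until the server's version changes.

```python
import hashlib
import shutil
from pathlib import Path

SERVER_SHARE = Path("/srv/apps")       # hypothetical server-side application store
LOCAL_CACHE = Path("/var/cache/apps")  # hypothetical persistent cache on the local disk

def fingerprint(path):
    """Hash a file so a stale cache entry can be detected."""
    # A real system would compare version metadata instead of re-reading the master.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def fetch(app_name):
    """Return a local path for the application, copying from the server only on a miss."""
    master = SERVER_SHARE / app_name
    cached = LOCAL_CACHE / app_name
    if cached.exists() and fingerprint(cached) == fingerprint(master):
        return cached                  # cache hit: the application loads from the local disk
    LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
    shutil.copy2(master, cached)       # cache miss or stale copy: pull the latest version
    return cached
```

The same check-then-copy step also covers the boot-time update described above: drivers and patches are simply more files whose cached copies are refreshed whenever the server's version changes.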
If appropriate, it can ensure that users' data is automatically mirrored between the local hard disk and server, allowing mobile or remote users to work off-line. In the event of hardware failure, the PC can be replaced and the new machine used immediately without having to install or configure software. ZAW will bring the benefits of network computing to users of over 100,000 existing Windows-based applications.
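Mirroring follows the same pattern in the opposite direction. In this illustrative Python fragment (directory names invented; not the actual ZAW mechanism), whichever copy of a file is newer overwrites the other, so edits made offline flow back to the server when the network returns.

```python
import shutil
from pathlib import Path

LOCAL = Path.home() / "data"      # hypothetical local working copy
SERVER = Path("/srv/users/demo")  # hypothetical per-user area on the server

def mirror(local_root, server_root):
    """Two-way synchronisation by modification time: the last writer wins."""
    names = {p.name for p in local_root.iterdir()} | {p.name for p in server_root.iterdir()}
    for name in names:
        local, remote = local_root / name, server_root / name
        if not local.exists():
            shutil.copy2(remote, local)   # new on the server: pull it down
        elif not remote.exists():
            shutil.copy2(local, remote)   # created offline: push it up
        elif local.stat().st_mtime > remote.stat().st_mtime:
            shutil.copy2(local, remote)   # the local edit is newer
        elif remote.stat().st_mtime > local.stat().st_mtime:
            shutil.copy2(remote, local)   # the server copy is newer
```

Last-writer-wins is the crudest possible policy, but it shows why a replaced PC can be used immediately: rebuilding it is just a mirror run in one direction.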
As an extension to the Zero Administration initiative, Microsoft and Intel have developed a NetPC reference platform. It has a sealed case with limited expandability to prevent end-user modification. It uses an internal hard disk for caching and has support from Compaq, Dell, Digital, Gateway 2000, Hewlett-Packard, Packard Bell, NEC and Texas Instruments.
While the NetPC isn't available yet, it gives a clear indication of the way existing PCs should be used.
The NetPC combines the manageability of network computing with the considerable flexibility that a hard disk can bring. The manageability problem with conventional PC hard disks arises when applications and data are stored locally, on users' machines, rather than centrally. On the NetPC, however, the local hard disk cannot be accessed by the end user.
There are many advantages in having the server retain and manage the data, and control the operating system and applications on the hard disk through caching and mirroring. Users can run major business applications that are unsuitable for the Web browser interface prevailing in network computing.
They can continue to work if the network is down. Mobile users can work off-line, and remote operational sites can use a centrally managed PC through a dial-up connection, rather than a leased line. Users can log into any machine on the network and access their applications and data.
This solution may also deliver better performance, since the impact of pure network computers on network bandwidth is not yet known.
"In many cases, you won't be able to take away capability from users, so they will need to keep their existing applications," says Robin Bloor of Bloor Research. As author of The Enterprise by other Means, he is widely seen as an independent expert on network computing. "If you put the applications and data on the server, you immediately win. It is an intelligent option."
While network computing looks set to change the face of computing, client-server and other major business applications are a very long way from being delivered through a browser. Users expect to run fully functional Office applications such as Word and Excel, as well as complex client software packages.
Power users cannot be expected to give up their major applications. Yet IT departments will not want to support such applications on a user's hard disk. The way to overcome this dilemma is through products like Citrix Winframe and Insignia Ntrigue which run Windows applications on centralised NT servers and present a standard Windows GUI to the end-user. All the user sees is a familiar Windows application. But the application is run directly on the server, not on the user's PC.
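The division of labour is easy to demonstrate. In the toy Python sketch below (a stand-in for the idea only; Winframe and Ntrigue use their own display protocols), the "application", a trivial calculator, executes entirely on the server, while the client merely forwards input and displays whatever comes back.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9099  # hypothetical local demo endpoint

def server():
    """The application logic lives here, on the server, and nowhere else."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):  # input events arriving from the client
                try:
                    reply = str(eval(data.decode(), {"__builtins__": {}}))
                except Exception:
                    reply = "error"
                conn.sendall(reply.encode())  # the "screen update" sent back

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The thin client: no application logic at all, just input out and display in.
with socket.create_connection((HOST, PORT)) as sock:
    for keystrokes in ("2 + 2", "10 * 7"):
        sock.sendall(keystrokes.encode())
        print(keystrokes, "->", sock.recv(1024).decode())
```

Nothing of the application ever resides on the client, so losing the client machine loses nothing; a replacement simply reconnects.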
"We have implemented this architecture ourselves," says Bloor. "We use Winframe to access the applications on the server. If a machine went down it used to take half a day to rebuild the disk, but now replacement is instant. Products like Ntrigue and Winframe are available and can be used by a PC or network computer."
Organisations that want the benefits of network computing don't have to throw away their PCs. In fact, just changing the way the operating system, applications and data are managed will bring benefits that are not available from the network computer. It will also protect their hardware and software investments. The PC may not comply with the Network Computer Reference Profile, but it can be a better network computer than the network computer!
Ovum commentary: putting the customer first
In the eighties, end-users walked away from central IT with their PC tucked under their arm. In the nineties, with the rise of the LAN, then groupware facilities, the end-user side grew into an organisational force that central IT departments had to accept. But just as groupware was taking a grip on the market, a set of new technologies came along that redefined the IT industry: the Web and Java.
The basic Web model has a number of limitations. However, Web applications took off because they enabled companies to provide a simple interface to a broader range of corporate data than conventional systems, and to combine data in new ways without being tied to specific hardware and software architectures. Some of the Web's limitations are now being addressed through Java, a language compiled before execution into an intermediate "byte code" that is easily downloaded and interpreted on the fly by a virtual machine.
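The byte-code model is small enough to show in miniature. In the Python toy below the instruction names are invented (they are not real JVM opcodes): a program compiled once into compact, portable instructions will run on any machine that carries an interpreter for them, and that interpreter is all a "virtual machine" is.

```python
def run(bytecode):
    """A tiny stack-machine interpreter: the entire 'virtual machine'."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)  # load a constant onto the stack
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "PRINT":
            print(stack.pop())

# The "downloaded" program: the portable equivalent of print((2 + 3) * 10)
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 10), ("MUL", None), ("PRINT", None)]
run(program)  # prints 50 on any machine that has the interpreter
```

The compactness of the encoding is what makes it practical to download the program afresh each time it is needed.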
With Java components able to connect to back-end services and bypass HTTP limitations, the shift from static Web pages to a Java-based front-end is akin to the transition from dumb terminals to client-server.
Java-enabled Web browsers define a new breed of client, the network computer (NC). Though most client machines (Windows PCs included) can comply with the NC specification, NC technologies do provide a way out of the Wintel hardware and software "upgrade hell": a server-centric alternative to the PC, based on light clients fed, managed and upgraded from the server.
The NC focus is on total cost of ownership, with lower initial costs and even greater savings generated by cheaper administration, deployment, and security. NC technologies also introduced a new range of options.
Before long, large companies will own a diverse portfolio of fully-fledged PCs, cut-down NetPCs and network computers, depending on specific application and user needs. However, beyond cost control and fitness for purpose, central IT has to walk a fine line between recovering centralised control and respecting end-users' autonomy.
The balance of power varies from firm to firm, but the NC has provided both camps with the means to redefine these boundaries. For central IT, the question is not just how to enable greater freedom of access to core systems, but whether that access adds value to the business. In any case, end-users need to be treated as customers with a guaranteed level of service, irrespective of whether they have a PC or an NC in front of them.
Indeed, there is a new golden rule for IT directors: internal IT is there to serve the organisation's customers, NOT the organisation itself. This rule makes the IT director's new high-level role as an information professional managing the transfer of knowledge between an organisation and its partners (and/or customers) a lot clearer. Customers are not just end-users, but also external suppliers and partners. The need for executive-level IT expertise will become increasingly important as organisations move towards a new business model that challenges the idea of fixed corporate boundaries.
Rover: networking for the future
"We were managing hardware, not user requirements, and we were doing it in a labour-intensive manner," says Brian Cooper, manager of engineering computing services for Rover Group's engineering division. Traditionally, each workstation was dedicated to a single application. However, the business requirement was moving to concurrent, not functional, engineering, with multi-disciplinary teams working across multiple locations. Engineers were becoming more mobile as they moved from project to project.
At the same time, Rover's downsizing and rightsizing exercises resulted in a movement of services from the mainframe to a distributed environment.
There was a requirement to deliver more applications and services in a more complex environment.
Rover's key suppliers, Sun, IBM, HP and Digital, all recommended a client-server architecture and the appropriate management tools, recalls Cooper.
Digital introduced Rover to MIT's Project Athena (see main story) and Cooper's team set out to implement a similar architecture. It purchased 12 Sun servers and installed two at each location, one to store applications and one to hold data. The workstations were then migrated onto the Athena architecture by transferring applications and data onto their respective servers.
Rover used Sun's AutoClient and software from the other vendors to cache the applications and the operating system on each workstation. "We are running our workstations as near as possible to a network computing device, but without the network overheads," says Cooper. "It gives the benefit of local performance without the overhead of supporting massive amounts of local disk space."
Rover now offers its engineers access to 70 applications, which take up a total of 16Gb on the site's server. Engineers get what they want, when they want it, where they want it. The Solaris operating system is also cached on the 600 Sun workstations, which are rebooted at 4.45am each morning.
"The caching technology is superb. It means the process of managing a workstation is fully automated. Once set up, it is self-managing. I now have one completely integrated environment, managing 1,200 workstations from Sun, HP and IBM," says Cooper.
Athena has been one of the key enablers for concurrent engineering, which requires teams of engineers to work on the same data. "It encourages co-operative working, because everybody is part of the same environment," says Cooper. "Engineers can view each other's data, because it is no longer on their own hard disk, and there are no longer any technical issues about data sharing."
Central control of applications through the servers removes the potential for users to customise their own workstations. "It provides engineers with a 'toolbox' containing a whole set of applications and services, not just a CAD application," says Cooper. "It focuses them on exploiting the technology, not defining it. It also gives us the flexibility to respond rapidly to change. We can potentially deploy a new application to every desktop within 24 hours."
Athena offers considerable financial benefits, first by deskilling support for workstations. Hardware failure is resolved by replacing the workstation immediately and repairing the fault in the workshop. "We support our servers to a high level," says Cooper, "but the desktop is treated as a disposable commodity item."
The Athena server-based architecture has had a major impact on support costs. Previously, there was one support person for every 15 to 25 engineers. "We are now at 100:1," says Cooper, "and we will strive to get to the design goal of 200 to 250:1."
"Athena has allowed us to realise the full potential of our investment in engineering software, which is very expensive," says Cooper. "It averages u20,000 and can cost as much as u50,000 per seat. We now have an infrastructure which is both scaleable and flexible to accommodate the engineering IT demands of tomorrow. Most of all, it has given us the ability to manage change."