Once upon a time, there was no such thing as a heterogeneous network: the concept simply didn't exist. All networks were inherently homogeneous, because there simply wasn't anything else. Almost certainly, you had purchased your entire network - the hardware, software and infrastructure - from a single vendor - probably an IBM or a Sperry mainframe, with IBM 3270 terminals. And software wasn't a problem: the operating system - MVS, perhaps? - was restricted to the mainframe, so there were no interoperability problems. The network administrator was happy, master of their own domain.
Every part of their network talked to every other part because it was made that way.
But times changed. Amid talk of downsizing, the mainframe was partially sidelined - although it never really went away - in favour of client/server technology. Initially, however, there was still a feeling among network managers that it was best to buy from a single vendor - an attitude summed up by the old adage that "no one ever got sacked for buying IBM".
However, as networks became bigger and more complex, it soon became clear that no single vendor could offer best of breed in all the different areas of a network, despite vendor claims to the contrary. When it was just a Unix or VMS box and a terminal, it was fine - not very different from the mainframe model. But in larger networks? A particular company's server might be acknowledged as best-of-breed, but its desktop systems as well?
What about bridges: did the company even manufacture them?
Eventually, even those companies which had been the fiercest advocates of one-stop shopping were forced to admit that they couldn't be number one in every part of the network, and systems integrators unshackled themselves from relying on only one vendor and started to build networks consisting of multiple brands. Which is where the problems started.
Today, a typical network may be built around an HP server, Cisco routers, Dell desktops and a Cabletron hub. And the network manager is expected to manage them all. It is quite possible - if not probable - that each of these will have its own management software: Cisco's Internetworking Operating System (IOS) on the router, Cabletron Spectrum on the hub, Windows NT Workstation on the desktop and NetWare on the server. The majority of these applications only meet at the network layer, which wouldn't be too bad if there was just a single operating system residing there.
There may not be. There almost certainly won't be. Just to make life even more difficult for the poor network administrator, there are no guarantees that the network will be running the same operating system across its entirety. NT 4 may cohabit with NetWare 4, while various flavours of Unix lurk in the corners. There may be the dear old mainframe, sitting in the machine room, happily running MVS, VM or even OS/390. It is even possible that there may be a remote outpost using AppleTalk (a Mac-based production desk, for example). This is a heterogeneous network. How has this situation happened? And, more importantly, how can a network like this be managed?
It happened because it is a rare - and very lucky - network administrator who is handed a blank cheque and allowed to go off and design and build a network from scratch. Most network administrators inherit systems from their predecessors, who inherited them from theirs, and so on.
These systems have been built up piecemeal over the years, through a combination of natural growth and merger, but that doesn't mean that they are any easier to manage. Furthermore, certain applications only run on specific OSs: for example, if you need Visio's network management tool, you need NT. Each part of the network may be easy to manage, but how does today's network manager look after a network that runs multiple operating systems and has multiple management applications running on it? How can they manage a heterogeneous network, while still ensuring that they can not only fix existing problems, but plan for future growth?
For far too long, OS companies tried to force users to make a choice between operating systems, even though, in the real world, this was infeasible.
Eventually, something had to give, and now, thankfully, there are a number of products on the market which address these concerns. HP's OpenView allows NT and Unix networks to work together; Tivoli's Enterprise aims to make operating systems transparent; while CA's Unicenter is another framework-based product, in direct competition with Tivoli.
Mark Wilson, systems engineering manager at Tivoli Systems, explains how his company looks at managing heterogeneous networks.
"The important thing about Tivoli Enterprise is that it sits on top of what we call the Tivoli Management Framework, which is an open system.
Because of this, network administrators have the ability to manage everything in the same way, regardless of platform or application, through a common GUI.
"There are a number of benefits to this: it allows a business to move people between the different areas of network management at minimal cost.
People aren't trained in the different operating systems or management applications so much as in Tivoli, which can represent the things that are being managed in exactly the same way - transparently.
Whether they are looking at NT, Unix, NetWare ... and the same is true of applications: desktop systems, middleware, management all appear the same.
Because of this, skills portability between systems becomes much easier, and much less expensive."
Troy Leacock, managing director of ML Enterprise, a company specialising in enterprise network management, also believes that the heterogeneous network is here to stay. "We view the network as an enterprise-wide system made up of discrete components," he said. "You can no longer just talk about the servers, or the routers ... today's reality is that you are talking about everything. When anything goes wrong, the user doesn't care about which piece has failed: the service has to be there, when they want it, at the correct level."
Leacock sees management tools falling into two camps: reactive and proactive.
"Reactive tools are the ones that network managers are currently most familiar with: they're there to fix or configure the network. The next generation of tools - proactive - also have the ability to be reactive, but are also capable of forecasting, looking at trends in your network: capacity planning, bandwidth, application usage - the information that the network manager needs to design and support the network in the future."
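The forecasting side of a proactive tool can be as simple as fitting a trend to historical measurements. The sketch below, with invented sample data and an illustrative 90 per cent threshold, fits a least-squares line to a week of bandwidth readings and estimates how long before the link saturates - the kind of capacity-planning calculation Leacock describes, not the workings of any particular product.

```python
# A minimal sketch of "proactive" capacity planning: fit a linear trend
# to daily bandwidth readings and estimate when a link will saturate.
# The sample data and threshold are illustrative, not from a real tool.

def trend(samples):
    """Least-squares slope and intercept for (day, utilisation) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def days_until(samples, threshold):
    """Days from the last sample until the trend crosses the threshold."""
    slope, intercept = trend(samples)
    if slope <= 0:
        return None  # utilisation flat or falling: no saturation forecast
    return (threshold - intercept) / slope - samples[-1][0]

# Daily peak utilisation (%) of a backbone link over one week.
readings = [(1, 40), (2, 43), (3, 47), (4, 49), (5, 52), (6, 56), (7, 58)]
print(round(days_until(readings, 90)))  # → 10
```

A real tool would use a richer model and live RMON data, but the principle - turning collected statistics into a forward-looking answer rather than a red light - is the same.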
Does Leacock believe that network administrators are in danger of receiving too much information from their management systems? "No. It doesn't matter how much information is collected from the network and analysed - network managers need as much as they can get. The key aspect in making good use of this information is how that information is presented and distributed: it has to be the right information, at the right time, to the right people.
The latest generation of network tools - the proactive tools - finds ways of presenting information about heterogeneous networks.
"Five years ago, five people would be sitting in the network control centre, waiting for a red light to tell them that something had gone wrong with a particular part of the network. One person would be responsible for the Bay Optivity app, someone else HP Openview, and so on.
"Five years from now, you will have highly distributed network management systems, not only capable of reacting, but also of predicting trends in your network. As the tools become more proactive, the people become more proactive. But probably the most important development will be the use of configurable Web-based interfaces. You can pick the tool you want, the interface you want, and draw the information from a common database."
Tivoli's Wilson agrees. "You might have SNMP as a monitoring protocol, but proprietary systems responsible for configuration of the end points," he said. "Tivoli's aim is to act as a trusted third party to enable transparent management of all the devices. One way of doing this is to use the Java Management API (JMAPI): vendors can put Java Virtual Machines on their devices to enable them to talk to one another."
Even this approach is dependent on other factors. Says Leacock: "The key to all of this is standards. These tools have to talk to one another: they have to be standards based. SNMP, RMON, RMON2 ... as long as they conform to the common standards, the tools don't care who they talk to, and on what network."
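The "trusted third party" role that Wilson describes is, in software terms, a layer of thin adapters: the management console speaks one interface, and each adapter translates to a vendor's native agent. The sketch below illustrates the shape of the idea only - the vendor classes and field names are invented, not real Cisco or Cabletron APIs.

```python
# Sketch of vendor-neutral management via adapters. The console calls
# one method, status(); each adapter maps it onto whatever the native
# agent exposes. All field names here are hypothetical.

class ManagementAdapter:
    def status(self):
        raise NotImplementedError

class CiscoRouterAdapter(ManagementAdapter):
    def __init__(self, native):
        self.native = native
    def status(self):
        # SNMP-style convention: an operational status of 1 means "up"
        return "up" if self.native["ifOperStatus"] == 1 else "down"

class CabletronHubAdapter(ManagementAdapter):
    def __init__(self, native):
        self.native = native
    def status(self):
        # A different vendor reports state as an upper-case string
        return self.native["portState"].lower()

# The console iterates over heterogeneous devices identically.
devices = [
    CiscoRouterAdapter({"ifOperStatus": 1}),
    CabletronHubAdapter({"portState": "DOWN"}),
]
print([d.status() for d in devices])  # → ['up', 'down']
```

This is also why standards matter so much: the fewer dialects the adapters have to translate, the thinner - and more trustworthy - that middle layer becomes.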
The concept of a heterogeneous network embraces every network operating system going, although the majority of such networks will be shared between the three leaders: Microsoft Windows NT, Novell NetWare and Unix, in a multitude of flavours. The Unix vendors have had a slight head start where interoperability is concerned: although Unix is now a popular operating system once more, there was a time when it was considered to be old technology.
Although companies were loath to ditch their Unix boxes, they also wanted the benefits that NT or NetWare would offer their networks, forcing Unix vendors to be among the first to develop applications allowing different operating systems to talk to one another. For example, HP OpenView started life as a fully-fledged Unix management application, before HP ported it to NT as interest in the new operating system increased.
This is all well and good, but why should the network administrator have to install more software just so they can manage all of it from a single interface?
Has the time come when they should put their hands up and say, "No more"?
Not according to a report from the Aberdeen Group. It claims that the question, "Which one, NetWare or NT Server?", is actually invalid: both can operate together and play to one another's strengths, while compensating for one another's weaknesses.
However, this has to be taken with a pinch of salt. Over the last few decades, we have seen countless examples of a technically less sophisticated product taking market share before finally obliterating its rivals. Betamax and Philips V2000 lost to VHS; BSB's satellite service - based around the "squariel" - lost to Sky; and OS/2 lost to Windows, in all of its incarnations. No one would doubt that NetWare offers a number of features still missing from NT, and it still has a large share of the installed base, but NT's popularity is rising dramatically.
There is no guarantee, though, that NT will be as dominant as Microsoft would like to believe. NetWare 5 is already available, whereas Microsoft's release date for NT 5 is still only provisional - early 1999 - and Microsoft's track record for delivery to market isn't that impressive. There are signs that many companies are ditching their older versions of NT in favour of NetWare 5, and all the benefits that it offers. According to the Aberdeen Group: "Unless decision makers want to put significant improvements for their IS infrastructure on hold for the next two years, they need to embrace the reality that NT 5, with necessary service patches, is at best a mid-year 2000 product."
Faced with that, let's look at Novell's NetWare 5 first. Released a fortnight ago, this latest version of NetWare draws its major strength from the latest version of NDS - Novell Directory Services. Although NDS was introduced in NetWare 4, that OS was IPX-based: NetWare 5 is a native IP OS. NDS brings a vast amount of management back into the operating system. Not only that, but other versions of NDS - for NT, for example - give network administrators the interoperability that they need at the network level.
With NetWare 5 and the full range of NDS products, network administrators can manage applications on platforms as diverse as Solaris and S/390.
But what about the future of NT? Novell has gone to great lengths to ensure that NetWare will mix and match with the best of them, so will NT 5 return the compliment?
While NetWare 5's greatest strength is NDS, NT 5's promises to be Microsoft's Active Directory: a repository for a wide range of system objects existing across the network, offering a common interface which will allow administrators and developers easy access to whatever information they need - effectively, Microsoft's answer to NDS. However, Novell has a head start of several years with NDS: if Microsoft is to succeed, it has to get it right first time - and in this industry, that's a tricky one.
At the end of the day, however, the battle probably won't be between NT and NetWare. For all the grand promises made for the two NOSs, they are still proprietary systems: you will almost certainly end up using the directory service of the major operating system on your network. For true, seamless interoperability, vendors will have to look beyond applications, devices and operating systems, and look at the concepts behind heterogeneous networks.
One answer is to stop looking at the network as a collection of discrete real-world parts, and consider it in conceptual terms. In programming, this is how object-orientated design (OOD) works: data objects replace specific file types, with recognisable attributes which can be used to manipulate these objects. Bringing OOD to the heterogeneous network means having a server object, rather than a Compaq Proliant or IBM NetFinity; it means having a data object rather than a Unix file or an NT file. When the network is looked at in this fashion, the operating systems and proprietary configuration and management utilities become irrelevant: true interoperability.
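To make the data-object idea concrete, the sketch below builds one common object type - with recognisable attributes of name, size and owner - from either a Unix-style or an NT-style native record. The field names and sample records are invented for illustration; the point is that the management layer above never sees which OS the data came from.

```python
# Sketch of the object-orientated view of the network: one DataObject
# type, constructed from dissimilar native records. Field names are
# hypothetical, not real Unix stat or NT metadata structures.

from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    size: int
    owner: str

def from_unix(stat):
    # Unix-style record: owner already resolved from a uid to a name
    return DataObject(stat["path"], stat["st_size"], stat["uid_name"])

def from_nt(record):
    # NT-style record: owner held as DOMAIN\user, so strip the domain
    return DataObject(record["FileName"], record["FileSize"],
                      record["Owner"].split("\\")[-1])

objs = [
    from_unix({"path": "/data/sales.dat", "st_size": 4096,
               "uid_name": "clare"}),
    from_nt({"FileName": "sales.dat", "FileSize": 4096,
             "Owner": "CORP\\clare"}),
]
# The management layer sees identical attributes, whatever the source OS.
print(objs[0].owner == objs[1].owner)  # → True
```

Everything above the constructors deals only in DataObject attributes, which is exactly the irrelevance of the underlying OS that the object-orientated view promises.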
A similar solution almost certainly lies with one of the Web-based alternatives, based around Java or XML - this is the direction touted by Tivoli and ML Enterprise. Although NetWare 5 has wholeheartedly embraced the concept of the Java Virtual Machine, the winners in the race for network management may very well be those who go back to first principles and understand that the strengths of a heterogeneous network - the ability to select your management tools of choice, to use only those parts of the NOS that you want to, and so on - are far more important than arguing about the differences. Java gives you that freedom. This is already being seen in the Web-to-host products which are coming onto the market: Neon's Shadow Web Server sits on the mainframe and enables a Web-based desktop GUI to access the legacy data, while products such as Esker Plus and Attachmate HostView Server sit on the network to perform the same functions.
If Java is the answer, it may be only the beginning. According to Tivoli's executive vice president, Martin Neath, the aim is to develop his company's management agent so that it doesn't just run on traditional networks but, through Java, on anything with an embedded system. Perhaps that is the ultimate heterogeneous network: not NT, NetWare, Unix and S/390, but kettles, microwaves, dishwashers and VCRs.