Major growth in all aspects of networking has led to increased demand for network products. And because servers are the lifeblood of any network, value-added resellers trying to stay ahead of the competition need to be aware of the latest developments and trends that could affect the market.
With all types of enterprises striving to provide more online services - either internally through intranets or globally via the internet - issues such as server performance, capacity, resilience and manageability are becoming vital. Some clear trends are also beginning to emerge.
As with the desktop market, the race for more power is a constant factor in the server market. Faster processors, memory and storage all top customers' requirements.
Piers Jones, a site activity analyst at InterX, who has examined the market for server sales in March 2000, highlights the latest trends. "As you may expect, Intel still dominates the processor market, but AMD has been gaining ground recently," he says.
"The most popular processor in March was Intel's Pentium III running from 500MHz to 700MHz. Most customers tend to be rather conservative when it comes to purchasing, and while they are always striving to get more 'bang for their buck', they tend to shy away from true leading-edge products such as 850MHz processors and Rambus memory technology. Similarly, SCSI has become so well-established in the server market that few customers will opt for IDE-based storage even at the lower end of the server range."
In addition, multiprocessor-based servers are starting to take off in a big way and Jones points to an increase in demand for such systems. "Although single-processor servers are by far the most popular choice, customers are increasingly opting for dual-processor systems. Many customers have been opting for dual processor-capable motherboards, but buying only a single processor initially, with a view to upgrading in the future," he says.
"One interesting trend is the March increase in sales of servers using dual Intel Pentium II processors. These have proved very popular as they provide considerable performance at modest cost, compared with the Pentium III. Very few people are ordering servers with more than two processors, but this is likely to change with increased demand for even more power."
On the server storage front, higher-capacity hard disks are in big demand. "Most customers opt for at least 20GB on their servers, but many are buying servers with between 50GB and 100GB of hard disk. I can see this trend continuing over the coming months," says Jones.
Because so many organisations are struggling to keep up with the demands for online storage, technologies such as storage area network (San) and network attached storage (Nas) are becoming increasingly popular.
Dom Bruning, business development manager for storage at Axis, believes that while network managers have traditionally entrusted storage to the file server or storage devices attached directly to it, there is now a drive to simplify the addition of storage to Lans. "File servers are creaking under the strain of the huge volumes of data held locally and the current need is to off-load data to reduce the pressure," he says.
"Nas is having an indirect yet visible influence on the server market in the Lan and workgroup space. The amount of data residing on networks is on a continual upward curve, meaning that new ways of storing it are being pursued to relieve the burden on the file server. The complex file server once used as a general-purpose storage mechanism is being replaced by low-cost Nas servers."
Demand for Nas servers is increasing exponentially as users try to increase storage capacity cost-effectively across multiple offices without sacrificing network performance or increasing downtime, says Bruning. Researcher IDC predicts that the global Nas market will grow by more than 53 per cent each year to reach $5bn by 2003, he adds.
Ray Thomas, vice president of global communications at Data General, a division of EMC, says resellers should not underestimate the importance of promoting Sans as a way to add value. "Sans allow us to separate storage from servers and to manage storage as a single entity," he says. "By removing storage from servers, we also make them more flexible because they can potentially boot from any logical volume in the San and access any device in the San.
"They're no longer limited by their own chassis and power supply. When the time comes to upgrade, repair or replace the server, we are able to attach a replacement server to the existing storage. Our notion of servers changes to one of racks of very dense computer elements, managed like arrays of disks in the San. We can easily add capacity and re-deploy computers to meet the changing needs of the organisation."
Bruning says Nas and San are prime examples of technologies that have been developed to address specific networking requirements, delivering high-capacity storage and providing high-speed interconnections between storage devices. "Adding extra storage has always been a frustrating and disruptive experience. The way that Nas overcomes these issues makes a great deal of sense for IT managers. For the channel, there is a big immediate and mid-term opportunity to derive services-based turnover from Nas and San products to create broader and more proactive storage management solutions rather than making a quick fix," says Bruning.
When it comes to being resilient, the biggest concern of many businesses is ensuring continuity of service. The arrival of global communications through the internet has meant that many networks are expected to be available 24 hours a day, seven days a week, and the failure of a crucial network server can wreak havoc in an organisation.
The problem of maintaining continuous network services is being tackled on two fronts: through software, with the introduction of operating systems such as Windows 2000, which supports 'clustering', and through the use of fault-tolerant hardware to reduce downtime.
The technique of clustering, where servers are closely interconnected with one another, holds two benefits for modern networking systems. First, server processes can be distributed over multiple hardware platforms so that more efficient use can be made of low-powered systems. Second, servers can be connected in such a way that automatic 'failover' can be achieved in the event of a fault.
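The automatic 'failover' behaviour described above can be sketched as a simple health-check decision: a standby node takes over when the primary stops responding. The sketch below is purely illustrative (the node names, the boolean health flag standing in for a network heartbeat, and the two-node layout are all assumptions, not any vendor's actual clustering implementation):

```python
# Illustrative two-node failover sketch. A real cluster would exchange
# network heartbeats; here a boolean flag stands in for node health.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def heartbeat(self):
        # Placeholder for a real heartbeat check over the cluster interconnect.
        return self.healthy

def active_node(primary, standby):
    """Return the node that should currently serve requests."""
    if primary.heartbeat():
        return primary
    # Automatic 'failover': the standby takes over when the primary fails.
    return standby

primary = Node("server-a")
standby = Node("server-b")

assert active_node(primary, standby) is primary
primary.healthy = False  # simulate a hardware fault on the primary
assert active_node(primary, standby) is standby
```

In a production cluster the same decision is made continuously and symmetrically, with shared or replicated storage so the surviving node can pick up client sessions, which is why clustering both spreads load and masks faults.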
Crackdown on downtime
Mark Kinsell, server product specialist at Hammer, says the failover redundancy features of clustering have now attracted interest within the server industry. "Loss of corporate productivity because of downtime caused by server failure is still the highest cost associated with server ownership," he says. "IT managers who economised on server specifications in the past have realised their mistakes and progressed to better-quality, stable platforms. It is the introduction of clustering that seems to appeal to many organisations."
Clustering implies an increase in the number of servers within an organisation because more system units are required to provide a degree of redundancy. This is obviously good news for resellers as it means more opportunities to sell servers, but it can pose problems for customers as 'server farms' start to take up more space. One solution to this problem is the rack-mounted server. These 19in-wide form factors are starting to generate considerable interest where space is at a premium.
Jeff Hewlett, marketing manager at Bull Information Systems, sees the rack-mounted server as the ideal solution where multiple clustering is required. "As clustering becomes more common, customers are squeezing more kit into already cramped places," he says. "Expanding office space to accommodate extra servers is extremely expensive, so rack-mounted servers make perfect sense. At Bull, we are seeing more customers deploying rack-mounted, Intel-based servers. To meet this demand, we now have three distinct rack server products in our line."
The alternative to clustering is to use high-availability, fault-tolerant hardware for providing network resilience. Stratus Computer Systems (SCS) supplies fault-tolerant systems. Jeremy Lovett, its business development director, believes clustering is not always the best approach to providing network resiliency.
"Failover clustering is basically an insurance policy against major hardware problems. It is far better to ensure that hardware is more reliable and fault-tolerant in the first place," he says. "Historically, the market for fault-tolerant systems was constrained to specialist areas such as electronic trading and command and control systems. But with the development of fault-tolerant servers running open operating systems such as Unix and Windows NT, there is now a broader range of applications that can take advantage of continuous availability."
There is an increasing desire to implement continuous availability in hardware rather than complex software cluster solutions, for both support and flexibility reasons, he says. "With price points falling, fault tolerance in hardware is a viable alternative for anything from mail and file servers to enterprise transaction servers."
SCS says there are tremendous opportunities to improve on high availability in the entry-server market. Fault tolerance, which delivers higher uptime than other high-availability techniques, has a part to play in this space, provided it carries the right price tag.
Consolidation is key
Many people directly involved in the server market are looking at consolidation as the current big trend. The demand for servers to provide internet services is rocketing, according to Intel, with only about four per cent of the servers required to support internet traffic in the next five years currently installed.
As servers proliferate, IT departments come under increasing strain and the industry is searching for ways to solve the problem.
Mike Fish, European managing director at Platform Computing, says: "Management of hundreds of servers can be an administrative nightmare. Today's website environments are growing at a rate that would have been inconceivable a few years ago. Long gone are the days when stately planning processes and incremental site growth were the order of the day."
Managers now need to grapple with a growing number of servers; the regular updates to applications for many sites; and the drive to add recent technologies. "It's creating a patchwork quilt of applications and operating systems," says Fish.
"These trends have turned the job of managing a site environment into a costly challenge for many organisations. What if you could simply treat all your servers like one discrete entity, a single box? By using the latest server consolidation technology, many organisations are able to virtually consolidate without the need to move applications and data to a single-box model. This gives them all of the advantages of distributed systems without the headaches."
Thomas agrees. "There is a clear trend towards server consolidation nowadays. Because of the millennium bug issue, IT managers were forced to take inventory of their systems and one of the major discoveries was the number of NT systems scattered throughout the enterprise. Now they have begun to get their arms around server proliferation," he says.
In the server market, there are clear trends emerging, with increasing demand for more powerful, highly-resilient systems. The good news is that things are likely to get even better as organisations look towards new technologies such as San, Nas, clustering and consolidation as ways to manage their investment in servers and to make sure that the hardware keeps pace with the explosive growth in online services.
- network server customers tend to be more cautious about adopting the latest technology and prefer the tried and trusted approach
- continued growth in internet-related business and the advent of ecommerce is fuelling the demand for more powerful servers
- dual and multiprocessor servers are very popular, with SCSI still dominating the hard disk storage requirement for servers
- San and Nas are starting to have an impact as companies seek flexible solutions to storage
- network resilience is fast becoming a key factor and technologies such as server clustering are fuelling demand for servers
- the problem of managing the continued proliferation of servers within organisations is finally starting to be addressed with 'server consolidation' techniques.