The recent spate of distributed denial of service (DDoS) attacks, in which a new breed of complex networking tools put allegedly secure, high-volume sites out of action, has left many network managers wondering whether they can ever guarantee that their networks are safe from attack.
At 10.30am on Tuesday 15 February, an unknown person or group launched an attack on Yahoo, flooding its routers with over a gigabyte of data per second until the network couldn't take it any longer. Millions of users were unable to access mail, schedules and the web directory service for three hours.
The company denied that the attack was the work of malicious hackers and said the security of its system was not in question - after all, there had been no security breach and no customer information had been stolen.
"There was absolutely no problem with security, it was simply a surge of traffic sent at such a pitch that the systems couldn't cope," said a Yahoo spokesman. The site had suffered a "co-ordinated, distributed, denial of service attack" as the result of an overload of traffic sent by an external source that had intermittently shut down European and US services between 8.30pm and 11.30pm BST.
Using servers as weapons

What was alarming was that the attack had come from some 50 different IP addresses. The crackers had covered their tracks using third-party servers as staging posts from which they launched their attacks. The servers chosen were unprotected because they didn't contain mission-critical information, or were not considered to be worth a hacker's efforts. But Yahoo's experience proved that in the hands of a hacker with the latest tools, any open server could be used as a weapon.
Within days, similar attacks were carried out against CNN, Amazon, eTrade, eBay and ZDNet, with similar effects. US network managers on the North American Network Operators Group (NANOG) mailing list were soon trying to piece together what had happened and, more to the point, how to stop it happening again.
A pattern was emerging. The crackers were using a piece of code called 'trin00' which had been posted on security bulletin boards since July last year. New versions had since been developed that used a distributed, multi-level client-server architecture, in which 'master' programs communicated with distributed 'client' applications, which in turn controlled distributed 'agents'.
The 'agents' were first found in the wild on Solaris systems that had been compromised using buffer overrun bugs in the statd, cmsd and ttdbserverd RPC services. The purpose of these applications was to carry out ICMP, SYN and UDP flood and 'smurf'-style attacks.
More recent versions of trin00-type tools contain an on-demand root shell program. They improve on the originals by incorporating encrypted communication between the master programs and the clients and agents.
Upgrading security protocols

While many searched for a technological answer to the attacks, others used them as an excuse to sell security products. Network Associates, which has a scanner designed to warn network managers that a DoS attack is underway, warned that companies could be sued if a hacker used their servers in an attack on another company.
However, IT solicitor George Gardiner, of Tarlo Lyons, disputed this claim. "It would be difficult to claim a case of vicarious liability against someone whose server had been used in a DoS attack," he said. "Many courts accept the internet is a bit like the Wild West and would understand that the person who owned the server had no control over the attack."
The only people who could be affected legally are those with contracts with the victims that specify that they should take every step to prevent such an attack, said Gardiner. But even if they are not going to be sued, very few network managers would be happy about such an open breach of security.
Barry Shein, chief executive and president of US ISP World.com, said: "We need to open up the dialogue in the networking community as to whether or not we've become ossified." He feels the industry is reluctant to push for the dramatic upheaval involved in upgrading the internet backbone to IPv6, which would bring more security and accountability.
"The network protocols we're using on the internet were developed in an era when we could trust those who had access to the internet (universities, governments and a few large corporations), and if that failed there was accountability; abusers were easily dealt with because they were working within a strict framework, such as a student or faculty member. It wasn't hard to exercise some control over someone whose behaviour was even slightly out of line, let alone criminal," he said.
Greg Knauss, vice president of research and development for US company Estate Valuations & Pricing Systems, said that it is time to realise that IP just isn't up to the job. "Poor IP. You can fatten the pipes to shuffle more data around, you can slap cryptography around each packet to make sure that your credit card number arrives safely at an ecommerce site, but you can't stop low-level hacks like smurfing, SYN floods and other DoS attacks," he said.
"IP is broken and there's nothing you can do about it. There are plenty of proposed solutions to the basic failures of the protocol, but all of them have gone almost exactly nowhere," said Knauss. He wished that more had been done with IPv6. A couple of years ago, amid fears that the internet's 'address space' was running out, there was a big push for IPv6, which solves many of the problems of the current version, including preventing untraceable network attacks.
Building firewall defences

But the earliest date for an IPv6 internet is a decade away, according to Knauss, and the vast majority of applications and operating systems don't support it. As Knauss said: "The people capable of pushing for improvements have been too busy being fat and happy. Why bother? Things are working fine!"
Gia Threatte, director of security website Packet Storm, said: "One must understand that the problems we face are the result of building a global network without strong protocols. Until we decide to address the heart of the problem, everything else is cosmetic."
Short of major surgery on IP, is there anything else that network managers can do to protect themselves? Threatte believes that crackers can be beaten and there are several things that administrators can do to make their systems less vulnerable.
Threatte advised that all traffic not explicitly needed for the services you run should be automatically denied access. She added that external servers should have the latest kernels and security patches, because that will stop attacks such as stream.c that rely on known weaknesses in older software.
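Threatte's default-deny rule can be sketched as a simple policy check. The service list below is hypothetical and purely illustrative; a real deployment would express the same policy in firewall or router configuration rather than application code.

```python
# Default-deny policy: a packet is allowed only if it matches a service
# the site explicitly runs; everything else is refused.
# The service list is a hypothetical example, not a recommendation.

ALLOWED_SERVICES = {
    ("tcp", 80),   # web server
    ("tcp", 25),   # mail
    ("udp", 53),   # DNS
}

def allow(protocol: str, dest_port: int) -> bool:
    """Return True only for traffic explicitly needed by the services run."""
    return (protocol, dest_port) in ALLOWED_SERVICES

print(allow("tcp", 80))   # web traffic passes
print(allow("tcp", 23))   # telnet is denied by default
```

The point of the pattern is that new or unexpected traffic is denied without any rule having to name it.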
"But the best advice that can be given to any network administrator is to speak with their upstream providers, as they are best situated to defend you against DoS attacks," she said. "You should formulate a plan of action so that in the event of an attack you can contact them and co-ordinate quickly to solve the problem."
By enabling features such as unicast RPF, access lists, ingress and egress filtering, and advanced rate limiting, upstream providers can drop most, if not all, of the attacking DDoS packets. Firewall vendors such as Checkpoint, Cisco and Raptor have incorporated features into their products to shield downstream systems from SYN attacks.
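As an illustration of the rate limiting mentioned above, here is a minimal token-bucket sketch. The rates chosen are arbitrary, and production routers implement this in hardware or firmware rather than in code like this.

```python
class TokenBucket:
    """Token-bucket rate limiter: packets are forwarded while tokens
    remain and dropped once the bucket is empty, capping the rate at
    which flood traffic can reach downstream systems."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate       # tokens added per second
        self.burst = burst     # maximum bucket size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill according to elapsed time, then spend one token if available.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=5)   # 10 packets/s, bursts of up to 5
# A burst of 20 packets arriving at the same instant: only 5 pass.
passed = sum(bucket.allow(now=0.0) for _ in range(20))
print(passed)  # 5
```

Legitimate traffic within the configured rate is untouched; only the excess of a flood is shed.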
In addition, your firewall should make sure that outbound packets contain source IP addresses that originate from your internal network, so that these addresses can't be spoofed from your network. But Threatte warned that network managers should not feel secure simply because they have configured all their systems properly, as even well-configured networks can be taken offline during DDoS attacks.
Joseph Shaw, a Houston-based programmer and consultant, said that firewalls are sometimes helpless during DDoS attacks. "With packet-based DDoS attacks, filters don't matter. Bandwidth and levels of saturation are what matters," he said. He blames suppliers for failing to come up with adequate technical fail-safes: "As long as technology companies ignore the issues, then we will continue to have problems like this."
Daniel Senie, president of US networking company Amaranth Networks, said that it was important for network managers to understand that they not only need to protect themselves, but have an obligation to prevent their networks from being used to attack others.
"Often folks only look at the inbound threat. Ingress filtering, which can really also be seen as egress filtering from the end-network looking toward the internet, is something all networks should do," said Senie. Network managers should start by disabling directed broadcast on their routers as it helps prevent their networks from being used to amplify attacks, he advised.
Creating warning systems

Rodney Caston, internet services manager at US telco Southwestern Bell, said network managers should use anti-spoofing filters on their router/firewall to make sure that their machines don't forge source addresses. "This might stop any machines which have been compromised from participating in a DDoS attack," he said.
"Just about any network management system can be configured to poll interface counters on a regular basis and send an alarm when some threshold is reached," said Caston. "But the difficult question to answer is: How long should the link be saturated before sending an alarm?"
Caston said that this decision is a lot easier to make when you are using high-speed links, simply because it is easier to saturate a T1 with a file transfer than an OC-3. "Alarms should be based on deviation from the established mean. If a circuit sees around 50Mbps worth of usage on a regular basis, and then spikes to 130Mbps and stays there, something is clearly wrong," he said.
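Caston's deviation-from-the-mean rule can be sketched as a simple threshold check. The factor of two below is an illustrative choice for the example figures he gives, not a recommendation from the article, and a real monitoring system would also require the spike to persist before alarming.

```python
def should_alarm(samples, current_mbps, factor=2.0):
    """Alarm when current utilisation is far above the established mean --
    here, more than `factor` times the historical average.
    The threshold factor is an illustrative assumption."""
    mean = sum(samples) / len(samples)
    return current_mbps > factor * mean

history = [48, 52, 50, 49, 51]        # roughly 50Mbps of normal usage
print(should_alarm(history, 130))     # spike to 130Mbps trips the alarm
print(should_alarm(history, 55))      # ordinary variation does not
```

In Caston's example, a circuit that normally carries around 50Mbps and then spikes to 130Mbps and stays there would cross this kind of threshold.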
However, network managers may have to face the fact that there is little they can do to stop DDoS attacks. Peter Scott, network manager for Dallas-based McBison's Plastics, pointed out that even with the best tools there is no real way to stop them: "Amazon has some of the most expensive networking and filtering equipment. All that did was reduce the amount of time their network was offline. In the end there is little you can do to stop these sorts of attacks."