It's 10pm. Your back aches and your eyes are blurred. Two hours ago you logged on to the internet to check the latest on mobile commerce, and after weaving a complicated path, you're none the wiser. Sound all too familiar?
Don't despair. A lot of us seem surprisingly inept at finding facts online. While some business professionals search efficiently and with confidence, others tend to visit just a few familiar sites - often those of their existing suppliers. They rarely use the internet to solve new problems or find independent points of view.
This unexpected insight emerged during a series of recent discussions and focus groups with Computing readers. There are often good reasons for this lack of facility with the internet: surfing is still automatically regarded as wasting time in some workplaces, for example, and some people prefer to keep internet use at work to a minimum, as they find it too easy to get side-tracked.
For many individuals, the real problem is that they are simply not very good at surfing. They aren't familiar with the best information sources and most appropriate search tools, so finding what they are after takes much longer than it needs to.
We've put together an up-to-date surfing toolkit for IT professionals. All the software and sites mentioned here are free (usually funded by advertising or investors) unless stated.
Search engines get a makeover
The biggest inhibitor to successful internet use is poor search engines. It's a problem that can often lead people to give up on the internet as a sensible way of finding relevant facts fast.
The first and very obvious problem is bloat. The top search engines such as Yahoo and AltaVista have acquired ideas above their station. In an effort to turn themselves into money-making portal sites appealing to everyone, they've added feature after feature. As a result you have to wade through too much clutter before you even get started. Once you've entered your search terms you'll probably have to endure intrusive advertising banners supposedly relating to your query, as many of the search sites now sell ads triggered by words typed in by the hapless user.
The bloat is so out of hand that the leading search sites have made themselves vulnerable to simple, fast new challengers which are good at the basics. www.google.com is a prime example of a good site which returns relevant results very quickly. Searching on 'quiet PCs', for example, gave excellent links right at the top of the results list. Searching on the slightly more ambiguous 'EDS and Nato hoax' still managed to return a relevant link in the top three.
Google has recently been joined by www.raging.com, a cut-down engine from the AltaVista stable. If you're already familiar with AltaVista's search terminology this may be worth considering, because it's basically just a quicker version. It returned poorer quality results to the 'quiet PCs' search, however, yielding far too long a list of hits where the quietness of PCs was only incidental to the page. And it failed altogether on 'EDS and Nato hoax', whichever way we phrased the query, finding no mention of this story.
Keeping up to date
The second big problem with search engines is that the results you get back are often not up to date. This is a severe drawback, because one of the main advantages of the internet over traditional paper-based media is that it allows people to publish information almost instantly. This isn't much use if other people can't find it, however.
The difficulty is that there are a lot of pages out there - more than a billion according to an estimate by search software specialist Inktomi and the NEC Research Institute. So inevitably it takes search engines an appreciable amount of time to index new information. Most engines find and read the pages by running automatic software probes, called crawlers or spiders. Each search engine has numerous spiders traversing the web at any given moment. It can still take more than a month for a spider to get back to a site for another visit.
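The idea behind a spider is simple enough to sketch in a few lines. The toy "web" below is an in-memory link graph with made-up addresses, and the traversal is a plain breadth-first walk that visits each page once — an illustration of the principle, not how any real engine's crawler works.

```python
from collections import deque

# A toy "web": each page maps to the pages it links to (made-up data).
WEB = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com", "d.com"],
    "d.com": [],
}

def crawl(start):
    """Breadth-first crawl from a seed page, visiting each page once."""
    seen = {start}
    queue = deque([start])
    visited = []
    while queue:
        page = queue.popleft()
        visited.append(page)          # a real spider would index the page here
        for link in WEB.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

print(crawl("a.com"))  # every page reachable from the seed, each exactly once
```

Scale this up to a billion pages, with politeness delays and revisit schedules per site, and the month-long revisit gap mentioned above stops being surprising.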
One way to do the job more quickly is to index a much smaller number of sites. Indeed, some very specialist search tools do use this approach. www.poynter.org/nelson, an experimental search engine aimed at journalists, is just one example. It manages to get spiders to revisit its small collection of news sites every 24 hours.
When topicality really matters, it's better to abandon the search engine approach and go to a news site that specialises in your chosen area, which means you will at least get up-to-date information. But it may not cover everything - even the best news services miss stories, and it isn't easy to predict which site will get a particular story out first.
A good way around this is a site aggregator such as www.newsnow.co.uk. This UK-based site scans a selection of the best news sites and displays headlines pointing to them. Because it only scans about 1500 sites and doesn't dig very deeply into them, News Now is able to update every five minutes.
Striving for completeness
The third problem with search engines is a humdinger: most pages never get visited. We've already touched on the difficulties of having to index a billion pages. So you won't be too surprised to learn the awful truth - the spiders don't bother.
Most search engines index no more than a third of the pages available, according to Search Engine Watch, a respected independent source in this area.
AltaVista is currently the most comprehensive at 35 per cent, but many others are in the mid and low twenties. See www.searchenginewatch.com/reports/sizes.html for details. The problem can only get worse as the internet continues to expand exponentially.
One solution is to run your query through several search engines manually. A better option is to use a search engine aggregator, or meta-search engine, which will do this for you.
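The core trick of a meta-search engine is merging several hit lists into one ranking. The sketch below works on canned result lists (the engine names and URLs are invented for illustration): a URL scores higher the more engines return it, with ties broken by its best position in any list. Real meta-search engines use their own, undisclosed merging rules.

```python
# Canned results standing in for responses from several engines
# (engine names and hit lists are made up for illustration).
RESULTS = {
    "engine_a": ["quietpc.com", "silentpcreview.com", "example.com"],
    "engine_b": ["silentpcreview.com", "quietpc.com"],
    "engine_c": ["example.com", "quietpc.com"],
}

def meta_search(results_by_engine):
    """Merge hit lists: rank each URL by how many engines returned it,
    breaking ties by its best (lowest) position in any single list."""
    score = {}
    for hits in results_by_engine.values():
        for pos, url in enumerate(hits):
            count, best = score.get(url, (0, pos))
            score[url] = (count + 1, min(best, pos))
    # Sort: most engines first, then best position
    return sorted(score, key=lambda u: (-score[u][0], score[u][1]))

print(meta_search(RESULTS))  # quietpc.com tops the list: all three engines agree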
www.savvysearch.com was one of the pioneers of this approach. It's still pretty good, returning results fast from its default collection of five engines (you just click to get more).
The best aggregator we've come across is www.copernic.com. The freeware version employs a pretty good set of search sites, but if this isn't enough for you then you can upgrade to a paid-for version, which adds more.
On the downside, before you can use Copernic you will need to download and install some software on your machine. The effort is worth it, however. Copernic not only searches thoroughly, it organises the results in a useful way.
By default it saves all your searches, so you don't have to redo them if you get side-tracked when following up one of the links. You can also edit and print results as a neat report, or publish them as a web page, so Copernic is also good for sharing links with other people.
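Publishing a results list as a shareable web page is itself a simple transformation. The sketch below renders a list of (title, url) hits as minimal HTML; the layout is a generic assumption, not Copernic's actual output format.

```python
import html

def results_to_html(title, hits):
    """Render a list of (title, url) hits as a minimal, shareable HTML page.
    A generic sketch -- not the format any particular tool produces."""
    items = "\n".join(
        f'  <li><a href="{html.escape(url)}">{html.escape(text)}</a></li>'
        for text, url in hits
    )
    return (f"<html><head><title>{html.escape(title)}</title></head>\n"
            f"<body><h1>{html.escape(title)}</h1>\n<ul>\n{items}\n</ul>\n"
            f"</body></html>")

page = results_to_html("Quiet PCs", [
    ("Silent PC Review", "http://www.silentpcreview.com"),
    ("Quiet PC", "http://www.quietpc.com"),
])
print(page)
```

Escaping the titles and addresses matters: a search hit whose title contains `<` or `&` would otherwise break the page.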
Remembering good sites
Anyone who spends a lot of time browsing inevitably encounters many sites that they think they might want to visit again. This list is likely to grow quite large. Managing it properly saves time, because you don't need to fumble for half-remembered web addresses or do repeat searches unnecessarily.
The standard way of keeping track of favourite links is to use the bookmark or favourites feature provided in your browser. Different browsers provide different facilities for organising the links into meaningful categories. However, most people seem to just pile them up without any attempt at organisation.
If your browser's bookmark facilities don't appeal to you, there are numerous third-party utilities available at shareware sites such as www.tucows.com. If none of these suits you, or you are not permitted to install software at work, it's worth trying www.backflip.com.
This is a free internet service that gives you a password-protected place to store your bookmarks off-site. Since they are stored remotely, you can access them from any machine, which means it is a good way to share bookmarks between home and work machines. It also lets you get at your bookmarks when travelling around.
Faster access to favourite sites
One way of speeding up access to your favourite sites is to play around with your browser's cache and history settings, so that more of the components of the pages you frequently visit are stored locally. However, this carries the penalty of using up a lot of your own hard disk space. Furthermore, not everyone is comfortable with altering the default browser preferences, and some support departments frown on it.
A lower risk approach is to use www.quickbrowse.com, which offers similar meta-searching facilities to Savvy Search. But there's another useful trick up its sleeve, which it calls meta-browsing. This lets you browse several sites in one go.
You just list the pages you frequently visit, either from different sites or the same one. Quickbrowse then stacks them up on top of each other, so you seem to be browsing a single big page.
This does save time once the giant page has loaded. Good luck turning browsing into useful searching.