Cyber-criminals start attacking servers newly set up online about an hour after they are switched on, suggests research.

The servers were part of an experiment the BBC asked a security company to carry out to judge the scale and calibre of cyber-attacks that firms face every day. About 71 minutes after the servers were set up online they were visited by automated attack tools that scanned them for weaknesses they could exploit, found security firm Cyber Reason.

Once the machines had been found by the bots, they were subjected to a "constant" assault by the attack tools.
The servers were accessible online for about 170 hours to form a cyber-attack sampling tool known as a honeypot, said Israel Barak, head of security at Cyber Reason. The servers were given real, public IP addresses and other identifying information that announced their presence online.
"We set out to map the automatic attack activity," said Mr Barak.
To make them even more realistic, he said, each one was also configured to superficially resemble a legitimate server. Each one could accept requests for webpages, file transfers and secure networking.

"They had no more depth than that," he said, meaning the servers were not capable of doing anything more than providing a very basic response to a query about these basic net services and protocols.

"There was no assumption that anyone was going to go in and probe it and even if they did, there's nothing there for them to find," he said.
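A decoy of this kind can be sketched in a few lines: a listener that logs every connection and returns a superficial reply with nothing real behind it. The following is an illustrative Python sketch, not Cyber Reason's actual setup; the logged fields and the canned HTTP response are assumptions.

```python
import socket
import socketserver
import threading
from datetime import datetime, timezone

PROBE_LOG = []  # one record per incoming connection

class ShallowHandler(socketserver.BaseRequestHandler):
    """Log whoever connects, send a bare-minimum reply, offer nothing deeper."""
    def handle(self):
        data = self.request.recv(1024)
        PROBE_LOG.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "peer": self.client_address[0],
            "first_bytes": data[:80],  # enough to classify the probe later
        })
        # A superficial response: looks like a live web server, serves nothing.
        self.request.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

# Bind to an ephemeral localhost port and serve in the background.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), ShallowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate one automated probe against the decoy.
with socket.create_connection(server.server_address) as probe:
    probe.sendall(b"GET /admin HTTP/1.1\r\nHost: decoy\r\n\r\n")
    reply = probe.recv(1024)
```

The point of the design is the asymmetry: the handler records everything about the caller while revealing almost nothing about itself, which is what lets a honeypot sample attack traffic without offering anything worth stealing.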
The servers' limited responses did not deter the automated attack tools, or bots, that many cyber-thieves use to find potential targets, he said. A wide variety of attack bots probed the servers seeking weaknesses that could be exploited had they been full-blown, production machines.
Many of the code vulnerabilities and other loopholes they looked for had been known about for months or years, he said. However, added Mr Barak, many organisations struggled to keep servers up to date with the patches that would thwart these bots, potentially giving attackers a way into the server.
During the experiment:
17% of the attack bots were scrapers that sought to suck up all the web content they found
37% looked for vulnerabilities in web apps or tried well-known admin passwords
10% checked for bugs in web applications the servers might have been running
29% tried to get at user accounts using brute force techniques that tried commonly used passwords
7% sought loopholes in the operating system software the servers were supposedly running
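A breakdown like the one above falls out of a simple tally over a labelled probe log. The sketch below is illustrative: the category labels are paraphrased from the article and the event counts are chosen to reproduce its figures, not taken from real log data.

```python
from collections import Counter

# Illustrative probe log: 100 labelled events matching the reported shares.
events = (["scraper"] * 17
          + ["web-app vulnerability or admin-password scan"] * 37
          + ["web-app bug check"] * 10
          + ["brute-force login"] * 29
          + ["OS exploit probe"] * 7)

# Tally the categories and convert to whole-number percentages.
counts = Counter(events)
percentages = {cat: 100 * n // len(events) for cat, n in counts.items()}
```

With a real honeypot log, only the labelling step would differ; the tally itself stays the same.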
"This was a very typical pattern for these automatic bots," said Mr Barak. "They used similar techniques to those we've seen before. There's nothing particularly new."
As well as running a bank of servers for the BBC, Cyber Reason also sought to find out how quickly phishing gangs start to target new employees. It seeded 100 legitimate marketing email lists with spoof addresses and then waited to see what would turn up.