From Firewalls to Honeypots
Consider a demilitarized zone (DMZ) between two firewalls, where the outside firewall is configured to prevent outside access to any active ports of the inner, secure network, and the inside firewall is configured to prevent inside access to any of the outer world. Thus, inside machines and outside machines may not communicate directly.
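The two-firewall arrangement above can be sketched as packet-filter rules. This is a minimal illustration using iptables, with hypothetical addresses (192.0.2.0/24 for the DMZ, 10.0.0.0/8 for the inner network, 192.0.2.10 for a DMZ server); a real configuration would need many more rules.

```shell
# Outer firewall: outside traffic may reach the designated DMZ server
# on its public port, but may never reach the inner network.
iptables -A FORWARD -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -d 10.0.0.0/8 -j DROP

# Inner firewall: inside machines may reach the DMZ, but nothing
# beyond it, so no direct inside-to-outside communication is possible.
iptables -A FORWARD -s 10.0.0.0/8 -d 192.0.2.0/24 -j ACCEPT
iptables -A FORWARD -s 10.0.0.0/8 -j DROP
```

Note that under these rules, all traffic between the two worlds must pass through machines in the DMZ, which is exactly why the security of those machines matters so much.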
Clearly, the security of the machines in the no-man's land between the firewalls is crucial. An attacker able to launch processes on these machines could use them to launch attacks on machines inside the inner firewall, and an insider intent on improper communication with the outside world could use these machines as intermediate points for such communication.
The strategies for dealing with these threats include:
The phrase "to tunnel through a firewall" refers to applications that allow users to defeat firewalls. Typically, such a software system encrypts the material that the user desires to send through the firewall, disguises it as permitted material, and then passes it through the firewall for decryption on the other side. Development of such tunneling software has been spurred by the Great Firewall of China and by voluntary restriction of material delivered to Chinese sites by some web content providers and search engines.
My workstation in MacLean Hall keeps log files for its network interfaces, and one day I came in to find the machine dead. When I attempted to reboot it, it came up in a very minimal service mode. It turned out that the file system was full, and it didn't take much study to determine that the culprit was a log file associated with the E-mail system.
The eventual diagnosis was interesting: My E-mail program was configured to bounce undeliverable mail back to the sender -- this is normal. A mailer in Montreal had been misconfigured so that when bounced E-mail arrived back at this sender, it too bounced. This caused no problem at all until someone sent some misaddressed E-mail to me via that system in Montreal. At that point, this misaddressed E-mail began to bounce back and forth between Montreal and my machine, adding one entry to the mailer log file for each bounce. The network connections were good enough that, over the weekend, this added up to many megabytes of log information.
Under normal circumstances, this log file grew by less than a kilobyte per week, and there was no mechanism to automatically recycle the log disk space. This was before the spam epidemic, so the only mail traffic to and from my machine under normal circumstances was mail I sent or received.
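The standard defense today is automatic log rotation. The following is a sketch of a logrotate stanza, assuming a hypothetical log path; the machine in the story predated any such mechanism, which is why the file could grow without bound.

```
# Hypothetical stanza for a mailer log file, e.g. in /etc/logrotate.d/
/var/log/maillog {
    weekly          # rotate on a schedule ...
    size 10M        # ... or sooner, if the file grows past 10 MB
    rotate 4        # keep four old copies, then discard the oldest
    compress        # compress rotated copies to save disk space
    missingok       # do not complain if the log is absent
}
```

With a size-based trigger in place, even a runaway bounce loop would cost a bounded amount of disk space before old entries were discarded.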
Cliff Stoll, a system administrator at Lawrence Berkeley Laboratory, run by the University of California at Berkeley, was one of the first to thoroughly document a serious attack on a computer system. His book, The Cuckoo's Egg, documents his experience tracking down the attacker. The Wikipedia entry on this book is worthwhile reading, but the book is even better.
It is noteworthy that Stoll detected the hacker through an accounting discrepancy in the log tracking charges for machine usage. Unix is an old enough operating system that it includes tools for tracking and charging for use of CPU time and memory, on the assumption that a machine is shared by many different users who must pay for their usage. In theory, the log files reflecting time charges to users ought to add up to the total amount billed, and what Stoll started out with was a 75 cent discrepancy.
Another thing to note is that Stoll didn't immediately shut down the machine. He was curious about his attacker, so he decided to monitor the attacker, and he was very careful not to do his monitoring in a way that the attacker could detect -- so he didn't change any software or add any mechanisms that would change the response times seen by the attacker.
By allowing the attacker to continue in operation for some time, he was able to eventually observe the range of activities that the attacker was involved in, and to trace the attacker. It turned out that international espionage against military targets was involved.
Cliff Stoll's monitoring focused on attackers using a production machine to attack other production machines, which allowed the attackers to threaten the integrity of the machines he was using. His machines, used for astrophysics research, were not critical systems. Many machines are sufficiently critical that we don't have the luxury of observing an attacker at work over a long enough period that we can track down the attacker and make an arrest.
If our machines are critical, one option we have is to install an extra machine specifically for monitoring attack attempts. Such a machine is called a honeypot -- like a pot of honey used to attract flies, honeypot machines are used to attract attackers.
Building a good honey pot is not easy. First, it must look convincing to an attacker. It must be convincingly well defended, so that the attacker believes that it is a real machine, and once the attacker breaks in, there must be attractive data or resources on the machine that the attacker can attempt to utilize.
Typically, the defenses on a honeypot machine should be weaker than the defenses on the machines it is designed to defend, but there is some value in building honeypot machines with a range of defenses, in order to test the skills of the community of attackers.
The second part of building a good honeypot involves installing sufficiently thorough monitoring machinery that you can record the attacker's activity. Without this, the honeypot is of no value.
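As a very small illustration of the monitoring side, here is a sketch of a low-interaction honeypot: a listener that presents a fake service banner on a decoy port and logs every connection attempt along with whatever the visitor sends. The port number, banner string, and log format are all illustrative assumptions; a real honeypot would be far more elaborate and would keep its logs somewhere the attacker cannot reach.

```python
import datetime
import socket

def run_honeypot(port=2222, log_path="honeypot.log", max_connections=1):
    """Listen on a decoy port, recording each connection attempt."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        for _ in range(max_connections):
            conn, addr = srv.accept()
            with conn:
                stamp = datetime.datetime.now().isoformat()
                with open(log_path, "a") as log:
                    log.write(f"{stamp} connection from {addr[0]}:{addr[1]}\n")
                # Present a plausible banner so the visitor believes a
                # real service is running, then record whatever arrives.
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")
                data = conn.recv(1024)
                with open(log_path, "a") as log:
                    log.write(f"{stamp} received: {data!r}\n")
```

The essential point is that everything the attacker does -- including the mere act of connecting -- produces a log entry, while the machine itself offers nothing of real value.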
Honeypots may involve entire networks of decoy machines, and they may involve "honeywalls" -- that is, firewalls that exist not to block access to anything, but rather, to monitor the traffic into the honeypot.
As usual, Wikipedia is a good source. See: