
FREE

ORACLE FORENSICS
Detection of Attacks Through Default Accounts and Passwords in Oracle

VOL. 1 NO. 1

ADVANCED STEGANOGRAPHY: ADD SILENCE TO SOUND
LIVE CAPTURE PROCEDURES
MOBILE PHONE FORENSICS: HUGE CHALLENGE OF THE FUTURE
ISSUES IN MOBILE DEVICE FORENSICS
INVESTIGATING FRAUD IN WINDOWS-BASED DRIVING EXAMINATION THEORY SYSTEMS AND SOFTWARE
DRIVE AND PARTITION CARVING PROCEDURES
Issue 1/2012 (1) July
www.eForensicsMag.com 1

Improve your Firewall Auditing


As a penetration tester you have to be an expert in multiple technologies. Typically you are auditing systems installed and maintained by experienced people, often protective of their own methods and technologies. On any particular assessment testers may have to perform an analysis of Windows systems, UNIX systems, web applications, databases, wireless networking and a variety of network protocols and firewall devices. Any security issues identified within those technologies will then have to be explained in a way that both management and system maintainers can understand. The network scanning phase of a penetration assessment will quickly identify a number of security weaknesses and services running on the scanned systems. This enables a tester to quickly focus on potentially vulnerable systems and services using a variety of tools that are designed to probe and examine them in more detail, e.g. web service query tools. However, this is only part of the picture, and a more thorough analysis of most systems will involve having administrative access in order to examine in detail how they have been configured. In the case of firewalls, switches, routers and other infrastructure devices, this could mean manually reviewing the configuration files saved from a wide variety of devices.

Although various tools exist that can examine some elements of a configuration, the assessment would typically end up being a largely manual process. Nipper Studio is a tool that enables penetration testers, and non-security professionals, to quickly perform a detailed analysis of network infrastructure devices. Nipper Studio does this by examining the actual configuration of the device, enabling a much more comprehensive and precise audit than a scanner could ever achieve. With Nipper Studio penetration testers can be experts in every device that the software supports, giving them the ability to identify device, version and configuration specific issues without having to manually reference multiple sources of information. With support for around 100 firewalls, routers, switches and other infrastructure devices, you can speed up the audit process without compromising the detail. You can customize the audit policy for your customer's specific requirements (e.g. password policy), audit the device to that policy and then create the report detailing the issues identified. The reports can include device-specific mitigation actions and be customized with your own company's styling. Each report can then be saved in a variety of formats for management of the issues.

Device Auditing (Scanners* vs. Nipper Studio):
Password Encryption Settings
Physical Port Audit
Network Address Translation
Network Protocols
Time Synchronization
Warning Messages (Banners)
Network Administration Services
Network Service Analysis
Password Strength Assessment
Software Vulnerability Analysis
Network Filtering (ACL) Audit
Wireless Networking

* Limitations and constraints will prevent a detailed audit

enquiries@titania.com
T: +44 (0)845 652 0621

Ian has been working with leading global organizations and government agencies to help improve computer security for more than a decade. He has been accredited by CESG for his security and team leading expertise for over 5 years. In 2009 Ian Whiting founded Titania with the aim of producing security auditing software products that can be used by non-security specialists and provide the detailed analysis that traditionally only an experienced penetration tester could achieve. Today Titania's products are used in over 40 countries by government and military agencies, financial institutions, telecommunications companies, national infrastructure organizations and auditing companies, to help them secure critical systems.

www.titania.com
www.eForensicsMag.com 3

Dear Readers!
TEAM

Editor: Aleksandra Bielska aleksandra.bielska@software.com.pl

Associate Editors: Sudhanshu Chauhan (sudhanshu.chauhan@software.com.pl), Praveen Parihar (praveen.parihar@software.com.pl), Hussein Rajabali (hussein.rajabali@software.com.pl)

Betatesters/Proofreaders: Nicolas Villatte, Jeff Weaver, Danilo Massa, Cor Massar, Jason Lange, Himanshu Anand, Dan Hill, Raymond Morsman, Alessandro Fiorenzi, Nima Majidi, Dave Mikesch, Brett Shavers, Cristian Bertoldi, Jacopo Lazzari, Juan Bidini, Olivier Caleff, Johan Snyman

Senior Consultant/Publisher: Paweł Marciniak

CEO: Ewa Dudzic ewa.dudzic@software.com.pl

Art Director: Mateusz Jagielski mateuszjagielski@gmail.com

DTP: Mateusz Jagielski

Production Director: Andrzej Kuca andrzej.kuca@software.com.pl

Marketing Director: Ewa Dudzic

Publisher: Software Media Sp. z o.o. SK, 02-682 Warszawa, ul. Bokserska 1
Phone: 1 917 338 3631
www.eforensicsmag.com

DISCLAIMER! The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.

Digital forensics is a very young field of science, but nowadays it's becoming more and more popular. Although it was originally developed for investigating crimes, it has since become a big part of computer systems engineering and has contributed to the development of mobile devices. To meet your professional interests we have created a new publication devoted to digital forensics. I present to you our first eForensics offspring: eForensics Free Magazine. It's a monthly compilation of the best articles from four titles: eForensics Mobile, eForensics Computer, eForensics Database and eForensics Network.

Within this issue of eForensics Free you will find two pieces concerning mobile forensics, an article about network forensics, three pieces focused on computer forensics and an article about database forensics. The article by M-Tahar Kechadi and Lamine Aouad discusses the increasingly important role of mobile forensics in criminal investigations, legal disputes and information security. Eamon Doherty describes tools used to recover data from mobile devices. Craig S. Wright introduces free tools which can be combined into a powerful network forensics and incident response toolkit. Arup Nanda shows you how to identify potential attacks by adversaries through default accounts. George Chlapoutakis guides you step by step through a digital forensic investigation.

Last but not least, I would like to announce the beginning of two article series. One of them, by Craig S. Wright, will take you through the process of carving files from a hard drive. The other, by Praveen Parihar, will take you on a journey through advanced steganography.

Thank you all for your great support and invaluable help. Enjoy reading!

Aleksandra Bielska & eForensics Team

6. ISSUES IN MOBILE DEVICE FORENSICS by Eamon Doherty

MOBILE

This article discusses some of the mobile devices and accessories that one may encounter on a suspect during an investigation, examples of usage of these mobile devices and accessories, and the tools that one can use to examine them. The article also starts off with some certifications that make one more marketable in this emerging field. In this article the author discusses using tools such as Access Data's FTK, Guidance Software's EnCase, and RecoverMyFiles to recover evidence from a digital camera with a FAT file system.

12. MOBILE PHONE FORENSICS: HUGE CHALLENGE OF THE FUTURE by M-Tahar Kechadi, Lamine Aouad

While the processes and procedures are well established in traditional hard drive based computer forensics, their counterparts for the rapidly emerging mobile ecosystem have proven to be much more challenging. In this article the authors share some thoughts about the reasons leading to this, as well as the current state of mobile digital forensics, what is needed, and what to expect in the future.

8. LIVE CAPTURE PROCEDURES by Craig S. Wright

NETWORK

As we move to a world of cloud based systems, we are increasingly finding that we are required to capture and analyse data over networks. Once, the disk drive was the primary source of incident analysis and forensic material. Now we find that we cannot access the disk in an increasingly cloud based and remote world, requiring the use of network captures. This is not a problem, however. The tools that are freely available in both Windows and Linux offer a means to capture traffic and carve out the evidence we require. In this article the author introduces a few tools that, although free, can be used together to create a powerful network forensics and incident response toolkit.

24. ADVANCED STEGANOGRAPHY: ADD SILENCE TO SOUND by Praveen Parihar

COMPUTER

Steganography is a compelling topic for all techno-geeks because extracting the hidden truth requires interesting and comprehensive analysis, and we have heard the term many times in the context of terrorist activities and their communications. In this article the author discusses methods of steganography.

28. INVESTIGATING FRAUD IN WINDOWS-BASED DRIVING EXAMINATION THEORY SYSTEMS AND SOFTWARE by George Chlapoutakis

Fraud can take many forms and can take place practically anywhere, at any time, and in any way. Theoretical driving examinations are now computerized in most parts of the world, and the overwhelming majority of such systems tend to have little to no security, relying instead on the invigilators of the exam to catch those suspected of fraud. But what happens when the invigilators fail and you, the digital forensic investigator, are asked to look into the case? In this article the author shares his experience from the point of view of the digital forensics investigator.

32. DRIVE AND PARTITION CARVING PROCEDURES by Craig S. Wright

This article is the start of a series of papers that will take the reader through the process of carving files from a hard drive. We explore the various partition types and how to determine these (even on formatted disks), learn what the starting sector of each partition is, and also work through identifying the length in sectors of each partition. In this, we cover the last two bytes of the MBR and why they are important to the forensic analyst. We start by learning about hard disk drive geometry.

38. DETECTION OF ATTACKS THROUGH DEFAULT ACCOUNTS AND PASSWORDS IN ORACLE by Arup Nanda

DATABASE

An Oracle database comes with many default userids (and, worse, well-known default passwords), which ideally shouldn't have a place in a typical production database; but database administrators may have forgotten to remove or lock the accounts after setting up the production environment. This provides one of the many ways an adversary attacks a database system: by attempting to guess the presence of a default userid and password, either by brute force or by social engineering techniques. In this article the author will show you how to identify such attacks and trace them back to the source quickly and effectively. You will also learn how to set up a honeypot to lure such adversaries into attacking so as to disclose their identity.

MOBILE

ISSUES IN MOBILE DEVICE FORENSICS


This article discusses some of the mobile devices and accessories that one may encounter on a suspect during an investigation. It is important to know about many of the new devices that are wireless and provide storage, or that utilize GPS to mark routes as well as points of interest. This article discusses examples of usage of these mobile devices and accessories and the tools that one can use to examine them. The article also starts off with some certifications that make one more marketable in this emerging field.

Eighty percent of the people in the world have a cell phone [1]. Many of these devices have cameras and Internet connectivity, and could hold evidence that could help prove someone innocent or guilty with regard to a crime. Some of the evidence gathered to bring down the largest American spy in history, Robert Hanssen, was gathered from a PDA [1]. Tiger Woods' cell phone played a part in revealing that he had contact with at least one other woman [6]. Sexting, cyberbullying, and other modern activities are now making it mandatory for investigators to learn about digital forensics on mobile devices. This article discusses using tools such as Access Data's FTK, Guidance Software's EnCase, and RecoverMyFiles to recover evidence from a digital camera with a FAT file system. The article also discusses GPS forensics, GPS spoofing, and tools such as Berla's Blackthorn 2 to recover routes, waypoints, and phone calls that occurred in the motor vehicle. This article also discusses some of the certifications one should obtain to make oneself more knowledgeable and marketable in this field.

It seems that lawyers are now taking more graduate classes and continuing education in digital forensics to improve their knowledge in this area so that they can better defend clients and spar with digital forensics experts on technical issues. It seems no longer sufficient to just try to find fault in the chain of custody or make sure that a Faraday bag was used because of possible issues with connectivity, contamination, and tampering. Becoming a Certified Computer Examiner (CCE) and being a lawyer sounds like a great combination for those who are employed as defense attorneys. It is also good to join organizations such as ASIS International, The American Society of Digital Forensics and eDiscovery, The High Tech Crimes Investigative Association (HTCIA), and the International Association of Computer Investigative Professionals (IACIS) to network with people and keep up with the current news in digital forensics. It is good to ask what types of cases are going on, what types of tools are needed, and what types of certifications are needed.

COMBINING CERTIFICATIONS TO MAKE ONESELF MORE MARKETABLE


A digital forensics student told Dr. Doherty and his fellow classmates many years ago that he had a strategy to make himself more marketable to the digital forensics industry. He said that he was getting his EnCase Certification (EnCE) and various Hazmat certifications while working as a volunteer fireman. He said that while many people could go to a crime scene and image a computer, collect digital media and accompanying peripheral devices, only a few could do that in an environment where a biological, radiological, or chemical attack occurred. Soon after graduation he obtained a position in digital forensics.

DIGITAL CAMERA SEIZURE AND EXAMINATION

It is important to put the digital camera in a Faraday bag if one seizes a digital camera, because many of these devices have infrared or some type of wireless connectivity [1]. The Faraday bag, like Paraben's StrongHold Bag, is a good way to prevent others from connecting to the camera and altering the evidence. There are many people who carry and use PDAs, iPads, and palmtops that are small and could go unnoticed, so it is important to protect against possible tampering by isolating the evidence with that bag [1]. That is also good to note on the chain of custody. The camera should also be transported in a cool place so that heat does not damage it. It is also good to protect the digital camera from radio frequency waves that could damage the media by not putting it near a



police radio or ham radio transceiver or antenna. The battery should be kept dry. Many mobile devices have batteries with an indicator that turns red or pink if the battery becomes wet and unusable. It would be useful to keep extra batteries for commonly used mobile devices in the examination lab in case a battery was wet and the device will not power on.

At the lab, a person should first verify that no wireless or wired connectivity exists with the examination machine. Then one can reimage the examination machine's hard drive to its original state, so that no residue from the last examination remains, and run antivirus, antispyware, and anti-rootkit software from a CD placed in the CD drive. Running this software is not strictly required, but such overt acts of carefulness can help prevent defense lawyers from having another point to argue and detract from the examiner's credibility. The examiner may then connect a Tableau write blocker or Belkin external write blocker to the USB port so that the camera's evidence is not altered when connected to the examination station. Some write blockers also have slots for SD cards and many of the digital media cards that are commonly available on the market. Everything stays read-only. Then the camera is connected to the write blocker and powered on.

Forensic programs such as Paraben's Device Seizure, Access Data's FTK, GetData's RecoverMyFiles, or Guidance Software's EnCase can be used to seize the evidence from the camera, help organize it, and create a neat standardized report of what was recovered. Bookmarked evidence can be commented on with Device Seizure and added to the report. Many digital cameras use a FAT 16 or FAT 32 file system for their internal memory and external cards. This means that the forensic tools that can recover files on FAT file system devices such as external USB drives, PDAs, laptops, and desktops can also be used with these digital cameras.
If the person deleted the file, the pointer and the directory entry are gone, but that picture is still there until a new picture or any chunk of data is written over it [1]. It is also important to remember that digital cameras and many cell phones, such as the BlackBerry Curve, go into mass storage mode when one plugs in the USB to mini-USB synchronization cable. That means the device acts like a big thumb drive or flash drive, and files can be dragged and dropped onto the camera's internal storage or external storage card. Some of those files could be hidden files that are not visible when the directory is viewed. That is why it is important to use tools such as Access Data's FTK, which look for hidden files and graphic files whose file extension has been changed (e.g. to .doc) to fool people. FTK and some other forensic tools are also useful because they contain hash file libraries and can easily find known existing child pornography that may be hidden on a digital camera, as well as intellectual property, films, secret documents, and any other file one can think of.

Many people are unaware of the EXIF (Exchangeable Image File Format) metadata that exists in a digital photograph taken with a digital camera. The brand and model of camera, information about the lens, and sometimes GPS coordinates are often found in this metadata. Figure 2 shows an example of metadata and operating system information in a picture taken of Dr. Doherty getting first aid training at the New Jersey Emergency Management Conference in Atlantic City, New Jersey. If the picture is ported to a Microsoft Windows 7 machine and one right-clicks on the photo and selects Properties, some operating system data about the file is viewable. The operating system records when the file was first created and last modified. Some systems also show the last accessed date even if no changes were made.
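The deletion behaviour just described can be illustrated with a short sketch. On FAT volumes, deleting a file only overwrites the first byte of its 32-byte directory entry with the marker 0xE5; the file's data clusters remain on the media until reused. The directory entry below is synthetic and simplified (a real FAT volume adds long-filename entries, timestamps, and, on FAT32, a high cluster word), built only to show the marker:

```python
import struct

def parse_fat_dir_entry(entry: bytes) -> dict:
    """Parse a 32-byte FAT short-name directory entry (simplified)."""
    assert len(entry) == 32
    name = entry[0:8].decode("ascii", "replace").rstrip()
    ext = entry[8:11].decode("ascii", "replace").rstrip()
    # Little-endian fields: first cluster (low word, offset 26) and size (offset 28).
    first_cluster = struct.unpack_from("<H", entry, 26)[0]
    size = struct.unpack_from("<I", entry, 28)[0]
    return {
        "deleted": entry[0] == 0xE5,  # 0xE5 in byte 0 marks a deleted entry
        "name": f"{name}.{ext}",      # first character is lost once deleted
        "first_cluster": first_cluster,
        "size": size,
    }

# Hypothetical entry for a 4096-byte file IMG_0001.JPG starting at cluster 3.
live = b"IMG_0001JPG" + b"\x00" * 15 + struct.pack("<H", 3) + struct.pack("<I", 4096)
deleted = b"\xe5" + live[1:]  # "deletion" replaces only the first byte

print(parse_fat_dir_entry(live))
print(parse_fat_dir_entry(deleted))
```

Note that the deleted entry still records the starting cluster and size, which is exactly why recovery tools can often restore the file (guessing the lost first character of the name) as long as the clusters have not been overwritten.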
There is an option to remove some or all of the metadata and file information provided by the operating system. This is good if one wishes to release a picture to the public and wants to redact data about the location and photographer, for example. It is also important to remember that other copies of this same picture may exist on another laptop, desktop, or blog site, or in an email attachment to a friend. Some social networking sites have been reported to redact GPS coordinates while other sites do not. These nuances about metadata need to be researched by the investigator. If the case is important but too much for your police department, please consider contacting one of the RCFLs (Regional Computer Forensics Labs) and getting assigned a case manager. A good place to learn about EXIF standards is http://www.cipa.jp/english/hyoujunka/kikaku/pdf/DC-008-2010_E.pdf
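As a minimal illustration of where this metadata lives, the sketch below checks whether a JPEG byte stream carries an Exif APP1 segment (marker 0xFFE1 followed by the ASCII identifier "Exif\0\0"). This only detects the segment; decoding the individual tags requires a library such as Pillow or a tool such as ExifTool. The sample byte strings are synthetic minimal fragments, not real photographs:

```python
def has_exif(jpeg: bytes) -> bool:
    """Return True if a JPEG byte stream contains an Exif APP1 segment."""
    if jpeg[:2] != b"\xff\xd8":       # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        if marker == 0xDA:            # start of scan: no more metadata segments
            break
        i += 2 + length
    return False

# Synthetic fragments: one with an Exif APP1 segment, one with only JFIF APP0.
exif_jpeg = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00"
plain_jpeg = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00" + b"\x00" * 9
print(has_exif(exif_jpeg), has_exif(plain_jpeg))   # → True False
```

A scan like this over a seized image is one way an examiner might triage which photographs are worth opening in a full metadata viewer.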

THE GPS DIGITAL CAMERA AND SDHC CLASS 6 WIRELESS MEMORY CARDS
GPS coordinates have been embedded in cell phone camera pictures for many years, but now it is common to find them embedded in digital camera pictures. The Nikon COOLPIX 510 is one such camera. Dr. Doherty saw a Nikon camera similar to it in action after a Fourth of July fireworks celebration. A man used the geotagging feature as he was taking pictures of a low orbiting orange object with a fiery glow in it. A nearby spectator told the photographer that it was not a malfunctioning satellite or UFO but only a Chinese sky lantern. This type of lantern has caused many false reports of UFOs in recent years [2]. The photographer also showed Dr. Doherty that his 8 GB digital media card could connect to wireless networks within range of his present location and send the photos home. It was an Eye-Fi Mobile X2 SDHC Class 6 wireless memory card. The photographer said that if a picture was deleted by accident, another copy would already exist at home, since he had configured the card to send a copy of every picture taken to his home. Such advancements in technology are important for all digital evidence investigators to be aware of, because they show that evidence can be both covertly and rapidly distributed to one or more points and then deleted on the camera. GPS coordinates are not easy for the average person to decipher, so a good way to determine the location where a photograph was taken is to link it with a free website such as www.gpsvisualizer.com [3]. There is an option to link the photo to Google Maps and determine the location of the photo. Dr. Doherty took a picture of a university open house inside the Marriott Hotel in Teaneck, New Jersey. The hotel complex and parking lot can be seen in Figure 1 near Frank W. Burr Blvd, but the picture shows it was taken approximately one yard away on a golf course nearby.
The inaccuracy sometimes happens if one is in a building with a lot of steel building materials and the digital camera's GPS receiver cannot reach enough GPS satellites to fix the current position. Weather and terrain can impact GPS coordinate accuracy too. The digital investigator should also be aware that people can spoof the GPS coordinates within a picture. A person could use a product such as Eye-Fi to have the picture automatically sent to his or her laptop, then go into the photo editing program known as Picasa 3 and use the geocoding feature to encode new GPS coordinates in the picture. An example of a legitimate use of geocoding is to embed GPS coordinates in your vacation pictures so that you remember where you went years later. The investigator needs to look at the other metadata associated with the graphics file to try to determine whether the picture has been altered. A person may try to use
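Since raw GPS coordinates are hard for the average person to decipher, a small worked example may help. EXIF GPS tags store latitude and longitude as degrees/minutes/seconds plus a hemisphere reference; mapping sites such as gpsvisualizer.com accept signed decimal degrees. The conversion is a sketch with invented sample values (not taken from any real photograph):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Hypothetical EXIF GPS tags for a photo taken in northern New Jersey.
lat = dms_to_decimal(40, 53, 24.0, "N")
lon = dms_to_decimal(74, 0, 54.0, "W")
print(round(lat, 4), round(lon, 4))   # → 40.89 -74.015
```

Pasting the resulting decimal pair into a mapping site gives the approximate capture location, subject to the accuracy and spoofing caveats discussed above.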

the GPS data to fake an alibi. It is important to look at other digital evidence that can refute or corroborate that data. Let us now consider all the digital data that I created when going from home to the Teaneck Marriott. The reason for mentioning this is to realize how much data must be faked if one were truly going to change all records of one's activity and leave no digital footprint. If I was at the Marriott Hotel that day, my cell phone should have accessed certain cell phone towers from my home to those in Teaneck, and those records can be checked. The EZPass transponder in my car should have created records showing that I paid tolls on the NJ Turnpike near the hotel. The Marriott Hotel's digital video recorder should have a record of me passing through the door into the hotel that day, since they have bubble cameras in many places in the lobby. The important point is to realize that there is a plethora of digital data that we create using our devices or as we pass by other people's devices.

TRAVEL HONEY GPS LOCATION FINDER

Chinavision makes a device known as the Travel Honey GPS Location Finder. This device sells for approximately fifty-five U.S. dollars online and is no bigger than an automobile key fob. Many people carry these devices, and they hold a plethora of data about where someone was. The Travel Honey can be used to mark everywhere one flew in an airplane, for example; it may show the route and altitude. Some people use it to mark a hike they went on. Some people may misuse the device to geotag a place where they hid drugs or the body for a contract killing. If a suspect had one of these devices, it would be a good idea to include the desktop and laptop in the search warrant so that the investigator has the mapping software specific to that device and can see the trips that the suspect went on.

PEN CAMERAS

There are a variety of ballpoint pens that one can put in one's front pocket that collect both audio and video. An example of a legitimate use of such a device is recording a meeting concerning one's last will and testament, or grandma's birthday party at the long-term care facility. However, unauthorized filming in an American hospital could be a HIPAA violation and lead to a criminal investigation with penalties such as $25,000 fines and/or six months in jail. Many of these devices can be found on eBay and are shipped from overseas. An example of a video camera pen, the Mini Spy Pen HD Video Hidden Camera TF/MicroSD Card Camcorder 1280 x 960 DVR, can be found on eBay for about $24.00 USD and does not need batteries; these pen cameras typically charge via a USB cable. They also allow the recording of about one hundred minutes of video. The resolution for still photographs is considered high at 3840 x 2880 pixels, and the format is JPG; the format for movies is AVI. FAT file systems are often used with USB devices because of the ease of portability to a variety of operating systems. The forensic tools EnCase, Access Data's FTK, and Recover My Files should be sufficient for both seizure and examination of the device.

GPS NAVIGATION DEVICES FOR THE VEHICLE

The Garmin Nuvi 1300 is an example of a GPS device that sits on the car's windshield or dashboard and can be used for navigation. The device keeps track of waypoints and routes. Favorite bookmarked places can be stored, and they often hold clues to a person's lifestyle, such as who they do business with, where they eat, and where they recreate. I was able to use Blackthorn 2 to perform an experiment to determine whether it was possible to recover my trips where the Garmin Nuvi 1300 was in operation in my vehicle. It was possible to trace my routes, how fast I was going, and the exact start and stop times of my trips. Many Garmin devices have a Bluetooth capability that works with the phone so that people can make hands-free calls. Blackthorn 2 allows the investigator to get information about the duration of a call and the telephone number of the person the driver called. Tomtology is a program for investigating TomTom navigation devices.

DRIVECAMS

There are approximately nine thousand families in the United States that get reduced insurance rates because they have partnered with insurance companies to have video cameras installed in their cars [4]. Sudden turns or road-rage driving should cause the camera to activate itself and the session to be recorded. The family can get an email about poor driving habits and then go to a website to view the video footage. This type of video footage could be important if an accident occurred. There are tools such as Camtasia, Jing, and SnagIt that can capture on-screen activity, but it would behoove insurance examiners to see if these tools will work with online DriveCam video viewed in a browser. DriveCam is also the name of a company located in San Diego, California.

LAPTOP INVESTIGATION OF A VIDEO CONFERENCE SESSION

Inappropriate videoconference sessions may happen in modern society. Perhaps one worker was videoconferencing with another and made an obscene gesture or said something sexually suggestive. Video fragments are important evidence in an investigation, and there are two ways to investigate this. First, the network has traffic on it, and the traffic is often archived. Reviewing archived calls, email, and other digital traffic should not raise any privacy issues in a corporation where people have signed policies indicating that they have no expectation of privacy and that all systems are owned by the corporation. Tools such as NetWitness Visualizer show graphic symbols representative of all types of traffic on the network, including voice-over-IP conversations, email, webpages, and videoconference sessions. One would just have to find that part of the traffic, select the correct traffic, and play back the session. If that is possible, it might be the easiest option. There is a magnificent seven-minute video on YouTube that explains the tool and demonstrates its uses [5]. It is possible to get some preliminary information on how to use forensic tools and some of their capabilities by reviewing YouTube videos and website marketing material. However, it is then important to go a step further and read academic journals and talk to law enforcement investigators at places such as HTCIA meetings or conferences to get more reliable information. A second option is to use a tool such as CnW Recovery software to recover the video fragments; the company is reported to have good technical support, which is important. Data carving video fragments is a challenging task for any investigator. It would help to understand something about headers, footers, and the chaining of clusters. The television show To Catch a Predator discusses how often inappropriate video chatting occurs between adults and minors. It becomes apparent that data carving is a necessary skill needed by prosecutors' offices all over the United States.
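The header/footer carving mentioned above can be sketched as a signature scan: find a format's start marker and its matching end marker in a raw byte stream. The toy carver below uses JPEG's start-of-image (FF D8 FF) and end-of-image (FF D9) markers; real carvers in tools such as FTK or Scalpel also handle fragmentation, embedded thumbnails, and false positives. The "disk" buffer is synthetic:

```python
def carve_jpegs(blob: bytes) -> list[bytes]:
    """Carve candidate JPEGs from a raw buffer by header/footer signature."""
    out = []
    start = 0
    while True:
        soi = blob.find(b"\xff\xd8\xff", start)   # start-of-image marker
        if soi == -1:
            break
        eoi = blob.find(b"\xff\xd9", soi + 3)     # end-of-image marker
        if eoi == -1:
            break                                  # truncated candidate: stop
        out.append(blob[soi:eoi + 2])
        start = eoi + 2
    return out

# Two tiny fake JPEG fragments embedded in junk, as in unallocated space.
disk = (b"\x00" * 8 + b"\xff\xd8\xffAAA\xff\xd9" + b"junk" +
        b"\xff\xd8\xffBB\xff\xd9" + b"\x00" * 4)
frags = carve_jpegs(disk)
print(len(frags))   # → 2
```

Video containers such as AVI have their own signatures and chunked structure, which is why carving video fragments is harder than carving whole still images; this sketch only shows the basic signature-scanning idea.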



CONCLUSION
Figure 2. Metadata from Dr. Doherty at a Conference in Atlantic City

It is necessary for digital forensic investigators to keep getting continuing education and digital forensics certifications such as the EnCE and the CCE as the industry changes. It is recommended that investigators, lawyers, IT incident response team members, and human resources personnel take continuing education classes in mobile device forensics, since a large part of the workforce and population uses these devices and sometimes misuses them in the workplace. There are also fabulous combinations of certifications that can make one truly unique and more marketable, such as being HAZMAT certified alongside the CCE or EnCE. Who else could gather the evidence at a site where a chemical, biological, or radiological attack occurred? It is important to create a validation plan, use different tools on a mobile device, and compare the results. In theory the results should be the same, but they may not be. It is also important to make sure that both the methods and the tools that the investigator uses are acceptable to both the law enforcement and legal community. The tools and methods should pass the Frye test.

Figure 1. The Location of a Picture


Author Bio

Dr. Eamon Doherty has a Ph.D. from the computer science department at the University of Sunderland in England, a Master of Science in Computer Science from Montclair University in New Jersey, and an undergraduate degree in computer science from Marquette University in Milwaukee, Wisconsin. Dr. Doherty is a member of ASIS International, the High Tech Crime Investigation Association (HTCIA), and the American Society of Digital Forensics and eDiscovery. Dr. Doherty is also the Cybercrime Training Lab Director at Fairleigh Dickinson University and teaches a variety of graduate school classes and continuing education classes in computer security, computer seizure and examination, cell phone forensics, and digital camera forensics. Some classes are in person, some are online, and some are done by a state-of-the-art encrypted videoconference system. The students are often practitioners in law enforcement, hospitals, or the military. This mix of people makes a great interactive learning environment where both the students and the professor learn something from each class. The author's picture is in Figure 3.



References
1. Doherty, E. (2012), Digital Forensics for Handheld Devices, CRC Press, ISBN 978-1-4398-9877-2.
2. Thompson, J. (2010), "How UFO hunters are turning to the web: UFOs = Chinese lanterns?", PC Plus, Issue 291, February 20th. URL accessed 7/7/2012: http://www.techradar.com/news/world-of-tech/how-ufo-hunters-are-turning-to-the-web-670669?artc_pg=2
3. URL accessed 7/9/2012: http://www.gpsvisualizer.com
4. Cambria, N. (2011), "Car camera lets parents keep eye on teen drivers", St. Louis Post-Dispatch, January 7, 2011. http://www.columbiamissourian.com/stories/2011/01/07/car-camera-lets-parents-keep-eye-teen-drivers/
5. URL accessed 7/8/2012: http://www.youtube.com/watch?v=p4nIqIWKiMo
6. URL accessed 7/9/2012: http://www.extratv.com/2009/12/02/hear-tiger-woods-panicked-voicemail-to-mistress/



MOBILE

MOBILE PHONE FORENSICS:

HUGE CHALLENGE OF THE FUTURE


While the processes and procedures are well established in traditional hard drive based computer forensics, their counterparts for the rapidly emerging mobile ecosystem have proven to be much more challenging. This article shares some thoughts about the reasons leading to this, as well as the current state of mobile digital forensics, what is needed, and what to expect in the future.

The information and data era is rapidly evolving. As a result, there has been exponential growth in consumer electronics, and especially in mobile devices, over the past few years, with ever-increasing trends and forecasts for the coming years. Mobile devices have already overtaken PCs, and mobile data traffic is expected to increase 18-fold over the next five years to approach 11 exabytes per month, according to Cisco Systems [1]. Their computing power, storage, and functionality have increased tremendously. Phones have been transformed from simple handheld devices, essentially making and receiving calls or text messages, into highly effective devices capable of doing more or less everything a desktop or a laptop computer can do, and even more. The large range of Android-based smartphones, iPhones, BlackBerrys, and even tablets are all examples of these mobile devices. Their typical storage capacity today is higher than that of a powerful desktop back in the late 1990s, and the vast majority can also accept memory cards. This tremendous computational and storage capacity has turned mobile devices into data repositories capable of computing and storing a large amount of personal, organisational, and also sensory information. Indeed, although these devices can be input-limited, they have remarkable context awareness because of all the sensors and various connectivity options. Unfortunately, criminals have not missed this proliferation of mobile systems and its data revolution, and these devices are being used in support of criminal activities. For instance, earlier this year, a US officer found out that the

suspect he was about to arrest was using his smartphone to listen to the police secure channels streaming via the Internet! [2]. All classes of crimes can involve some type of digital evidence (a photo, a video, a received or emitted call, messages, web pages, etc.). These devices are also commonly used in social networking nowadays, and in carrying out sensitive operations online, including online banking, shopping, electronic reservations, etc. Hacking then becomes a huge problem. In February 2011, hackers were remotely monitoring the calls made and received by about 150,000 infected mobile devices in China [3]. Another example is the Zeus man-in-the-mobile Trojan, discovered in September 2010, which was the first Trojan in the mobile environment to compromise online banking's two-factor authentication mechanism [4] [5]. It is indeed quite easy for cybercriminals to build a Trojan application nowadays [6], because these mobile systems are still at an early stage. Valuable information can then be obtained from a mobile device: text messages, e-mails, communication logs, contacts, multimedia files, geo-location information (GPS and Wi-Fi hotspots), etc. These can help answer crucial questions in cybercrime investigations and solve the related cases. However, there are still a huge number of challenges facing a forensics investigator in obtaining forensically sound evidence from these devices. In this article, we present the process of recovering digital evidence and its challenges, then share some information about current methods and tools, and a few prospects for the future.



WHAT IS DIGITAL FORENSICS?

Digital forensics is the process of recovering digital evidence from a computer or other electronic devices. As part of this discipline, mobile digital forensics carries out the evidence recovery process on mobile devices. While the methods, processes, and procedures are more or less well established in traditional hard drive based computer forensics, their counterparts for mobile devices have proven to be quite challenging. The recovery process has to be carried out under what is referred to as forensically sound conditions; the term "forensically sound" is used in the community to justify the use of a particular acceptable method or technology. There are a few reasons for this extra complexity in mobile digital forensics. Mobile devices vary largely in design, hardware, and software, depending on the manufacturer, their type, target functionality, etc. They are also continually evolving: as existing technologies progress, new ones are introduced. It is then important for the forensic toolkit developer, as well as the forensic investigator, to develop a good understanding of the way the components of a mobile device work and the features they possess, in addition to the appropriate methods and tasks to perform while dealing with them on a forensic basis. This is currently far from straightforward.

CHALLENGES

With the exponential development of smarter and more sophisticated devices in recent years, the amount of information that can be retrieved from them has increased significantly, as has the complexity of acquiring it. Forensics investigators are still struggling to effectively acquire and manage digital evidence obtained from mobile devices. The main reasons are the huge variety and specificity of their APIs, the specialised communication protocols, the mobile operating systems, and the diversity of the technologies (in file systems, databases, etc.). All these make it very difficult to extract evidentiary data for investigations and also to prevent attacks. Moreover, short product cycles from the manufacturers are making it even more difficult for investigators to remain up to date with current technologies. Indeed, given the huge variety of these systems and devices (the Android OS alone is compliant with more than 300 different smartphone models, and this is a growing number), it should come as no surprise that the list of specifications is quite large. Successive versions of the OS itself change the file system hierarchy, or the database structure; this changes the way these databases are queried to extract data (on contacts, calls, text messages, e-mails, and so on), adding another layer of complexity. So no standardised or generalised method exists so far, either software or hardware. In the following sections, we give some general views about the methods used for evidence extraction and a few related tools. First, though, let us explain in more detail the reasons that make mobile digital forensics a challenging discipline.

For mobile devices, hard disks are too large, too fragile, and consume too much power to be useful. These devices instead use flash memories, which provide relatively fast read access times and better kinetic shock resistance than hard disks. When it comes to examining evidence, the basic rule is to keep the data held on the storage medium unchanged. For flash memories, this principle is more challenging to uphold than it is for hard disks. Techniques used to work around the erase-cycle limitations, such as wear levelling, might cause unpredictable data changes [7]. Switching a phone off and on again has indeed been shown to produce some data changes. These devices also use different file systems especially designed for flash memory characteristics, such as YAFFS (Yet Another Flash File System) or JFFS, among others [16]. However, forensic support for these special-purpose file systems is still limited, and more research is needed to converge, maybe, towards some standards. Also, there is no simple way of acquiring, mounting, and analysing their memory images [9].

Standardisation is another big challenge. There have been attempts to bring together providers and device manufacturers, such as the WAC (Wholesale Applications Community) [11], which is more concerned with creating an open and unified platform and API for application developers. A set of standardised technologies, or guidelines, adopted by manufacturers would lower the cost and speed up the process of recovering data for investigations. It is, however, a difficult task to create standards for such a large group of manufacturers who use proprietary circuits and do not seem to agree on communications; Apple, for instance, has already stated that it will not join any standards. So this is not likely to happen anytime soon, and fragmentation issues are more likely to worsen.

Another challenge for a forensic investigator is to obtain evidence using acceptable methods so that the evidence will be admitted at trial according to the law. The forensic investigator should also be aware that laws vary between jurisdictions. Evidence admissibility requires a lawful search and strict adherence to chain-of-custody rules covering evidence collection, preservation, analysis, and reporting. The process of acquiring the data is indeed often more scrutinised than the actual evidence recovered for a criminal investigation. An important part of the preservation of evidence is securing and isolating the devices. Some devices can be remotely wiped (such as the iPhone), and keeping the device connected to the carrier's network or Wi-Fi can also lead to system updates, incoming signals, messages, etc., which might alter or corrupt the data and potentially affect evidence.

FORENSICS METHODS & TOOLS

The increasing interest in mobile device forensics has led to the research and development of several tools. Many surveys of current methods and existing tools can be found in the literature, such as [12] or [13]. An interesting fact is that most of the existing tools are commercial, with unspecified implementations and little or no documentation of their architecture or the way they perform acquisitions, logical or physical (cf. below), from these devices. There is then an apparent need to set down and document the fundamentals of acquiring and analysing data from all sorts of mobile devices.
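One fundamental that is straightforward to document is integrity verification: record a cryptographic hash of an acquired image at acquisition time, then recompute it before analysis to demonstrate that the data was not altered in between. A minimal sketch follows; the file names are illustrative.

```python
# Integrity verification sketch: hash an acquired image in chunks so that
# even multi-gigabyte images can be verified without exhausting memory.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # In practice: digest = sha256_of("data_partition.img") at acquisition,
    # then recompute before analysis and compare the two digests.
    import os, tempfile
    fd, path = tempfile.mkstemp(suffix=".img")
    os.write(fd, b"acquired partition contents")
    os.close(fd)
    print(sha256_of(path))
    os.remove(path)
```

Matching digests at acquisition and analysis time are the kind of documented, repeatable step that helps a method qualify as forensically sound.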

Logical or physical?

In mobile device forensics, there are two main acquisition methods: logical and physical. Logical methods interact with the devices to extract storage objects, such as directories or files, using communication protocols or through APIs. This usually only extracts data that is accessible through the OS and the specified interfaces; i.e. it might not produce deleted information, or even existing information that is not accessible via these APIs. Physical acquisition, on the other hand, extracts the memory content in its entirety through a bit-by-bit imaging of the device's flash memory. This depends on the way the device stores data in memory structures, and can be done using

system-level access to the flash memories, or via the communication ports using certain specifications (such as the JTAG boundary-scan [8]). These specifications provide a set of commands used for testing and debugging the devices' hardware components. However, these interface specifications are not available for all devices, and the connections also differ for every device model. Vendors additionally secure their devices against memory access by locking them, which makes it extremely hard and very challenging to achieve perfect imaging. Physical acquisition can also be done via direct memory chip access, but this requires a more advanced lab setting and knowledge of how to interpret the acquired raw structures.
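Interpreting those raw structures usually starts with one mechanical step: raw NAND dumps interleave each data page with an out-of-band (spare) area holding ECC and filesystem metadata, and the spare areas must be stripped before the image can be mounted or carved. The sketch below assumes a common 2048-byte page / 64-byte spare geometry, but this is device-specific: the real values come from the chip's datasheet.

```python
# Strip the out-of-band (spare) areas from a raw NAND dump so the remaining
# bytes form a plain page-by-page image. The geometry below is a common one
# but is device-specific; check the flash chip's datasheet.
PAGE_SIZE = 2048
SPARE_SIZE = 64

def strip_spare(raw: bytes, page: int = PAGE_SIZE, spare: int = SPARE_SIZE) -> bytes:
    """Keep only the data page from each (page + spare) unit of the dump."""
    step = page + spare
    return b"".join(raw[off:off + page] for off in range(0, len(raw), step))

if __name__ == "__main__":
    # Fabricated two-page dump: data pages of 'A'/'B', spare areas of 0xFF.
    raw = (b"A" * PAGE_SIZE + b"\xff" * SPARE_SIZE
           + b"B" * PAGE_SIZE + b"\xff" * SPARE_SIZE)
    print(len(strip_spare(raw)), "bytes of page data")
```

The stripped output is what standard analysis techniques (mounting, carving, string searches) then operate on.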

Tools

There are many software tools for mobile digital forensics on the market. They obviously differ in their capabilities and their effectiveness in acquiring information from different devices. These include tools like .XRY, EnCase Neutrino, CelleBrite UFED, Oxygen Forensic Suite, and viaExtract, among many others. They also differ in performance, supported platforms, reporting, and accuracy across different scenarios and target data: communication logs, text messages, e-mails, multimedia files, maps, cookies, deleted data, etc. A number of well-established commercial products do a good job of supporting a range of devices and systems, which is difficult enough considering the short product cycles. Their vendors have also built the necessary expertise on the mobile operating systems and are continually developing and improving their functionality. One tool that stands out for acquisition and reporting is .XRY. It can perform logical and physical extractions through a unified wizard, and presents reports of the full contents of the device in an efficient manner. It also presents a complete and detailed list of the available support for each device, identifying what data can be retrieved and also what cannot be recovered, which is just as relevant to investigators. Another product popular with investigators is the CelleBrite UFED (Universal Forensic Extraction Device) standalone system. It includes a set of solutions that support a wide variety of profiles and carriers worldwide, and thousands of devices (3100+ currently). The Ultimate service can perform physical extractions, and includes a physical analyser to search for specific content types. It also performs logical acquisitions, and includes a full complement of cables, card readers, and other accessories for forensic investigations in the field or the research lab. Little or no set-up is required, and no PC installation, drivers, etc., which is quite appealing. It provides an intuitive reporting interface and allows export of the reports in a variety of formats. CelleBrite UFED solutions lead the industry in breadth of support and in ease of use. There are also a number of dedicated tools delivering targeted functionality, such as viaExtract for Android, which performs only logical recovery on a specific set of data. This system injects an application into the device that appears to use content providers, which implement a common interface for data exchange among applications, to query a range of databases and return results. This obviously limits the amount of data that can be recovered (to that exposed by the above-mentioned API), and it does not produce deleted information. There is a long list of tools, most of them proprietary but a few open source as well, that have been adopted by investigators for different profiles and cases. Interested readers can find detailed descriptions and comparisons of some of the existing tools in the already mentioned references.

Methods

While all the existing software tools can discover a range of information, it has been shown that memory imaging of the original media is one of the most accurate of the tested methods; memory imaging is considered the holy grail of any forensic acquisition. By analysing the resulting images, the investigator can discover a wealth of information that the tools simply cannot provide in many cases. One of the main advantages of acquiring physical memory images is a more complete capture of the data, including deleted items. Physical acquisition methods can also work with damaged devices. In addition, investigators can use the same approaches used in standard hard drive based forensic investigations, which are more mature and well established. In [9], we introduced a simple method for physical acquisition and analysis of memory images of Android devices. It presented an easy end-to-end procedure for the acquisition of data partitions on a range of Android systems using YAFFS2, as well as the mounting and analysis of these images on a Linux workstation. This can be carried out without the need for any specific forensic tool. It includes a set of steps and shell commands showing how to use nanddump to create images of NAND partitions [9], [14], how to mount them, and some analysis techniques, including carving and string analysis.

Another important aspect of evidence acquisition is the recovery of deleted information. Many mobile operating systems, including Android, Apple's iOS, and BlackBerry, use the SQLite database engine to manage their database files. The database files keep track of information deleted by the user, to a certain extent; however, traditional database browsers and tools cannot access this information, so alternative techniques are needed. In [10], we proposed a generic framework for deleted data recovery that can be used with SQLite databases on a variety of systems and devices. Indeed, while initially targeting the Android OS and deleted text messages, we realised that with some small adaptations the proposed method could be used on a wide range of different database files.
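The gap between API-level access and raw-byte access can be illustrated with a simple scan of a SQLite file. The sketch below is an illustration only, not the framework from [10]: it deletes a row and then searches the raw database bytes for leftover printable strings. Whether deleted text actually survives depends on the SQLite version and settings such as secure_delete, so any hit is a lead to verify, not proof.

```python
# Illustration: a deleted SQLite row is invisible to SQL queries, but its
# text may linger in the raw file bytes until the space is reused.
import os
import re
import sqlite3
import tempfile

def ascii_strings(raw: bytes, min_len: int = 8) -> list[bytes]:
    """A crude equivalent of the Unix strings tool: runs of printable ASCII."""
    return re.findall(rb"[ -~]{%d,}" % min_len, raw)

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "sms.db")
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE sms (body TEXT)")
    con.execute("INSERT INTO sms VALUES ('meet me at the usual place tonight')")
    con.commit()
    con.execute("DELETE FROM sms")  # the row is gone as far as SQL is concerned
    con.commit()
    con.close()
    with open(path, "rb") as f:
        leftovers = ascii_strings(f.read())
    # The deleted message may (or may not) appear here, depending on settings.
    print(leftovers)
```

Real recovery frameworks parse the SQLite page structures (freeblocks, freelist pages) rather than relying on naive string searches, but the scan above conveys why raw access matters.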

WHAT'S NEXT?

The total market for cloud-based applications in the mobile space is predicted to grow from $400 million in 2009 to an estimated $9.5 billion by 2014 (and nearly $39 billion by 2016), according to Juniper Research [15]. Most existing apps are already cloud-based: 15 of the top 20 iPhone apps use cloud services. A recent study on mobile social apps has shown that they see 5 to 800 times the usage of their desktop/laptop counterparts. The combination of mobile and cloud is making things look more possible than they have been in the past, and the cloud is contributing considerably to the spread of mobile systems. Mobile devices can use the cloud for advanced contexts; an example would be fine-grained location, where a combination of Bluetooth and Wi-Fi signals can be sent to the cloud, which returns a more precise position in interior spaces where GPS is not very helpful. The cloud will also back up the devices with different types of external processing, for photos, audio, video, etc. In addition to being a distributed computing challenge, this combination



raises huge security concerns, and also issues in data and forensic evidence acquisition, in terms of the integrity of the methods applied when dealing with information contained in both the devices and the cloud.

CONCLUSION

Mobile device forensics has proven to be more difficult than traditional computer forensics due to the ever-evolving nature of the mobile marketplace. There have been massive changes in device specifications and functionality in the last few years. The iPhone, for instance, has significantly changed in its design (with its 5th-generation release last year) since its launch in 2007. It has also considerably increased in market share: it is just 5 years old, and Apple has already shipped 250 million devices worldwide! As for Android, data from Google shows projections of an activation rate of one million per day by mid-August 2012, and if this continues we could see 1.5 million per day by the end of 2013! This makes the field filled with huge challenges, but also opportunities, when analysing these devices for forensic evidence. Current software tools, and also forensics experts, are still struggling to keep up to date with recent releases, as well as to provide acceptable and forensically sound methods and techniques. The majority of the existing tools are either not fully developed or do not yet provide full functionality for the large range of systems and devices. The investigator might need to use a combination of tools, for instance; however, the budget constraints of law enforcement departments do not usually allow that. Finding the appropriate toolset for a particular purpose is also challenging, as the analyst has to understand and test each one as a support to his or her case. Mobile forensics will play an increasingly important role in criminal investigations, legal disputes, and information security. Investigators must be aware of the issues concerning evidentiary data acquisition, and of ways to preserve this digital evidence for a criminal investigation. Inherent context awareness, omnipresent connectivity, and app invocation all mean that the data is very susceptible to alteration, updates, or deletion. Mobile devices are naturally more prone to incorrect or inappropriate digital forensics processes than computers. It is essential to document best practices and the measures to be taken to ensure the reliability and accuracy of these forensics processes, also with due regard to jurisprudence issues.

References
[1] Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2011-2016. Cisco White Paper. February 2012.
[2] Purvis C. Police Worry Criminals Use Smartphones To Monitor Radio Traffic. Securitymanagement.com, ASIS International. January 2012.
[3] Virus Hacks 150,000 Mobile Phones. Beijing Times. February 2011.
[4] Zeus in the Mobile Facts and Theories. Securelist.com. October 2011.
[5] The Current State of Cybercrime and What to Expect in 2012. RSA 2012 Cybercrime Trends Report.
[6] Smartphone scams: Owners warned over malware apps. BBC reports. November 2011.
[7] Breeuwsma M. et al. Forensic Data Recovery from Flash Memory. Small Scale Digital Device Forensics Journal, 1:1. June 2007.
[8] Breeuwsma M. Forensic Imaging of Embedded Systems using JTAG (boundary-scan). Digital Investigation. March 2006.
[9] Aouad L. and Kechadi T. Android Forensics: a Physical Approach. The International Conference on Security and Management. July 2012.
[10] Aouad L. and Kechadi T. ANTS ROAD: A New Tool for SQLite Data Recovery On Android Devices. Technical Report, University College Dublin. July 2012.
[11] Wholesale Apps Community, wacapps.net.
[12] Hoog A. and Gaffaney K. iPhone Forensics. viaForensics White Paper. 2009.
[13] Hoog A. Android Forensics: Investigation, Analysis and Mobile Security for Google Android. Elsevier, 2011.
[14] Quick D. and Alzaabi M. Forensic Analysis of the Android File System YAFFS2. Australian Digital Forensics Conference. December 2011.
[15] Rahman M. Mobile Cloud Applications Revenues To Hit $9.5bn by 2014, Driven by Converged Mobile Services. The Juniper Research Blog at juniperresearch.com.
[16] Homma T. Evaluation of Flash File Systems for Large NAND Flash Memory. CELF Embedded Linux Conference, Toshiba Corporation. April 2009.

Author Bio

Prof. M-Tahar Kechadi, Centre of Cybersecurity and Cybercrime Investigation, School of Computer Science and Informatics, University College Dublin, Belfield, Dublin, Ireland. M-Tahar Kechadi is a professor in the School of Computer Science and Informatics, University College Dublin (UCD), Ireland. He was awarded a PhD and a Master's degree in Computer Science from the University of Lille 1, France. He is currently head of Teaching and Learning and director of the Parallel Computational Research group in the School, UCD. His research interests span forensic computing and cybercrime investigations, data mining, distributed data mining techniques and algorithms, and cloud and grid computing. He is on the editorial boards of the Journal of Future Generation Computer Systems and of IST Transactions of Applied Mathematics-Modelling and Simulation. He is a member of the International Knowledge Cloud Consortium (IKCC), a full member at CERN, and a visiting professor at the University of Liverpool, UK.


Dr. Lamine Aouad, Centre of Cybersecurity and Cybercrime Investigation, School of Computer Science and Informatics, University College Dublin, Belfield, Dublin, Ireland. Lamine was awarded a PhD in Computer Science at the University of Lille 1 in 2006. Prior to that, he received a Master's in Distributed Computing from the University of Paris XI. He is currently a research fellow at the Centre of Cybersecurity and Cybercrime Investigation at University College Dublin. He has also been a Marie Curie research fellow and has worked with the CNGL (Centre for Next Generation Localisation) in the last few years. His main interests lie in the fields of distributed computing, mobile forensics, cloud computing, and analytics.



NETWORK

LIVE CAPTURE PROCEDURES


As we move to a world of cloud-based systems, we are increasingly finding that we are required to capture and analyse data over networks. Once, analysing a disk drive was the source of incident analysis and forensic material. Now we find that we cannot access the disk in an increasingly cloud-based and remote world, requiring the use of network captures. This is not a problem, however. The tools that are freely available in both Windows and Linux offer a means to capture traffic and carve out the evidence we require.
To capture and analyse data over networks, we need to become familiar with the various tools that are available for these purposes. In this article, we look at a few of the more common free tools that will enable you to capture traffic for analysis within your organisation. The need to capture evidence alone would justify this capability, but when we start to add all of the other benefits, we need to ask: why are you not already doing this?

LIVE CAPTURE PROCEDURES

In the event that a live network capture is warranted, we can easily run a network sniffer to capture communication flows to and from the compromised or otherwise suspect system. There are many tools that can be used to capture network traffic (such as WireShark, SNORT and others), but Tcpdump is generally the best capture program when set to capture raw traffic. The primary benefit is that this tool minimizes any performance impact while capturing the data in a format that can be loaded into more advanced protocol analysers for review. That stated, there are only minor differences between Tcpdump and Windump, and most of what you can do in one is the same in the other (some flags do vary).

Tcpdump

Tcpdump uses the libpcap library. It can capture traffic from a file or an interface. This means that you can save a capture and analyse it later, which is a great aid in incident response and network forensics. With a file such as capture.pcap, we can read and display the data using the -r flag. For instance, tcpdump -r capture.pcap will replay the data saved in the file capture.pcap. By default, this will display the output to the screen. In reality, the data is sent to STDOUT (standard output), but for most purposes the console and STDOUT are one and the same thing. Using BPF (Berkeley Packet Filters), you can also restrict the output, both collected and saved. In this way, you can collect all data to and from a host and then strip selected ports (or services) from this saved file.

Some of the options that apply to tcpdump include (quoted with alterations from the Red Hat tcpdump man page):

-A	Print each packet (minus its link level header) in ASCII.
-c	Exit after receiving a set number of packets (defined after -c).
-C	Before writing a raw packet to a savefile, check whether the file is currently larger than a given file_size. Where this is the case, close the current savefile and open a new one.
-d	Dump the compiled packet-matching code in a human readable form to standard output and stop.
-dd	Dump packet-matching code as a C program fragment.
-ddd	Dump packet-matching code as decimal numbers (preceded with a count).
-D	Print the list of the network interfaces available on the system and on which tcpdump can capture packets. This can be useful on systems that do not support the ifconfig -a command.
-e	Print the link-level header on each dump line.
-F	Use file as input for the filter expression. An additional expression given on the command line is ignored.
-i	Listen on interface. This specifies the system interface to listen for traffic on.
-L	List the known data link types for the interface and exit.
-n	Don't convert host addresses to names. This can be used to avoid DNS lookups.
-nn	Don't convert protocol and port numbers etc. to names either.
-N	Don't print domain name qualification of host names.
-p	Don't put the interface into promiscuous mode.
-q	Quick (quiet?) output. Print less protocol information so output lines are shorter.
-r	Read packets from a file that has been created using the -w option.
-S	Print absolute, rather than relative, TCP sequence numbers.
-s	Snarf snaplen bytes of data from each packet rather than the default of 68 bytes.
-T	Force packets selected by expression to be interpreted as the specified type.
-t	Don't print a timestamp on each dump line.
-tt	Print an unformatted timestamp on each dump line.
-ttt	Print a delta (in micro-seconds) between the current and previous line on each dump line.
-tttt	Print a timestamp in default format preceded by the date on each dump line.
-v	When parsing and printing, produce (slightly more) verbose output.
-vv	Even more verbose output.
-vvv	Even more verbose output still.
-w	Write the raw packets to file rather than parsing and printing them out.
-x	Print each packet (minus its link level header) in hex.
-xx	Print each packet, including its link level header, in hex.
-X	Print each packet (minus its link level header) in hex and ASCII.
-XX	Print each packet, including its link level header, in hex and ASCII.
-y	Set the data link type to use while capturing packets to datalinktype.
-Z	Drop privileges (if root) and change the user ID to user and the group ID to the primary group of user.
This is by no means the complete list of options for tcpdump and I recommend that the reader look over the man page to learn more. When we are capturing data for incident handling and forensic purposes, remember NOT to notify the source hosts of your captures. Use the -n flags where possible to ensure that you are not looking up IP addresses from a remote DNS server; such lookups can flag that you are investigating.

In nearly all cases packets will flow over the screen far quicker than you can hope to read them. This is why we should always try to capture data to a file, even if we read it immediately (using the -r option noted above). When capturing data to a file, we should always specify the interface that we intend to capture on (using the -i flag). This makes our life a little easier by ensuring that tcpdump does not inadvertently start on an interface other than the one we want it to start on.

In *NIX (Unix/Linux), interface names will take the form of one of the following:

eth0, eth1, ...	Ethernet
ppp4	PPP
le0	BSD Ethernet interface
lo	The loopback interface

Loopback is a special virtualised interface that the host uses to send packets to itself; in *NIX, you can listen on the loopback. By issuing the following command, we can obtain a list of the interfaces that tcpdump may use: tcpdump -D. We also see this in Figure 1.

Figure 1: Display interfaces in TCPDump

In this case, we have the options to use the Ethernet (eth0) or Loopback (lo) interfaces, or the any option to listen on all configured interfaces.

The following command is a fairly standard way to initiate tcpdump: tcpdump -nqtp. As noted, the -n option is essential in incident response work. Using the -p option stops the host entering promiscuous mode. An interface in promiscuous mode will accept and store or display all packets received by the host, whether or not they were intended for the current host; that is, packets for any destination, not just the host you are on. When listening or sniffing whilst connected to either a hub or a switch SPAN port, promiscuous mode lets you listen for packets for any machine on a particular Ethernet segment. On modern switched networks (other than span ports) this makes little difference, as the traffic is isolated and each host becomes its own collision domain. The -p option does, however, stop rogue broadcast traffic to some extent from being logged, and unless your desire is actually to capture all traffic (such as for an IDS), it can be advantageous. The -q and -t options are used to limit the volume of information returned (see above for details). Other options include using -v to -vvv for more verbose output. This will depend on your desired result (and disk space).

To listen for a selected port or protocol we can simply note what we are filtering for. As an example, if we want to watch TCP 80 (usually HTTP or web traffic, although not always) we can specify this using the terminology tcp port 80. We can see this in the image below, where the -X option has been used to capture the payload as well (Figure 2).

Figure 2: A TCPDump capture

Ideally, we would capture all of the traffic to a file, as will result from the following command: tcpdump -npXA -s 1500 -w www.pcap tcp port 80. (Note that tcpdump expects its options, including -s and -w, before the filter expression.) In this instance, we are saving the complete web traffic for the host to a file called www.pcap. We can then analyse this file using tcpdump, or we can access it with any other program that supports the pcap format (such as WireShark or NGrep).
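The pcap savefile format that tcpdump writes with -w is simple enough to read directly. The following Python sketch (assuming the classic microsecond-resolution libpcap format, not pcapng or the nanosecond variant) walks a capture file and yields each packet with its timestamp:

```python
import struct

def read_pcap(path):
    """Parse a libpcap savefile (as written by tcpdump -w) and
    yield (timestamp, captured_bytes) tuples."""
    with open(path, "rb") as f:
        header = f.read(24)                       # global header is 24 bytes
        magic = struct.unpack("<I", header[:4])[0]
        endian = "<" if magic == 0xA1B2C3D4 else ">"
        _maj, _min, _tz, _sig, snaplen, linktype = struct.unpack(
            endian + "HHiIII", header[4:])
        while True:
            rec = f.read(16)                      # per-packet record header
            if len(rec) < 16:
                break
            ts_sec, ts_usec, incl_len, _orig_len = struct.unpack(
                endian + "IIII", rec)
            yield ts_sec + ts_usec / 1e6, f.read(incl_len)
```

Each record header stores both the captured length and the original length, so you can tell when the snaplen has truncated a packet.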

Remember... It is possible to capture and save all packets and traffic to and from a host to a pcap format file and then to extract selected information at a later time.
BPF (Berkeley Packet Filter)
Berkeley Packet Filters (BPFs) allow you to create detailed, fine grained filters in Tcpdump and other libpcap based programs. As BPFs run at the libpcap level, they are fast. They can be used to extract and filter data for display, or to save it, far quicker than many other methods. BPFs can also be placed in a file for processing if there are many options. Using Snort, the -F flag is used to load such a file, e.g.: snort {some options} -F filter_file.bpf

Some of the primary terms used in creating BPFs are listed below:

dst host [host name] - Filter on the destination host address.
src host [host name] - Filter on the source host address.
gateway [host name] - The packet used the selected host as a gateway.
dst net [network identifier] - The destination address of the packet belongs to the selected network. This can be from /etc/networks or a network address.
src net [network identifier] - The source address of the packet belongs to the selected network.
dst port [port number] - The packet is either IP/TCP or IP/UDP with a destination port value that we have selected.
src port [port number] - Similarly, we can filter packets based on a source port.
tcp src [port number] - Or we can match only TCP packets whose source port is the given port.
less [length] - Used to select packets less than a certain length.

These are a SMALL selection of the many options. In addition we can use logical operators to refine our values:

Negation (`!` or `not`)
Concatenation (`&&` or `and`)
Alternation (`||` or `or`)
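To make concrete what an expression such as tcp dst port 80 is doing under the hood, here is a rough Python equivalent of that single BPF primitive for an Ethernet II / IPv4 frame. This is a sketch for illustration only: real BPF compiles to in-kernel bytecode and also handles VLAN tags, IPv6, fragments and more.

```python
import struct

def matches_tcp_dst_port(frame: bytes, port: int) -> bool:
    """Rough equivalent of the BPF expression 'tcp dst port <port>'
    for a raw Ethernet II / IPv4 frame."""
    if len(frame) < 34:                      # too short for Ethernet + IPv4
        return False
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                  # not IPv4
        return False
    ihl = (frame[14] & 0x0F) * 4             # IP header length in bytes
    if frame[23] != 6:                       # IP protocol 6 = TCP
        return False
    tcp = frame[14 + ihl:]
    if len(tcp) < 4:
        return False
    _src, dst = struct.unpack("!HH", tcp[:4])
    return dst == port
```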

If we now increase the SNAPLEN value (this is the maximum size of the packets we capture, set using the -s option), we can see the full payload (Figure 3).

Figure 3: Increased snaplen in TCPDump to capture the entire packet

As with all tools, the best way to get to know these is to start using them. You will discover that you can filter on IP or TCP options, and you can go right down to the individual flags and options in a packet.



WireShark

With a captured file, we can use other tools to visualise and view the data. WireShark (formerly Ethereal) is one such tool.

Figure 4: Wireshark is a free network capture tool

WireShark (Figure 4) can capture and analyze network traffic. Configuring WireShark as a sniffer on the interface where an attack is originating from will allow you to capture information related to the router. This can be saved in the pcap format to be used as evidence or analyzed further offline.

Figure 5: WireShark can follow TCP streams

Using the Follow TCP Streams feature of WireShark, you can isolate individual communications of interest. This feature even allows you to extract (carve) files that have been sent to (or through) the router. The PCAP format allows you to save the network capture and to replay or investigate it later in a forensically sound manner.

Figure 6: WireShark Cisco ACL feature

WireShark can even use the information in the capture to build a set of Cisco access filters (see Figure 6). Filtering the pcap capture file will allow you to create specific filters for the attack that has been recorded. This simplifies the process of taking captures related to an incident and making a specific set of filters to block known bad traffic. In the same way, we can capture a set of traffic associated with an unknown, even proprietary, service and create allow filters to let this traffic through our filtering router while blocking all other traffic.

When conducting the analysis always:
- Document the collection procedure.
- Record both the commands run during data gathering and their results.
- When possible, send any digital data to a remote host or save it to external media.

Many devices can be accessed using a command line. These systems can generally be scripted to minimize the interaction required.

Ngrep

NGrep is a network (pcap format) based search tool. Just like its file based compatriot, grep, it is used to search for strings in the payload (or the header, for that matter). NGrep supports basic ASCII strings, regular expressions (such as GET.*php) and hex strings (e.g. 0029A). The optimum results can be achieved using regex. These are a little arcane, so I will recommend a few tools to help use and learn them (there are also many good books and web sites for this). Hex also proves good if you know exactly what you are looking for; this can include using sections of captured data. Just like any good command line program, we have options. Some of these are listed below:

-i	Ignore case for the regular expression
-d	Read data from a specific live interface
-I	Read data from the specified pcap file
-O	Save matching packets to a pcap format file
-x	Dump packet contents as hexadecimal and ASCII
-X	Treat the matched expression as a hexadecimal string
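The core of what NGrep does, running a regular expression over each packet's payload, can be sketched in a few lines of Python. The function name and the in-memory list of payloads here are illustrative assumptions; ngrep itself works on live interfaces or pcap files:

```python
import re

def grep_payloads(payloads, pattern, ignore_case=False):
    """Return the packet payloads whose bytes match the pattern,
    loosely mimicking 'ngrep <pattern>' over already-captured data."""
    flags = re.IGNORECASE if ignore_case else 0
    rx = re.compile(pattern.encode(), flags)
    return [p for p in payloads if rx.search(p)]
```

As with ngrep, the search is case sensitive by default; passing ignore_case mirrors the -i option.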

As a result, we can use NGrep as a filter and poor man's scanner to find traffic on a network. It is best to start by capturing a file with Tcpdump into a PCAP format and then reading this into NGrep to extract the information we seek. The reason: Tcpdump is fast and comprehensive, with few of the errors and issues associated with a richer platform (such as WireShark).

Step 1
The first stage is to capture all of the data we need into a PCAP file. For this we will use TCPDump. We are going to capture all traffic (with the complete payload, -s 1500) to a file called capture.pcap:

tcpdump -nn -i eth0 -s 1500 -w capture.pcap

The command above and in the image below is what we have used for this. See the posts on Tcpdump for more details on the TCPDump command settings. In the image below, I have used the date command before and after the capture so that you can see this has run for a number of minutes.

Figure 7: Starting the capture

Note that we are not translating the IP addresses and protocols. This means that we have to learn what the protocols usually are, and it also makes us less likely to make assumptions (for instance, TCP 80 is not always web traffic).

Step 2
Now that we have our PCAP capture, we can start an examination. It is always a good idea (in incident response and forensic situations) to make a hash and a copy of the file.

Figure 8: Let's make a copy and save the original

You can see how we have done this, and validated the copy as well, in the image above.

Step 3
We can see the contents of the entire capture with NGrep by looking at our capture file: ngrep -xX -I working.pcap. This command displays the packet capture in hex and ASCII. This is of course far too detailed, and we need to filter the results to make any sense of them.

Figure 9: The data is captured

Step 4
Now, say we are looking for a specific incident that occurred using HTTP on TCP 80 (the default) to the server http://www.msnbc.com. We could first restrict the output to HTTP on TCP port 80 only (port 80): ngrep -W byline -I working.pcap port 80. The output for this command is displayed in Figure 10.

Figure 10: We can see the captured web pages

In this instance, the packet output displayed shows a HTTP GET request and associated data. The proxy used is clear inside the packet, as is the web site called (http://www.msnbc.com). From the GET request data we can see that the user is making a search request (the page is /search?).

Notice that the output contains the details of the HTML received from the web server. This is from any web server, however, and we wish to restrict it to a single host. We will also save this output to a file using the following command (where we have the server www.montrealgazette.com): ngrep -I working.pcap -O /tmp/traffic.pcap GET host www.montrealgazette.com and tcp port 80. The command and output are displayed in Figure 11.

Figure 11: NGrep to extract data

We have now saved the selected traffic we wish to investigate. We can verify that we can read this using another PCAP compatible program (in this instance, in Step 5, Tcpdump).

Step 5
Here we validate that we have saved the data capture we wanted, using TCPDump to read the extracted capture file: tcpdump -r /tmp/traffic.pcap. The output is displayed in Figure 12, showing us that the packet capture has worked.

Figure 12: The data we required
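The hash-and-copy verification described in Step 2 can be sketched in Python. The function names here are our own, but the approach (hash the original, copy it, re-hash the copy and compare) is the standard one:

```python
import hashlib
import shutil

def hash_file(path, algo="md5"):
    """Hash a capture file in chunks so large pcaps need not fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src, dst):
    """Copy a capture and confirm the copy hashes identically to the
    original; returns the digest for the contemporaneous notes."""
    original = hash_file(src)
    shutil.copyfile(src, dst)
    if hash_file(dst) != original:
        raise IOError("copy does not match original capture")
    return original
```

Recording the returned digest in your notes lets you later demonstrate that the working copy matches the original evidence file.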

TO CONCLUDE

Live data capture is an essential skill for both incident handlers and forensic practitioners, and it is one that is becoming more, not less, important over time as we move towards networked and cloud based systems. This article has introduced a few tools that, although free, can be used together to create a powerful network forensics and incident response toolkit. Like all such tools, the secret comes down to practice. The more you use them, the better you will become, not only at doing the basics, but at innovating and experimenting. This will allow you to extend the use of even these simple tools.

Author Bio
Dr. Craig S. Wright (Twitter: Dr_Craig_Wright) is a lecturer and researcher at Charles Sturt University and executive vice president (strategy) of CSCSS (Centre for Strategic Cyberspace + Security Science), with a focus on collaborating with government bodies in securing cyber systems. With over 20 years of IT-related experience, he is a sought-after public speaker both locally and internationally, training Australian and international government departments in Cyber Warfare and Cyber Defence, while also presenting his latest research findings at academic conferences. In addition to his security engagements, Craig continues to author IT security related articles and books. Dr Wright holds the following industry certifications: GSE, CISSP, CISA, CISM, CCE, GCFA, GLEG, GREM and GSPA. He has numerous degrees in various fields, including a Master's degree in Statistics and a Master's degree in Law specialising in International Commercial Law. Craig is working on his second doctorate, a PhD on the quantification of information systems risk.

Here you can see we have a limited set of traffic to a specific host with a specific request (HTTP GET). NGrep is a powerful tool and well worth taking the time to learn how to use. It can be far easier to have a small extract of a capture to work with, or even to determine whether a file contains information worth investigating, before you go into details; NGrep can do this for us. NGrep will either treat the search input as case sensitive (the default) or, with the -i option, as case insensitive. That is, you can search with or without capitalisation in a standard string. Of course, this does not matter if you are using a well formed regex. Tcpdump is simpler and as a result makes a better capture engine than NGrep (not that you cannot capture with NGrep). As such, it is better to use Tcpdump to capture to a pcap format file, and then to analyse the saved data with NGrep. The -I flag is used to select the pcap format file that you want to read. By default, NGrep will display the ASCII of a file and insert a '.' in place of unprintable characters. The -x parameter is used to print both hex and ASCII in the output (this is recommended).

ADVANCED STEGANOGRAPHY: ADD SILENCE TO SOUND


Steganography is a very compelling topic for techno-geeks, because extracting the hidden truth from a cover file involves interesting and comprehensive analysis, and because we have heard the term so many times in the context of terrorist activities and their communications.

Steganography means covert writing: hiding confidential information in a cover file. This cover file can be in the form of pdf, xls, exe, jpeg, mp3 or mp4, etc. The Least Significant Bit (LSB) method is the most famous and fascinating technique discussed in steganography, because the case study of hiding a secret text behind an image genuinely sounds interesting. To understand this concept, first we need to understand how an image is classified and what happens when a single bit is altered in an image, as described below.

Images are composed of small elements called pixels; a pixel is the essential component of an image. We have basically three types of images:

1) Black and white - each pixel is composed of a single bit and is either a zero or a one.
2) Grayscale - each pixel is composed of 8 bits (in rare cases, 16 bits) which define the shade of grey of the pixel, from zero (black) to 255 (white).
3) Full color - also called 24-bit color, as there are 3 primary colors (red, green, blue), each of which is defined by 8 bits.

Although we can have different types of images, we assume here that a grayscale image has been used. An 8-bit grayscale image consists of pixels which have 2^8 = 256 possible levels of grey, and each bit in a pixel contributes a different share of the information:

1. The LSB (Least Significant Bit) contributes 1/256th of the information.
2. The MSB (Most Significant Bit) contributes 1/2 of the information.

So, changing the LSB only affects 1/256th of the intensity, and humans simply cannot perceive the difference. In fact, it is difficult to perceive a difference of even 1/16th of an intensity change, so we can easily alter the 4 LSBs with little or no perceptible difference. Here we have shown two images which illustrate why steganography has become famous and how an image is not visibly distorted even when we embed secret or confidential information.

(Original Image)



Original bits (before replacing):  00101111  00011101
Changed bits (after replacing):    00101100  00011111

You can see that two bits were changed, and this much change in the bits of an image does not make a noticeable difference, which proves that even though the image is altered when you embed secret information, the change can never be detected directly by the human eye.

Hiding Encrypted Text in the Image

So far we have used basic steganography, with which a person can append confidential information behind an image or audio file. But if a person is aware of how to extract the secret text from an image, then he can easily recover it, because it is stored as plain text. Instead, we can encrypt the secret text with AES 128-bit encryption before adding it to our image or audio, so that it cannot practically be decrypted (unless you are prepared to wait a couple of years). To perform this activity you need two things:

1. An image key or text key for authentication, to make sure that only a trusted person is able to open the secret message.
2. The secret text.

Here we have used an image key for authentication, which means that in place of a text password to open the secret message, one can also provide a specific image as the password. This is a fantastic steganographic technique for making the scheme more secure and safe. According to the Stanford research team, you can perform this step using a customized tool, Paranoia, which is written in Java (JAR). Here we show how we can embed the encrypted text and then hide particular information behind a cover file (an image in this case). As shown, we have embedded the secret or confidential information using a simple command prompt feature (shown in the figure above), which essentially performs the steganography. It is very simple to detect with a histogram analyzer, but it cannot be detected with the human eye.
Later on we will prove that even an 8 MB image can contain 1.6 MB of covert data, and I am quite sure you could never tell by eye which image is the original and which one contains confidential information. If you analyze the histograms of both images, however, you can easily detect which one contains confidential information. We will explain here how these LSBs are changed, and why humans cannot perceive a difference between the original and steganographic images. We take the LSBs of the cover file (the original file to which we need to append the secret text) and replace those bits in order to hide our confidential text. Assume 2 bytes in a cover image file on which we want to perform this steganography technique: 00101111 00011101. Whatever secret text we add will be added in binary form. Suppose, therefore, that we want to embed the number 236 in the above bit pattern. The binary representation of 236 is 11101100. To embed 11101100 we choose the least significant bits of the above 2 bytes of the cover file. We have a secret text and a cover file (a GIF image), and a tool developed by the Stanford research team known as Paranoia. The beauty of this project is that we can even use an image as password authentication at the time of encryption and decryption, although using an image key as authentication will distort the image slightly. Here we have shown how this technique is executed:

(4 LSBs changed with Secret.txt)

Steganography can be performed using different tools such as S-Tools and Hiderman, while StegSpy detects the presence of steganography; conventional steganography, however, is performed using the following method:

ORIGINAL IMAGE
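The 4-LSB embedding described above (hiding 236 = 11101100 in the low four bits of two cover bytes) can be sketched in Python. This is a generic illustration of the technique, not the Paranoia tool's actual algorithm; note that exactly which low bits receive which message bits is a design choice, so the resulting bytes may differ from the illustration above.

```python
def embed_lsb(pixels, bits, nbits=4):
    """Hide the bit string `bits` in the `nbits` least significant bits of
    each 8-bit grayscale pixel; the low bits change each pixel's intensity
    by at most (2**nbits - 1) out of 256 levels."""
    out = list(pixels)
    mask = (1 << nbits) - 1
    for idx in range(len(bits) // nbits):        # trailing partial chunk is dropped
        chunk = bits[idx * nbits:(idx + 1) * nbits]
        out[idx] = (out[idx] & ~mask) | int(chunk, 2)
    return out

def extract_lsb(pixels, bit_count, nbits=4):
    """Recover bit_count bits from the pixel LSBs."""
    bits = ""
    for px in pixels:
        bits += format(px & ((1 << nbits) - 1), "0{}b".format(nbits))
        if len(bits) >= bit_count:
            break
    return bits[:bit_count]
```

With nbits=4, each altered pixel moves by at most 15 intensity levels, which is the change the article argues the human eye cannot perceive.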


As we have loaded the image and a secret text, which has been embedded and encrypted with AES 128-bit encryption, we can load an image key which will be used to authenticate and decrypt the secret message, as shown in the next step:

Now we can easily encrypt and hide the information behind this image:

IMAGE KEY

The image which we have indicated with an arrow is used as an image key, which means that at the time of AES 128-bit encryption this image key is used for authentication.

Hiding secret text in audio is very sensitive with respect to the cover file. Classical music, for example, which has precise single tones, is a terrible cover medium: even slight alterations are easily noticed by a casual observer. Jazz is a little better, but the sound is still affected slightly. Typical pop and country music are decent cover files. The best cover files for this technique are hard rock and heavy metal, where the embedding is not easily detectable because no sample values are altered; rather, fake samples are inserted or legitimate ones are removed.

But a histogram reveals the concept behind hiding secret text in an audio file, and how to decide how much information can be inserted without altering the sound or creating noise that a casual observer could detect. We will explain this concept as well. The amplitude and sampling of the audio file are analyzed, and the scheme is adaptive in the sense that longer periods of low amplitude samples allow a greater number of bits to be embedded. You must be thinking: if we analyze the histogram, how can we detect the presence of the secret text? The answer is that the low order bits of the samples within the threshold carry our secret message. Thus we can alter the audio file according to the threshold value, because if this threshold value is too low the audio will be distorted and the change easily noticed by a casual listener. A histogram generated for an audio file reveals the truth about the original audio and the embedded information. Along with that, we should determine the positive and negative amplitude and the low amplitude, because the low amplitude regions carry the secret text in the audio file.

In the method shown here we have not manually analyzed the signal with an equalizer to scrutinize the positive and negative amplitude, nor have we determined the threshold which decides the distortion of the noise and audio signal. Four LSBs are altered at random in the audio file, and it should be noted that if we embed within the threshold, the audio is not distorted and we can successfully embed the secret text.

Here we can see that four LSBs of an audio signal have been changed, which reveals the secret text itself, since we have stored plain text and simply embedded the confidential information inside the audio file. It is quite simple to extract, because once someone knows that steganography has been used, they can easily extract the secret information. Now you must be thinking: why don't we use cryptography inside steganography? Yes, readers, you are right. But even if you use IDEA, DES, Triple DES or AES encryption, once the encryption algorithm is determined and the cipher text cracked by decryption, you are finished. So what is the perfect way to hide the information and also encrypt it so that an attacker can never extract what you have embedded? Can you think of such a method? Yes, we have one: encrypting the plain text and then hiding it behind a cover file, using a new randomization method for generating a randomized key matrix to encrypt plain text files and decrypt cipher text files, together with a new algorithm for encrypting the plain text multiple times. This method is totally dependent on a random text key supplied by the user. What are you waiting for? We will explain and demonstrate this more advanced steganography method in due course; just stay in touch and keep reading eForensics Magazine.
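A minimal Python sketch of the adaptive audio scheme described above: message bits go into the low bits of 16-bit PCM samples, but only in samples whose magnitude is below a threshold (the low-amplitude runs). The threshold value and function name here are illustrative assumptions, not taken from any particular tool.

```python
def embed_in_audio(samples, bits, threshold=2048, nbits=4):
    """Adaptively hide the bit string `bits` in the `nbits` least
    significant bits of 16-bit PCM samples, touching only samples
    whose magnitude is below `threshold`."""
    out = list(samples)
    mask = (1 << nbits) - 1
    pos = 0
    for i, sample in enumerate(out):
        if pos >= len(bits):
            break                                  # whole payload embedded
        if abs(sample) < threshold:
            chunk = bits[pos:pos + nbits].ljust(nbits, "0")
            out[i] = (sample & ~mask) | int(chunk, 2)
            pos += nbits
    if pos < len(bits):
        raise ValueError("not enough low-amplitude samples for the payload")
    return out
```

Because only quiet samples are modified, loud passages pass through untouched; a longer quiet run gives more capacity, which is exactly the adaptive behaviour described above.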

Author Bio

Praveen Parihar is an information security enthusiast with two years of experience in the field. The author is an RHCE, CEH and CCNA certified professional with rich experience in vulnerability assessment. At present Praveen is working as an Information Security Auditor at Aneja Associates, Mumbai (India).



INVESTIGATING FRAUD IN WINDOWS-BASED DRIVING EXAMINATION THEORY SYSTEMS AND SOFTWARE

Fraud can take many forms and can take place practically anywhere, at any time and in any way. Theoretical driving examinations are now computerized in most parts of the world, and the overwhelming majority of such systems tend to have little to no security at all, relying instead on the invigilators of the exam to catch those suspected of fraud. But what happens when the invigilators fail and you, the digital forensic investigator, are asked to look into the case? Where does one start, where does one go and where does one end up? What do we investigate, how do we go about it and with what tools?

In this article, I will attempt to share my experiences investigating such systems from the point of view of the digital forensic investigator who first arrives at the scene of the crime, from the moment of arrival to the final report submitted to the client. Let us, then, start our journey from the moment we (the digital forensic investigators) get the fateful call, where we are told it's a case of fraud in the Driving Test Centre and we have been called to investigate it and present a report. To begin with, it should be stated that, as most driving test centres are part of a country's internal services, we are always going to be dealing with a mixture of government officials (of middle-management persuasion) and local law enforcement, and we are always going to need to deal with red-tape-style bureaucracy, where everything moves much more slowly than when dealing with the private sector.

This means we are going to be dealing with the nightmare scenario where our crime scene is possibly several months old and very seriously tainted (as non-essential government bodies tend to respond fairly slowly, and after much red tape, to such cases), and where normal digital forensic processes and practices don't usually work. The nightmare comes from the fact that, in such a scenario, you cannot explicitly trust the data you collect or any information that you are given and cannot corroborate in a straightforward way. The data has been tainted, the exams are running two to three times a week and the test centre cannot be closed down for the duration of the investigation, so we are told we have to release the (many, plus servers) computers within a very specific and finite length of time (one to two days at most). So, we arrive in the vicinity of the crime scene (the building).


First and foremost in our minds should be that this is a place where practically everybody has access to the building during government office working hours, and where nowadays Internet access via wireless access points scattered throughout the building is the norm; these are used by both staff and examination candidates at the same time. This means we can assume three things:

a. The test area is not completely isolated.
b. The fraud may not simply involve the test area's workstations but also the devices of the examination staff and the candidates.
c. If points a and b hold true, there is nothing we, the police or the government body can do about it, nor is there any reliable way for us to find out who did what using the wireless access point, when and how (forget about going through logs, if they exist at all, of the free wi-fi access points in a public area).

So, we map and find the range of the wireless Internet access points, see whether that range extends to the testing room, detail the findings in our contemporaneous notes and move on to the testing room itself. In an examination environment, the location of everything and everyone in it before, during and after examination time is as important as the actual workstations used in the examination. Some workstations will be full computer systems utilizing touch-screen technology with more conventional input devices (keyboard and mouse) and main processing units hidden away inside panels, such as the system shown in Figure 2. Others will be complete kiosk-style systems with such components integrated to form as small a footprint as possible. The type of workstation used, as well as how the layout of the room takes this into account, is also very important.
For example, in one of my investigations, I asked a member of the invigilation team to take part in a small experiment where I was the person taking the exam and they were the invigilator in the room, under proper examination conditions. By doing everything I could to gain physical access to the system without being seen by the invigilator (also accounting for the invigilator's heightened perception of the possibility of me committing fraud), I was able to better understand the possible physical access points, barriers to access and subversion mechanisms. Such experiments can give you a better idea of the strengths and weaknesses of certain physical approaches to fraud. After we detail and provide a set of photographs and drawings of such information on the physical locale of the crime scene, we need to move to the more traditional aspects of a digital forensic investigation, namely the processes of seizure, acquisition, analysis and reporting. As we discussed before, however, the rules governing such processes, as defined by NIJ (2004) or ACPO (2011) in their good practice guides, have to be applied to a crime scene that has been contaminated repeatedly over a potentially long period of time. Combine the above with the requirement for the seizure of any equipment and the acquisition processes to be performed in the shortest possible timeframe, so that the test centre can be up and running, for example, the next day, and you are faced with the nightmarish scenario of having to perform said processes in such a way as to comply with the requirements of the public body and, at the same time, not compromise whatever shred of evidential value the artifacts may still retain. For example, you have to immediately discard any (even remote) possibility of getting anything useful from sources such as cache, RAM, page-files and running memory/processes after all the time that has passed since the commission of the alleged crime.
That assumption, which is perfectly valid given the circumstances (and should be clearly stated, with the reasons detailed, in both your contemporaneous notes and in your report), allows you to reduce the number of artifacts you will seize to just the workstations' and servers' hard drives. Additionally, you should also liaise with the IT department and ask to be given access to the locale where the networking infrastructure for that particular room resides. There you need to identify and photograph any equipment used and assess the evidential value of said equipment. A managed Cisco switch or router, for example, can be of much evidential value, as opposed to an unmanaged, unnamed-brand switch/router. Any exotic hardware should also be seized. What you should also be looking for, while liaising with the IT department, is whether the computer network in the room is completely isolated from the rest of the network (as it should, in an ideal world, be) or not (as the case will more often be) and, if it is not isolated, what the keyholes the IT department

Figure 1: Workstation placement in the examination room.

Figure 1 shows three of the more common workstation placement schemes in an examination room. As we can see, the types of workstations, their placement in the room, the tables used for the workstations (e.g. Figure 2), the invigilator's seat and point of view, and even their walking patterns are of importance if one is to try to map the scene of crime in this instance. So, we deal with the physical crime scene as we would in a normal forensic investigation in the real world. The reason for this is that we want to establish how the alleged crime has been perpetrated, or how it has not been perpetrated (at this point, we either prove or disprove theories).

Figure 2: A typical examination workstation

has installed for ease of administration are. As before, note it down in your contemporaneous notes, but be prepared for such information to be either false or not entirely true. Another point you should consider, while liaising with the IT department, is the possibility of their complicity in the alleged crime. In a lot of cases involving fraud, such crimes are perpetrated through the collusion of the offender with one or more insiders. So, with the process of seizure complete and the hard drives in your possession, the acquisition process begins and is conducted exactly the way the manual says (image the drives using a hardware or software write blocker, hash the images, make copies of the images etc.), and the drives are returned, through the proper channels, to the driving examination centre. An important tip, at this stage, is that you should always expect to have to deal with any and all kinds and sizes of storage media (IDE, SATA, SSD etc.) and with any and all kinds of partitioning schemes. And you should always expect to consume more storage space for the disk images than you were originally given to understand you would require. The process of analysis when investigating theoretical driving examination centres is both more and less complex than in your average type of casework. You don't have to deal with very exotic hardware, the operating system in use is always Microsoft Windows (2000, XP, Vista or 7, depending on the age of the computer systems used by the testing centre), and the filesystem in use is either FAT32 or NTFS. Standard digital forensic analysis methodologies and tools for the OS and the filesystems have been amply documented by such authors as Casey (2011), Carrier (2005) and Carvey (2009, 2011). One such system can be seen in Figure 3, where the partitioning scheme is split into two partitions, both of which are NTFS; the first partition is devoted solely to the OS, while the other contains the driving examination software.
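The acquisition workflow described above (image the drives, hash the images, work only on copies) can be sketched in a few shell commands. This is a minimal illustration rather than a court-ready procedure: the file names are hypothetical, and a real acquisition would read from a write-blocked device rather than the local test file fabricated here so the commands run safely anywhere.

```shell
# Stand-in for a write-blocked source drive: a small zero-filled file.
dd if=/dev/zero of=source.disk bs=512 count=100 2>/dev/null

# 1. Image the source (on a real job, if= would be the blocked device).
dd if=source.disk of=ws01.dd bs=512 conv=noerror,sync 2>/dev/null

# 2. Hash both source and image; the digests must match.
h1=$(sha256sum source.disk | awk '{print $1}')
h2=$(sha256sum ws01.dd    | awk '{print $1}')

# 3. Analyse a working copy only, never the first image.
cp ws01.dd ws01-working.dd
h3=$(sha256sum ws01-working.dd | awk '{print $1}')

[ "$h1" = "$h2" ] && [ "$h2" = "$h3" ] && echo "hashes verified"
```

On a real case the digests would also be recorded in the contemporaneous notes, since they are what later proves the working copies match the seized drives.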

Figure 3: A hypothetical exotic configuration of an examination workstation.

What you do have to deal with, however, are:
a. Exotic driving examination software.
b. Exotic driving examination software update mechanisms.
c. The fact that the information you are trying to find, which would prove or disprove the actual commission of the alleged crime, has either: not been recorded in recordable (and thus retrievable) locations; been overwritten by the passage of time and by log-file retention policies; been deleted multiple times through re-formatting of the hard drive; or ceased to exist because the hard drive has been replaced at an unspecified time in the past.
d. The sheer number of hard-drive images that you have to analyze just for this one case.
When it comes to exotic driving examination software, there are a number of things one has to understand and be able to deal with. First of them is the fact that the contract for the development of the driving examination software will have been auctioned off to the bidder with the lowest offer, price-wise. This means that the software itself will not behave in a predictable and identifiable manner (e.g. as identified in the National Software Reference Library (http://www.nsrl.nist.gov/)) and will have to be analyzed in order to identify any points that the offender(s) will attempt to exploit by themselves, any points that the offender(s) will attempt to exploit through the help of a third party (insider), and any points that the third party (insider), or even a fourth party (a driving instructor paid off to take the examination on behalf of the interested party), will attempt to exploit on behalf of the offender, with or without the help of the insider. One way of analyzing the driving examination software is to reverse-engineer it, which is a very long and arduous process that may or may not be a waste of time in terms of what its outcome will be, but can potentially give us easily verifiable data of evidential value. It should be noted, here, that the process of reverse-engineering the actual software is not only time-consuming but also requires a very good knowledge of reverse-engineering principles and methodologies and a very good knowledge of low-level computer programming and debugging.

Another way of analyzing the driving examination software is to use a copy of the server and workstation disk images, convert said copies into virtual machine disk images and recreate either the entire network or part of it in a virtualized environment (through VirtualBox or VMware, for example). Then, using the credentials previously supplied by the IT department of the examination centre and all the information you have gathered about the network topology and the operation of the network, attempt to simulate a full theoretical driving examination, at the same time monitoring the network through the use of such tools as registry change/modification viewers, system resource/modification viewers and packet capturing solutions (e.g. Wireshark (http://www.wireshark.org)). The advantage of this analysis method is that, through simulation and observation, you will gain a deep enough level of understanding of the network in question and its operation, and you will have the opportunity to observe, record and analyse all the effects the operation of this piece of software has on the workstations and the network in general, thus identifying the points made earlier in the article. Another advantage of



this analysis method is that it will allow you to also identify, at the same time, the updating mechanism the software uses for updating the driving examinations. The major disadvantage of this approach, however, is that it will effectively destroy the evidential integrity of the copies of the disk images acquired in the previous step of the investigation, and it should only be attempted when multiple copies of those disk images have been made. A third approach to analyzing the driving examination software is to acquire the program itself through a request made through the law enforcement body. However, the major stumbling block I've run across in my attempts to do that comes from the way the software tends to be distributed to the driving examination centres, namely through subcontracting the task to private IT businesses, who image the system on which the software is installed and then install that image on a number of workstations and servers which they set up in the test centres themselves. Attempting, thus, to acquire an intact copy of the software in question involves any number of red-tape-riddled steps, which have an equal chance of being a complete waste of time as they have of actually allowing you to acquire the software in a reasonable timeframe. With regard to the rest of the points (points c and d, specifically) made earlier in terms of what you have to deal with in this kind of a case, the major stumbling block of any analysis process is the time-frame in which you have to perform the investigation. In my personal experience, the usual time-frame you are given to conduct the investigation ranges between one and two months from seizure to filing your expert witness report.
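For the virtualization approach described earlier, the raw dd images first have to be converted into a disk format the hypervisor can attach. A possible sketch using qemu-img (one of several tools that can do this; VBoxManage convertfromraw is another, and all file names here are hypothetical). The sketch checks for the tool first so it degrades gracefully on systems where QEMU is not installed:

```shell
# Stand-in raw image so the conversion has something to work on.
dd if=/dev/zero of=ws01-working.dd bs=1024 count=1024 2>/dev/null

if command -v qemu-img >/dev/null 2>&1; then
    # Convert the raw image to VMDK for attachment in VirtualBox/VMware.
    qemu-img convert -f raw -O vmdk ws01-working.dd ws01.vmdk
    result=converted
else
    # Record the command that would be run on an analysis workstation.
    echo "would run: qemu-img convert -f raw -O vmdk ws01-working.dd ws01.vmdk"
    result=skipped
fi
echo "$result"
```

Remember the caveat from the text: conversion and booting modify the images, so only ever feed working copies, never the verified first image, into the virtual environment.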
Any analysis, thus, that is to be attempted should mainly concentrate on the retrieval of any information that can be readily retrieved through standard digital forensic methodologies from the slice of time in which the alleged crime was committed, and on the analysis of the actual driving examination software in so far as possible. You are, after all, dealing with information that has been tainted by time and by further use of the workstations before the digital forensic investigation began, and you don't have either unlimited time or resources at your disposal. Also, any analyst working on the case should expect and be prepared to work with little in the way of retrieved/carved information (regular files or log-files) aside from the actual driving examination software. After all, it is entirely possible for the hard drives of some (or even all) the workstations to have been replaced in the time between the alleged crime and the beginning of the investigation, and the previous hard drives destroyed before the alleged crime was even reported. All this information, and the various reasons for the lack of available data due to the time between the commission of the alleged crime and the time the investigation of said crime began, has to be very clearly stated in the reporting process. As you have seen, heard and/or know, a digital forensic investigation is normally a relatively lengthy and difficult process, in the best of cases, that is highly dependent on the length of time between an offence and the beginning of the investigation.
As you can see in this article, the difficulty of processing cases such as those of fraud in theoretical driving examination tests is twofold: not only do you have to deal with a red-tape-derived increased time-frame between the offence and the investigation, but you also have to deal with issues ranging from time constraints in the seizure, acquisition and release of the artifacts to completely custom and unpredictable software whose behavior is non-consistent and non-standards-compliant. But it is my hope that this article has given you a few pointers as to where to start looking and what to look for when such casework falls in your lap.

References
ACPO (2011), Good Practice Guide for Computer-Based Electronic Evidence, Association of Chief Police Officers, UK, http://www.7safe.com/electronic_evidence/#, last seen: 10/07/2012
Carrier B. (2005), File System Forensic Analysis, March 2005, Addison-Wesley Professional Series, UK, ISBN-10 0321268172
Carvey H. (2009), Windows Forensic Analysis DVD Toolkit, Second Edition, June 2009, Syngress, UK, ISBN-10 1597494224
Carvey H. (2011), Windows Registry Forensics: Advanced Digital Forensic Analysis of the Windows Registry, Syngress, UK, ISBN-10 1597495808
Casey E. (2011), Digital Evidence and Computer Crime, Third Edition: Forensic Science, Computers and the Internet, May 2011, Academic Press, UK, ISBN-10 0123742684
NIJ (2004), Forensic Examination of Digital Evidence: A Guide for Law Enforcement, April 2004, National Institute of Justice, US, http://nij.gov/nij/pubs-sum/199408.htm, last seen: 10/07/2012

Author Bio

George Chlapoutakis is a Network Security and Digital Forensic researcher, developer, investigator and consultant working in both the academic sector, as a part-time lecturer at Teesside University, UK, and in the business sector through his private business, SecurityBible Networks (http://www.secbible.com), which is based in Greece but operates in the European Union.

DRIVE AND PARTITION CARVING PROCEDURES


This article is the start of a series of papers that will take the reader through the process of carving files from a hard drive. We explore the various partition types and how to determine these (even on formatted disks), learn what the starting sector of each partition is, and also work through identifying the length in sectors of each partition. In this, we cover the last two bytes of the MBR and why they are important to the forensic analyst. This process is one that will help the budding analyst or tester in gaining an understanding of drive partitions and hence how they can recover and carve these from a damaged or formatted drive. We start by learning about hard disk drive geometry.
The format of this article is a step-by-step process that is designed to take the reader through the analysis of a hard drive. Although the process may vary somewhat for each drive, the fundamentals remain the same, and following these steps will allow the analyst to recover drive partitions that have been damaged or formatted even when the automated tools fail. The commands we will start with copy our MBR (master boot record):

dd if=Image.dd of=MBR.img bs=512 count=1
ls -al *img
khexedit MBR.img &

Here, we first extract the MBR from our image file (in this case Image.dd) to a file called MBR.img. Note that we have extracted only the first 512 bytes, and we can validate the size of this image file using the command ls -al *img.

COMPUTER

MASTER BOOT RECORD (MBR)

THE BEGINNING

There are a number of commands we shall be using in this article that are fairly standard on most Linux distros. In this article, it is assumed that the analyst has already created a bitwise raw image of the hard disk drive to be examined using dd or a similar tool.

In most drive formats that we will analyse (there are exceptions with some RISC systems etc.), each partition entry is always 16 bytes in length. Moreover, the end-of-MBR marker is always 0x55AA. Many modern Linux and Macintosh systems and the most recent Intel PCs have started using GPT instead of the MBR; the MBR limits the size of partitions to 2.19 TB, which is why it is starting to be replaced. We will look at other partition formats in later papers.
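The 0x55AA marker can be checked directly from the extracted MBR.img. The sketch below fabricates a dummy 512-byte MBR so the commands run anywhere (on a real case you would skip the first two lines and read your own MBR.img); the octal escapes \125\252 are simply 0x55 0xAA written portably for printf.

```shell
# Build a dummy 512-byte MBR with a valid boot signature at bytes 510-511.
dd if=/dev/zero of=MBR.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=MBR.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read the last two bytes back as hex; od -A n -t x1 prints raw hex bytes.
sig=$(dd if=MBR.img bs=1 skip=510 count=2 2>/dev/null | od -A n -t x1 | tr -d ' \n')
echo "$sig"
[ "$sig" = "55aa" ] && echo "valid MBR signature"
```

A missing or wrong signature at this position is an immediate hint that the sector is not a valid MBR, or has been damaged or wiped.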

Partition   Offset   Byte Place
1st         0x01BE   446
2nd         0x01CE   462
3rd         0x01DE   478
4th         0x01EE   494

Table 1 The HDD table
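Because the four 16-byte entries always sit at fixed offsets (446, 462, 478 and 494 decimal, i.e. 16 bytes apart starting at 0x01BE), they can be carved mechanically with dd. A small sketch (we fabricate an empty MBR.img so the loop is runnable anywhere; the part-*.entry output names are our own choice):

```shell
# Stand-in 512-byte MBR; on a real case this is the MBR.img extracted earlier.
dd if=/dev/zero of=MBR.img bs=512 count=1 2>/dev/null

# Carve each 16-byte partition entry at its fixed offset.
for off in 446 462 478 494; do
    dd if=MBR.img of="part-$off.entry" bs=1 skip=$off count=16 2>/dev/null
done

ls -l part-*.entry    # each carved entry should be exactly 16 bytes
```

With bs=1, dd's skip and count operate in single bytes, which is what lets us address the table entries precisely.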

DRIVE AND PARTITION CARVING PROCEDURES


We see from the file MBR.img the partition information displayed in Table 1 and Figure 1. The offset for each of the partition locations remains the same. In this way we can easily determine where the required partition data resides and hence extract and analyse it.

The partition table

If we view the MBR in a hex editor (such as KHexEdit in Linux), we see the partition values shown in Figure 1.

Partition 1: 80 01 01 00 06 1F 7F 96 3F 00 00 00 E1 84 0C 00
Partition 2: 00 00 41 97 05 1F BF 0B 20 85 0C 00 60 99 03 00

The offset values displayed in Table 1 are important. These are always the same and allow us to extract the partition information displayed in Figure 1.

WHAT ARE THE PARTITION TYPES?

Each drive is divided into a number of partitions (these are the things we see in Windows as C:, D: etc). These are defined in Table 2 from the offset location defined in Table 1.

Offset (Dec)   Length (bytes)   Content
0              1                State of partition: 0x80 if active, 0x00 if not active
1              1                The head where the partition starts
2              2                The sector and cylinder where the partition starts
4              1                Type of partition (see Table 3)
5              1                Head on which the partition ends
6              2                Sector and cylinder where the partition ends
8              4                Distance in sectors from the partition table to the starting sector (the 1st sector of the partition)
12             4                Number of sectors contained in the partition (length of the partition)

Table 2 The partition entry fields
Figure 1 The MBR in KHexedit

Notice that the end of the partition table in Figure 1 is marked with the value 0x55AA. As stated, the end-of-MBR marker is always defined with the value 0x55AA.

Partition 1

We can extract the data for Partition 1 from the MBR (Table 4).

Offset   0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
Value    80 01 01 00 06 1F 7F 96 3F 00 00 00 E1 84 0C 00

Table 4 Partition 1

The data in Table 4 is highlighted in Figure 2. Given a set offset and a defined byte length (Table 1), we can always carve the partition information (such as that displayed for the example drive in Table 4) from the MBR. This is why the set offsets are important. The form of the partition (what it is formatted as, and more) is set through the values displayed in Table 3; this is held at offset 4, as detailed in Table 2.

Hex           Partition Type
0x01          FAT12
0x0E / 0x06   FAT16
0x0C / 0x0B   FAT32
0x82          Linux Swap
0x83          Linux Native
0x05          Extended
0x07          NTFS
0x0F          Microsoft Extended

Table 3 The partition types

As one of my students pointed out, you can find the partition types on Wikipedia:
http://en.wikipedia.org/wiki/Master_boot_record
http://en.wikipedia.org/wiki/Partition_type
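The type byte pulled from offset 4 of an entry can be labelled with a small lookup helper mirroring the table above. This is a sketch: the function name ptype_name is ours, and only the types from the table are included (the full list of registered partition type IDs is much longer).

```shell
# Map a (lowercase) hex type byte to its name, per Table 3.
ptype_name() {
    case "$1" in
        01)    echo "FAT12" ;;
        0e|06) echo "FAT16" ;;
        0c|0b) echo "FAT32" ;;
        82)    echo "Linux Swap" ;;
        83)    echo "Linux Native" ;;
        05)    echo "Extended" ;;
        07)    echo "NTFS" ;;
        0f)    echo "Microsoft Extended" ;;
        *)     echo "unknown ($1)" ;;
    esac
}

ptype_name 06    # FAT16
ptype_name 05    # Extended
```

Having the lookup in script form becomes useful once we start parsing many partition entries from many drive images, as later sections discuss.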

Figure 2 The MBR in KHexedit


Now, if we take the descriptions listed in Table 2, we can extend our description of Partition 1 with the data displayed in Table 5.

Offset   Value(s)      Meaning
0        80            State of partition = 0x80 = Active
1        01            Head where the partition begins = 0x01
2-3      01 00         Sector and cylinder where the partition starts
4        06            Type of partition = 0x06 = FAT16
5        1F            Head where the partition ends
6-7      7F 96         Sector and cylinder where the partition ends
8-B      3F 00 00 00   Distance in sectors from the partition table to the starting sector = 0x0000003F = 63
C-F      E1 84 0C 00   Number of sectors contained in the partition = 0x000C84E1 = 820,449

Table 5 Partition 1 with definitions

The data in the table is displayed in little endian format on Intel systems (and those, such as AMD, following this standard). This means that the byte order is stored in reverse (Table 6).

The 3 key areas for forensics in the MBR are:
1 The partition type (offset 4)
2 The logical starting point for the partition as an offset in sectors (offset 8)
3 The length of the partition in sectors (offset 12)

So you can see that the data contained within this small section of the hard drive expands to provide a good deal of encoded information detailing and describing the drive's features. If we look at Table 6, we see the field at offset 8, which we have expanded and calculated to determine the starting sector, and the field at offset 12 (0xC), which gives the number of sectors used by the partition.

Offset      Field               Bytes on disk   Value (little endian = backwards)
8           Sector distance     3F 00 00 00     0x0000003F = 63 sectors
C (or 12)   Number of sectors   E1 84 0C 00     0x000C84E1 = 820,449 sectors

Table 6 Partition 1 in little endian

Looking at the partition information that we extracted for Partition 1, we can see at offset 4 the value of the drive format type defined for this device (Table 8). Here the value 0x06 can be matched against Table 3, and we see that the partition has a FAT16 partition type.

Table 8 The drive type

In Table 9 we see how the partition length and the starting position are defined. First, the partition starts 0x3F (decimal 63) sectors into the drive; the bytes at offset 8 read 3F 00 00 00 on disk, but the value is stored in little endian format and as such is written in reverse order. Offset 0xC (12 decimal) can be seen to contain the value 0x000C84E1, which provides us with the length of the partition in sectors (820,449 sectors in decimal).

Table 9 The size and location of the drive
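The little endian reversals above can be checked with a couple of lines of shell. This is a sketch; the helper name le32 is ours, and the bytes are the ones from Partition 1 of the example drive.

```shell
# Reverse four on-disk hex bytes and evaluate them as one hex number.
le32() {
    # $1..$4 are hex bytes in on-disk (little endian) order.
    printf '%d\n' "0x$4$3$2$1"
}

start=$(le32 3F 00 00 00)
len=$(le32 E1 84 0C 00)
echo "start=$start length=$len"    # start=63 length=820449
```

This is the same arithmetic performed by hand above: 3F 00 00 00 reversed is 0x0000003F = 63, and E1 84 0C 00 reversed is 0x000C84E1 = 820,449.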




Table 10 The State of Partition = 0x80 = Active

The value at offset 0x0 of our first partition is 0x80. This value flags the partition as being active, and as such it can be used as a boot device.

Table 11 Partition Type = 0x06 = FAT16

Taken together, offsets 0 and 4 (Table 11) allow us to determine that the first partition is a FAT16-formatted primary boot partition of the drive.

BACK TO PARTITION 1

If we extract the 16 bytes beginning at offset 446, we can examine the three sections we need. The 4-byte fields are stored in little endian order, or backwards (Table 12). From this we determine that the initial partition begins 63 sectors into the drive image. Again, the length of the partition is 0x000C84E1, which calculates to 820,449 sectors, and the partition type is 0x06 (or FAT16).

Table 12 Back to Partition 1

VERIFICATION - MMLS

We can check our results using the mmls command from The Sleuth Kit forensic analysis tools in Linux. We are not using this command to perform the exercise itself, merely as a check.

Figure 3 MMLS can validate the results

For the most part, mmls will perform all of the steps we have completed so far and more. The reason for doing this exercise manually is twofold:
1 Sometimes the automated tools will fail. Commands such as mmls work well most of the time, but they do fail in situations where we really need to obtain the data.
2 The use of a manual process teaches far more than running a tool ever can. In using the tool to validate our checks we can see if we have made any foolish errors, but at the same time we learn more about the system and how it is designed.
The highlighted data in Figure 3 (on line 02) shows us that our calculations are correct. It shows the partition as a DOS FAT16 partition, that the start of the partition is at sector 63 and that the partition size is 820,449 sectors, as we have manually calculated.

WHAT IS THE STARTING SECTOR OF EACH PARTITION?

In the hex editor, you have found the 1st and 2nd primary partitions located at offsets 446 and 462 respectively (Figure 2). We know this as the partitions are always set at predefined locations. The hex editor here displays the 3rd and 4th partitions as all 0s, as they are unused. Each partition entry is 16 bytes long; this is always the case. When we have extended partitions, these are similarly defined at later locations in the drive; we will cover this in a follow-up article. As the values defined are all zeros, we can see that the 3rd and 4th partitions are empty and do not exist. There are, however, extended partitions (defined within Partition 2) contained in this drive, and we will continue the analysis in the next article (Table 13).

PARTITION 2

In the examples above, we looked at Partitions 1 and 2. Notice there are more partitions. We will cover extended partitions shortly.

Table 13 Partition 2

If we extract the 16 bytes beginning at offset 462 this time, we can examine the three sections we need. As always, the 4-byte fields are stored in little endian order, or backwards. From this (Table 13) we see that this partition begins 820,512 sectors into the drive image. Next, we can see that the length of the partition is 0x00039960, which calculates to 235,872 sectors. Partition 2 has a Partition type value of 0x05 (Extended). We can guess that Linux was installed first, as a Microsoft Extended Partition would have been stored using the value 0x0F. Doing this, we have identified the first two partitions. Now we need to go to the extended partition and find the others that are contained later in the drive. We will find another 512-byte section stored later in the drive, which we can analyse in the same manner. This we will leave to the follow-up article.
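The manual steps walked through above can be reproduced end to end in script form. The sketch below writes the article's Partition 1 entry (80 01 01 00 06 1F 7F 96 3F 00 00 00 E1 84 0C 00, expressed as octal escapes for printf) at offset 446 of a scratch MBR and then parses out the state, type, starting sector and length; variable names are our own.

```shell
# Fabricate a scratch MBR containing the example Partition 1 entry.
dd if=/dev/zero of=MBR.img bs=512 count=1 2>/dev/null
printf '\200\001\001\000\006\037\177\226\077\000\000\000\341\204\014\000' |
    dd of=MBR.img bs=1 seek=446 conv=notrunc 2>/dev/null

# Carve the 16-byte entry and split it into one hex byte per parameter.
entry=$(dd if=MBR.img bs=1 skip=446 count=16 2>/dev/null | od -A n -t x1 | tr -d '\n')
set -- $entry

state=$1                                      # offset 0: 0x80 = active
type=$5                                       # offset 4: 0x06 = FAT16
start=$(printf '%d' "0x${12}${11}${10}$9")    # offsets 8-B, little endian
length=$(printf '%d' "0x${16}${15}${14}${13}") # offsets C-F, little endian

echo "state=$state type=$type start=$start length=$length"
# state=80 type=06 start=63 length=820449
```

Running the same parse with skip=462 would decode Partition 2 in exactly the same way, and mmls remains the independent cross-check for the results.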

WHAT ARE THE LAST TWO BYTES OF THE MBR?

This question is simple: the final two bytes of the MBR are always 0x55AA (Figure 4).

Figure 4 The MBR ends with 0x55AA

This fact allows us to find extended partitions throughout the drive. We can seek the value 0x55AA and look to see whether other partition information exists on the drive. We will do this in the next article, when we examine the extended partitions.
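The signature sweep just described can be sketched as a loop that checks bytes 510-511 of every 512-byte sector for the 0x55AA marker. We build a small five-sector test image with markers planted in sectors 0 and 3 so the loop has something to find (file names are ours; \125\252 is octal for 0x55 0xAA). Note that 0x55AA can also occur by chance in ordinary data, so on a real drive the hits are candidates to be examined, not confirmed partition tables.

```shell
# Five-sector test image with boot signatures in sectors 0 and 3.
dd if=/dev/zero of=scan.dd bs=512 count=5 2>/dev/null
printf '\125\252' | dd of=scan.dd bs=1 seek=510  conv=notrunc 2>/dev/null
printf '\125\252' | dd of=scan.dd bs=1 seek=2046 conv=notrunc 2>/dev/null

found=""
sectors=$(( $(wc -c < scan.dd) / 512 ))
s=0
while [ "$s" -lt "$sectors" ]; do
    # Read the last two bytes of sector s as lowercase hex.
    sig=$(dd if=scan.dd bs=1 skip=$((s * 512 + 510)) count=2 2>/dev/null |
          od -A n -t x1 | tr -d ' \n')
    if [ "$sig" = "55aa" ]; then
        found="$found $s"
    fi
    s=$((s + 1))
done
echo "candidate sectors:$found"    # candidate sectors: 0 3
```

Reading byte-by-byte with dd is slow on a full drive image; it is shown here because it makes the offset arithmetic explicit, which is the point of the exercise.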

TO CONCLUDE

In a follow-up article to this one, we will continue into the extended partitions. In this process we will take what we have learnt about extracting data from the MBR and apply it to finding the other partitions. Further, we will extend this into actually carving the partitions and, once we have a good idea of just how we can find and calculate the partitions (including formatted ones), we will extend the process to recovering lost data.

Author Bio

Dr. Craig Wright (Twitter: Dr_Craig_Wright) is a lecturer and researcher at Charles Sturt University and executive vice president (strategy) of CSCSS (Centre for Strategic Cyberspace + Security Science), with a focus on collaborating with government bodies in securing cyber systems, and has over 20 years of IT-related experience. In addition to his security engagements, Craig continues to author IT security related articles and books. Dr Wright holds the following industry certifications: GSE, CISSP, CISA, CISM, CCE, GCFA, GLEG, GREM and GSPA. Craig is working on his second doctorate, a PhD on the quantification of information systems risk.


DATABASE

DETECTION OF ATTACKS THROUGH DEFAULT ACCOUNTS AND PASSWORDS IN ORACLE


An Oracle database comes with many default userids (and, worse, well-known default passwords) which ideally shouldn't have a place in a typical production database, but database administrators may have forgotten to remove the accounts or lock them after setting up the production environment. This provides one of the many ways an adversary attacks a database system: by attempting to guess the presence of a default userid and password, either by brute force or by social engineering techniques. In this article you will learn how to identify such attacks and trace them back to the source quickly and effectively. You will also learn how to set up a honeypot to lure such adversaries into attacking so as to disclose their identity. Finally, you will also be able to determine why a legitimate user account gets locked out and needs unlocking or a password reset.
BACKGROUND

An Oracle database typically comes with several default accounts. Some of them are necessary for database operations; examples of such userids are SYS and SYSTEM, which have the DBA privileges. Other default accounts such as SCOTT, SH, BI, etc. are for demonstration only and are never needed by an application using that database. These accounts should not have been created in the first place. The database creation assistant (DBCA) has a checkbox to install the sample schemas (the SCOTT user), which should have been unchecked for a production database. Many DBAs, while creating the database, likely ignore it, resulting in the schema being present. In other cases, the production database may be an upgrade from its earlier incarnation as a development or QA database, where these sample schemas were indeed necessary and created. With the upgrade, these schemas have lost significance; but in the spirit of changing as little as possible during the database upgrade, they are usually left untouched and continue to linger. Whatever the reason, these default accounts leave a backdoor entry to the database. Another problem is the presence of default passwords.

DETECTION OF ATTACK
During database creation, these userids are usually given well-known passwords. For instance, SCOTT's password is TIGER, which is what it has been since the schema was introduced in Oracle some 25 years ago. Default passwords of other default accounts such as SYS, SH, etc. may also be predictable. Earlier versions of the Oracle database assigned well-known values as default passwords: SYS's password used to be change_on_install, SYSTEM's used to be manager, and so on. Recent versions of Oracle ask for the passwords during database creation, but administrators may supply a non-secure, easy-to-guess password that is vulnerable to brute-force attacks, e.g. oracle or some dictionary word. They probably intended to change it after go-live; but as more pressing issues come up, this little task is often forgotten, leaving a backdoor as wide open as a barnyard door. The combined presence of default accounts and default passwords is one of the biggest risks to database security, especially from insiders. Since these accounts provide legitimate means of access, the attacks often go unnoticed until it's too late.

The default account and password problem doesn't stop there. Some databases sit behind additional firewalls within the corporate firewall, so an attacker supposedly can't reach the database server. But a database is not really an isolated island. It still needs to service some application, get data from or send data to other databases, and so on. So there are bound to be bridges and tunnels that punch through the firewall. Granted, these bridges have defenses in place to filter out unwanted traffic; but legitimate database traffic is not blocked.

As a forensic investigator examining a breach after the fact, your tasks are:

1. Identifying whether such attacks are taking place, but doing so stealthily. You do not want to tip off the attacker about your investigation.
2. Identifying the source of the attack: the client machine, the operating system (not the database) userid, etc.
3. Establishing an early warning system that automatically alerts you when such an attack is either occurring or has occurred.
4. Setting up a honeypot database with the default accounts to lure potential adversaries into disclosing their identities and locations.

At first blush this looks like a tall order requiring the mobilization of expensive specialized tools. In reality, fortunately, it's relatively trivial to accomplish all of the above objectives with tools you can build yourself from freely available building blocks. The rest of the article describes how to build those tools.
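Before building anything, a quick, read-only inventory of the exposure helps. The sketch below assumes Oracle 11g or later, where the DBA_USERS_WITH_DEFPWD data dictionary view is available; on older releases you would instead compare password hashes against a published list of default hashes.

```sql
-- List accounts that still carry their default password (11g+),
-- together with their lock status. Read-only: running this as a DBA
-- leaves no audit noise beyond your own session.
SELECT d.username,
       u.account_status,
       u.profile
FROM   dba_users_with_defpwd d
       JOIN dba_users u ON u.username = d.username
ORDER  BY u.account_status, d.username;
```

Any account reported here as OPEN should be locked or given a strong password, unless you are deliberately leaving it as honeypot bait as described later in the article.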

SETTING UP

The key to accomplishing these objectives is setting up session auditing. Note the stress on the word session. Oracle-supplied auditing in general is perceived to be expensive, as it supposedly increases I/O and CPU load. In a system already constrained on CPU and I/O, there is a general concern that the additional load due to auditing will be hugely detrimental, so auditing is often not implemented by administrators. However, the notion that all auditing is expensive is a myth. Some types of auditing put additional load on CPU and I/O; not all do. There are two basic types of auditing in Oracle:

Action Auditing: records who did what on which object, e.g. user A selected a record from table T1, or user B executed stored procedure P1. This type of auditing imposes additional load on the CPU and I/O, and depending on the level of detail you want in the audit trail, that load may be high enough to be of concern.

Session Auditing: an audit record is created when the session is first established and updated when the session ends. No other records are created in between, regardless of the objects accessed in the session. Since sessions are created and ended only sporadically in normal database operation, this type of auditing imparts no appreciable load on the system. In a reasonably sized system, the impact may not even be visible.

While it is inexpensive, session auditing offers powerful features for forensic analysis. To enable it, the parameter audit_trail has to be set to DB. Check the value of the parameter:

SQL> show parameter audit_trail

NAME         TYPE     VALUE
------------ -------- -----
audit_trail  string   DB

Figure 1 Database Attack through a Proxy Database

Consider the scenario in Fig 1. The business user is allowed to connect to Database A but not to Database B, which sits behind an interior firewall. However, Database A is allowed to connect to Database B through a DB link, so the internal firewall rules are modified to allow traffic on port 1521 between the hosts of A and B. The firewall still prevents the business user from connecting to Database B directly on port 1521. A savvy adversary, however, may be able to use Database A as a proxy to connect to Database B, since that traffic is allowed. To accomplish that, the adversary needs a userid and password on Database B, and therefore the first things he tries are the default userids and default passwords.


In this example there is confirmation that the value is set to DB. If you see a blank here, you have to set the parameter. Put the following line in the parameter file of the Oracle instance (often named init.ora):

audit_trail = db

If you use an spfile instead of a pfile, use the following command as a SYSDBA user:

alter system set audit_trail = db scope = spfile;

Regardless of where you put this parameter, recycle the database for it to take effect. Since the database reads this parameter only during startup, I always set it when I build a database. Remember, this parameter by itself does not start auditing; it simply sets the stage for auditing to occur. The parameter specifies the location of the audit trail, which in this case is the database. You could also send the trail to a filesystem instead, but the best forensics is done through a database-resident audit trail, and that is what we will use in this article. After the parameter is in effect, issue the following statement from the SQL command prompt while connected as a SYSDBA user:

SQL> audit session;

This action does not require the database to be recycled; auditing of sessions starts right away. The audit trail goes into a special table in the SYS schema of the database called AUD$. You, as a forensic investigator, can look at a more user-friendly view known as DBA_AUDIT_TRAIL. Fig 2 shows the different columns of the view. Of these, we are concerned with only a handful:

Figure 2 Structure of the Database Audit Trail

Column: USERNAME
Description: The database user that connected to the database. If the user logged in as the database user SCOTT, you will see SCOTT here.

Column: OS_USERNAME
Description: The operating system userid of the user. If the user logged in from his Windows laptop, the Windows userid (with the domain name, if applicable) is shown here.

Column: USERHOST
Description: The machine from which the user logged in (not the server where the database runs). For a Windows user, it shows the name of the Windows machine.

Column: TIMESTAMP
Description: The date and time the connection was established.

Column: RETURNCODE
Description: The error code experienced by the user. If the action is successful, this column shows 0.

We don't need the other columns for our forensic analysis; explaining the usefulness of all of them is beyond the scope of this article.

When the user connects to the database, session-based auditing writes an audit trail entry. But the usefulness for forensic analysis does not stop there: the entry is written even if the login is unsuccessful, and can be used later by forensic analysts to discover attacks. Let's examine it with an example. Suppose a user SCOTT tries to log in but gets the message shown in Fig 3. This attempt also triggers an audit event, and the RETURNCODE column records the error number. You can check the audit entry as shown in Fig 4.

Figure 3 Error Message for Locked Accounts
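A quick way to eyeball recent activity using just these columns is a direct query against the view. A sketch, assuming session auditing has been running long enough to collect entries; the seven-day window is an arbitrary choice:

```sql
-- Recent session audit entries, successful and failed alike.
-- RETURNCODE = 0 means a successful login.
SELECT to_char(timestamp, 'dd-MON-yy hh24:mi:ss') audit_time,
       username,
       os_username,
       userhost,
       returncode
FROM   dba_audit_trail
WHERE  timestamp > SYSDATE - 7
ORDER  BY timestamp;
```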

Figure 4 Audit Trail for the SCOTT Account

Note how the column RETURNCODE shows the SQL code returned to the user. When SCOTT tried to log in, he got the error ORA-28000: the account is locked. This return code, 28000, is recorded in the audit trail. From the trail, we know that SCOTT tried to log in to the database at 4:45:41 PM on July 6th, 2012 from the client machine named CORP\STARUPNANT420. With prior knowledge of the naming convention followed in the organization, this machine appears to be a laptop named STARUPNANT420 in the CORP domain. The user arupnan logged into the CORP domain from that laptop. Details like this are hard to spoof and lead straight to the real person who attempted that database login. It will be interesting to understand why a real person, Arup Nanda (gleaned from the LDAP database), was trying to log in to a locked account called SCOTT.

Figure 5 Audit Trail for Locked Accounts

Now that you know the error code 28000 is recorded for failed login attempts against locked accounts, you should check the audit trail for all such attempts. Fig 5 shows the query and the output. The results uncover some potentially disturbing information. A Windows user called arupnan, logged into the domain CORP from the client machine (ostensibly a laptop) called STARUPNANT420, has attempted to log in to several default database accounts: DBSNMP, SYSMAN, SH and SCOTT. A prudent DBA, anticipating this possibility, had locked all these accounts earlier, so the connection attempts failed as expected. Now the forensics have unearthed attempts that deserve some serious explanation from that user.

Consider another scenario, where the adversary knows of the presence of a specific unlocked userid, such as one used by a business application, but not its password. The adversary tries to guess the password and gets the error message shown in Fig 6.

Figure 6 Error Message for Incorrect Password

This attempt triggers an audit event and the audit trail is written. The error code in this case is 1017. To find out which users someone attempted to log in as but failed due to an invalid password, use the query shown in Fig 7. The output shows some interesting facts. It seems all the invalid attempts have come from one specific Windows user (arupnan) and from one specific client machine (CORP\STARUPNANT420), which appears to be a laptop assigned to that user. Why would that user attempt to log in to default userids such as BUS_SCHEMA? It's unlikely to have a rational explanation and should definitely be pursued.

Figure 7 Audit Trail for Incorrect Password Attempts

This SQL script is so useful that you may want to keep it in your forensic toolbox. Here it is for you to copy and save in a file:

REM Purpose: to get the failed logins due to incorrect
REM password supplied (ORA-1017).
REM
col audit_time format a20
col username format a10
col os_username format a20
col userhost format a15
select to_char(timestamp,'dd-MON-yy hh24:mi:ss') audit_time,
       username,
       os_username,
       userhost
from dba_audit_trail
where returncode = 1017
order by timestamp;
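Beyond the per-error listings, a grouped summary often exposes an attack pattern at a glance, such as one client machine probing many different accounts in a short burst. A sketch combining the two failure codes discussed here (1017 for a wrong password, 28000 for a locked account):

```sql
-- Summarize failed logins by origin. A high attempt count spread
-- across many distinct usernames from a single host is a strong
-- indicator of account guessing rather than a forgotten password.
SELECT os_username,
       userhost,
       returncode,
       COUNT(*)                 attempts,
       COUNT(DISTINCT username) distinct_accounts,
       MIN(timestamp)           first_seen,
       MAX(timestamp)           last_seen
FROM   dba_audit_trail
WHERE  returncode IN (1017, 28000)
GROUP  BY os_username, userhost, returncode
ORDER  BY attempts DESC;
```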

NON-EXISTENT ACCOUNTS

So far we have talked about adversaries that attack known user accounts, assuming they are present. What if they are just fishing for such accounts? For instance, demo userids such as SCOTT, SH, BUS_SCHEMA, etc. should not be present in a production database, or even in staging, QA or integration databases. They are merely for demonstration purposes, and there is no reason for them to be present in a regular database servicing a business application. The fact that someone is trying to connect to the database using these accounts has to be regarded with suspicion. When a user tries to log in with a non-existent account, he gets the same error as with an incorrect password, shown in Fig 6. There is no special error message for non-existent accounts. It is designed that way to protect information about the presence of accounts: had it shown a more specific 'username not found', the adversary would have been alerted to the absence of the account and eliminated it from his list easily. Instead, the more generic 'invalid username/password' keeps the adversary guessing, uncertain whether the password is wrong or the account non-existent.

You need to find out whether such attacks are occurring (or have occurred) and, if so, where they are coming from. The error code alone does not reveal whether the adversary was fishing for accounts or genuinely fat-fingered the password entry. But that information is easy to find. All valid users are listed in a view called DBA_USERS. If you check the userids the potential adversary tried against the DBA_USERS view, you can identify the non-existent ones. Fig 8 shows that query and its output. You can see that the person tried to log in as SCOTT1, SI and ARUP, none of which are present in the database. There could be a very innocuous explanation: the user was trying to log in to the wrong database, one where those userids are not present. But it is very likely that the user has been fishing for a backdoor entry and was probing whether any of these accounts existed. This definitely requires further explanation.

ALERT SYSTEM

The scenario described in the previous section presents an interesting dichotomy. On one hand, the presence of default accounts, and the fact that someone is trying to get into them, indicates a serious flaw in database security; on the other hand, it offers an opportunity to observe how the attacks are coming in. Now that you have learned how to collect forensic data from the audit trail about people guessing at default accounts, you may want to automate the process so it acts as a sentinel and alerts you whenever such activity occurs. With the proper building blocks in place, it's not difficult to build such an alerting system. This section describes the details.

First, we don't want to send the same alert multiple times, so we create a table to hold the timestamp of the last alert:

create table audit_alert
(
   last_alert_datetime date
);

Next, we insert a record with a time so old that any alert will satisfy the condition of being new:

insert into audit_alert values (to_date('01-jan-1900','dd-mon-yyyy'));
commit;

With this in place, we can create the main body of the tool, as shown in the code listing below.

DECLARE
   l_audit_time           varchar2(20);
   l_username             varchar2(30);
   l_os_username          varchar2(30);
   l_userhost             varchar2(30);
   l_last_alert_datetime  date;
   l_alert_check_datetime date;
   l_body                 varchar2(2000);
   CURSOR cur_audit_trail (
      p_cutoff_timestamp IN date
   ) IS
      SELECT TO_CHAR(timestamp,'dd-MON-yy hh24:mi:ss') audit_time,
             username,
             os_username,
             userhost
      FROM dba_audit_trail
      WHERE username NOT IN (
         SELECT username FROM dba_users
      )
      AND timestamp > p_cutoff_timestamp;
BEGIN
   -- find the last time the alert was checked
   SELECT last_alert_datetime
   INTO l_last_alert_datetime
   FROM audit_alert;
   -- record the current time when the alert is sent
   l_alert_check_datetime := SYSDATE;

Figure 8 Identifying Login Attempts to Non-existent Accounts

This script, reproduced here for you to copy and save, should also be part of your toolbox:

select to_char(timestamp,'dd-MON-yy hh24:mi:ss') audit_time,
       username,
       os_username,
       userhost
from dba_audit_trail
where username not in
(
   select username from dba_users
);

   -- open the cursor to check from the last alert time
   OPEN cur_audit_trail (l_last_alert_datetime);
   -- loop through all audit records
   LOOP
      FETCH cur_audit_trail
      INTO l_audit_time, l_username, l_os_username, l_userhost;
      EXIT WHEN cur_audit_trail%NOTFOUND;
      /* Following lines are for debugging only */
      DBMS_OUTPUT.put_line('l_audit_time='||l_audit_time);
      DBMS_OUTPUT.put_line('l_username='||l_username);
      DBMS_OUTPUT.put_line('l_os_username='||l_os_username);
      DBMS_OUTPUT.put_line('l_userhost='||l_userhost);
      /* End of debugging */
      -- construct the body of the message
      l_body := l_audit_time||':'||
                l_username||':'||
                l_os_username||':'||
                l_userhost;
      -- send the mail for each audit record
      -- Important: the DBA must have set up mail handling from the
      -- database for the following to work.
      UTL_MAIL.send (
         sender     => 'Audit Alert System',
         recipients => 'secalert@acmeco.com',
         cc         => 'secmanager@acmeco.com',
         bcc        => 'tophoncho@acmeco.com',
         subject    => 'Non-existent User Login Attempted',
         message    => l_body
      );
   END LOOP;
   -- update the alert table with the current timestamp
   -- so that next time the audit trail will be checked
   -- from this point onwards.
   UPDATE audit_alert
   SET last_alert_datetime = l_alert_check_datetime;
   COMMIT;
END;
/

This is a regular anonymous PL/SQL block. You can run it from any scheduling utility such as DBMS Scheduler, cron (on Unix) or Task Scheduler (on Windows). The tool checks for attempts to connect as non-existent users and sends an email when an attempt is found. Since it records the last time the audit trail was checked, it does not repeatedly send alerts for the same audit trail entry. If your security policy prohibits sending emails from a production database, don't despair: simply remove the UTL_MAIL portion of the code and run the block from a SQL prompt. The results will be shown on screen thanks to the dbms_output.put_line calls.
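To run the check unattended inside the database itself, DBMS Scheduler is the natural choice. A sketch, assuming the anonymous block has first been wrapped in a stored procedure named CHECK_AUDIT_ALERTS (a name chosen here for illustration):

```sql
-- Schedule the alert check every 15 minutes via DBMS_SCHEDULER.
-- The 15-minute interval is an arbitrary choice; tune to taste.
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'AUDIT_ALERT_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'CHECK_AUDIT_ALERTS',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=15',
    enabled         => TRUE,
    comments        => 'Alert on logins to non-existent accounts'
  );
END;
/
```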

ACCOUNT LOCKS

You may have a security policy in place that locks an account after login has been attempted with an incorrect password more than a certain number of times. If an account is locked, the user gets the error message shown in Fig 9. Occasionally users complain that their accounts are getting locked even though they have not been attempting to log in with an incorrect password. To solve this mystery you don't have to look any further than the little session auditing system you have built.

Figure 9 Locked Account Error Remember, when the user gets an error during login, the error code is recorded in the audit trail under the column RETURNCODE. You can find out about all the login attempts successful and failed from the audit trail and trace back into the history of the cause of the locking. Suppose a user called SH complains that the account is locked, you can use the query shown in Fig 10.

Figure 10 Login History for a Specific User

Note the last line of the output, where the login attempt was met with the error ORA-28000; the return code was 28000. Before that line there are 10 lines with return code 1017, which show that the user had received the ORA-1017 (invalid username/password; logon denied) error. From this data you can clearly establish that there were 10 attempts with an incorrect password. After the 10th attempt, the database security policy of locking accounts kicked in and locked the account as expected.

The next big question is why there were so many invalid password attempts. There are several possibilities:

1. The user genuinely forgot the password and was repeatedly trying. A simple check with him will confirm that. This case is no cause for alarm.

2. The user runs several batch programs with embedded passwords. He recently changed the password but forgot to update the batch programs to reflect the new password, so the batch programs kept attempting to connect with the old password and failing. The USERHOST column shows the machine these connection attempts came from and will help identify the batch programs. This is the most common cause of account lock-outs. Without the procedure described earlier in place, it would have been difficult and time-consuming for the user to identify the programs that need updating.

3. Someone else is attempting to log in as the user by guessing the password. This case, of course, is quite serious. If you have ruled out the first two cases, this is the most likely reason. The USERHOST column shows the machine from which the connection attempt came, and OS_USERNAME shows the login used on the client machine. On Windows networks, the LDAP-authenticated identity is shown here, which is difficult to spoof and leads to the identity of the potential adversary.
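When investigating a lock-out, it also helps to confirm what the lockout policy actually is for that user. A sketch using the standard DBA_PROFILES view (SH is the user from the example):

```sql
-- How many failed attempts trigger a lock, and for how long,
-- for the profile assigned to user SH.
SELECT p.profile,
       p.resource_name,
       p.limit
FROM   dba_profiles p
       JOIN dba_users u ON u.profile = p.profile
WHERE  u.username = 'SH'
AND    p.resource_name IN ('FAILED_LOGIN_ATTEMPTS',
                           'PASSWORD_LOCK_TIME');
```

If FAILED_LOGIN_ATTEMPTS is 10, the ten ORA-1017 entries in the audit trail line up exactly with the policy, confirming the lock was policy-driven rather than administrative.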

HONEYPOTS

Honeypots are lures that attract potential adversaries into a controlled, harmless environment while making them believe they are entering a real production database. Using a honeypot, you can identify their presence and flush them out. Here are rough steps to build one:

1. Build a database.
2. Export the relevant schemas from the production database.
3. Import them into the honeypot database.
4. In the honeypot database, update all sensitive columns to harmless values, e.g. emails can become joe.smith@cashxxxdomain.com, etc. Use any obfuscation method such as redaction or alteration, but not encryption, so the adversary is not alerted.
5. Create all the Oracle default accounts, e.g. SCOTT, SH, BI, etc. You can do this from another development database, or from the companion CD that comes with the Oracle database software.
6. Make sure the default accounts have their default passwords (e.g. SH should have the password SH) and are not locked.
7. Set up auditing as described at the beginning of the article.
8. Set up the alerting mechanism as shown.

Then issue the following statement from a SQL prompt while connected as SYS:

audit all;

Note that in the previous sections you issued a different command, audit session. The statement audit all creates audit trails for all audited actions, not just login attempts, so it captures not only the attempts but the subsequent activity as well.

That's it. Now sit back and watch as the potential adversaries try to guess the default accounts and passwords. They will be rewarded for their efforts, since you have taken measures to ensure the default accounts exist, with their default passwords, and are completely unlocked. That will likely embolden them to explore other parts of the database, such as trying to find the columns containing credit card numbers or Social Security numbers. If they succeed, there is nothing to worry about, since the data is fake. But in the process the adversary leaves a footprint in the form of audit trails, whether successful or not, and that is exactly what you want for forensic analysis later. It's not important what data the adversary actually gets from the credit card numbers table; the fact that he tried to get data from that table is a clear indication of intent, needing serious investigation.

CONCLUSION

In this article we showed how to identify a specific threat to database security, where an adversary attempts to log in to a database using accounts that come with the database, created either during installation or later as demos. The adversary leaves digital fingerprints behind in the form of audit trails, which are immensely valuable as forensic data and impose virtually no performance impact on the database. Since the audit trails are persistent and owned by SYS, adversaries have no ability to erase them. As a result, the data helps both when a breach has already happened and when one may be ongoing. Hopefully this article has shown how easy it is to capture information of such value. Happy forensics.

Author Bio

Arup Nanda (arup@proligence.com) has been an Oracle technology professional for 17 years, touching all aspects of the database from modeling to security planning. He has authored 4 books, written about 500 articles for various publications including Oracle Magazine and OTN, and delivered about 300 technical sessions and 30 seminars in 22 countries. He blogs at arup.blogspot.com, tweets with the handle @arupnanda, mentors, and conducts security audits and planning sessions. Recognizing his technical prowess and contributions to the user community, Oracle awarded him the coveted DBA of the Year title in 2003 and OTN ACE Directorship.


