Forensic procedures have been developed with the USA in mind, but different countries have different legal systems and approaches to evidence. In the USA, the concept of “chain of custody” is crucial: the handling and transfer of evidence must be documented from the moment it is collected to its presentation in court. Admissibility of evidence is also a key consideration, and the judge has the authority to rule on it and may withhold certain information from the jury. In countries like Italy, by contrast, the evaluation and admissibility of evidence are determined primarily by the judge rather than resting solely on the chain of custody; the judge has more discretion in evaluating evidence, and trial by jury is not common practice.

International standards play a crucial role in digital forensics, especially in Council of Europe states. One notable instrument is the Budapest Convention on Cybercrime, established in 2001. This treaty aims to combat cybercrime by harmonizing national laws, enhancing investigative techniques, and promoting international cooperation; it specifically addresses offenses such as copyright infringement, computer-related fraud, child pornography, and network security breaches. Other relevant standards include ISO/IEC 27037:2012, which provides guidelines for the identification, collection, acquisition, and preservation of digital evidence; ISO/IEC 27035:2011, which focuses on information security incident management; and the Guidelines for Evidence Collection and Archiving (RFC 3227).

Brittleness of Digital Evidence

Digital evidence is inherently brittle: if it is altered or tampered with, there is no reliable way to detect it, because it lacks built-in mechanisms to indicate modification. This poses a significant challenge in ensuring the integrity and authenticity of digital evidence.

For instance, in the Garlasco case, the suspect’s alibi rested on computer activity related to his thesis, highlighting the potential for creating deceptive digital evidence. Moreover, it is possible to manipulate files and modify their timestamps, further complicating the verification process.
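As a concrete illustration of how easily timestamps can be forged, the following sketch (the file path is purely illustrative) backdates a file’s modification time with standard tools; nothing in the file itself records that this happened:

```shell
# Create a file, then backdate its modification time to 2020-01-01 12:00.
printf 'contents' > /tmp/demo.txt
touch -t 202001011200 /tmp/demo.txt

# The filesystem now reports the forged timestamp as if it were genuine.
date -r /tmp/demo.txt
```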

To address these concerns and establish tamper-evident digital evidence, it is crucial to implement robust procedures that encompass legal compliance, ethical conduct from all involved parties, the ability to identify unintentional errors, and mechanisms to detect natural degradation over time. The following factors need to be considered:

  • Ensuring adherence to legal requirements and regulations.
  • Promoting ethical behavior and professional conduct among all parties involved in the handling and presentation of digital evidence.
  • Implementing measures to identify and rectify unintentional errors made during the acquisition and analysis process.
  • Establishing mechanisms to detect and account for the natural decay of digital evidence over time.

Hashes in Digital Forensics

In order to seal digital evidence, hashes (and digital signatures) are routinely used. If the hash of a digital object is recorded at a given step of the acquisition and then checked at every further step, it can ensure the identity, authenticity, and non-tampered state of the evidence from that step on. If there is any discrepancy, the evidence must be considered compromised, because it is impossible to determine when and how the modification occurred.

It is important to understand that:

  • Hashes are not a dogma: you cannot just say “there is no hash” and dismiss everything. You need to ask why there is no hash, whether any other measure has been taken, and whether the chain of acquisition can still be reconstructed.
  • Hashes are not magic: computing a hash says nothing about what happened before the hashing took place, so a proper procedure needs to be adopted.

To be useful, hashes must be either sealed in writing (e.g., on a signed report) or encrypted to form a digital signature. Hashes are stored both in the digital domain (while performing the acquisition) and in the physical domain (on paper, in a safe, etc.) so that there is always a reference to check against and a backup in case of digital corruption.
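As a minimal sketch of this workflow (file names are illustrative), the hash recorded at acquisition time can later be re-verified with sha256sum -c:

```shell
# A stand-in for an acquired image (hypothetical path).
printf 'example evidence data' > /tmp/evidence.img

# Record the hash at acquisition time; this line is what gets sealed
# in the signed report or stored in the physical domain.
sha256sum /tmp/evidence.img > /tmp/evidence.img.sha256

# At any later step, verify the image against the recorded hash.
sha256sum -c /tmp/evidence.img.sha256
```

Any later modification of the image makes the check fail, which is exactly the tamper-evidence property the procedure is after.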

Hardware and software devices for acquisition

For hardware acquisition, commonly used devices include removable HD enclosures or connectors with various plugs, which allow you to connect the source media to the analysis station. External disks and devices connected via USB, FireWire, SATA, and e-SATA controllers can also be used whenever possible. A crucial tool for acquisition is a write blocker, a hardware device placed between the source media and the analysis station that prevents any write from reaching the source media.

From a software perspective, Linux is the preferred operating system for acquisition. It offers extensive native file system support and facilitates easy access to drives and partitions without the need for mounting them. Additionally, Linux distributions like Helix, CAINE, and DEFT are specifically designed for forensic analysis and acquisition.

During the process of acquiring digital evidence, commonly used software tools include dd, dcfldd, ddrescue, and dc3dd. These tools enable the creation of a bitstream image of the source media, which is an exact copy of the original data at the sector level. This ensures that all data, including deleted files and unallocated space, is preserved. The acquired image can then be analyzed without making any changes to the original evidence.

Bitstream Images

We want to acquire, if possible, a bitstream image, which is a bit-by-bit clone of the original evidence media. The reason will become evident when we discuss analysis, but basically, if we only copy the allocated content, we potentially lose information. This may be different in special cases, such as RAID, encrypted, or virtual drives.

A bitstream image is often called a “forensic clone,” “clone copy,” or “image.” Acquisition is also called “freezing” sometimes.

Basic Procedure of Acquisition

The basic acquisition of a powered-down system involves the following steps:

  • Disconnect the media from the original system (if not possible see ahead for the usage of forensic distributions).

  • Connect the source media to the analysis station, if possible with a write blocker in between to prevent any accidental writing to the source media by the operator or the system.

  • Compute the hash of the untouched source media using a tool like sha256sum or md5sum.

    dd if=/dev/sda conv=noerror,sync | sha256sum
  • Copy the source media to a destination media (e.g., an external hard drive) using a tool like dd or dcfldd.

    dd if=/dev/sda of=/tmp/acquisition.img conv=noerror,sync
  • Compute the hashes of the source and the clone

    dd if=/dev/sda conv=noerror,sync | sha256sum
    sha256sum /tmp/acquisition.img
  • Compare the three hashes (source before the copy, the clone, and source after the copy) to ensure that the clone is a perfect copy of the source and that the source was not modified during acquisition.

It is also good practice to compute at least the MD5 and SHA-1 hashes of the image, for redundancy and to make sure it can be compared against hashes produced by other tools.
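The three-hash comparison above can be sketched end to end as follows, using a regular file in place of /dev/sda (all paths here are illustrative):

```shell
# A regular file stands in for the source device.
SRC=/tmp/source.bin
printf 'raw device contents' > "$SRC"

# 1. Hash of the untouched source.
H1=$(dd if="$SRC" conv=noerror,sync 2>/dev/null | sha256sum | cut -d' ' -f1)

# 2. Copy the source to the image file, then hash the image.
dd if="$SRC" of=/tmp/acquisition.img conv=noerror,sync 2>/dev/null
H2=$(sha256sum /tmp/acquisition.img | cut -d' ' -f1)

# 3. Hash of the source again, after the copy.
H3=$(dd if="$SRC" conv=noerror,sync 2>/dev/null | sha256sum | cut -d' ' -f1)

# All three must match, otherwise the acquisition cannot be trusted.
[ "$H1" = "$H2" ] && [ "$H2" = "$H3" ] && echo "hashes match"
```

Note that conv=sync pads the last partial block with zeros, so the piped read and the image file produce identical byte streams and therefore identical hashes.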

Challenges of Acquisition

Time

The current standard for hard drive capacity is 1TB, with transfer speeds varying by drive and interface. Traditional rotational drives peak at approximately 100MB/s, with an average of around 80MB/s; USB transfers range from 20 to 100MB/s; a SATA2 controller enables transfer speeds of over 300MB/s, and SSDs can reach the maximum speed supported by the controller.

Considering these transfer speeds, it’s essential to be aware that copying or running a hash on a 1TB drive can take several hours to complete. To streamline the process, certain software tools like dcfldd offer automation features, such as simultaneously computing the source hash while copying the data. This parallel operation can help expedite the overall procedure.

dcfldd if=/dev/sda hash=md5,sha256 md5log=md5.txt sha256log=sha256.txt of=/tmp/acquisition.img hashconv=after bs=512 conv=noerror,sync
  • dcfldd is a program that extends the functionalities of the Linux dd command, used for copying and converting data. In this case, it is used to acquire an image of a storage device.
  • if=/dev/sda specifies the path of the source storage device from which the image will be acquired. In this specific case, /dev/sda represents the first hard drive in the system.
  • hash=md5,sha256 specifies the hash algorithms to be used for calculating the hash values of the acquired image. In this case, the MD5 and SHA256 algorithms are used.
  • md5log=md5.txt specifies the path of the log file where the MD5 hash value of the acquired image will be recorded.
  • sha256log=sha256.txt specifies the path of the log file where the SHA256 hash value of the acquired image will be recorded.
  • of=/tmp/acquisition.img specifies the path and name of the file where the acquired image will be saved. In this case, the image will be saved as /tmp/acquisition.img.
  • hashconv=after specifies that the hash values will be calculated after the image acquisition.
  • bs=512 specifies the data block size to read or write during the image acquisition. In this case, a block size of 512 bytes is used.
  • conv=noerror,sync specifies the conversion options to use during the image acquisition. noerror indicates to continue the acquisition even if there are read errors, while sync pads unreadable blocks with zeros so that data offsets in the image are preserved.

Size

Dealing with the storage capacity required for large-scale investigations can be a complex task. Using external media, such as USB drives, can significantly slow down operations. To address this challenge, many forensic shops utilize NAS (Network Attached Storage) or SAN (Storage Area Network) systems. These systems allow for efficient storage and retrieval of forensic images. In some cases, it may be necessary to move images across a network. One simple method to accomplish this is by setting up a host to listen on a specific port and receive the image using the nc command:

nc -lp 5678 > /tmp/acquisition.img

On the acquisition side, you can use the following command to send the image to the host:

dd if=/dev/sda conv=noerror,sync | nc <address> 5678

Replace <address> with the IP address or hostname of the host where the image will be sent. This method allows for efficient transfer of forensic images over a network, eliminating the need for physical media and reducing the time required for acquisition.
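Since reading a large source twice is expensive, the stream can also be hashed while it is being copied by inserting tee in the pipeline. A sketch with a regular file standing in for the device (paths are illustrative); in the network case, the final tee output would feed nc as shown above instead of a local file:

```shell
printf 'device contents' > /tmp/source.bin   # stand-in for /dev/sda

# tee writes the image to disk while the same bytes feed sha256sum,
# so the source is read only once.
dd if=/tmp/source.bin conv=noerror,sync 2>/dev/null \
  | tee /tmp/received.img \
  | sha256sum > /tmp/stream.sha256

cat /tmp/stream.sha256
```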

Encryption and its Impact on Acquisition

The widespread adoption of encryption in modern computing devices, including laptops and PCs, has posed significant challenges to the process of digital acquisition. This trend is particularly evident in the realm of mobile devices, where encryption has become a standard security measure. Even with access to the encryption key, performing a repeatable and reliable acquisition of encrypted data has become increasingly complex.

Encryption serves as a powerful safeguard for sensitive information, ensuring that it remains secure and inaccessible to unauthorized individuals. However, from a forensic standpoint, encryption introduces a layer of complexity when it comes to acquiring digital evidence. Traditional acquisition methods that rely on direct access to the storage media may no longer be sufficient in the face of encryption.

When attempting to acquire data from an encrypted device, forensic investigators must navigate through various encryption mechanisms and security protocols. These measures are designed to protect the confidentiality and integrity of the data, making it challenging to extract information without the proper authorization and decryption keys.

To overcome these challenges, forensic professionals must employ specialized techniques and tools that are capable of handling encrypted data. This may involve utilizing advanced software solutions that can bypass encryption barriers or collaborating with experts in cryptography to obtain the necessary decryption keys.

Furthermore, the acquisition process itself must be carefully executed to ensure the integrity and authenticity of the acquired data. Any misstep or alteration during the acquisition could compromise the evidentiary value of the data, rendering it inadmissible in legal proceedings.

Alternate Operating Procedures

There are some common variants to the basic procedure of acquisition: booting from a live distribution, acquiring a powered-on target, and performing live network analysis.

All these procedures are based on the same principles of acquisition, but they are adapted to different scenarios and constraints. The main goal is to acquire the data in a way that is as close as possible to the original state, ensuring the integrity and authenticity of the evidence. The procedures may vary depending on the specific requirements of the investigation and the nature of the digital evidence being acquired. It is essential to follow established best practices and guidelines to ensure the accuracy and reliability of the acquired data.

Alternate 1: Booting from Live Distribution

There are instances when it becomes necessary to work directly on a machine, especially in cases involving systems with unconventional hardware and controllers, physical scenarios, RAID devices, or specific investigation constraints. In such situations, a viable approach is to boot the system under assessment using a Linux distribution specifically designed for forensic analysis, like Tsurugi and BackBox.

By employing a live distribution, we gain the ability to access the system and perform the necessary tasks from within the operating environment. It is worth noting that regular live distributions may automatically mount certain partitions, such as swap partitions, thereby writing to the evidence; forensic distributions are configured to avoid this, since it is crucial that the integrity of the evidence is maintained throughout the process.

Once the system has been successfully booted using the live distribution, we can leverage the command-line tools and techniques that we have previously discussed to clone the drives or perform other relevant actions from within the system. This approach allows us to work directly on the machine, overcoming any hardware or controller limitations that may have hindered the acquisition process otherwise.

By utilizing a live distribution, forensic professionals can effectively address unique challenges presented by specific hardware configurations, physical scenarios, or specialized investigation requirements. This alternative procedure ensures that the integrity and authenticity of the acquired data are preserved while enabling comprehensive analysis and investigation.

Alternate 2: Target Powered On

In certain scenarios, it may not be feasible or advisable to turn off the system for the purpose of acquisition. In such cases, an alternate approach based on the “order of volatility” can be employed, which involves performing specific steps to gather the required data while the target system remains powered on.

The order-of-volatility method focuses on capturing information from the most volatile to the least volatile to ensure its preservation. This approach typically involves dumping the system’s memory, saving runtime information, and finally acquiring the disk contents.

When dealing with a powered-on target, it is important to consider the potential risks and challenges associated with acquiring data in this state. However, if it is determined that a shutdown is not possible or not recommended, the volatility order method can be a viable solution.

To effectively execute the acquisition process without shutting down the system, it is essential to follow certain guidelines and document all activities performed before sealing the evidence. This documentation ensures a clear record of the steps taken and helps maintain the integrity and admissibility of the acquired data.

During the acquisition of a powered-on target, various commands and tools can be used to gather specific types of data:

  • Network state: commands such as ifconfig -a, netstat -anp, route -n, and arp provide insight into network configurations, active connections, routing tables, and the ARP cache.
  • Processes and open files: commands like ps aux and lsof reveal information about running processes and open files, which can be valuable for forensic analysis.
  • Users: commands such as who, last, and lastlog provide details about currently logged-in users, previous login sessions, and user login history.
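A minimal collection script along these lines might look as follows. The output directory and the exact command set are illustrative, and some commands may be unavailable on a given system, hence the || true guards; in a real case the results would be hashed and sealed like any other evidence:

```shell
# Collect volatile data into a dedicated directory before touching the disk.
OUT=/tmp/volatile
mkdir -p "$OUT"

date         > "$OUT/date.txt"                         # collection timestamp
ps aux       > "$OUT/ps.txt"      2>/dev/null || true  # running processes
netstat -anp > "$OUT/netstat.txt" 2>/dev/null || true  # active connections
who          > "$OUT/who.txt"     2>/dev/null || true  # logged-in users
last         > "$OUT/last.txt"    2>/dev/null || true  # login history

ls "$OUT"
```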

In addition to capturing network, process, and user data, memory acquisition plays a crucial role in acquiring volatile information from a powered-on target. Specialized tools like Mantech mdd, win32dd, and Mandiant Memoryze can be utilized to capture the system’s memory, allowing for the analysis of volatile data such as running processes, open network connections, and other runtime information.

Alternate 3: Live Network Analysis

There are situations where it becomes necessary to observe an attacker in real-time, such as when setting up honeypots or when dealing with an intruder who may react if they feel they are being watched. In such cases, conducting live network analysis becomes crucial.

Live network analysis involves actively monitoring and analyzing network traffic and logs to gain insights into the activities of potential attackers. By observing the network in real-time, forensic professionals can gather valuable information about the attacker’s techniques, tactics, and potential vulnerabilities they may be exploiting.

During live network analysis, various tools and techniques can be employed to capture and analyze network traffic. These tools can help identify suspicious patterns, detect unauthorized access attempts, and uncover potential security breaches. Additionally, monitoring network logs can provide valuable evidence of unauthorized activities, such as attempts to gain unauthorized access or exfiltrate sensitive data.

One common use case for live network analysis is the deployment of honeypots.

Definition

Honeypots are decoy systems or networks that are intentionally designed to attract attackers.

Another scenario where live network analysis is valuable is when dealing with sophisticated attackers who may be actively monitoring their targets for signs of detection. By conducting live analysis, forensic professionals can gather real-time intelligence on the attacker’s activities, enabling them to respond effectively and mitigate potential damage.

It is important to note that conducting live network analysis requires careful planning and consideration of legal and ethical implications. Privacy laws and regulations must be adhered to, and proper authorization must be obtained before monitoring network traffic or accessing logs. Additionally, steps should be taken to ensure the integrity and confidentiality of the captured data to maintain its admissibility in legal proceedings.


Further Reading