Large-scale digital forensic investigation for Windows registry on Apache Spark
| Main Authors: | |
| --- | --- |
| Format: | Online Article, Text |
| Language: | English |
| Published: | Public Library of Science, 2022 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9728846/ https://www.ncbi.nlm.nih.gov/pubmed/36477435 http://dx.doi.org/10.1371/journal.pone.0267411 |
| Summary: | In this study, we investigate large-scale digital forensics of the Windows registry on Apache Spark. Because the Windows registry depends on the system on which it operates, existing forensic methods have targeted the registry of a single system. However, analyzing large-scale registry data collected from several Windows systems is a critical issue, because it allows us to detect suspiciously changed data by comparing the registries of multiple systems. To this end, we devise distributed algorithms to analyze large-scale registry data collected from multiple Windows systems on the Apache Spark framework. First, we define three main forensic scenarios and classify the existing registry forensic studies into them. Second, we propose an algorithm to load the Windows registry into the Hadoop Distributed File System (HDFS) for subsequent forensics. Third, we propose a distributed algorithm for each defined forensic scenario using Apache Spark operations. Through extensive experiments using eight nodes in an actual distributed environment, we demonstrate that the proposed method can perform forensics efficiently on large-scale registry data. Specifically, we perform forensics on 1.52 GB of Windows registry data collected from four computers and show that the proposed algorithms can reduce the processing time by a factor of up to approximately 3.31 as the number of CPUs increases from 1 to 8 and the number of worker nodes from 2 to 8. Because distributed algorithms on Apache Spark incur inherent network and MapReduce overheads, this improvement in processing performance verifies the efficiency and scalability of the proposed algorithms. |
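The abstract describes the core idea, comparing registry data collected from multiple Windows systems to spot suspicious changes, only at a high level. The following minimal Scala sketch (not the authors' code) illustrates how such a cross-system comparison could be expressed with standard Spark SQL operations. The record schema (`hostId`, `keyPath`, `valueName`, `valueData`), the Parquet layout, and the HDFS path are all assumptions for illustration.

```scala
// Minimal sketch of a cross-system registry comparison on Spark.
// Assumes registry hives were parsed into one row per registry value
// per machine and stored on HDFS as Parquet (hypothetical layout).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object RegistryCrossSystemDiff {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RegistryCrossSystemDiff")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical input path and schema.
    val registry = spark.read.parquet("hdfs:///forensics/registry")
      .select($"hostId", $"keyPath", $"valueName", $"valueData")

    // Group identical (keyPath, valueName) pairs across all machines and
    // keep those whose data is not uniform -- candidates for tampering.
    val suspicious = registry
      .groupBy($"keyPath", $"valueName")
      .agg(
        countDistinct($"valueData").as("distinctValues"),
        collect_set(struct($"hostId", $"valueData")).as("perHost"))
      .filter($"distinctValues" > 1)

    suspicious.show(truncate = false)
    spark.stop()
  }
}
```

Note that `groupBy` triggers a shuffle across worker nodes, which is exactly the kind of inherent network and MapReduce overhead the abstract says the proposed algorithms must amortize to achieve their reported speedup.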