An evaluation of agreement and conflict among computer forensics experts

Gregory H. Carlton
California State Polytechnic University, College of Business Administration, Computer Information Systems Department
(ghcarlton@csupomona.edu)

Reginald Worthley
University of Hawaii, Shidler College of Business, Dept. of Information Technology Management
(worthley@hawaii.edu)

Abstract

The use of computer data as evidence within litigation is growing rapidly. Additionally, courts define computer data as a form of scientific evidence. The courts recognize that the subject matter of scientific evidence is outside the general knowledge of the public, and it is beneficial for someone with special skills in the subject to explain the scientific evidence to the court; therefore, expert witnesses are permitted to enter their opinions into evidence to explain the data. However, a recent study identified widespread conflict among professionals in the field of computer forensics. This conflict raises serious questions concerning the data presented as evidence, the conclusions drawn by judges and juries, and the impact on those affected by the outcomes of legal proceedings. This paper discusses the findings of an analysis performed on data collected from computer forensics examiners and attorneys with computer forensics experience and provides a call for additional research.

1. Introduction

The courts now recognize the significance of examining digital data from computer systems, personal digital assistants (PDAs), and cellular telephones in virtually all cases. As typewriters have become relics of the past, it is largely accepted that records of individuals’ correspondence, calculations, and documentation are maintained on computer systems. This wealth of information is available to be submitted as evidence in legal matters when acquired, analyzed, and reported using forensics methodology, and the volume of digital evidence is growing rapidly [1].

The usage of computer forensics methodology is required when submitting digital data as evidence, as the courts have ruled that digital data is a form of scientific information [2]. Digital data, like all scientific information, is considered by the courts to be of a complexity that is beyond the understanding of the general public; therefore, an expert with specialized education, experience, and training within this field is needed to explain this complex material to the judge and jury, who represent members of the general public.

Computer forensics methodology is based on the scientific premise that an established, measurable process is followed that is generally accepted within the field [3]. Individuals qualified by the courts to provide expert testimony in trials are uniquely permitted to provide their opinions as evidence when their opinions are derived from their analysis of data within their area of expertise [1]. This unique ability to enter an individual’s opinion as evidence is very powerful within the legal process, and it may represent the single factor that sways the opinion of a judge or jury [1]. Although the legal theory of qualified experts in the scientific field of computer forensics being permitted to offer their opinions as evidence in legal matters may be sound, many of the generally accepted computer forensics procedures were not established by scientific methods [4].

For example, although the European Network of Forensic Science Institutes (ENFSI) and the United States Department of Justice’s National Institute of Justice (NIJ) produce numerous publications concerning digital forensics and best practices, as of this date, their publications represent the opinions of the authors rather than being derived from empirical studies of best practices. The NIJ’s special report, titled Forensic Examination of Digital Evidence: A Guide for Law Enforcement states, “Opinions or points of view expressed in this document represent a consensus of the authors and do not represent the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the U.S. Department of Justice” [5].

An extensive literature review prior to a doctoral dissertation in 2006 revealed that many of the protocols, instructional materials, and training courses available for computer forensics procedures were largely based on anecdotal opinions or experiences of the authors and instructors [6]. To provide an initial, empirical study of forensic data acquisition tasks, the dissertation used Grounded Theory to identify and measure a set of 103 tasks performed by forensic computer examiners pertaining to the data acquisition of personal computer workstations [7]. Forensic examiners were then asked to identify to what extent they perform each of the 103 tasks. Additionally, performance measures for each of the 103 tasks were obtained from two expert review panels, one panel of technical experts and the other panel of legal experts. Lastly, the performance measures from the forensic examiners and expert review panels were compiled into a task performance guide [6].

Although the study described above fulfilled its objective by establishing an empirical set of forensic data acquisition tasks, it also provided data from which additional questions arose. Additional analysis of this data revealed numerous conflicts among the participants of the study concerning task performance. For example, for certain tasks, there appeared to be a high degree of agreement between the responses of forensic computer examiners, the review panel of technical experts and the review panel of legal experts. However, for another subset of tasks, the responses of the forensic computer examiners were in conflict with the opinions of the expert review panel members. Additionally, there were tasks in which the responses from the panel of technical experts were in conflict with the panel of legal experts, and lastly, tasks were identified where members within either expert review panel had conflicting responses.

Given the significance of expert testimony in the legal environment pertaining to computer forensics, conflicts among experts in this field present a dilemma. This dilemma stems from the fact that judges and juries rely on the opinions of experts to explain scientific material that is outside the area of expertise of the general public; however, it appears that frequently within this field, the experts have conflicting opinions. This paper discusses the analysis of agreement and conflict among the participants of this study.

2. Data collection

The goals of the initial study described above were to identify forensic data acquisition tasks and then measure the extent to which these tasks are performed. To achieve these goals, data were collected from forensic computer examiners and attorneys with expertise in computer forensics. The data collection process consisted of two phases of the initial study, whereby the first phase was concerned with identifying the tasks and the second phase was concerned with measuring the tasks identified during the first phase.

To identify the forensic data acquisition tasks of personal computer workstations, Grounded Theory was utilized in a series of four surveys, and the questions on these surveys evolved from general, open-ended questions on the first survey to more specific, closed-ended questions on the fourth survey. A point of theoretical saturation was reached during the fourth survey when 103 forensic data acquisition tasks emerged from the data. A thorough discussion of the survey instruments is presented in Carlton’s dissertation [6]. Refer to Table 1. Data acquisition tasks for a complete listing of the task descriptions.

During the second phase of data collection, a fifth survey was administered that consisted of specific, closed-ended questions designed to measure the extent to which forensics examiners perform each task.

Each of the five questionnaires surveyed members of the High Technology Crime Investigation Association (HTCIA), and procedures were established to ensure that no one responded more than once. Additionally, the first question on each of the five questionnaires asked whether the respondent performs forensic data acquisitions, and only the records for those that responded positively to the first question on each survey were evaluated.

Also, during the second phase of the data collection process, two expert review panels, a panel of technical experts and a panel of legal experts, were questioned to measure the importance of the performance of each task.

2.1. Examiner task performance

The fifth questionnaire consisted of closed-ended questions that asked HTCIA members to indicate a measure of their task performance by selecting one of four choices that range within a scale from never performing the task to always performing the task for each of the 103 tasks. Those four choices are: I always perform the task; I typically perform the task, but I may omit it in some cases; I typically omit the task, but I may perform it in some cases; I never perform the task.

The respondents were also asked to indicate the conditions that would cause them to add or omit each of the 103 tasks from a set of 8 conditions that emerged from the data collected in the previous four surveys. Additionally, respondents were asked a series of questions regarding characteristics, such as their education, experience, training, certifications, type of employment, age, gender, self-ratings, and their opinions concerning qualities that they consider to be “good measures of a forensics examiner’s qualifications.” The data concerning task conditions and examiner qualities are addressed in another paper, as this paper focuses on expert agreement and conflict.

2.2. Expert review panel ratings

Two expert review panels were subsequently surveyed regarding each of the 103 tasks identified during the first phase of the study. One expert review panel consisted of five HTCIA members recognized for their technical prowess as forensic computer examiners, and the second review panel consisted of five attorneys with extensive experience with cases involving computer forensics [3]. The expert review panel members were asked to consider the performance of each of the 103 tasks solely on the basis of their area of expertise, namely technical merit or legal merit, and each expert review panel member was asked to indicate his or her opinion for each of the 103 tasks by selecting from one of five choices that range within a scale from the task being absolutely prohibited to the task being absolutely essential. The five choices are: performance of the task is absolutely prohibited; performance of the task is undesired; performance of the task makes no contribution and causes no harm; performance of the task is desired; performance of the task is absolutely essential.

The data collected from the survey of expert review panel members resulted in three merit ratings: an overall expert panel merit rating, a technical expert panel merit rating, and a legal expert panel merit rating. The examiner performance measures and the expert panel merit ratings were compiled into a monograph yielding a task performance guide, thus providing a previously unavailable empirical study to which forensic computer examiners and attorneys can refer when preparing for expert testimony to support their decisions to perform or omit specific tasks concerning a given case [8].

3. Analysis and findings

The 5 technical and 5 legal experts rated each of the 103 tasks on a scale ranging from 0 (i.e., absolutely prohibited) to 4 (i.e., absolutely essential). There were many tasks where the experts agreed with one another within their panel and also between panels. There were, however, many tasks where the legal experts did not agree with the technical experts, and there were also tasks where there was conflict within the respective panels with respect to rating a task.
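The per-panel summary statistics reported in Table 1 below can be reproduced directly from the raw 0-4 ratings. The following is a minimal sketch, assuming the ratings are held in a 103 x 10 array (tasks by experts, technical panel in the first five columns, legal panel in the last five); the array shown is a random placeholder, since the raw survey responses are not reproduced in full in this paper.

```python
import numpy as np

# Placeholder for the 103 tasks x 10 experts rating matrix (values 0-4);
# columns 0-4 hold the technical panel, columns 5-9 the legal panel.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 5, size=(103, 10)).astype(float)

tech, legal = ratings[:, :5], ratings[:, 5:]

# Per-task panel means and sample standard deviations (ddof=1), as reported in Table 1.
tech_mean, legal_mean = tech.mean(axis=1), legal.mean(axis=1)
tech_sd, legal_sd = tech.std(axis=1, ddof=1), legal.std(axis=1, ddof=1)

for task in range(3):  # print the first few rows as a check
    print(f"Task {task + 1}: tech {tech_mean[task]:.1f} (SD {tech_sd[task]:.2f}), "
          f"legal {legal_mean[task]:.1f} (SD {legal_sd[task]:.2f})")
```

The sample (n - 1) form of the standard deviation matches the values printed in Table 1; for example, ratings of 2, 4, 2, 2, and 4 yield a mean of 2.8 and an SD of 1.1, as reported for task 13.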

Table 1. Data acquisition tasks.

Task | Task Description | Technical Mean | Legal Mean | Technical SD | Legal SD
1 | Purchase new target drives. | 2.6 | 2.4 | 0.55 | 0.89
2 | Wipe target disk drives. | 4 | 3.6 | 0 | 0.55
3 | Verify target disk drives are wiped. | 3.6 | 3.6 | 0.55 | 0.55
4 | Initialize & format target disk drives. | 3.4 | 3.8 | 0.55 | 0.45
5 | Prepare & verify toolkit – ensure equipment is fully functional. | 3.6 | 3.6 | 0.55 | 0.89
6 | Prepare & verify toolkit – ensure that all necessary HW connectors & adapters are fully stocked. | 3.6 | 3.4 | 0.55 | 0.89
7 | Prepare & verify toolkit – ensure that all consumable items are fully stocked (bags, tags, forms, & log books). | 2.6 | 3.4 | 0.89 | 0.89
8 | Add additional items to forensic toolkit based on pre-acq. intelligence from requestor. | 2.8 | 3.2 | 0.84 | 0.84
9 | Obtain latest versions, releases, or updates for forensic SW tools. | 3.2 | 3.2 | 0.84 | 0.45
10 | Test forensic SW tools. | 4 | 3.2 | 0 | 0.84
11 | Create a write-blocking forensic boot floppy disk &/or CD. | 3.2 | 3.6 | 0.84 | 0.89
12 | Refer to checklist to ensure that all equipment is available prior to beginning the data acq. | 3 | 3 | 1 | 0.71
13 | Receive written authorization to proceed with the case. | 2.8 | 3.6 | 1.1 | 0.89
14 | Assign an identification code to the case. | 2.6 | 3.2 | 0.55 | 0.84
15 | Obtain instructions from requestor concerning covert or overt data acq. | 3 | 3.6 | 1 | 0.89
16 | Document preparation tasks in log book prior to beginning the data acq. | 2.6 | 2.8 | 0.55 | 0.45
17 | Follow procedures identified in the acq. checklist. | 2.6 | 3.2 | 0.89 | 0.84
18 | View location of wkstn. prior to acq. | 2.8 | 3 | 0.45 | 0.71
19 | Document all items connected to the wkstn. | 3.2 | 3.6 | 0.84 | 0.55
20 | Determine whether the wkstn. is powered on. | 3.6 | 3.8 | 0.55 | 0.45
21 | If the wkstn. is powered on, then reboot it. | 0.2 | 0.8 | 0.45 | 0.84
22 | If the wkstn. is powered on & the workstation’s monitor is powered on & blank, move the mouse to end the screen saver. | 1 | 1.8 | 1.22 | 1.1
23 | If the wkstn. is powered on & the workstation’s monitor is powered on & blank, press the space bar to end the screen saver. | 1 | 1 | 1.22 | 0.71
24 | If the wkstn. is powered on, examine it prior to powering it down to determine whether encryption may be in use. | 2.4 | 1 | 1.34 | 0.82
25 | If the wkstn. is powered on, perform a RAM dump. | 2.8 | 1 | 0.45 | 0.82
26 | If the wkstn. is powered on, collect volatile data. | 2.8 | 1.5 | 0.45 | 1.29
27 | If the wkstn. is powered on, perform a live acq. | 2.2 | 1.5 | 0.84 | 0.58
28 | If the wkstn. is powered on, determine the type of OS in use prior to selecting the power off method. | 1.8 | 2 | 1.64 | 1
29 | If wkstn. is powered on, photograph the displayed image shown on the wkstn’ monitor. | 2.8 | 3 | 0.84 | 0.71
30 | If wkstn. is powered on, determine the programs running. | 2 | 2.4 | 1.41 | 0.89
31 | If the wkstn. is powered on, power off the unit by using the OS shutdown method. | 1.4 | 0.8 | 0.89 | 0.45
32 | If the wkstn. is powered on, power off the unit by pulling the electrical cord from the rear of the wkstn. | 2 | 3.6 | 1.41 | 0.55
33 | If the wkstn. is powered on, power off the unit by pressing & holding the power switch until the wkstn. is powered off. | 1 | 1.4 | 0.71 | 0.55
34 | If the wkstn. is powered off, leave it off until storage media is removed. | 3.6 | 2.8 | 0.89 | 0.84
35 | If the wkstn. is powered off, power it on. | 0 | 0.25 | 0 | 0.5
36 | Determine the current date & time from a reliable source. | 3.8 | 3.8 | 0.45 | 0.45
37 | Document the current date & time in log book. | 3.6 | 3.8 | 0.55 | 0.45
38 | Look for any potential devices detrimental to individual or evidence safety. | 3.6 | 3.4 | 0.89 | 0.89
39 | Document the wkstn’ manufacturer, model & serial number. | 3.8 | 4 | 0.45 | 0
40 | Photograph the wkstn., including information regarding manufacturer, model, & serial number. | 3.2 | 3.4 | 0.45 | 0.89
41 | Photograph the inside of the wkstn. | 2.8 | 3 | 0.45 | 1
42 | Photograph all sides of the wkstn. | 2.8 | 2.8 | 0.45 | 0.84
43 | Photograph the entire area surrounding the seized wkstn. | 2.8 | 3.4 | 0.45 | 0.55
44 | Sketch a diagram of the wkstn. with reference to its location & connections in log book. | 2.6 | 2.4 | 0.55 | 0.55
45 | Document identity of individuals present at the scene of data acq. | 2.6 | 3 | 0.89 | 1.22
46 | Document the wkstn’ components in the log book. | 3.2 | 3.4 | 0.45 | 0.55
47 | Document the manufacturer, model, & serial number of all storage media in the log book. | 2.8 | 3.8 | 0.84 | 0.45
48 | Document irregularities, modifications or damage to the wkstn. | 3.2 | 3.8 | 0.45 | 0.45
49 | Remove the hard disk drive(s) from the wkstn. | 3.6 | 2.8 | 0.55 | 0.84
50 | Photograph the HDD(s) taken from the wkstn. including manufacturer, model, & serial number(s). | 3 | 3.6 | 1 | 0.55
51 | Document the pin settings of HDD(s) in log book. | 3.6 | 3.6 | 0.55 | 0.55
52 | Photograph the pin settings of HDD(s). | 2.6 | 3.2 | 0.55 | 0.84
53 | Remove diskettes from the wkstn. | 3.2 | 3.8 | 1.3 | 0.45
54 | Remove CDs from the wkstn. | 3.2 | 3.8 | 1.3 | 0.45
55 | Remove thumb drives from the wkstn. | 3.2 | 3.8 | 1.3 | 0.45
56 | Disconnect all USB devices from the wkstn. | 3.2 | 3.2 | 1.3 | 1.3
57 | ID any network connections, & document findings. | 3.6 | 3.8 | 0.55 | 0.45
58 | ID any telephone modem connections, & document findings. | 3.6 | 3.8 | 0.55 | 0.45
59 | ID & document all peripherals attached to wkstn. | 3.6 | 3.8 | 0.55 | 0.45
60 | ID & document all peripherals available to the wkstn. through wired or wireless network connections. | 3.4 | 3.8 | 0.55 | 0.45
61 | Assign lab inventory numbers to each item seized & document in log book. | 2.6 | 3.8 | 0.55 | 0.45
62 | Document number of HDDs, size & disk geometry. | 3.8 | 3.8 | 0.45 | 0.45
63 | Using a write-protected method, preview contents of the suspect wkstn to determine whether an image of the suspect wkstn is necessary. | 2.4 | 2.4 | 1.14 | 0.89
64 | Filter data based on attorney-client privilege prior to imaging. | 1.4 | 1 | 1.52 | 1.22
65 | Seize external storage devices. | 3.8 | 3.2 | 0.45 | 1.1
66 | Seize documentation, manuals, & miscellaneous notes found in the proximity of the suspect wkstn. | 3.4 | 3.2 | 0.55 | 1.1
67 | Connect suspect HDD to a HW, write-blocking device, & obtain an image onto target media using a forensic wkstn. | 3.8 | 3.8 | 0.45 | 0.45
68 | Ensure that the suspect wkstn will boot from a SW, write-blocking forensic diskette or CD, replace the HDD in the wkstn., & obtain an image using a network crossover cable method to a target HDD attached to a forensic wkstn. | 1.8 | 2.8 | 0.84 | 1.3
69 | Install a known disk controller card in the suspect wkstn, connect the target HDD to the disk controller card, boot the suspect wkstn with SW write-protection forensic tools, & create an image to the target HDD using the suspect wkstn. | 1.4 | 2.8 | 1.14 | 0.84
70 | Use EnCase to obtain an image of suspect media. | 2 | 2.8 | 0.71 | 0.45
71 | Use AccessData’s FTK to obtain an image of suspect media. | 2 | 2.4 | 0.71 | 0.89
72 | Use Safeback to obtain an image of suspect media. | 2 | 2.2 | 0.71 | 0.84
73 | Use SPADA 3 to obtain an image of suspect media. | 2 | 2 | 0.71 | 0.82
74 | Use UNIX/Linux dd command to obtain an image of suspect media. | 2.6 | 2.2 | 0.89 | 0.84
75 | Generate a MD5 hash value of the forensic image. | 3.8 | 3.6 | 0.45 | 0.55
76 | Generate a SHA-1 hash value of the forensic image. | 3.4 | 3.4 | 0.89 | 0.89
77 | Allow the forensic SW used for imaging to automatically calculate a MD5 hash value & then verify the MD5 hash value. | 2.8 | 3.6 | 1.1 | 0.55
78 | Perform a visual comparison using a hex editor to ensure that byte swapping or sector rotation did not occur during imaging. | 2.4 | 2.8 | 0.89 | 0.84
79 | Perform a visual comparison of the directory structure of the image & the suspect disk to verify that the image is readable. | 2.4 | 3 | 1.52 | 0.71
80 | With storage media removed, power on suspect wkstn. & document the date & time settings from BIOS. | 3.4 | 3.6 | 0.55 | 0.55
81 | With storage media removed, power on suspect wkstn. & determine the boot sequence settings from BIOS. | 3.4 | 3 | 0.55 | 0.71
82 | Reinstall media in suspect wkstn. | 2.8 | 2.4 | 0.84 | 1.52
83 | Preserve suspect media in its original condition & seal it. | 3.4 | 3.4 | 0.55 | 0.89
84 | Return wkstn to original condition & test for functionality if on-site. | 2.8 | 3 | 1.3 | 1
85 | Return suspect wkstn. to the submitting agency. | 2.4 | 3 | 0.89 | 1.41
86 | Place suspect media in a secure storage area. | 3 | 3.4 | 1 | 0.89
87 | Place image sets in a secure storage area. | 3 | 3.8 | 1 | 0.45
88 | Tag suspect media with chain-of-custody labels. | 3.2 | 3.6 | 0.84 | 0.89
89 | Replace suspect media in suspect wkstn., but don’t attach data & power cables to suspect media. | 2.6 | 1.4 | 0.55 | 0.55
90 | Place label on the suspect wkstn. to prevent powering on unit. | 3 | 2.2 | 1 | 0.84
91 | Place suspect media in an anti-static bag & store inside a manila envelope in the lab. | 3 | 2.4 | 1 | 0.55
92 | Store suspect media in an offsite, confidential storage facility. | 2.6 | 2.2 | 1.34 | 0.45
93 | If instructed to do so, the equipment is returned as close as possible to the original condition after imaging is complete. | 3 | 3 | 1 | 0.71
94 | Create a restore image of the suspect media onto a new HDD to be returned to the owner. | 2.4 | 1.8 | 0.89 | 1.1
95 | Create a clone copy of suspect media for analysis. | 3 | 1.8 | 1 | 1.1
96 | Write handwritten reports to document all activity performed during the data acq. | 2.6 | 2.4 | 0.55 | 1.14
97 | Print computer generated reports to document all activity performed during the data acq. | 2.6 | 2.8 | 0.89 | 1.3
98 | Issue a receipt for the items seized. | 3 | 3.4 | 1 | 1.34
99 | Make sure all items are identifiable by serial number or applied number/tag. | 3.2 | 3.8 | 0.84 | 0.45
100 | Archive image to DVDs. | 2.8 | 2.6 | 0.45 | 0.55
101 | Make additional copies of images for attorneys. | 2.6 | 2.2 | 0.55 | 1.48
102 | Request a written data destruction form to be sent to suspect if drive contains objectionable material. | 2.2 | 2.5 | 0.45 | 1
103 | During a field acq., obtain signed waiver from owner indicating that forensic image is now the “best evidence.” | | 1.6 | | 1.14


3.1. Agreement among the experts

Table 1. Data acquisition tasks shows all 103 tasks with means and standard deviations for both the panel of technical experts and the panel of legal experts. This table is shown for completeness, but most of the discussion revolves around the tables that follow. Table 2. Correlations between the 10 experts shows correlations between each of the experts for the 103 tasks.

The average correlation is also shown for both the subset of all other experts and for the subset of experts in the panel to which they belong. The correlations range from a low of 0.178 to a high of 0.62. All of the correlations, except the two under 0.2, are significantly different from 0 at a level of significance of .05. Correlations tend to be a little higher among the legal experts than among the technical experts.

Table 2. Correlations between the 10 experts.

Part 1
Expert | Tech 1 | Tech 2 | Tech 3 | Tech 4 | Tech 5
Tech 1 | 1.000 | 0.178 | 0.425 | 0.453 | 0.390
Tech 2 | | 1.000 | 0.271 | 0.222 | 0.469
Tech 3 | | | 1.000 | 0.465 | 0.686
Tech 4 | | | | 1.000 | 0.461
Tech 5 | | | | | 1.000

Part 2
Expert | Legal 1 | Legal 2 | Legal 3 | Legal 4 | Legal 5
Tech 1 | 0.374 | 0.432 | 0.297 | 0.354 | 0.395
Tech 2 | 0.324 | 0.336 | 0.292 | 0.452 | 0.183
Tech 3 | 0.504 | 0.567 | 0.553 | 0.506 | 0.441
Tech 4 | 0.234 | 0.418 | 0.288 | 0.395 | 0.431
Tech 5 | 0.488 | 0.558 | 0.620 | 0.498 | 0.537
Legal 1 | 1.000 | 0.441 | 0.585 | 0.520 | 0.459
Legal 2 | | 1.000 | 0.421 | 0.541 | 0.412
Legal 3 | | | 1.000 | 0.539 | 0.419
Legal 4 | | | | 1.000 | 0.384
Legal 5 | | | | | 1.000

Part 3. Average correlation
Expert | All others | Within group
Tech 1 | 0.367 | 0.362
Tech 2 | 0.330 | 0.346
Tech 3 | 0.491 | 0.457
Tech 4 | 0.374 | 0.400
Tech 5 | 0.523 | 0.502
Legal 1 | 0.436 | 0.501
Legal 2 | 0.458 | 0.490
Legal 3 | 0.446 | 0.491
Legal 4 | 0.466 | 0.496
Legal 5 | 0.407 | 0.419
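Table 2 reports pairwise associations between each expert's 103-task rating vector. A minimal sketch of that computation follows, assuming Pearson correlations (the paper does not name the coefficient) and the usual t-approximation for testing each correlation against 0 with n = 103; the ratings array is again a random placeholder rather than the actual survey data.

```python
import numpy as np
from scipy import stats

# Placeholder 103 x 10 ratings matrix; columns 0-4 technical panel, 5-9 legal panel.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 5, size=(103, 10)).astype(float)
n_tasks = ratings.shape[0]

corr = np.corrcoef(ratings, rowvar=False)   # 10 x 10 matrix of pairwise correlations

def corr_pvalue(r, n):
    """Two-sided p-value for H0: rho = 0, via the t-approximation with n - 2 df."""
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# Average correlation with all other experts and within the expert's own panel (Table 2, Part 3).
panel = ["tech"] * 5 + ["legal"] * 5
for i in range(10):
    others = [corr[i, j] for j in range(10) if j != i]
    within = [corr[i, j] for j in range(10) if j != i and panel[j] == panel[i]]
    print(f"Expert {i + 1} ({panel[i]}): all others {np.mean(others):.3f}, "
          f"within group {np.mean(within):.3f}, "
          f"smallest pairwise p = {min(corr_pvalue(r, n_tasks) for r in others):.4f}")
```

With n = 103 the two-sided 5% critical value for r is roughly 0.19, which is consistent with the observation that only the two correlations below 0.2 fail to reach significance.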

Table 3. Top rated tasks shows all tasks with a mean of 3.5 or higher. Although there were no tasks that showed a consistent score of 4 for all members of the combined panel, three tasks had a consistent score of 4 within a given panel. Task 2 was given a score of 4 for each member of the technical panel and a mean score of 3.6 from the members of the legal panel.

Task 39 was given a score of 4 by each member of the panel of legal experts and a mean score of 3.8 from members of the panel of technical experts. The only other consistent score of 4 from all members of a panel was for task 10, where the technical panel was unanimous but the mean score from the panel of legal experts was considerably lower at 3.2. Table 3. Top rated tasks shows substantial agreement between the two panels, with a mean difference of 0.6 or less for all tasks except task 10, which has a difference of 0.8.

Table 3. Top rated tasks.

Task | Overall Mean | Technical Mean | Legal Mean
39 | 3.9 | 3.8 | 4
2 | 3.8 | 4 | 3.6
36 | 3.8 | 3.8 | 3.8
62 | 3.8 | 3.8 | 3.8
67 | 3.8 | 3.8 | 3.8
20 | 3.7 | 3.6 | 3.8
37 | 3.7 | 3.6 | 3.8
57 | 3.7 | 3.6 | 3.8
58 | 3.7 | 3.6 | 3.8
59 | 3.7 | 3.6 | 3.8
75 | 3.7 | 3.8 | 3.6
3 | 3.6 | 3.6 | 3.6
4 | 3.6 | 3.4 | 3.8
5 | 3.6 | 3.6 | 3.6
10 | 3.6 | 4 | 3.2
51 | 3.6 | 3.6 | 3.6
60 | 3.6 | 3.4 | 3.8
6 | 3.5 | 3.6 | 3.4
38 | 3.5 | 3.6 | 3.4
48 | 3.5 | 3.2 | 3.8
53 | 3.5 | 3.2 | 3.8
54 | 3.5 | 3.2 | 3.8
55 | 3.5 | 3.2 | 3.8
65 | 3.5 | 3.8 | 3.2
80 | 3.5 | 3.4 | 3.6
99 | 3.5 | 3.2 | 3.8
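The overall mean in Table 3 is the mean rating across both five-member panels, which, because the panels are the same size, equals the average of the two panel means. A minimal sketch of the selection criterion, using a few panel means from Table 1 for illustration:

```python
# (technical mean, legal mean) pairs from Table 1; task 21 is included to show
# that tasks below the 3.5 cutoff are excluded.
panel_means = {39: (3.8, 4.0), 2: (4.0, 3.6), 10: (4.0, 3.2), 21: (0.2, 0.8)}

top_rated = {task: round((tech + legal) / 2, 1)
             for task, (tech, legal) in panel_means.items()
             if (tech + legal) / 2 >= 3.5}
print(top_rated)   # {39: 3.9, 2: 3.8, 10: 3.6}
```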

3.2. Conflict among the experts


Tables 1 and 3 reveal much similarity between the two panels’ ratings, but Table 4. Largest technical and legal conflicts features tasks that the two panels rated very differently, showing conflict between the panels. Tasks 25, 24, 26, 95, and 89 are all rated much higher, on average, by the technical panel members than by the legal panel members, with all showing a mean difference of 1.0 or more. Tasks 32, 69, 61, 47, and 68 are all rated much higher, on average, by the legal panel members than by the technical panel members, also with all showing a mean difference of 1.0 or more. Table 4. Largest technical and legal conflicts also reports the p-value from a t-test for differences in means. Most of the p-values do not show a significant difference at a usual significance level of .05 because the sample sizes of 5 are very small and also because of conflict within each panel that results in large standard deviations.

Table 4. Largest technical and legal conflicts.

Task | Technical Mean | Legal Mean | p-value
25 | 2.80 | 1.00 | 0.004
32 | 2.00 | 3.60 | 0.046
69 | 1.40 | 2.80 | 0.058
24 | 2.40 | 1.00 | 0.111
26 | 2.80 | 1.50 | 0.071
61 | 2.60 | 3.80 | 0.005
95 | 3.00 | 1.80 | 0.108
89 | 2.60 | 1.40 | 0.009
47 | 2.80 | 3.80 | 0.046
68 | 1.80 | 2.80 | 0.187
10 | 4.00 | 3.20 | 0.065
87 | 3.00 | 3.80 | 0.141
13 | 2.80 | 3.60 | 0.242
34 | 3.60 | 2.80 | 0.182
49 | 3.60 | 2.80 | 0.111
77 | 2.80 | 3.60 | 0.182
7 | 2.60 | 3.40 | 0.195
90 | 3.00 | 2.20 | 0.207
70 | 2.00 | 2.80 | 0.065
22 | 1.00 | 1.80 | 0.308
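The p-values in Table 4 compare the five technical ratings against the five legal ratings for each task. The paper does not state which t-test variant was used; the sketch below assumes a pooled-variance two-sample t-test, which reproduces the reported p-value for task 32. The legal ratings shown are not published individually and are inferred as values consistent with the reported legal mean of 3.6 and SD of 0.55.

```python
import numpy as np
from scipy import stats

# Task 32: power off the unit by pulling the electrical cord from the rear of the wkstn.
tech = np.array([4, 1, 3, 1, 1], dtype=float)   # individual ratings published in Table 5
legal = np.array([4, 4, 4, 3, 3], dtype=float)  # assumed values consistent with mean 3.6, SD 0.55

t_stat, p_value = stats.ttest_ind(tech, legal, equal_var=True)  # pooled-variance t-test, df = 8
print(f"technical mean {tech.mean():.2f}, legal mean {legal.mean():.2f}, p = {p_value:.3f}")
# Prints: technical mean 2.00, legal mean 3.60, p = 0.046 (cf. Table 4)
```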

Tables 5 and 6 reveal the tasks that show the most conflict within each panel. Table 5. Technical panel conflict examines the dynamics within the technical panel of experts and shows 18 different tasks that have a standard deviation of more than 1. It is ordered by the magnitude of standard deviation, and it shows the actual ratings from each expert as well. There is an extremely large variety of ratings for these tasks. For example, task 28 shows one expert giving a rating of 0 (i.e., absolutely prohibited) and another giving a rating of 4 (i.e., absolutely essential). Tasks 64 and 79 are similar in that they exhibit the same range of responses. The remaining tasks also show a wide range of ratings among the panel members, varying from either 1 to 4 or from 0 to 3, except for task 13. Task 13 shows three members of the panel with a rating of 2 and the remaining two members agreeing with a rating of 4.

Table 5. Technical panel conflict.

Task | Technical SD | T1 | T2 | T3 | T4 | T5
28 | 1.64 | 1 | 3 | 1 | 4 | 0
64 | 1.52 | 4 | 1 | 1 | 1 | 0
79 | 1.52 | 4 | 2 | 3 | 3 | 0
30 | 1.41 | 3 | 3 | 1 | 3 | 0
32 | 1.41 | 4 | 1 | 3 | 1 | 1
92 | 1.34 | 2 | 4 | 1 | 2 | 4
24 | 1.34 | 3 | 3 | 3 | 3 | 0
56 | 1.30 | 4 | 4 | 3 | 1 | 4
84 | 1.30 | 4 | 4 | 1 | 2 | 3
53 | 1.30 | 4 | 4 | 3 | 1 | 4
54 | 1.30 | 4 | 4 | 3 | 1 | 4
55 | 1.30 | 4 | 4 | 3 | 1 | 4
22 | 1.22 | 1 | 3 | 0 | 1 | 0
23 | 1.22 | 1 | 3 | 0 | 1 | 0
63 | 1.14 | 4 | 2 | 1 | 3 | 2
69 | 1.14 | 3 | 2 | 1 | 1 | 0
13 | 1.10 | 2 | 4 | 2 | 2 | 4
77 | 1.10 | 4 | 1 | 3 | 3 | 3

Table 6. Legal panel conflict is similar to Table 5 in that it lists a wide range of tasks showing conflict among members of the panel. It is interesting to note that the two tables overlap on only three tasks: 22, 56, and 64. The largest range occurred in tasks 82 and 101, where the experts’ ratings vary from 0 to 4, the largest possible discrepancy. Most of the rest of the tasks show variability in ratings from either 1 to 4 or from 0 to 3. The only exceptions are tasks 22, 65, and 66.

Table 6. Legal panel conflict.

Task | Legal SD | L1 | L2 | L3 | L4 | L5
82 | 1.52 | 0 | 3 | 3 | 4 | 2
101 | 1.48 | 0 | 4 | 3 | 2 | 2
85 | 1.41 | 4 | 1 | 4 | 4 | 2
98 | 1.34 | 4 | 1 | 4 | 4 | 4
56 | 1.30 | 4 | 4 | 3 | 4 | 1
97 | 1.30 | 3 | 4 | 2 | 4 | 1
68 | 1.30 | 4 | 1 | 3 | 4 | 2
26 | 1.29 | 0 | 3 | 1 | 2 |
64 | 1.22 | 3 | 0 | 1 | 0 | 1
45 | 1.22 | 4 | 4 | 3 | 3 | 1
103 | 1.14 | 2 | 0 | 3 | 1 | 2
96 | 1.14 | 3 | 2 | 2 | 4 | 1
22 | 1.10 | 3 | 1 | 1 | 3 | 1
95 | 1.10 | 0 | 2 | 2 | 2 | 3
94 | 1.10 | 0 | 3 | 2 | 2 | 2
66 | 1.10 | 4 | 4 | 2 | 4 | 2
65 | 1.10 | 4 | 4 | 2 | 4 | 2
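The screening criterion behind Tables 5 and 6 is a within-panel sample standard deviation greater than 1, with tasks listed in descending order of disagreement. A brief sketch of that screening, using two of the technical-panel rating sets published in Table 5:

```python
import numpy as np

# Individual technical-panel ratings for two tasks, as published in Table 5.
tech_ratings = {
    28: [1, 3, 1, 4, 0],   # determine OS type before selecting the power-off method
    13: [2, 4, 2, 2, 4],   # receive written authorization to proceed with the case
}

# Flag within-panel conflict when the sample SD exceeds 1, ordered by descending SD.
for task, r in sorted(tech_ratings.items(), key=lambda kv: -np.std(kv[1], ddof=1)):
    sd = np.std(r, ddof=1)
    if sd > 1:
        print(f"Task {task}: SD = {sd:.2f}")
# Prints: Task 28: SD = 1.64, then Task 13: SD = 1.10
```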

3.3. Topics of agreement or conflict

A closer inspection of the task descriptions where the experts were largely in agreement or conflict helps to identify topics where additional clarification is beneficial. Although not problematic, the tasks with high agreement scores represent tasks whose technical and legal aspects are likely to be better understood by the experts. The areas of concern focus on those tasks where conflict scores were highest.

Two topics are observed as being particularly problematic regarding tasks with high levels of conflict. The first area represents those tasks pertaining to a suspect computer workstation that is running at the time the forensic examiner encounters it. The second area represents those tasks pertaining to disconnecting or removing secondary storage devices other than hard disks. Both of these areas are discussed below.

Overall, fifteen of the 103 tasks represent conditions dependent upon the computer workstation being either on or off. Tasks 21 through 33 begin with the condition, “If the computer workstation is powered on,” and tasks 34 and 35 begin with the condition, “if the computer workstation is powered off.” First, it is interesting to note that none of these tasks are listed in Table 3. Top rated tasks, as the experts did not reach high levels of agreement on any of these fifteen tasks. More problematic is the high level of conflict that occurred among the tasks within this topic. Six of these tasks had the highest levels of conflict within the panel of technical experts, two tasks had the highest level of conflict within the panel of legal experts, and four of the tasks had the highest level of conflict between the two panels of experts.

For example, task 22, which states, “if the computer workstation is powered on and the workstation’s monitor is powered on and blank, move the mouse to terminate the screen saver,” obtained high levels of conflict within both panels of experts. One member of the panel of technical experts and two members of the panel of legal experts indicated that this task was desired, two members of the panel of technical experts and three members of the panel of legal experts indicated that this task was undesired, and two members of the panel of technical experts indicated that this task is absolutely prohibited.

Additionally, although tasks 24 through 26 concern similar concepts pertaining to gathering information from a computer workstation prior to powering it off, each of these three tasks achieved high levels of conflict between the two expert panels, while task 24 obtained a high level of conflict within the panel of technical experts and task 26 obtained a high level of conflict within the panel of legal experts.

Clearly, the experts were not in agreement concerning the performance of tasks when confronting a computer workstation that is powered on. This represents an area where additional information would be helpful to provide a better understanding of best practices among computer forensics practitioners.

High conflict scores were also observed in the four tasks concerning the topic of disconnecting or removing secondary storage devices other than hard disks; however, it is particularly confounding that three of these tasks are also among those tasks with the highest levels of agreement. Tasks 53, 54, and 55 are listed in Table 3. Top rated tasks, as their mean scores are all 3.5. Also notice that each of these tasks earned consistent scores from the individual experts. In other words, although there were differences among the scores assigned by the panel members for each of these tasks, each expert was individually consistent by assigning the same score to all three tasks. For each of these three tasks, technical panel members 1, 2, and 5 assigned a score of absolutely essential, technical panel member 3 assigned a score of desired, and technical panel member 4 assigned a score of undesired. These scores resulted in a high level of conflict within the panel of technical experts. However, when these scores are considered alongside those of the panel of legal experts, in which four members indicated that the tasks were absolutely essential and one member indicated that the tasks were desired, the two panels were largely in agreement.

While the conditions found in tasks 53, 54, and 55 might indicate an outlier with technical panel member 4, this view loses some merit when these three tasks are considered along with the scores of similar task 56. Task 56, disconnect all USB devices from the system unit, does not achieve a high level of agreement between the expert panels, and there is a high level of conflict within each panel. Interestingly, both panels had the same number of members issuing the same scores for task 56. Both panels had three members issue a score of absolutely essential, one member indicated desired, and one member assigned the score of undesired to the task.

Again, there appears to be disagreement among the experts concerning the treatment of secondary storage devices other than hard disks. Additional clarification concerning the best practices within this topic seems necessary, as mishandling of secondary storage media is likely to result in lost or inadmissible data.

In addition to the two topics discussed above, it is also interesting to note that of the ten tasks identified as having the highest level of conflict between the panels, scores indicating high conflict within one panel occurred in six of them. Additionally, three tasks were identified as having high conflict both within the panel of technical experts and within the panel of legal experts. Of those three tasks, tasks 22 and 56 were discussed above; however, task 64, regarding filtering data based on attorney-client privilege prior to imaging, does not fit into the two topics discussed above. For task 64, one member of the technical panel indicated that it is absolutely essential, one member of the legal panel indicated that the task is desired, three members of the technical panel and two members of the legal panel indicated that it is undesired, and one member of the technical panel and two members of the legal panel indicated that it is absolutely prohibited. This high level of conflict within both panels illustrates confusion in an area concerning e-discovery matters, an area that is expected to experience high growth within the next several years. Computer forensics examiners will be well served by additional clarification concerning best practices in e-discovery matters as well.

4. Conclusions

Our analysis of the data resulted in several interesting findings involving agreement and conflict among experts of computer forensics. Although the observations are interesting, our findings are bound by several limitations, and we see the need for more work to be done on this topic. We will summarize our observations, discuss limitations of our study, and present a call for additional research below.

4.1. Summary of observations

It is interesting to note the differences in the levels of agreement observed from the various experts that participated in this study. While it is relatively easy to understand that some of the differences are due to the different perspectives from which the legal experts and the technical experts were asked to evaluate the tasks, there were many differences within each group, as well as differences between the groups. Also, from the complete set of 103 tasks evaluated, only 26 tasks achieved a level of high agreement among all of the experts, representing only 25% of the tasks.

Tasks were observed where general agreement occurred within each panel of experts yet the ratings conflicted between the panels; however, this condition accounted for just under 10% of the tasks evaluated. For example, only 10 of the 103 tasks represent conditions where agreement occurred among the members of each panel while the two panels reached conflicting results. Although the differences in ratings between panels can be explained by the specific perspective from which each panel evaluated the tasks, differences between members within a panel are more difficult to rationalize, and they occurred more frequently than the instances where agreement occurred within panels yet conflict occurred between panels. In 16.5% of the tasks evaluated, members of the panel of legal experts reached conflicting ratings whereby at least one member felt very strongly that the task should be performed while at least one other member of the panel strongly felt that the task should not be performed. Even greater was the level of disagreement among the members of the panel of technical experts, who reached conflicting ratings in 17.5% of the tasks.

The level of conflict identified in this study cannot be attributed to one panel member providing outlying responses, as only one task among the subset of tasks that highly aligned (i.e., those shown in Table 3. Top rated tasks) was included in the subset of tasks with largest conflicts (i.e., Table 4. Largest technical and legal conflicts). This task, task 10, concerned testing forensic software tools.

Overall, the large extent of conflict among forensic computer experts raises concern regarding the predictability of outcomes when their testimony is used in legal matters. Forensic science is based upon using a measurable, scientific process to reach an unbiased conclusion, yet as this study illustrates, different forensic computer experts frequently do not reach the same conclusion concerning the importance of forensic task performance.

4.2. Limitations

Although we attempted to be thorough in our analysis, it is important to note that numerous limitations exist, especially concerning the data collected. This study limited its survey population to the HTCIA; therefore, bias from the study population may impact the data collected [8]. However, it is thought that opinions of experts within an organization, such as the HTCIA, are more likely to align than would opinions from a more diverse group of experts; thus, measures of conflict are thought to be conservative in this report.

Also, concerning respondent bias, this study generated its output from a limited number of responses. Non-respondents expressed reasons for not participating that included distrust, being too busy, vacation, and difficulty authenticating themselves on the survey’s Website. Invalid e-mail addresses and spam blocking filters also contributed to the reduction of responses [8].

The set of 103 tasks presented within this report is not claimed to represent a comprehensive set of the tasks forensic examiners perform pertaining to the forensic data acquisition of personal computer workstations. This set of tasks is limited to those that were identified by respondents of this study. No conditional logic regarding the performance of tasks is suggested, nor is any sequence of task performance [8].

4.3. Call for additional research

Given the importance of expert testimony in legal proceedings and the level of conflict among forensic computer experts revealed within this study, more study is needed to develop a better understanding of the causes of conflict and solutions to reduce conflict. For example, future studies may identify beneficial solutions from licensing organizations, industry standards, mandatory training, or legislation regarding the credentials of forensic computer examiners. Clearly, the inconsistency among forensic computer examiners’ opinions identified within this study illustrates a weakness within our legal system that has the potential to alter trial outcomes, thus allowing the guilty to be acquitted and the innocent to be wrongly convicted.

5. References

[1] Volonino, L., Anzaldua, R., and Godwin, J., Computer Forensics Principles and Practices, Prentice Hall, Upper Saddle River, New Jersey, 2007.

[2] Nelson, B., Phillips, A., Enfinger, F., and Stewart, C., Guide to Computer Forensics and Investigations, 3rd Ed., Thomson, Boston, 2008.

[3] Kerr, O.S., “Digital Evidence and the New Criminal Procedure”, Columbia Law Review, 105(1), 2005, pp. 279-318.

[4] Knapp, K.L., Meeting the Daubert Challenge: A Model to Test the Relevance and Reliability of Expert Testimony, ProQuest, Ann Arbor, Michigan, UMI 3098259, 2003.

[5] National Institute of Justice, Forensic Examination of Digital Evidence: A Guide for Law Enforcement, (NCJ 199408), U.S. Government Printing Office, Washington, DC, 2004.

[6] Carlton, G.H., A Protocol for the Forensic Data Acquisition of Personal Computer Workstations, ProQuest, Ann Arbor, Michigan, UMI 3251043, 2007.

[7] Glaser, B.G., and Strauss, A.L., The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine Publishing Co., New York, 1967.

[8] Carlton, G.H., Forensic Data Acquisition Task Performance Guide – The Identification and Measurement of a Protocol for the Forensic Data Acquisition of Personal Computer Workstations, http://www.htcia.org, 2006.
