Replication and automation of expert judgments: Information engineering in legal e-discovery

Title: Replication and automation of expert judgments: Information engineering in legal e-discovery
Publication Type: Conference Paper
Year of Publication: 2009
Authors: Hedin, B.; Oard, D.
Conference Name: IEEE International Conference on Systems, Man and Cybernetics, 2009 (SMC 2009)
Date Published: 2009/10/11 - 2009/10/14
Publisher: IEEE
ISBN Number: 978-1-4244-2793-2
Keywords: authorisation, authority control, Automation, civil litigation, CYBERNETICS, Delay, digital evidence retrieval, discovery request, Educational institutions, expert judgment automation, Human computer interaction, Human-machine cooperation and systems, human-system task modeling, information engineering, Information retrieval, interactive task, Law, law administration, legal e-discovery, Legal factors, PROBES, Production, Protocols, search effort, Search methods, text analysis, text retrieval conference legal track, United States, USA Councils, User modeling
Abstract

The retrieval of digital evidence responsive to discovery requests in civil litigation, known in the United States as "e-discovery," presents several important and understudied conditions and challenges. Among the most important of these are (i) that the definition of responsiveness that governs the search effort can be learned and made explicit through effective interaction with the responding party, (ii) that the governing definition of responsiveness is generally complex, deriving both from considerations of subject-matter relevance and from considerations of litigation strategy, and (iii) that the result of the search effort is a set (rather than a ranked list) of documents, sometimes quite a large set, that is turned over to the requesting party and that the responding party certifies to be an accurate and complete response to the request. This paper describes the design of an "interactive task" for the Text Retrieval Conference's Legal Track whose goal was to evaluate the effectiveness of e-discovery applications at the "responsive review" task. Notable features of the 2008 interactive task were high-fidelity human-system task modeling, authority control for the definition of "responsiveness," and relatively deep sampling for the estimation of type 1 and type 2 errors (expressed as "precision" and "recall"). The paper presents a critical assessment of the strengths and weaknesses of the evaluation design from the perspectives of reliability, reusability, and cost-benefit tradeoffs.
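For context on the sampling-based evaluation the abstract refers to: because a production set can contain hundreds of thousands of documents, precision and recall are estimated from assessor judgments over samples rather than computed exhaustively. The sketch below is a minimal illustration of that idea under the simplifying assumption of simple random sampling from the produced set and from the rest of the collection; the actual interactive-task protocol described in the paper used a more elaborate sampling and adjudication design, and the function name and all numbers here are hypothetical.

```python
def estimate_precision_recall(sample_produced, sample_unproduced,
                              n_produced, n_unproduced):
    """Estimate precision and recall from simple random samples.

    sample_produced / sample_unproduced: lists of booleans, True where
    an assessor judged the sampled document responsive.
    n_produced / n_unproduced: sizes of the produced set and of the
    remainder of the collection.
    """
    # Responsive rate observed in the sample of produced documents.
    p_hat = sum(sample_produced) / len(sample_produced)
    # Responsive rate observed in the sample of unproduced documents.
    q_hat = sum(sample_unproduced) / len(sample_unproduced)

    # Scale sample rates up to estimated counts of responsive documents.
    est_found = p_hat * n_produced      # responsive and produced
    est_missed = q_hat * n_unproduced   # responsive but not produced

    precision = p_hat
    total_responsive = est_found + est_missed
    recall = est_found / total_responsive if total_responsive else 0.0
    return precision, recall


# Hypothetical numbers: 80% of a 300-document sample of the production
# is judged responsive; 1% of a 500-document sample of the remainder is.
precision, recall = estimate_precision_recall(
    [True] * 240 + [False] * 60,
    [True] * 5 + [False] * 495,
    n_produced=50_000,
    n_unproduced=950_000,
)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# precision = 0.80, recall = 0.81
```

Note that the recall estimate depends heavily on the unproduced-set sample: responsive documents are typically rare there, which is why the paper's design emphasizes relatively deep sampling.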

DOI: 10.1109/ICSMC.2009.5346118