| Tag | Ind1 | Ind2 | Subfields / value |
|---|---|---|---|
| 000 | | | 03411nam a2200409 i 4500 |
| 001 | | | 00012172 |
| 003 | | | WSP |
| 007 | | | cr cnu\|\|\|unuuu |
| 008 | | | 210428s2021 si ob 000 0 eng d |
| 040 | | | _a WSPC _b eng _e rda _c WSPC |
| 010 | | | _z 2021009800 |
| 020 | | | _a9789811232732 _q(ebook) |
| 020 | | | _z9789811232725 _q(hardback) |
| 043 | | | _an-us--- |
| 050 | 0 | 4 | _aKF9223 _b.F67 2021 |
| 072 | | 7 | _aCOM _x004000 _2bisacsh |
| 082 | 0 | 4 | _a345.7300285/63 _223 |
| 100 | 1 | | _aForrest, Katherine Bolan, _d1964- _eauthor. _9178472 |
| 245 | 1 | 0 | _aWhen machines can be judge, jury, and executioner : _bjustice in the age of artificial intelligence / _cby Katherine B. Forrest. |
| 264 | | 1 | _aSingapore : _bWorld Scientific, _c2021. |
| 300 | | | _a1 online resource (xxiii, 134 pages) |
| 336 | | | _atext _btxt _2rdacontent |
| 337 | | | _acomputer _bc _2rdamedia |
| 338 | | | _aonline resource _bcr _2rdacarrier |
| 538 | | | _aMode of access: World Wide Web. |
| 538 | | | _aSystem requirements: Adobe Acrobat Reader. |
| 504 | | | _aIncludes bibliographical references. |
| 505 | 0 | | _aAcknowledgments -- About the author -- Introduction -- Utilitarianism versus justice as fairness -- AI and how it works -- Transparency in decisions about human liberty : the means to the end do matter -- Decision-making : the human as case study -- Decision-making : AI as case study -- As old as the hills : when humans assess risk -- AI risk assessment tools : achieving moderate accuracy -- COMPAS : case study of an AI risk assessment tool -- COMPAS is not alone : other AI risk assessment tools -- Accuracy over fairness -- Lethal autonomous weapons and fairness -- Conclusion -- Suggested additional reading. |
| 520 | | | _a"This book explores justice in the age of artificial intelligence. It argues that current AI tools used in connection with liberty decisions are based on utilitarian frameworks of justice and inconsistent with individual fairness reflected in the US Constitution and Declaration of Independence. It uses AI risk assessment tools and lethal autonomous weapons as examples of how AI influences liberty decisions. The algorithmic design of AI risk assessment tools can and do embed human biases. Designers and users of these AI tools have allowed some degree of compromise to exist between accuracy and individual fairness. Written by a former federal judge who lectures widely and frequently on AI and the justice system, this book is the first comprehensive presentation of the theoretical framework AI tools in the criminal justice system and lethal autonomous weapons utilize in decision-making. The book then provides the most comprehensive explanation as to why, tracing the evolution of the debate regarding racial and other biases embedded in such tools. No other book delves as comprehensively into the theory and practice of AI risk assessment tools"--Publisher's website. |
| 650 | | 0 | _aCriminal justice, Administration of _zUnited States _xData processing. _9178473 |
| 650 | | 0 | _aArtificial intelligence _xLaw and legislation _zUnited States. _9178474 |
| 650 | | 0 | _aJudicial process _zUnited States _xData processing. _9178475 |
| 650 | | 0 | _aDecision making _xMoral and ethical aspects _zUnited States. _9178476 |
| 655 | | 0 | _aElectronic books. _93294 |
| 856 | 4 | 0 | _uhttps://www.worldscientific.com/worldscibooks/10.1142/12172#t=toc _zAccess to full text is restricted to subscribers. |
| 942 | | | _cEBK |
| 999 | | | _c97791 _d97791 |
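A record laid out like this can also be read programmatically once it has been exported in binary MARC (ISO 2709). The minimal sketch below assumes the open-source pymarc library and a hypothetical export file named `record.mrc`; neither is part of the catalogue record itself, and any MARC-aware tool could be substituted.

```python
# Minimal sketch: reading a binary MARC export of a record like the one above.
# Assumes pymarc is installed and the record has been exported to a
# hypothetical file "record.mrc" (ISO 2709 / .mrc format).
from pymarc import MARCReader

with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        title = record["245"]                      # 245: title statement
        print(title["a"], title["b"])              # $a main title, $b subtitle
        print(record["020"]["a"])                  # 020 $a: ebook ISBN
        for subject in record.get_fields("650"):   # 650: topical subject headings
            print(subject["a"])
        print(record["856"]["u"])                  # 856 $u: full-text URL
```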