Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and tech ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, so you can stay as up to date as possible on developments and facts. We often link to arguments from all sides of the debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; each link reflects the opinions of its author, not those of Ethical Intelligence.
The State of AI Ethics Report (June 2020)
"We cover a wide set of areas in this report spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain."
The Oxford Handbook of Ethics of AI: An Annotated Bibliography
The new Oxford Handbook of Ethics of AI will be released on June 30, and C4E has published an annotated bibliography in advance.
If AI is going to help us in a crisis, we need a new kind of ethics
"Ethics for urgency means making ethics a core part of AI rather than an afterthought, says Jess Whittlestone"
Magic of the machine: can artificial intelligence invent?
"The Copyright, Designs and Patents Act 1988 provides at section 9(3) that the author of a computer-generated work 'shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken'; there is a similar provision in the Registered Designs Act 1949. So, the authorship issue associated with copyright has at that level been resolved. But the greater and more fundamental problem remains: when should a work by a computer attract statutory protection as intellectual property?"
AI experts say research into algorithms that claim to predict criminality must end
"A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual’s criminality using algorithms trained on data like facial scans and criminal statistics."
Abolish the #TechToPrisonPipeline
More on the response to Springer and "A Deep Neural Network Model to Predict Criminality Using Image Processing"
SciFri Extra: A Pragmatic Wishlist For AI Ethics
"In this SciFri Extra, we continue a conversation between producer Christie Taylor, Deborah Raji from NYU’s AI Now Institute, and Princeton University’s Ruha Benjamin about how to pragmatically move forward to build artificial intelligence technology that takes racial justice into account—whether you’re an AI researcher, a tech company, or a policymaker."
AI Weekly: A deep learning pioneer’s teachable moment on AI bias
"The entire episode between two of the best-known AI researchers in the world started about a week ago with the release of PULSE, a computer vision model created by Duke University researchers that claims it can generate realistic, high-resolution images of people from a pixelated photo."
California city bans predictive policing in U.S. first
"As officials mull steps to tackle police brutality and racism, California’s Santa Cruz has become the first U.S. city to ban predictive policing, which digital rights experts said could spark similar moves across the country."
The Futility of Bias-Free Learning and Search
"Building on the view of machine learning as search, we demonstrate the necessity of bias in learning, quantifying the role of bias (measured relative to a collection of possible datasets, or more generally, information resources) in increasing the probability of success. For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above. Furthermore, we demonstrate that bias is a conserved quantity, such that no algorithm can be favorably biased towards many distinct targets simultaneously. Thus bias encodes trade-offs."
Detroit Police Chief: Facial Recognition Software Misidentifies 96% of the Time
"Detroit regulated facial recognition software. It's still used only on Black people."
Coronavirus: NHS hospitals turn to algorithms to help clear post-COVID backlog
"NHS hospitals are using algorithms to sort patients waiting in the vast backlog of appointments caused by coronavirus, Sky News has learned."
The Purgatory of Digital Punishment
"It doesn’t matter whether they’re accurate—criminal records are all over the internet, where anyone can find them. And everyone does."
Mathematicians Urge Ending Work With Police
"The letter writers take particular aim at "predictive policing," which involves using data and mathematics to predict where crime will happen."
Wrongfully Accused by an Algorithm
"In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit."
IReEn: Iterative Reverse-Engineering of Black-Box Functions via Neural Program Synthesis
"In this work, we investigate the problem of revealing the functionality of a black-box agent. Notably, we are interested in the interpretable and formal description of the behavior of such an agent. Ideally, this description would take the form of a program written in a high-level language. This task is also known as reverse engineering and plays a pivotal role in software engineering, computer security, but also most recently in interpretability. In contrast to prior work, we do not rely on privileged information on the black box, but rather investigate the problem under a weaker assumption of having only access to inputs and outputs of the program."