Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get straight to the heart of the discussion on AI and tech ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, allowing you to stay as up to date as possible on developments and facts. We will often link to arguments from all sides of the debate, even when the opinions may be controversial. Please note, however, that EI does not endorse any of the information published; each link reflects the opinions of its author, not those of Ethical Intelligence.
Using artificial intelligence to scale up human rights research: a case study on Darfur
"In this article we provide a technical exploration of the potential of artificial intelligence for large-scale analysis of satellite data to detect the destruction of human settlements, with a case study on Sudan’s Darfur region."
Artificial intelligence and crime: A primer for criminologists
"This article introduces the concept of Artificial Intelligence (AI) to a criminological audience. After a general review of the phenomenon (including brief explanations of important cognate fields such as ‘machine learning’, ‘deep learning’, and ‘reinforcement learning’), the paper then turns to the potential application of AI by criminals, including what we term here ‘crimes with AI’, ‘crimes against AI’, and ‘crimes by AI’. In these sections, our aim is to highlight AI’s potential as a criminogenic phenomenon, both in terms of scaling up existing crimes and facilitating new digital transgressions. In the third part of the article, we turn our attention to the main ways the AI paradigm is transforming policing, surveillance, and criminal justice practices via diffuse monitoring modalities based on prediction and prevention. Throughout the paper, we deploy an array of programmatic examples which, collectively, we hope will serve as a useful AI primer for criminologists interested in the ‘tech-crime nexus’."
First, They Came for the Old and Demented: Care and Relations in the Age of Artificial Intelligence and Social Robots
"Health care technology is all the rage, and artificial intelligence (AI) has long since made its inroads into the previously human-dominated domain of care. AI is used in diagnostics, but also in therapy and assistance, sometimes in the form of social robots with fur, eyes and programmed emotions. Patient welfare, working conditions for the caretakers and cost-efficiency are routinely said to be improved by employing new technologies. The old with dementia might be provided with a robot seal, or a humanoid companion robot, and if these companions increase the happiness of the patients, why should we not venture down this road? Come to think of it, when we have these machines, why not use them as tutors in our schools and caretakers for our children?"
The Loss Of Public Goods To Big Tech
"We cannot automate the tough decisions, the redistributions of power and the everyday behavior it will take to make just societies. We will not compute our way out of these crises to the better future we want."
Bias in AI: Taking the Broad View
"I’m going to riff a bit on the idea of where bias comes from in AI systems. Specifically, in today’s episode of the podcast featuring my discussion with AI Ethics researcher Deb Raji I note, “I don’t fully get why it’s so important to some people to distinguish between algorithms being biased and data sets being biased.”"
What ethical models for autonomous vehicles don't address, and how they could be better
"There's a fairly large flaw in the way that programmers are currently addressing ethical concerns related to artificial intelligence and autonomous vehicles (AVs). Namely, existing approaches don't account for the fact that people might try to use AVs to do something bad."
MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs
"MIT has taken offline its highly cited dataset that trained AI systems to potentially describe people using racist, misogynistic, and other problematic terms."
An unethical optimization principle
"If an artificial intelligence aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk."
‘Doomsday predictions’ about AI replacing radiologists are unrealistic, dangerous
"“Image recognition technologies are among the most familiar AI models in health care research,” Banja wrote. “But it is a long way from observing the success of an AI model in a research setting to implementing it in routine clinical practice. And it is a much longer way still to replacing human radiologic expertise.”"
Bias in AI: Much more than a Data problem
"While data bias is a very well-known cause for AI unfairness, it is definitely not the only one"
Teaching AI to be Evil with Unethical Data
"For AI Machine Learning (ML) and Deep Learning (DL) frameworks, the training data sets are a crucial element that defines how the system will operate. Feed it skewed or biased information and it will create a flawed inference engine."
How AI can empower communities and strengthen democracy
"This story is written with a clear understanding that techno-solutionism is no panacea and AI can be used to achieve both positive and negative aims. But this annual series highlights beneficial uses of AI because we all deserve to keep dreaming about ways the technology can empower people and help build stronger communities and a more just society."
Privacy is not the problem with the Apple-Google contact-tracing toolkit
"New tools give tech giants the power to shape communities and change behaviour, all without any data leaving our phones"