Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and tech ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, allowing you to stay as up-to-date as possible on developments and facts. We will often link to arguments from all sides of the debate, even if the opinions may be controversial. We would like to note, however, that EI does not endorse any of the information published; all links reflect the opinions of their authors, not those of Ethical Intelligence.
First Ever Decision of a French Court Applying GDPR to Facial Recognition
A French court canceled a decision by the South-East Region of France to run a series of facial recognition tests at the entrances of two high schools, ruling that the scheme would be illegal. This is the first-ever decision by a French court applying the General Data Protection Regulation (GDPR) to facial recognition technologies (FRTs).
A Framework for Responsible Limits on Facial Recognition
"The World Economic Forum’s Framework for the Responsible use of facial recognition technology seeks to address the need for a set of concrete guidelines to ensure the trustworthy and safe use of this technology. This framework enables Governments to protect citizens from various harms potentially caused by facial recognition technology while supporting beneficial applications. It also enables industry actors to demonstrate that they have implemented robust risk mitigation processes through an independent audit of their systems."
Crowdsourcing Moral Machines
"... We believe bringing about accountable intelligent machines that embody human ethics requires an interdisciplinary approach. First, engineers build and refine intelligent machines, and tell us how they are capable of operating. Second, scholars from the humanities—philosophers, lawyers, social theorists—propose how machines ought to behave, and identify hidden moral hazards in the system. Third, behavioral scientists, armed with tools for public engagement and data collection like the MM, provide a quantitative picture of the public's trust in intelligent machines, and of their expectations of how they should behave. Finally, regulators monitor and quantify the performance of machines in the real world, making this data available to engineers and citizens, while using their enforcement tools to adjust the incentives of engineers and corporations building the machines."
Bad news for explainability?
"One of the strangest mysteries in AI is that you can average two models and get a result superior to either model alone."
An interesting Twitter discussion on a surprising way to improve ML models; the exact mechanism is not yet well understood, but it may work by mitigating overfitting.
How hard will the robots make us work?
"In warehouses, call centers, and other sectors, intelligent machines are managing humans, and they’re making work more stressful, grueling, and dangerous"
As humanity’s relationship with AI grows, experts call for protective framework
"Imperial College London researchers have suggested a new regulatory framework with which governments can minimise unintended consequences of our relationship with technology. The comment piece is published in Nature Machine Intelligence."
"The proposed framework, known as the Human Impact Assessment for Technology (HIAT), would be designed to predict and evaluate the impact that new digital technologies have on society and individual wellbeing. This, they argue, should focus on ethical considerations like individual privacy, wellbeing and autonomy."
Can you sell your own data and therefore consent to how it will be used downstream?
This is an important question in light of regulations like the CCPA and calls for users to be compensated for the data extracted from them. A nice thread from Rachel Thomas.
In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags
"As China encourages people to return to work despite the coronavirus outbreak, it has begun a bold mass experiment in using data to regulate citizens’ lives — by requiring them to use software on their smartphones that dictates whether they should be quarantined or allowed into subways, malls and other public spaces.
But a New York Times analysis of the software’s code found that the system does more than decide in real time whether someone poses a contagion risk. It also appears to share information with the police, setting a template for new forms of automated social control that could persist long after the epidemic subsides."
How Adversarial Attacks Could Destabilize Military AI Systems
"Artificial intelligence and robotic technologies with semi-autonomous learning, reasoning, and decision-making capabilities are increasingly being incorporated into defense, military, and security systems. Unsurprisingly, there is increasing concern about the stability and safety of these systems. In a different sector, runaway interactions between autonomous trading systems in financial markets have produced a series of stock market “flash crashes,” and as a result, those markets now have rules to prevent such interactions from having a significant impact"
More on Adversarial AI and The risks of algorithmic (il)literacy on healthcare platforms
We missed this one last week: a very nice discussion of the use of machine learning in health care and some of the ethical problems it raises, in particular the problem of expertise that we must trust but cannot engage with.
We also missed a related piece from Wired on how easily algorithms can be fooled, for example when assessing a medical claim.
Further reading on deception from IEEE: AI Deception: When Your Artificial Intelligence Learns to Lie
And there's even a workshop coming up: the 1st International Workshop on Deceptive AI @ECAI2020
Can YouTube Quiet Its Conspiracy Theorists?
The extreme and radicalizing nature of YouTube's recommendation algorithm has been a topic of significant discussion, and now "A new study examines YouTube’s efforts to limit the spread of conspiracy theories on its site, from videos claiming the end times are near to those questioning climate change."