Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, allowing you to stay as up to date as possible on developments and facts. We will often link to arguments from all sides of the debate, even when the opinions are controversial. We would like to note, however, that EI does not endorse any of the information published; all links reflect the opinions of their authors, not those of Ethical Intelligence.
Happy New Year!
The use of AI in job search processes and tools
Illinois' Artificial Intelligence Video Interview Act takes effect on January 1, 2020. Many video interview tools now incorporate some kind of AI screening to generate reports on candidates, a practice that raises several AI ethics and safety concerns.
A new article at the WSJ discusses some of these concerns: How Job Interviews Will Transform in the Next Decade. "Recruiters using AI and virtual-reality simulations may hire based on a candidate’s behaviour, personality traits and physiological responses—no resumes needed."
And you can already purchase countermeasures, if you can afford them: South Korean job applicants are learning to trick AI hiring bots that use facial recognition tech.
This is a topic EI has been thinking deeply about for a few months, and we have a detailed blog article in the works ... stay tuned!
Comparison of the Google Health breast screening AI paper against the RSNA Editorial Board recommendations for Assessing Radiology Research on Artificial Intelligence
Are we holding AI diagnostic tools to the right standards? A great tweet from @DrHughHarvey. In the replies there's another good piece from October: Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems.
Technology Can't Fix Algorithmic Injustice
We need greater democratic oversight of AI, not just from developers and designers, but from all members of society.
Data and Justice in 2019 — Who can afford big tech, and who can live without it?
"What we see is that while you may not have access to the cloud, you can still be tracked and controlled by your government’s AI. This increase in the reach of data and analytics is even more noticeable for those who don’t have a country to call home."
Deepfakes: The Looming Threat Of 2020
Deepfakes have been lurking on the internet for years now. But in 2020 the AI technology will become a powerful weapon for misinformation, fraud, and other crimes.
How will we remain USEFUL HUMANS? A longer post on the future of work, jobs, education and training
"As human intelligence (HI) encounters AI, will humans really become useless? Will all this progress be heaven (working only four hours per day, four days a week, but for the same money), or will it be hell (50% unemployment, rampant inequality and global civil unrest)? Or will it be both, i.e. a kind of #hellven? Let’s have a look!"
Building an Ethical Career
Not an AI ethics piece, but some interesting reflections on how to build ethical awareness into our professional development.
The US just released 10 principles that it hopes will make AI safer
"The White House has released 10 principles for government agencies to adhere to when proposing new AI regulations for the private sector."
Human-like robots spark fear in users, according to researchers
"Japanese researcher Masahiro Mori’s “uncanny valley” theory, which he developed in the 1970s, states that we react positively to robots if they have physical features familiar to us, but they disturb us if they start looking too much like us."