Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, helping you stay as up to date as possible on developments and facts. We will often link to arguments from all sides of the debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; all links reflect the opinions of their authors, not those of Ethical Intelligence.
AI Index 2019 Report
An independent initiative within Stanford University’s Human-Centered Artificial Intelligence Institute, the AI Index is now in its third year. The report is the result of a collaborative effort led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry, working with more than 35 sponsoring partners and data contributors. The project aims to ground the discussion on AI in data, serving practitioners, industry leaders, policymakers, funders, the general public, and the media that informs it.
When should we decline to write code? A small case study.
We picked this up on Twitter when Emily Bender tweeted that an AI competition included a task on the "Prediction of Intellectual Ability and Personality Traits from Text". She has since posted a thoughtful follow-up.
This is an important problem. If we want to ensure the safe and fair development of AI, technical and regulatory solutions should be augmented by professional codes of conduct and ethics.
Do You Trust Jeff Bezos With Your Life? Tech Giants Like Amazon Are Getting into the Health Care Business
Would you trust the Tech Giants with your health data in exchange for more personalized and on-demand healthcare? This article covers the current initiative of telehealth by Amazon and dives into a few key implications that this new commodity would carry for society at large.
"What health insurance companies, as well as employers who foot the bulk of the U.S.'s health care bill, especially fear from telehealth is that it's so easy to use that people will reach out more often for care. 'It creates the risk that every little ache and pain results in a claim that has to be paid out,' says the University of Pennsylvania's Asch. 'Making people come into the office is health care rationing by inconvenience.'"
A tug-of-war over biased AI
"A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions."
Emotion-detecting tech should be restricted by law
"The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices."
Would you let a Robot Take Care of Your Mother?
The use of AI in social care is not a new concept. However, as it becomes more and more of a reality, we are forced to shift our questions from the theoretical to the personal.
"Some worry robot care would carry a stigma: the potential of being seen as “not worth human company,” said one participant in a study of potential users with mild cognitive impairments."
The United States Patent and Trademark Office is trying to answer a very complicated question: who owns artificial intelligence?
AI Ethics for Systemic Issues: A Structural Approach
"This paper calls for a "structural" approach to assessing AI’s effects in order to understand and prevent such systemic risks where no individual can be held accountable for the broader negative impacts. This is particularly relevant for AI applied to systemic issues such as climate change and food security which require political solutions and global cooperation. To properly address the wide range of AI risks and ensure ’AI for social good’, agency-focused policies must be complemented by policies informed by a structural approach."