Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get straight to the heart of the discussion on AI and tech ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, so you can stay as up to date as possible on the latest developments. We often link to arguments from all sides of a debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; each link reflects its author's opinions, not those of Ethical Intelligence.
New Zealand claims world first in setting standards for government use of algorithms
"Exclusive: Statistics minister says new charter on algorithms – used from traffic lights to police decision-making – an ‘important part of building public trust’"
Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics and Governance of AI
"Cultural differences pose a serious challenge to the ethics and governance of artificial intelligence (AI) from a global perspective. Cultural differences may enable malignant actors to disregard the demand of important ethical values or even to justify the violation of them through deference to the local culture, either by affirming the local culture lacks specific ethical values, e.g., privacy, or by asserting the local culture upholds conflicting values, e.g., state intervention is good. "
Car Companies Want to Monitor Your Every Move With Emotion-Detecting AI
In-car camera systems are being marketed as a safety feature, but their creators' ambitions go beyond alerting drowsy drivers.
Oxford University launches commission on AI in the public sector
The Oxford Internet Institute is launching a new commission on AI and good governance which will examine artificial intelligence in the public sector. The commission aims to work with policymakers from around the world to advise on the most effective and principled ways of using AI, while analysing implementation and procurement challenges faced by governments.
Predictive policing algorithms are racist. They need to be dismantled.
Lack of transparency and biased training data mean these tools are not fit for purpose. If we can’t fix them, we should ditch them.
Researchers find evidence of bias in facial expression data sets
"Researchers claim the data sets often used to train AI systems to detect expressions like happiness, anger, and surprise are biased against certain demographic groups. In a preprint study published on Arxiv.org, coauthors affiliated with the University of Cambridge and Middle East Technical University find evidence of skew in two open source corpora: Real-world Affective Faces Database (RAF-DB) and CelebA."
Understanding Amazon: Making the 21st-Century Gatekeeper Safe for Democracy
The paper describes how Amazon is a commercial and political institution that has flourished within a particular regulatory model, attempts to demystify Amazon’s unfair and abusive behavior, and summarizes some of its most pernicious effects.
NSCAI: Key Considerations as a Paradigm for Responsible Development and Fielding of Artificial Intelligence
The National Security Commission on Artificial Intelligence is identifying a set of challenges and making recommendations for responsibly developing and fielding AI systems, and pinpointing the concrete actions that should be adopted across government to help overcome these challenges. Collectively, these form a paradigm for aligning AI system development and AI system behavior with goals and values. The first section, Aligning Systems and Uses with American Values and the Rule of Law, provides guidance specific to implementing systems that abide by American values, most of which are shared by democratic nations. The section also covers aligning the run-time behavior of systems with the related, more technical encodings of objectives, utilities, and trade-offs. The four following sections (on Engineering Practices, System Performance, Human-AI Interaction, and Accountability & Governance) support these core values and further outline practices needed to develop and field systems that are trustworthy, understandable, reliable, and robust.
GPT-3: The First Artificial General Intelligence?
"An AGI, or a “strong AI,” which could perform any task as well as a human being, is a much harder problem. It is so hard that there isn’t a clear roadmap for achieving it, and few researchers are openly working on the topic. GPT-3 is the first model to shake that status-quo seriously."
Q&A: AI and financial services
"In our latest Q&A, Janet Wong, an engager for EOS at Federated Hermes, talks to Dr David Hardoon, a senior AI adviser to the Union Bank of the Philippines, and the former chief data officer and senior AI adviser to the Monetary Authority of Singapore, about AI and data governance in financial services."
OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless
The AI is the largest language model ever created and can generate amazing human-like text on demand but won't bring us closer to true intelligence.
Q&A: The Data Delusion
Protecting the Individual Isn't Enough When the Harm Is Collective. A Q&A with Marietje Schaake and Martin Tisné on his new paper, The Data Delusion.
The Pandemic Could Obliterate a Last Frontier in Our Privacy: Our Biological Selves | Opinion
An invisible hand: Patients aren’t being told about the AI systems advising their care
"Since February of last year, tens of thousands of patients hospitalized at one of Minnesota’s largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients has any idea about the AI involved in their care. That’s because frontline clinicians at M Health Fairview generally don’t mention the AI whirring behind the scenes in their conversations with patients."
What is the difference between artificial neural networks and biological brains?
"...as some scientists argue, brute-force learning is not what gives humans and animals the ability to interact with the world shortly after birth. The key is the structure and innate capabilities of the organic brain, an argument that is mostly dismissed in today’s AI community, which is dominated by artificial neural networks."
Overseeing AI: Governing artificial intelligence in banking
Ensuring ethical, fair and well-documented AI-based decisions will gain urgency in the post-pandemic era. A review of global regulatory guidance given so far reveals the key risks and recommendations.
RSS to set data science standards
The organisation is leading the project in partnership with BCS, The Chartered Institute for IT; the Operational Research Society; the Royal Academy of Engineering; the National Physical Laboratory; the Royal Society; and the Institute of Mathematics and its Applications.
The aim is to develop industry-wide standards for data science, starting with existing academic qualifications.
Ethical labels not fit for purpose, report warns consumers
EI: We often think about developing standards and labels for ethical AI, but there are challenges with this approach, as can be seen in other domains.
"Certification schemes may serve to mask human rights abuses and allow government inaction, study claims"
Report Launch: Bridging AI’s trust gaps: Aligning policymakers and companies
Bridging AI’s trust gaps: Aligning policymakers and companies is a new global survey conducted by The Future Society in collaboration with EYQ (EY’s think tank). Common understanding between companies and policymakers is key to building a governance framework and protecting citizens’ rights. Our survey reveals ethical gaps between company and policy leaders that need to be addressed for the trustworthy adoption of AI across sectors.
Who Wants to Be a Cyborg?
"This being said, I suspect that the first few generations of synthetic general intelligences will be deficient in ways normal adult humans are not. They will be savantlike, surpassing us in certain ways that involve sophisticated memory databases, pattern recognition, mathematical processing and so on. I call these hypothetical general intelligences “savant systems” because they have all sorts of deficits relative to normal humans."