Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, so you can stay as up-to-date as possible on developments and facts. We will often link to arguments from all sides of the debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published: all links reflect their authors' opinions, not those of Ethical Intelligence.
Fresh concerns about AI bias in the age of COVID-19
"Businesses facing unprecedented demands during the coronavirus pandemic have boosted their use of artificial intelligence in some of society's most sensitive areas."
"Why it matters: Algorithms and the data they rely on are prone to automating preexisting biases — and are more likely to do so when they're rushed into the field without careful testing and review."
IBM will no longer offer, develop, or research facial recognition technology
"IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY)."
The two-year fight to stop Amazon from selling face recognition to the police
"This week’s moves from Amazon, Microsoft, and IBM mark a major milestone for researchers and civil rights advocates in a long and ongoing fight over face recognition in law enforcement."
"The cynical part of me says Amazon is going to wait until the protests die down...to revert to its prior position."
Final Analysis of the EU Whitepaper on AI
"The White Paper is the European Commission’s first concrete attempt at discussing AI policy beyond the high-level statements of previous Communications. In this sense, the Commission takes up a rule-setting role (rather than a referee role). In our opinion, this is a good first step."
"In its Whitepaper on Artificial Intelligence, Europe took a clear stance on AI: foster uptake of AI technologies, underpinned by what it calls ‘an ecosystem of excellence’, while also ensuring their compliance with European ethical norms, legal requirements and social values, ‘an ecosystem of trust’. While the Whitepaper on AI of the European Commission does not yet propose legislation, it announces some bold legislative measures that will likely materialize at the beginning of 2021."
Joanna Bryson: Regulating AI as a pervasive technology: My response to the EU AI Whitepaper Consultation
"Basically it's a good document heading in the right direction."
- Explanation is actually easy / AI is never necessarily opaque
- Humans are always responsible; AI can only be transparent.
- AI isn't really some weird byproduct of data (the Rumpelstilzchen fallacy). Computation and cybersecurity are more essential than data storage for any length of time.
- AI is produced by programmers, so the EU needs to add program code, architecture documents and specifications to the list of documents they require companies to be able to produce for inspection.
Trust and excellence — the EU is missing the mark again on AI and human rights
"The European Commission’s consultation on the “White Paper on Artificial Intelligence — a European approach to excellence and trust” is closing on Sunday, June 14. In its current form, the policy approach of the EU will bring about neither trust nor excellence in automated decision-making (ADM) / artificial intelligence (AI) systems, and will do nothing to ensure that both the private and public sector respect and promote human rights in the context of artificial intelligence."
Forgeries, interference, and attacks on Kremlin critics across six years and 300 sites and platforms
"Secondary Infektion is a series of operations run by a large-scale, persistent threat actor from Russia that worked in parallel to the Internet Research Agency and the GRU but was systematically different in its approach."
"The campaign used fake accounts and forged documents to sow conflict between Western countries and most often targeted Ukraine. It produced at least 2,500 pieces of content in seven languages across over 300 platforms from 2014 into 2020."
High-tech surveillance amplifies police bias and overreach
"Police use of these national security-style surveillance techniques – justified as cost-effective techniques that avoid human bias and error – has grown hand-in-hand with the increased militarization of law enforcement. Extensive research, including my own, has shown that these expansive and powerful surveillance capabilities have exacerbated rather than reduced bias, overreach and abuse in policing, and they pose a growing threat to civil liberties. "
Why robustness is key to deploying AI
"The takeaway for policymakers—at least for now—is that when it comes to high-stakes settings, machine learning (ML) is a risky choice. “Robustness,” i.e. building reliable, secure ML systems, is an active area of research. But until we’ve made much more progress in robustness research, or developed other ways to be confident that a model will fail gracefully, we should be cautious in relying on these methods when accuracy really matters."
How Fair Is Zoom Justice?
"Court hearings are going virtual in response to COVID-19. Studies show they can lead to harsher outcomes for defendants."
Digital inclusion and data literacy
ICYMI: Special Issue of Internet Policy Review
"As more of our everyday lives become digital, it has become crucial to include everyone in the digital society. This special issue examines the different layers of digital inclusion and data literacy by drawing on research, policy, and practice developments around literacies in various regions and contexts. It highlights the politics around them so as to propose policies that are needed to include more people in datafied societies, and what types of literacies they should learn. This issue includes three commentaries by experts in the field and five peer-reviewed academic papers that go towards tackling digital inclusion. This means finding solutions to the fact that many people are left behind by technological advancements, creating what is commonly called the digital divide."
Point and Counterpoint on Robot Rights
NOEMA published A Misdirected Application Of AI Ethics
"The debate about robot rights diverts moral philosophy away from the pressing matter of the oppressive use of AI technology against vulnerable groups in society."
@eripsa collected critical responses on Twitter
Data & Dystopia
"Europe is increasingly caught between upholding privacy of citizens and promoting intrusive artificial intelligence. "
The Ethical Governance of the Digital During and After the COVID‑19 Pandemic
Mariarosaria Taddeo's editor's letter for Minds & Machines on the ethical governance of digital technologies during and after the COVID pandemic.
Transparency in Language Generation: Levels of Automation
"Language models and conversational systems are growing increasingly advanced, creating outputs that may be mistaken for humans. Consumers may thus be misled by advertising, media reports, or vagueness regarding the role of automation in the production of language. We propose a taxonomy of language automation, based on the SAE levels of driving automation, to establish a shared set of terms for describing automated language. It is our hope that the proposed taxonomy can increase transparency in this rapidly advancing field. "
Czech civil society fights back against fake news
"In the Czech Republic, the media ecosystem is plagued by disinformation. A group of PR professionals have teamed up to cut off dodgy outlets from their main, and often only, source of income — online ads."
The Ethical Balance of Using Smart Information Systems for Promoting the United Nations’ Sustainable Development Goals
"SIS have the potential to exacerbate inequality and further entrench the market dominance of big tech companies, if left uncontrolled. "
"The paper explores how technology can be used to address the SDGs and in particular Smart Information Systems (SIS). SIS, the technologies that build on big data analytics, typically facilitated by AI techniques such as machine learning, are expected to grow in importance and impact. Some of these impacts are likely to be beneficial, notably the growth in efficiency and profits, which will contribute to societal wellbeing. At the same time, there are significant ethical concerns about the consequences of algorithmic biases, job loss, power asymmetries and surveillance, as a result of SIS use."
Privacy-preserving A.I. is the future of A.I.
"...But researchers have shown that this kind of anonymization doesn’t guarantee privacy: There are often other fields in data, such as location, age, or occupation, that might allow you to re-identify an individual, especially if you are able to cross-reference it with another dataset that does include personal information."
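The linkage attack the quote describes can be sketched in a few lines of Python. This is a minimal illustration, not any specific study's method, and all names, fields, and records here are invented: an "anonymized" dataset (names stripped, quasi-identifiers kept) is joined against a public dataset on those same quasi-identifiers, re-attaching identities to sensitive records.

```python
# Illustrative linkage attack: re-identifying "anonymized" records by
# joining them with a public dataset on shared quasi-identifiers.
# All data below is invented for the sake of the sketch.

# An "anonymized" dataset: names removed, quasi-identifiers retained.
anonymized = [
    {"age": 34, "zip": "10115", "occupation": "teacher", "diagnosis": "flu"},
    {"age": 52, "zip": "80331", "occupation": "engineer", "diagnosis": "asthma"},
]

# A public dataset (e.g. a voter roll) that includes personal information.
public = [
    {"name": "A. Schmidt", "age": 34, "zip": "10115", "occupation": "teacher"},
    {"name": "B. Meyer", "age": 52, "zip": "80331", "occupation": "engineer"},
]

QUASI_IDENTIFIERS = ("age", "zip", "occupation")

def reidentify(anon_rows, public_rows):
    """Match anonymized rows to named rows that share all quasi-identifiers."""
    # Index the public data by its quasi-identifier tuple for fast lookup.
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"] for r in public_rows}
    matches = []
    for row in anon_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            # The "anonymous" diagnosis is now tied back to a name.
            matches.append((index[key], row["diagnosis"]))
    return matches

print(reidentify(anonymized, public))
# [('A. Schmidt', 'flu'), ('B. Meyer', 'asthma')]
```

Even this toy join recovers every record, which is why removing names alone does not guarantee privacy; defenses such as k-anonymity or differential privacy aim to make exactly this kind of unique-key lookup impossible.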
From robodebt to racism: what can go wrong when governments let algorithms make the decisions
"Algorithmic decision-making has enormous potential to do good. From identifying priority areas for first response after an earthquake hits, to identifying those at risk of COVID-19 within minutes, their application has proven hugely beneficial."
"But things can go drastically wrong when decisions are trusted to algorithms without ensuring they adhere to established ethical norms. Two recent examples illustrate how government agencies are failing to automate fairness."
AI firm that worked with Vote Leave given new coronavirus contract
"Deal may allow Faculty, linked to senior Tory figures, to analyse social media data, utility bills and credit ratings"
Data versus lore: an introduction to the ethical concerns surrounding Artificial Intelligence
Tracking the debate on COVID-19 surveillance tools
"Contact-tracing apps could help keep countries open before a vaccine is available. But do we have a sufficient understanding of their efficacy, and can we balance protecting public health with safeguarding civil rights? We interviewed five experts, with backgrounds in digital health ethics, internet law and social sciences."