Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, helping you stay as up to date as possible on developments and facts. We will often link to arguments from all sides of the debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; all links reflect the authors' opinions, not those of Ethical Intelligence.
Examining the Black Box: Tools for Assessing Algorithmic Systems
A new report by the Ada Lovelace Institute and DataKind UK clarifies the terms around algorithmic audits and impact assessments, and the current state of research and practice.
Expectations of artificial intelligence and the performativity of ethics: Implications for communication governance
"...despite societal expectations that we can design ethical AI, and public expectations that developers and governments should share responsibility for the outcomes of AI use, there is a significant divergence between these expectations and the ways in which AI technologies are currently used and governed in large scale communication systems. We conclude that discourses of ‘ethical AI’ are generically performative, but to become more effective we need to acknowledge the limitations of contemporary AI and the requirement for extensive human labour to meet the challenges of communication governance. An effective ethics of AI requires domain appropriate AI tools, updated professional practices, dignified places of work and robust regulatory and accountability frameworks."
Don't Regulate Artificial Intelligence: Starve It
We think a better approach is to make AI less powerful. That is, not to control artificial intelligence, but to put it on an extreme diet. And what does AI consume? Our personal information.
Ethics of Artificial Intelligence and Robotics @ The Stanford Encyclopedia of Philosophy
The Stanford Encyclopedia of Philosophy is a high-quality resource produced and edited by domain experts and widely relied upon. Vincent C. Müller's new entry on AI Ethics is now live!
AI Ethics Guidelines Global Inventory UPDATED
The AI Ethics Guidelines Global Inventory is a project by AlgorithmWatch that maps frameworks that seek to set out principles of how systems for automated decision-making (ADM) can be developed and implemented ethically.
AI Ethics: A Self Reflection
"From my readings, I realised that there are some fundamental ethical aspects of AI, which I have listed below: Transparency & explainability, Privacy protection and security, Human-centred values, Accountability. Based on the above aspects, I believe that the following steps will help the organisations to fully utilise the potential of AI without compromising on the ethical side."
Too Big a Word: What does it mean to do “ethics” in the technology industry? We found four overlapping meanings
"A broad range of crises have convulsed the tech industry in recent years, from the Snowden revelations to racially biased algorithms, from Cambridge Analytica to the Google Walkout, or from ICE contracts for data brokers to censored search engines. The keyword inextricably bound up with discussions of these problems has been ethics. It is a concept around which power is contested: who gets to decide what ethics is will determine much about what kinds of interventions technology can make in all of our lives, including who benefits, who is protected, and who is made vulnerable."
Is AI trustworthy enough to help us fight COVID-19?
"Considering the rapid adoption of AI in high-stakes domains, the question of 'how do we ensure trustworthy use of AI through audit frameworks?' is too important to be left to the industry. On the other hand, the answer varies a lot depending on the use-case being considered and requires a sound understanding of the system introduced, and therefore cannot be addressed by policymakers alone."
Power of AI has limits in fight against Covid-19, experts caution
"Artificial intelligence forecasts useful for allocating healthcare resources, but less so for pinpointing end of pandemic."
"Recently, a team at the Singapore University of Technology and Design sought to come up with an answer using artificial intelligence. Their algorithm predicts the end of the Covid-19 pandemic in different countries as well as for the world - and the charts have understandably been making the rounds on Twitter and picked up by media.
But experts warn this type of certainty is - certainly - “too good to be true”, and an example of what to watch out for amid a cacophony of research. "
Bruce Schneier on COVID-19 Contact Tracing Apps
"Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It's not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can't confirm the app's diagnosis. So the alert is useless."
COVID-19, Content Moderation and the EU Digital Services Act: Key Takeaways from CDT Roundtable
"As government leaders, policymakers, and technology companies continue to navigate the global coronavirus pandemic, CDT is actively monitoring the latest responses and working to ensure they are grounded in civil rights and liberties. Our policy teams aim to help leaders craft solutions that balance the unique needs of the moment, while still respecting and upholding individual human rights."
Computers Do Not Make Art, People Do
"I do not believe any software system in our current understanding could be called an "artist." Art is a social activity, and our "AI" software is still just software, mechanically following the instructions we give it.
Moreover, calling a software system an artist is irresponsible, because it is misleading: it could make people think that the software has human-like intelligence, autonomy, and emotions."