Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and tech ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, so you can stay as up to date as possible on developments and facts. We will often link to arguments from all sides of a debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; each link reflects the opinions of its author, not those of Ethical Intelligence.
The fundamental challenge of techlaw is not how best to regulate novel technologies, but rather how best to address familiar forms of uncertainty in new contexts. Accordingly, we construct a three-part framework designed to encourage a more thoughtful resolution of techlaw questions. It (1) delineates the three types of tech-fostered legal uncertainty, making common issues easier to recognize; (2) requires a considered choice between permissive and precautionary approaches to technological regulation, given their differing distributive consequences; and (3) highlights techlaw-specific considerations when extending existing law, creating new law, or reassessing a legal regime.
Seven intersectional feminist principles for equitable and actionable COVID-19 data
This essay offers seven intersectional feminist principles for equitable and actionable COVID-19 data, drawing on the authors' prior work on data feminism. Our book, Data Feminism (D'Ignazio and Klein, 2020), offers seven principles that suggest possible points of entry for challenging and changing power imbalances in data science. In this essay, we offer seven sets of examples, one inspired by each of our principles, both for identifying existing power imbalances with respect to the impact of the novel coronavirus and the response to it, and for beginning the work of change.
A very short history of some times we solved AI
"The history of AI, then, can be seen as a prolonged deconstruction of our concept of intelligence. As such, it is extremely valuable. I think we have learned much more about what intelligence is(n't) from AI than we have from psychology. As a bonus, we also get useful technology. In this context, GPT-3 rids us from yet another misconception of intelligence (that you need to be generally intelligent to produce surface-level coherent text) and gives us a new technology (surface-level coherent text on tap)."
Carl Bergstrom: 'People are using data to bullshit'
The evolutionary biologist on data manipulation, fake news, and the importance of using science as a lie detector
The problems AI has today go back centuries
Algorithmic discrimination and “ghost work” didn’t appear by accident. Understanding their long, troubling history is the first step toward fixing them.
An embedded ethics approach for AI development
There is a need to consider how AI developers can be practically assisted in identifying and addressing ethical issues. In this Comment, a group of AI engineers, ethicists and social scientists suggest embedding ethicists into the development team as one way of improving the consideration of ethical issues during AI development.
PAVE's Virtual Panel "When Humans Meet Automation: What the Research Tells Us"
Driving automation technology is a deeply complex field requiring profound technical knowledge, but it becomes even more complex in systems that require interaction between cutting-edge technologies and the difficult-to-measure intricacies of human psychology. To help us start to understand this fascinating and challenging interdisciplinary meeting point, our twelfth panel turns to our Academic Advisory Council for insight into the research that can help us make sense of best practices in this area. Our three guests are some of the most respected academics working on human-automation interaction in the automated vehicle realm, and we’ll ask them to explain the challenges in both system design and human behavior, how they are working to better understand them, what they’ve learned that might be useful to people developing “human-in-the-loop” automation, and what they hope the public learns about these systems.
Why a Data Breach at a Genealogy Site Has Privacy Experts Worried
Nearly two-thirds of GEDmatch’s users opt out of helping law enforcement. For a brief window this month, that didn’t matter.
AI is struggling to adjust to 2020
"Computer vision models are struggling to appropriately tag depictions of the new scenes or situations we find ourselves in during the COVID-19 era. Categories have shifted. For example, say there’s an image of a father working at home while his son is playing. AI is still categorizing it as “leisure” or “relaxation.” It is not identifying this as “work” or “office,” despite the fact that working with your kids next to you is the very common reality for many families during this time."
Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI
Many organizations have published principles intended to guide the ethical development and deployment of AI systems; however, their abstract nature makes them difficult to operationalize. Some organizations have therefore produced AI ethics checklists, as well as checklists for more specific concepts, such as fairness, as applied to AI systems. But unless checklists are grounded in practitioners' needs, they may be misused. To understand the role of checklists in AI ethics, we conducted an iterative co-design process with 48 practitioners, focusing on fairness. We co-designed an AI fairness checklist and identified desiderata and concerns for AI fairness checklists in general. We found that AI fairness checklists could provide organizational infrastructure for formalizing ad hoc processes and empowering individual advocates. We highlight aspects of organizational culture that may impact the efficacy of AI fairness checklists, and suggest future design directions.
GPT-3: an AI game-changer or an environmental disaster?
The tech giants’ latest machine-learning system comes with both ethical and environmental costs
Myth-busting AI won’t work
"Misunderstandings actually originate in the opacity of the discipline; they don't create it. The myths emerge from ignorance, are a product of ignorance, because no one has bothered to explain the science."
Application of Artificial Intelligence (AI) in Surgery
Our researchers at the Hamlyn Centre review recent successful and influential applications of AI in surgery, from pre-operative planning and intra-operative guidance to its integration into surgical robots.
The review paper not only presents an overview of the requirements, challenges and sub-areas of each surgical application segment where AI techniques have been applied, but also draws attention to the main challenges and offers potential solutions for the future development of AI in surgery.
AI Ethics Living Dictionary
The Living Dictionary was designed by the Montreal AI Ethics Institute to inspire and empower you to engage more deeply in the field of AI Ethics. With technical computer science and social science terms explained in plain language, the Living Dictionary aims to make the field of AI ethics more accessible, no prior knowledge necessary! We hope that the Living Dictionary will encourage you to join us in shaping the trajectory of ethical, safe and inclusive AI development.
Why Companies Need Their Own AI Code Of Conduct
I recently asked some of the major companies in tech and telecom whether they have published their own AI principles and guidelines, and was surprised that very few of them are even in the process of doing so. They admit that it is vital to have their own AI guidelines in place and say they are doing some work on it, but they revealed that they are nowhere close to having a comprehensive AI ethics strategy ready to publish.
‘This Is a New Phase’: Europe Shifts Tactics to Limit Tech’s Power
The region’s lawmakers and regulators are taking direct aim at Amazon, Facebook, Google and Apple in a series of proposed laws.
AI-Generated Text Is the Scariest Deepfake of All
Synthetic video and audio seemed pretty bad. Synthetic writing—ubiquitous and undetectable—will be far worse.
Congress forced Silicon Valley to answer for its misdeeds. It was a glorious sight
The five-and-a-half-hour hearing on Capitol Hill offered a stunning illustration of the extent of big tech's misdeeds
Philosophers On GPT-3 (updated with replies by GPT-3)
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.
Banks grapple with the ethical use of AI
"Commonwealth Bank is among five companies (the others are National Australia Bank, Microsoft, Telstra and Flamingo AI) that will provide the government with case studies detailing their experiences applying the ethics principles when the trial concludes later this year."
Service that uses AI to identify gender based on names looks incredibly biased
Meghan Smith is a woman, but Dr. Meghan Smith is a man, says Genderify