Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, so you can stay as up to date as possible on developments and facts. We often link to arguments from all sides of a debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; each link reflects its author's opinions, not those of Ethical Intelligence.
Don’t ask if artificial intelligence is good or fair, ask how it shifts power
It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is a deeper one: how is AI shifting power?
There’s Still Work to Do Addressing Ethics in Autonomous Vehicles
"Current approaches to ethics and autonomous vehicles are a dangerous oversimplification – moral judgment is more complex than that ..."
There’s a fairly large flaw in the way that programmers are currently addressing ethical concerns related to artificial intelligence (AI) and autonomous vehicles (AVs). Namely, existing approaches don’t account for the fact that people might try to use the AVs to do something bad.
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence
This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial Intelligence (AI) is viewed as amongst the technological advances that will reshape modern societies and their relations. While the design and deployment of systems that continually adapt hold the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories use historical hindsight to explain patterns of power that shape our intellectual, political, economic, and social world. By embedding a decolonial critical approach within its technical practice, AI communities can develop foresight and tactics that can better align research and technology development with established ethical principles, centring vulnerable peoples who continue to bear the brunt of the negative impacts of innovation and scientific progress.
Algorithmic state surveillance: Challenging the notion of agency in human rights
This paper explores the extent to which current interpretations of the notion of agency, as traditionally perceived under human rights law, pose challenges to human rights protection in light of algorithmic surveillance. After examining the notion of agency under the European Convention on Human Rights as a criterion for applications' admissibility, the paper looks into the safeguards of notification and of redress – crucial safeguards developed by the Court in secret surveillance cases – which are used as examples to illustrate their insufficiency in light of algorithmic surveillance. The use of algorithms creates new surveillance methods and challenges fundamental presuppositions about the notion of agency in human rights protection. Focusing on victim status does not provide a viable solution to the problems arising from the use of Artificial Intelligence in state surveillance. The paper thus raises questions for further research, concluding that a new way of thinking about agency is needed in order to offer individuals effective human rights protection in the context of algorithmic surveillance.
Portland’s Radical Facial Recognition Proposal Bans the Tech From Airbnbs, Restaurants, Stores, and More
After eight months of speculation, details are finally emerging about Portland, Oregon’s groundbreaking legislation that would ban facial recognition in privately owned businesses and spaces accessible to the public. The law would prohibit the use of facial recognition technologies at stores, banks, Airbnb rentals, restaurants, entertainment venues, public transit stations, homeless shelters, senior centers, services like law or doctors’ offices, and a variety of other types of businesses. And it would allow people to sue noncompliant private entities for damages.
Why are Artificial Intelligence systems biased?
"Indeed, the web and internet have become a repository of our Jungian collective subconscious - and a convenient way to train AI systems. A problem with the collective subconscious is that it is often raw, unwashed and rife with prejudices; an AI system trained on it, not surprisingly, winds up learning these and, when deployed at scale, can unwittingly exacerbate existing biases."
Why the Digital Society needs the Open Society
Our study indicates that a collaborative mindset, built on trust, matters just as much for digital innovation as fast Internet access and computing power.
Meet the Secret Algorithm That's Keeping Students Out of College
The International Baccalaureate program canceled its high-stakes exam because of Covid-19. The formula it used to "predict" scores puzzles students and teachers.
Interpretative Pros Hen Pluralism: from Computer-Mediated Colonization to a Pluralistic Intercultural Digital Ethics
Intercultural Digital Ethics (IDE) faces the central challenge of how to develop a global IDE that can endorse and defend some set of (quasi-) universal ethical norms, principles, frameworks, etc. alongside sustaining local, culturally variable identities, traditions, practices, norms, and so on. I explicate interpretive pros hen (focal or “towards one”) ethical pluralism (EP(ph)) emerging in the late 1990s and into the twenty-first century in response to this general problem and its correlates, including conflicts generated by “computer-mediated colonization” that imposed homogenous values, communication styles, and so on upon “target” peoples and cultures via ICTs as embedding these values in their very design.
Prepare for Artificial Intelligence to Produce Less Wizardry
A new paper argues that the computing demands of deep learning are so great that progress on tasks like translation and self-driving is likely to slow.
Take this dystopian job interview with an AI hiring manager to experience what life could be like if machines fully take over the workplace
But in a world where the hiring process relies on artificial intelligence, bizarre and socially inappropriate questions might not be off limits, at least according to one digital artist who wants to warn us about what the future could hold if we're not careful.
5 Key Research Findings on Enterprise Artificial Intelligence
Hot off the press today is a FICO-commissioned research study on artificial intelligence and how Chief Analytics Officers (CAOs) and Chief Data Officers (CDOs) are responding to the current pandemic, economic uncertainty, and renewed focus on social justice. In addition to a survey, in-depth interviews with top AI leaders at HSBC, AXA PPP, Banorte, and Chubb provide additional perspective and commentary.
Ethics in the Balance: AI’s Implications for Government
As automation becomes an ever-more viable tool for government, in everything from cameras on light poles to AI systems that set prisoners' bail, can policymakers ensure it is used responsibly and ethically?
Reducing bias in AI-based financial services
Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI’s ability to avoid the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can easily go in the other direction to exacerbate existing bias, creating cycles that reinforce biased credit allocation while making discrimination in lending even harder to find. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology?
AI’s struggle to reach “understanding” and “meaning”
Machine learning algorithms are designed to optimize for a cost or loss function. For instance, when a neural network undergoes training, it tunes its parameters to reduce the difference between its predictions and the human-provided labels, which represent the ground truth. This simplistic approach to solving problems is not what “understanding” is about, the participants at the Santa Fe Institute workshop argued. There’s no single metric to measure the level of understanding.
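To make the workshop's point concrete, here is a minimal, self-contained sketch of what "optimizing a loss function" means in practice. This toy example is not from the article; the data, parameter, and learning rate are all illustrative assumptions. It fits a single weight by gradient descent on mean squared error, and the point stands: the procedure only drives a number down, with no notion of "understanding" anywhere in it.

```python
# Toy illustration: training nudges a parameter to shrink the gap between
# predictions and human-provided labels. Nothing here "understands" the data;
# the algorithm only minimizes a single scalar loss.

def mse_loss(w, xs, ys):
    """Mean squared error between predictions w*x and labels y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def train(xs, ys, w=0.0, lr=0.05, steps=200):
    """Plain gradient descent on the single parameter w."""
    for _ in range(steps):
        # d/dw of mean((w*x - y)^2) is mean(2*x*(w*x - y))
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # labels follow y = 2x exactly
w = train(xs, ys)
print(round(w, 3))  # converges toward 2.0
```

The fitted weight reproduces the labels almost perfectly, yet the loss value says nothing about whether the system has grasped the underlying relationship, which is exactly the gap between optimization and understanding that the workshop participants highlight.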