Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and tech ethics.
Although the article appeared on arXiv last week, the author publicized it this week, and it's a good one. There is poor alignment between the operationalized definitions of fairness in machine learning and the legal definitions that may in fact apply to the deployment of these systems.
"Past literature has been effective in demonstrating ideological gaps in machine learning (ML) fairness definitions when considering their use in complex socio-technical systems. However, we go further to demonstrate that these definitions often misunderstand the legal concepts from which they purport to be inspired, and consequently inappropriately co-opt legal language. In this paper, we demonstrate examples of this misalignment and discuss the differences in ML terminology and their legal counterparts, as well as what both the legal and ML fairness communities can learn from these tensions. We focus this paper on U.S. anti-discrimination law since the ML fairness research community regularly references terms from this body of law."
A critical reflection on the problems that arise when the pursuit of good is taken on as a technical objective too hastily, and why sustained and rigorous ethical reflection is necessary if we want any confidence that such efforts will actually succeed.
"Despite widespread enthusiasm among computer scientists to contribute to "social good," the field's efforts to promote good lack a rigorous foundation in politics or social change. There is limited discourse regarding what "good" actually entails, and instead a reliance on vague notions of what aspects of society are good or bad. Moreover, the field rarely considers the types of social change that result from algorithmic interventions, instead following a "greedy algorithm" approach of pursuing technology-centric incremental reform at all points."
The UK’s Data Protection Authority just issued much-anticipated guidance that clarifies the complicated issue of the GDPR’s ‘right to explanation’. Here is some background on the issue and what the new information means.
Here are three arguments for the idea that ethics is subjective, presented with thoughtful rebuttals. This is a theme we took up in our last blog post, where we argued that there is a very large chunk of territory in tech ethics where ethical imperatives can be uncovered and agreed upon through sincere inquiry, even by those who disagree on more fundamental ethical and moral questions.
When health-related disinformation is available online, who is responsible? There's a growing backlash against the idea of platforms as "mere tools", and perhaps search engines deserve the same scrutiny. We don't usually hold a library responsible for dangerous information in its books, but should we think differently about Google?
Multiple fairness constraints have been proposed in the literature, motivated by a range of concerns about how demographic groups might be treated unfairly by machine learning classifiers. In this work we consider a different motivation: learning from biased training data.
In the course of researching and discussing AI ethics challenges, we might run across the claim that, while the rate and scope of our generation of data has increased, it can be understood on a continuum with the ways in which human activity has always left traces and records. This article on the concept of "datafication" argues against this view, and shows several ways to understand what is distinctive about the new systems and actors that collect and use our data.
"Datafication is not just the making of information, which, in one sense, human beings have been doing since the creation of symbols and writing. Rather, datafication is a contemporary phenomenon which refers to the quantification of human life through digital information, very often for economic value. This process has major social consequences. Disciplines such as political economy, critical data studies, software studies, legal theory, and, more recently, decolonial theory, have considered different aspects of those consequences to be important. Fundamental to all such approaches is the analysis of the intersection of power and knowledge."