Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and tech ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, allowing you to stay as up-to-date as possible on developments and facts. We will often link to arguments from all sides of the debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; all links reflect the opinions of their authors, not those of Ethical Intelligence.
Coronavirus Pandemic Could Elevate ESG Factors
"Environmental, social and governance investing was growing in popularity before the virus began to circulate, as investors flocked to companies that have taken steps to manage nonfinancial risks related to matters such as climate change, board diversity or human rights issues in the supply chain.
But the pandemic has demonstrated on a large scale the importance of other factors that are paramount to ESG investors. Among them: disaster preparedness, continuity planning and employee treatment through benefits such as paid sick leave as companies direct employees to work from home."
AI can help with the COVID-19 crisis - but the right human input is key
"Artificial intelligence (AI) has the potential to help us tackle the pressing issues raised by the COVID-19 pandemic. It is not the technology itself, though, that will make the difference but rather the knowledge and creativity of the humans who use it. "
Data Protection Impact Assessments as rule of law governance mechanisms
"This article explores how Data Protection Impact Assessments (DPIAs) could provide a mechanism for improved rule of law governance of data processing systems developed and used by government for public purposes in civil and administrative areas. Applying rule of law principles to two case studies provides a sketch of the issues and concerns that this article’s proposals for DPIAs seek to address. "
Shannon Vallor on AI and Covid-19
"Thoughts on the growing debate over whether COVID-19 illustrates the moral necessity of using AI and other tech for more expansive and intrusive forms of public health surveillance: a thread"
Adversarial Perturbations Fool Deepfake Detectors
Hope that the deepfake problem can be solved technologically may be misplaced; instead, we may face an arms race that further undermines the digital epistemic environment.
"This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100-image subsample."
Publication Norms for Responsible AI
Some have argued that dangerous AI technology, such as deepfake generators, should be withheld from public disclosure. The Partnership on AI is working on new standards for publication and is seeking comments.
Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
"In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in AI and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses this critical gap between legal, technical, and organisational notions of algorithmic fairness"
Recommendations on privacy and data protection in the fight against COVID-19
"Governments, companies, NGOs, and individuals alike have a responsibility to do their part to mitigate the consequences of COVID-19 and to show solidarity and respect for each other. In this paper, we will provide privacy and data protection recommendations for governments to fight against COVID-19 in a rights-respecting manner."
‘Trustworthy AI’ is a framework to help manage unique risk
"Artificial intelligence (AI) technology continues to advance by leaps and bounds and is quickly becoming a potential disrupter and essential enabler for nearly every company in every industry. At this stage, one of the barriers to widespread AI deployment is no longer the technology itself; rather, it’s a set of challenges that ironically are far more human: ethics, governance, and human values."
The Law of Informational Capitalism
"I construct an account of the “law of informational capitalism,” with particular attention to the law that undergirds platform power. Once we come to see informational capitalism as contingent upon specific legal choices, we can begin to consider how democratically to reshape it. Though Cohen does not emphasize it, some of the most important legal developments—specifically, developments in the law of takings, commercial speech, and trade—are those that encase private power from democratic revision. Today’s informational capitalism brings a threat not merely to our individual subjectivities but to equality and our ability to self-govern. Questions of data and democracy, not just data and dignity, must be at the core of our concern."
Vulnerable robots positively shape human conversational dynamics in a human–robot team
"In this work, we explore how a social robot influences team engagement using an experimental design where a group of three humans and one robot plays a collaborative game. Our analysis shows that a robot’s social behavior influences the conversational dynamics between human members of the human–robot group, demonstrating the ability of a robot to significantly shape human–human interaction."
Empathy Machine: Humans Communicate Better after Robots Show Their Vulnerable Side
"“While other work has focused on how to more easily integrate robots into teams, we focused instead on how robots might positively shape the way that people react to each other,” says Sarah Sebo, a graduate student at Yale University and co-author of the research, published this month in Proceedings of the National Academy of Sciences USA. To measure these changes in reactions, researchers at Yale and Cornell University assigned participants to teams of four—consisting of three people and one small humanoid robot—and had them play a collaborative game on Android tablets. In some groups, the robots were programmed to act “vulnerable.” These machines performed actions such as apologizing for making mistakes, admitting to self-doubt, telling jokes, sharing personal stories about their “life,” and talking about how they were “feeling.” In control groups, the human participants teamed up with robots that made only neutral statements or remained entirely silent."
Federal Court Rules ‘Big Data’ Discrimination Studies Do Not Violate Federal Anti-Hacking Law
"The ACLU challenged a provision of the CFAA that the government argues makes it a crime to violate a website’s terms of service. Those terms, which are unilaterally set by individual sites and can change at any time, often prohibit researchers and journalists from creating tester online identities or recording what content is served up to those identities. These practices were used by, for example, investigative journalists who exposed that advertisers were using Facebook’s ad-targeting algorithm to exclude users from receiving job, housing, or credit ads based on race, gender, age, or other classes protected from discrimination in federal and state civil rights laws."
On the responsible use of digital data to tackle the COVID-19 pandemic
"Large-scale collection of data could help curb the COVID-19 pandemic, but it should not neglect privacy and public trust. Best practices should be identified to maintain responsible data-collection and data-processing standards at a global scale."