Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get straight to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, so you can stay as up to date as possible on developments. We often link to arguments from all sides of the debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; each link reflects its author's opinions, not those of Ethical Intelligence.
AI Governance Forum 2020: Schedule
"The AI Governance Forum is a multi-stakeholder platform, open to all interested parties and dedicated to building human trust in AI for the benefit of all. Stakeholders may come from the public or private sector, the scientific community, or civil society. The AI Governance Forum treats each stakeholder as an equal partner in the discussion.
The AI Governance Forum is based on the conviction that a collective intelligence process is an essential component of managing AI's impact on our society. It contributes to building an open artificial intelligence for the benefit of all."
IBM will no longer offer, develop, or research facial recognition technology
“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
This startup is using AI to give workers a “productivity score”
"Companies have asked remote workers to install a whole range of such tools. [...] Now, one firm wants to take things even further. It is developing machine-learning software to measure how quickly employees complete different tasks and suggest ways to speed them up. The tool also gives each person a productivity score, which managers can use to identify those employees who are most worth retaining—and those who are not."
Germany, France launch Gaia-X platform in bid for ‘tech sovereignty’
"The idea behind the project, named “Gaia-X” after an ancient goddess, is to convince firms to store their data with home-grown alternatives to U.S. and Chinese tech giants like Amazon Web Services and Alibaba — known in the industry as "hyperscalers."
But Gaia-X will not be a cloud service in itself. Set up as a nonprofit based in Belgium, it's conceived as a platform joining up cloud-hosting services from dozens of companies, allowing businesses to move their data freely, with all information protected under Europe's tough data processing rules, France and Germany announced Thursday."
Legal Remedies For a Forgiving Society: Children's rights, data protection rights and the value of forgiveness in AI-mediated risk profiling of children by Dutch authorities
"Thirty years after the United Nations Convention on the Rights of the Child (CRC) and two years after the new EU data protection regime, the social value of forgiveness is not part of these legal instruments. The absence of this value from these legal instruments, and the lack of research on forgiveness in relation to improving the legal position of children, urgently need addressing, especially when children are exposed to artificial intelligence (AI)-mediated risk profiling practices by Dutch government authorities. Developmental psychologists underline that the erosion of this value could hamper children's ability to develop flourishing human relationships.
This article contributes to filling this niche."
EU signs contract for large-scale biometric database to protect borders
"As part of the four-year deal, technology providers IDEMIA and Sopra Steria will be involved in helping to build a new shared biometric matching system (sBMS), with the objective of fighting illegal immigration and trans-border crime across the 26 European countries in the passport-free Schengen area, eventually becoming one of the largest biometric systems in the world.
As part of the new setup, third-country nationals crossing the external borders of the Schengen states will be required to use the technology to submit their biometric data for identification purposes."
Hamid Khan: The activist dismantling racist police algorithms
"Algorithms have no place in policing. I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed."
From Robodebt to Racism: What Can Go Wrong When Governments Let Algorithms Make Decisions
"Algorithmic decision-making has enormous potential to do good. From identifying priority areas for first response after an earthquake hits, to identifying those at risk of COVID-19 within minutes, their application has proven hugely beneficial.
But things can go drastically wrong when decisions are trusted to algorithms without ensuring they adhere to established ethical norms."
Racial Illiteracy in Tech
"Computer science, user experience, machine learning, data analysis — the practitioners of these fields, and anyone involved in the design, development, and deployment of technology, should consider racial literacy an essential, necessary skill. And there are so many places this skill can be taught, from computer science classrooms to the teams of social media platform companies."
The Case for Better Cybersecurity to Support the Bioeconomy
"The amount of data in the biotech sector is rapidly growing; both the number of genomes sequenced and our worldwide sequencing capacity double roughly every 7–18 months. The bioeconomy is built on this data, and on the software and hardware tools used to collect, process, and store it. However, as we increase our dependence on data, securing that data becomes increasingly important. Biological data is extremely personal, and immutable; you cannot apply for a new fingerprint."
AI News Anchor deployed in China
"An editor must still type in text for Xin Xiaowei to say, but the AI anchor never needs a break and, perhaps more importantly for its users, does not need to be paid. That puts it in direct competition with real news anchors and may herald the future of televised news, at least in China."
Probability & Moral Responsibility with (our very own) Olivia Gambelin
Olivia Gambelin in conversation with Ben Byford.
Could Europe introduce a China-style internet firewall?
"The bloc’s Digital Services Act (DSA) – for which the consultation period is underway – is due to update the rules governing the internet, in particular targeting the dominance and impunity of monolithic big tech.
It’s not clear what form this legislation will take yet, but a new policy paper moots a radical idea: the creation of a “European internet”, which “like the Chinese firewall” could block services that condoned “unlawful conduct” from third parties."
Understanding Transparency in Algorithmic Accountability
"Transparency has been in the crosshairs of recent writing about accountable algorithms. Its critics argue that releasing data can be harmful, and releasing source code won’t be useful. They claim individualized explanations of artificial intelligence (AI) decisions don’t empower people, and instead distract from more effective ways of governing. While criticizing transparency’s efficacy with one breath, with the next they defang it, claiming corporate secrecy exceptions will prevent useful information from getting out. This chapter bucks the tide. Transparency is necessary, if not sufficient, for building and governing accountable algorithms. But for transparency to be effective, it has to be designed. It can’t be sprinkled on like seasoning; it has to be built into a regulatory system from the onset. And determining the who, what, when, and how of transparency requires first addressing the question of why."
"Confronting Our Reality: Racial Representation and Systemic Transformation with Dr. Timnit Gebru" on The Radical AI Podcast
"How do we respond to the racism in the world we have been given? What does it mean to transform technology systems in the spirit of justice and equity? How do we engage with diversity and representation without reducing our efforts to simple branding and lip service? To answer these questions and more, the Radical AI Podcast welcomes one of our heroes, Dr. Timnit Gebru, to the show. Dr. Timnit Gebru is a research scientist at Google on the ethical AI team and a co-founder of Black in AI."
Defining AI Ethics
"When it comes to Ethics in Artificial Intelligence there are many different views, perspectives and lexicons."
The Algorithmic Equity Toolkit
"A set of resources designed to identify & interrogate surveillance & automated decision systems used by governments."
The Case Against Pandemic Research Exceptionalism
Zachary Lipton and Alex John London argue against lowering the bar for research during a crisis.