Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and tech ethics.
Arvind Narayanan writes:
"If you think there's too much yelling about algorithmic bias, here's an analogy. By the mid 90s the privacy community knew there was a huge problem. But it took two decades of yelling and a million privacy disasters before the public and policy makers started taking it seriously."
Implementing Ethics into Artificial Intelligence: A Contribution, from a Legal Perspective, to the Development of an AI Governance Regime
This is a new article added to a great issue of the Duke Law and Technology Review that came out in August, a symposium issue for John Perry Barlow.
"This Article advocates for the need to conduct in-depth risk-benefit-assessments with regard to the use of AI and autonomous systems. This Article points out major concerns in relation to AI and autonomous systems such as likely job losses, causation of damages, lack of transparency, increasing loss of humanity in social relationships, loss of privacy and personal autonomy, potential information biases and the error proneness, and susceptibility to manipulation of AI and autonomous systems."
When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they’ll protect companies from having to share how their AI systems work.
We’re trading a technical black box for a legal one. Somehow, this seems even more unfair.
"Since traditional evaluation metrics such as accuracy are not sufficient for quantifying the adversarial threat, we propose the Adversarial Robustness Score (ARS) for comparing IDSs, capturing a common notion of adversarial robustness, and show that an adversarial training procedure can significantly and successfully reduce the attack surface."
Joelle Pineau doesn’t want science’s reproducibility crisis to come to artificial intelligence (AI).
An argument for better corporate governance around AI and data. Corporations should "treat data as an asset ... the same way organizations treat inventory, fleet, and manufacturing assets."
A nice short piece from Luciano Floridi
"Most large organizations today across the United States and Europe are talking about “duty of care” and AI (i.e. the duty to take care to refrain from causing another person injury or loss). We also hear a lot about the need for clear normative frameworks in areas such as driverless cars, drones, facial recognition, and algorithmic decisionmaking guidelines in public-facing services such as banking or retail. I shall be surprised if we will have this conversation again in two years’ time and legislation hasn’t already been seriously discussed or put in place."