Welcome to the EI weekly round-up: a curation of quality posts to help you cut through the noise and get right to the heart of the discussion on AI and Tech Ethics.
Every Tuesday we publish a list of links to articles and debates from the past week in the community, helping you stay as up to date as possible on developments and facts. We will often link to arguments from all sides of a debate, even when the opinions are controversial. Please note, however, that EI does not endorse any of the information published; all links reflect the opinions of their authors, not those of Ethical Intelligence.
Would you take a drug discovered by artificial intelligence?
@DorotheaBaur sparked an interesting reaction on Twitter where, in a climate of frequent AI news raising ethical concerns, she asked whether we might agree that this was a good example of an application of AI that is not ethically problematic.
"The British startup Exscientia claims it has developed the first medication created using artificial intelligence that will be clinically tested on humans. The medication, which is meant to treat obsessive-compulsive disorder, took less than a year from conception to trial-ready capsule. Human trials are set to begin in March, but would you take a drug designed using artificially intelligent software?"
Artificial Intelligence Is Not Ready For The Intricacies Of Radiology
"...while much of the theoretical basis for AI in the practice of radiology is extremely exciting, the reality is that the field has not yet fully embraced it. The most significant issue is that the technology simply isn’t ready, as many of the existing systems have not yet been matured to compute and manage larger data sets or work in more general practice and patient settings, and thus, are not able to perform as promised. Other issues exist on the ethical aspects of AI. Given the sheer volume of data required to both train and perfect these systems, as well as the immense data collection that these systems will engage in once fully mainstream, key stakeholders are raising fair concerns and the call for strict ethical standards to be put into place, simultaneous to the technological development of these systems."
Advancing impact assessment for intelligent systems
"We discuss how the EIA provides a partial blueprint for what we call a Human Impact Assessment for Technology (HIAT) and how more recent algorithmic and data protection impact assessment initiatives can contribute. We also discuss how ethical frameworks for such a human impact assessment could draw on recently established AI ethics principles. We argue that this approach will help build trust in an industry facing increasing criticism and scrutiny."
What Is A Data Passport: Building Trust, Data Privacy And Security In The Cloud
"Data passports allow you to extend the encryption technology that used to be only available on a physical mainframe to cloud computing. Each piece of data in the cloud has a passport assigned to it, and with the passport, you can verify if the data is misused, if the passport is still valid, etc. "
How the EU Should Revise its AI White Paper Before it is Published
"The European Commission is planning to release a white paper to support the development and uptake of artificial intelligence (AI). Early drafts of this white paper suggest that the Commission may call for additional AI regulations that would make it more expensive and more difficult for European businesses to use AI systems in many areas of the economy. Given the EU’s desire to be a leader in AI, and to use AI to bolster its global competitiveness, the Commission should avoid heavy-handed rules that would slow adoption of this emerging technology."
The Critics Were Wrong: NIST Data Shows the Best Facial Recognition Algorithms Are Neither Racist Nor Sexist
"NIST assessed the false positive and false-negative rates of algorithms using four types of images, including mugshots, application photographs from individuals applying for immigration benefits, visa photographs, and images taken of travelers entering the United States. NIST’s report reveals that:
a) The most accurate identification algorithms have “undetectable” differences between demographic groups
b) The most accurate verification algorithms have low false positives and false negatives across most demographic groups
c) Algorithms can have different error rates for different demographics but still be highly accurate"
COR-GAN: Correlation-Capturing Convolutional Neural Networks for Generating Synthetic Healthcare Records
"In this paper, we propose a novel framework called correlation-capturing Generative Adversarial Network (corGAN), to generate synthetic healthcare records. In corGAN we utilize Convolutional Neural Networks to capture the correlations between adjacent medical features in the data representation space by combining Convolutional Generative Adversarial Networks and Convolutional Autoencoders"
New surveillance AI can tell schools where students are and where they’ve been
"Not all AI being used by schools is facial recognition. That doesn’t mean the tech doesn’t come with privacy risks. "
Connected cots, talking teddies, and the rise of the algorithmic child
"Digital technologies are now a ubiquitous part of our daily lives. And questions remain as to how these technologies are reshaping how we experience the world around us, and how the world around us is being reshaped. One area this is being played out is in the family – in changing the experience of not only childhood, but what constitutes good parenting."