On 22 April 2020, Ethical Intelligence hosted a virtual ethics workshop aimed at better understanding the ethical issues raised by digital contact tracing in light of the COVID-19 crisis. The event began with presentations on AI in healthcare and the ethics of digital contact tracing, laying the metaphorical building blocks for the group discussions that followed. This is a summary of the presentations and the high-level concepts discussed during the workshop.
AI, Healthcare & Ethics - Michael McAuley
Disease has changed human history, with numerous disease hosts and transmission routes. The societal impact can range from moderate to serious, and the current COVID-19 crisis has parallels with the first recorded cholera pandemic. It is important to identify the disease involved early, before mutation or DNA/RNA degradation, and it is easier to chart its spread at the beginning of an outbreak, potentially helping to change its course and save lives, alongside public engagement and education. AI technology can help by enlisting populations to track and trace disease and by alerting people to any necessary behavioural changes, in addition to enabling rapid vaccine development, virtual consultations, advanced screening, diagnostic software and more.
The ethical debate must consider the four main principles of medical ethics. Beneficence means ensuring that what is being proposed is for the benefit of individuals, making the well-being of people the primary concern, with due diligence in the design of AI, consideration for the impact on the patient-prescriber bond and, vitally, the technological prowess of healthcare professionals. Non-maleficence means evaluating the merit of a proposal to ensure no harm is caused: creators of AI systems should be advised to identify the potential human rights implications of their creations and to redress any discriminatory outputs in a timely manner, while bearing in mind the impact on research, as AI may struggle to take the same approach to hazard perception as human beings. Justice is concerned with a proposal being fair and cost-effective: it requires looking for bias, which can be introduced in development through a lack of diversity in input information, determining whether the parameters for adoption require more than one type of data (e.g. simulation data), and performing a cost-benefit analysis. Autonomy considers whether individuals are willing participants, which means investigating whether the technology is moral and examining its role in decision-making, which must be transparent, understandable and reviewable by a competent human authority such as a doctor or pharmacist. Protecting patients and professional autonomy will require a large number of organisations to work collaboratively and to consider both the legal implications of utilising AI and the negligence implications for registrants of professional bodies.
We have a robust understanding of medical ethics, but many risks remain. In business, these include future liability issues, financial penalties, loss of stakeholder trust and irreparable brand damage, while individual risks comprise matters such as privacy implications, data use and consent. We need to turn these risks into opportunities with positive, human-centric solutions.
Digital Contact Tracing - Andrew Buzzell
AI ethics, and tech ethics generally, often lean heavily on functional and risk analysis. The justification for the use of a technology, and for actions based on outputs from AI systems, depends in part on their strength, their reliability, and the kinds of failures that can occur. Digital contact tracing depends on the viability of using Bluetooth signal strength on mobile devices as a non-causal predictor of COVID-19 transmission. Properties of COVID-19, such as its mode of transmission, infectivity, virulence and pathogenicity, the nature of Bluetooth radio signals, and the prevalence and usage of compatible devices will all interact to affect the real-life efficacy of DCT.
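The Bluetooth mechanism described here can be made concrete with a minimal sketch: DCT apps typically infer proximity from received signal strength (RSSI) using a log-distance path-loss model. Everything below is illustrative; the function names, the -59 dBm one-metre calibration and the path-loss exponent of 2 are assumptions for the sketch, not parameters taken from any real DCT protocol, and real signal strength varies widely with device orientation, pockets, bags and walls, which is precisely why RSSI is only a noisy, non-causal proxy for transmission risk.

```python
import math

def estimate_distance(rssi_dbm: float,
                      measured_power_dbm: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in metres from an RSSI reading using the
    log-distance path-loss model: RSSI = P_1m - 10 * n * log10(d).
    measured_power_dbm is the calibrated RSSI at 1 m (assumed here)."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def likely_contact(rssi_dbm: float, threshold_m: float = 2.0) -> bool:
    """Flag a reading as a possible close contact when the estimated
    distance falls within a threshold (2 m is a common epidemiological
    rule of thumb, assumed here)."""
    return estimate_distance(rssi_dbm) <= threshold_m
```

Under this model an RSSI of -59 dBm maps to roughly 1 m (a likely contact), while -79 dBm maps to roughly 10 m (not flagged); the steep sensitivity of the estimate to the assumed exponent and calibration is one source of the false positives and negatives discussed below.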
Further analysis reveals that this efficacy is not smoothly distributed - there will be uneven representation among identifiable groups of individuals along socioeconomic and other axes that will directly influence the degree to which DCT apps can detect real transmission events.
This in turn should inform our ethical reflection on the kinds of actions we should take as a result of a transmission event predicted by DCT, and the kinds of coercion that could be acceptable to drive adoption. It should also colour our analysis of the trade-offs we might make legally, economically and socially in order to use DCT. There has been significant discussion of the privacy trade-offs, and of the interaction between the technical power of big tech companies to make DCT easier to use (for example, by relaxing limitations on apps' access to the Bluetooth antenna and by facilitating the transfer of data) and the demands of governments for support in implementing DCT.
There will be significant generation of false positives and false negatives, and highly uneven adoption and coverage of DCT. As a result, the kinds of follow-up we take, such as automatic self-quarantine or mandated testing, will not only imperfectly track actual transmission risk but will also affect some groups of people more than others. The technical question "can DCT work to control the spread of COVID-19?" is thus tightly linked to the ethical question "should we use DCT as part of our effort to control the spread of COVID-19?". There are some ethical questions we might wish to treat as immune to context-sensitive factors: we might think some kinds of privacy are so important that no distal benefit could justify compromising them. But a great many of the ethical considerations we care about are in fact sensitive to the trade-offs we are willing to make, so there is value in being as clear as we can about the real viability of DCT. Then we can lean on existing frameworks in public health ethics to begin to think about what an ethical and effective deployment could look like, if it is possible at all.
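The interaction between error rates, uneven adoption and disease prevalence sketched above can be quantified with Bayes' rule. The numbers in this sketch (sensitivity, specificity, adoption rate, prevalence) are made-up assumptions chosen only to illustrate the shape of the problem, not measurements of any real DCT system.

```python
def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """P(truly exposed | app flags a contact), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def detected_transmission_fraction(adoption: float,
                                   sensitivity: float) -> float:
    """Both parties to a contact must be running the app, so detection
    scales with the square of the adoption rate."""
    return adoption ** 2 * sensitivity

# Illustrative numbers: even a fairly accurate app flags mostly
# non-transmission contacts when prevalence is low...
ppv = positive_predictive_value(0.7, 0.95, 0.01)      # ~0.12

# ...and 40% adoption with 70% sensitivity catches only ~11% of
# real transmission events.
coverage = detected_transmission_fraction(0.4, 0.7)   # 0.112
```

Under these assumed figures, most flagged contacts would be false positives, and the adoption-squared term shows why coverage concentrated in some groups and absent in others translates directly into the uneven burdens of follow-up measures described above.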
Digital contact tracing, if deployed on national and international levels, has the potential to touch, figuratively speaking, millions of lives. If we are to fully understand the implications and impacts this technology can have on society, then we must consider it from multiple angles through a diverse set of stakeholders.
Although our various discussion groups were tasked with different topics of concentration, a common theme appeared across the board: the need for trust. If DCT is to be applied on a national and even international scale, users must be able not only to trust that it works, but also to trust that there is a transparent timeframe and scope for this technology. Currently, there is a significant amount of uncertainty surrounding the effectiveness of DCT in fighting COVID-19. Without a certain level of evidence that this technology does in fact help stop the spread of the virus, potential users are reluctant to use DCT applications, as they feel that in doing so they have unevenly traded a level of privacy for an unknown, or even nonexistent, benefit. In addition, there is fear of scope creep, with potential users lacking trust that the governments and companies deploying contact tracing will not use it for alternative purposes post COVID-19. Overall, the majority of considerations, implications and fears surrounding DCT point back to the central concern: a lack of trust in the technology and in those deploying it.
Extending beyond this concern for lack of trust, the various discussion groups dug into the considerations that need to be made in terms of bias, consent and privacy, and the short-term versus long-term implications of DCT. Looking into the potential sources of bias within DCT, we saw concerns surrounding access, such as a lack of proper cell phone coverage in rural areas and a lack of up-to-date smartphones in lower-income households. When reflecting on issues of privacy and consent, there was a general consensus that these two issues are often communicated as binary, when in reality they exist on a scale. Classifying them in binary terms creates complications, but when consent and privacy are allowed to exist on a scale, many concerns are already addressed by the increased control over personal data. Beyond the current application of DCT, there were a plethora of concerns about what DCT could mean in the future for our digital identities. If companies, or even governments, require the use of DCT, what does that mean for individual consent and privacy? Furthermore, if DCT applications are tied to our access to resources, there is an even greater risk of bias, discrimination and "shaming".
A global framework for track-and-trace data capture is possible, but many factors need to be addressed prior to implementation. These include international education and human rights, as well as ensuring that individuals are informed about their data, how they consent to its use, and the implications for them of its capture and storage. Political freedoms need to be protected, and if this technology is misused or a framework cannot be agreed, then treaties such as the CTBT may need to be used to control widespread human rights abuses. We must be positive about innovation and the future, doing our best to build trust between nations and to educate creators and innovators, while empowering everyone to engage confidently with this global issue.