Conversations about AI and disability tend to focus on technological tools that help disabled people perform tasks they otherwise could not carry out. Recently, this idea of help has seeped into common usage through the term “assistive” technology - a notion burdened with assumptions about disability that shed light on deep-rooted social injustices, which I will explore later.
Deployment of so-called assistive tech is widespread, and in many cases it allows disabled people to participate more fully in society. The National Theatre, for example, offers ‘Smart Caption Glasses’ for deaf audience members. The glasses are essentially a speech recognition tool built on a natural language processing (NLP) AI model, displaying live captions within the wearer’s field of vision so that deaf people can follow a performance.
This is just one example, but such language processing models also power tools deployed more widely and at scale, helping people whose disability affects their hearing or vision to perform necessary everyday tasks.
Ways in which Assistive Technology Benefits Society
It’s clear that assistive technologies have a profound impact on the daily lives of users. On the website of Scope, a UK disability equality charity, users of assistive technology describe the instrumental role certain tools play in their lives. Raisa, whose disability means she cannot physically type with any proficiency, talks of her reliance on Apple’s voice recognition software to perform her “most important job” of “dictating and replying to emails”. For her, such assistive technology is paramount, and she believes it can “help you live the life you chose to live.”
Assistive tech is not limited to physical disabilities. “No Isolation”, a Norwegian health-tech start-up, deploys what we might call an assistive tool to help address social isolation and loneliness. According to No Isolation’s research, two of the most vulnerable groups are young children with long-term illness and those over 80 years of age. No Isolation addresses this vulnerability for young children by deploying its AV1 robot to assist those who cannot physically attend school. The robot attends classes by proxy, employing NLP and machine vision AI systems that allow the child to engage with the teacher and with the other children.
Challenging The Notion of “Assistive” Tech - The Language of Assistance
Whilst such technologies appear to be taking a positive step to bring about equality and opportunity for disabled people, we might want to challenge the common usage of the term “assistive”.
As Richard Ladner from the University of Washington points out, the term “assistive” technology is in some respects redundant. It’s hard to think of an example in which technology is not assistive in some way. Technology, generally, makes certain tasks possible or easier to do. So what is the distinction between “assistive” tech and plain old tech?
This is not immediately clear. Consider people with “correctable vision” - a condition not technically considered a disability. The glasses or contact lenses that improve their vision are not commonly described as “assistive technology”, yet surely glasses and lenses are assistive. This raises the question Ladner asks: “Why is it that people with disabilities have assistive technology while the rest of us just have technology?”
Given the redundancy of the word “assistive”, the term assistive technology seems to indicate that disabled people require lots of extra help - evoking a sense of dependence and a lack of capability. As Ladner points out, it seems inherently paternalistic to say that disabled people receive assistance, and it fundamentally challenges their identity, freedom and agency as humans - ultimately, it diminishes the autonomy individuals have over their actions.
Moreover, the idea of assistive technology highlights a certain “quick fix” attitude to technology which Mara Mills, Associate Professor of Media, Culture and Communication at NYU, claims ignores important advances of “education, community support and social change”.
These criticisms raise weighty concerns for disability and AI - notably, that the deployment of assistive tech carries a serious threat of power asymmetry between those who design and deploy AI systems and those for whom they are made.
This opens the door to further scrutiny around the design of AI systems that fuel these tools - time to get the magnifying glass out.
Design and Disability
When we think about the design stage of AI systems, specifically those that apply machine learning methods, it’s important to consider the nature of the dataset that an AI model is being trained on.
In general, being excluded from an AI model’s training data causes problems. Concretely, if an AI system is trained on a dataset that contains no images of bald people, then bald people will be missing from the model. As a result, the system won’t recognise bald people - which would, let’s say, make it almost impossible for a machine vision tool to hunt down a Jason Statham or a Bruce Willis.
Moreover, the datasets that many AI systems use unfairly represent a number of groups - most commonly on the grounds of race, gender or disability. Unfair representation is a reflection of how people in these groups have been subject to historic marginalisation and discrimination.
These historic patterns are imprinted in the AI model datasets, which in turn are used to train an AI system - resulting in a so-called “algorithmic bias”. Ultimately, this bias leads to yet more unfair outcomes for people in these groups - pouring ever more fuel on the discriminatory fire.
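To make the exclusion problem concrete, here is a minimal sketch of the mechanism. It uses an invented toy dataset and a simple nearest-centroid classifier (not any particular production system): because the training data contains no “bald” examples, every query is forced into one of the two classes the model has seen.

```python
import math

# Hypothetical training data: feature vectors for two classes only.
# There are no "bald" examples, mirroring the exclusion described above.
training_data = {
    "short_hair": [(1.0, 2.0), (1.2, 1.8)],
    "long_hair": [(5.0, 6.0), (5.5, 5.5)],
}

def centroid(points):
    """Mean of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in training_data.items()}

def classify(point):
    """Assign the label of the nearest centroid - there is no 'unknown' option."""
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# A query from an unseen group is still forced into a seen class:
# the model cannot say "bald", only "short_hair" or "long_hair".
print(classify((9.0, 9.0)))  # prints "long_hair"
```

The point of the sketch is that the model’s output space is fixed by its training data: whoever is absent from the data is not merely misjudged but literally unrepresentable.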
AI Now, an NYU research body addressing the societal impacts of technology, calls this vicious circle “discriminatory logics” - that is to say: “those who have borne discrimination in the past are most at risk of harm from biased and exclusionary AI in the present”. This ongoing pattern reveals an inherent toxicity in AI systems for disabled people, one that has rarely been at the forefront of wider AI bias concerns.
Moreover, it is particularly worrisome that the AI systems which use these encoded “discriminatory logics” carry a certain unchallengeable authority. There are countless cases of disabled individuals being discriminated against by AI systems in high-stakes decision making.
For example, as Kathryn Zyskowski observes of Amazon Mechanical Turk, a crowdsourcing marketplace for outsourcing virtual tasks, certain disabled clickworkers cannot pass the CAPTCHA (a common reverse Turing Test that helps to prove an individual’s humanity) or cannot complete work quickly enough, and are consequently rejected by the platform.
We need to be really careful about how we assign authority to technology. In the case of decision-making AI systems, their authority often hinges, rather tenuously, on the fact that they are the product of powerful companies that employ clever people to build technology very few people actually understand.
Immediately, this raises concerns of explainability and accountability around AI: we need to direct our attention to the companies that produce these tools and a) demand an explanation of the decisions that the system makes but, more importantly, b) hold them accountable when the decisions are discriminatory.
The idea that misplaced authority leads to discrimination against disabled persons reveals a deeper social injustice in the AI realm. That is: there is an imbalance in power between those who design and deploy AI systems and those who are “classified, ranked and assessed by these systems”. Ultimately, as I will suggest, to address this imbalance we need to look at concerns of AI bias and disability in tandem with these deep rooted social injustices.
In the wider tech ethics arena most of the discussion and many of the headlines are focussed on the axes of gender and racial bias, with comparably little literature discussing the treatment of disabled persons in the face of algorithmic bias.
As AI Now points out in its report: “disability has been largely omitted from the AI bias conversation”.
By and large, this statement is justified. Perhaps more importantly, even when disabled persons do seem to be part of the conversation, they are not properly included, and the problems they face go unaddressed. Most commonly, this failure of inclusion takes the form of exclusion and “unfair representation”.
Issues of Exclusion and Unfair Representation
One example in recent years where exclusion from training data had a fatal impact involved an autonomous Uber vehicle that hit and killed Elaine Herzberg in 2018 as she pushed her bike across the road.
In this case, the system’s training data appears not to have included enough images of a person pushing a bike, which led to confusion for Uber’s pedestrian recognition system. Not having enough representations echoes the Jason Statham example above in the way that it is seemingly “unfair”.
If able-bodied pedestrians are at risk of misrecognition due to exclusion and unfair representation in a dataset, we must ask: how do we stop the same thing happening to disabled people in wheelchairs or on mobility scooters?
One solution might be to concentrate our design efforts on fairly representing disabled persons in the training data. We might suggest that a fair representation of disabled persons equates to something like comprehensive representation - one which fully and correctly classifies all kinds of disability.
Clearly this is easier said than done.
Disability is a fluid and vast concept, encompassing an almost immeasurable range of physical and mental health conditions that can come and go over time. This means there are many outliers, and often no two disabilities are the same. As Dr. Stephen Shore famously said: “if you’ve met one person with autism, you’ve met one person with autism”.
Such fluidity is in direct conflict with the rigidity of AI systems. It is hard to concretely account for the multitude and variation of disabilities, which makes mapping disabilities onto AI model classifications seemingly impossible.
Moreover, as AI Now points out, even if we could account for such fluidity, to achieve fair representation “may require increased surveillance and invasion of privacy in the process” - opening another can of ethically questionable worms.
Can We Move Towards Inclusion and Fair Representation?
One option for solving the AI bias issue is to take a technical approach: classifying people into a single variable, for example race or gender, and then testing the system by applying a number of methods to see if it works across a variety of people.
This common technical approach is limited and simply won’t do for disability. Firstly, for the reasons of fluidity mentioned above. Secondly, and more importantly, if we take a social model of disability in which we understand it as a “product of disabling environments and thus an identity that can only be understood in relation to a given social and material context” we see that it’s very difficult to find technical fixes for social problems.
Another way to mitigate the issue of systemic algorithmic misrepresentation is to collect more data representing disabled people. However, simply augmenting the data collection process again raises issues of how this data is collected and how it is classified.
For example, there are a number of grassroots collection efforts within the disability community to gather data in the hope of better understanding and addressing their health. But in cases where healthcare isn’t guaranteed, it is difficult, as AI Now reports, to “ensure that such data won’t be reused in ways that could cause harm” - even if that was not the original intention.
Moreover, the workshop from which the AI Now report emerged revealed that much of the effort to increase the data representation of disabled persons is performed by “clickworkers” who “label data as being from people who are disabled based on what is effectively a hunch”. Paradoxically, the effort to include more disabled people is stunted by classification categories that “effectively exclude many of those they are meant to represent”.
Going Back to the Drawing Board
Although the development of technological tools that help disabled people participate in society is seemingly beneficial and perhaps life-changing, their impact on the wider disability movement is momentary.
Looking at AI and disability through the critical lens of philosophy, this article has traced the issues of “assistance” and bias back to deeply rooted social concerns.
To properly address the issue of bias and disability - specifically the concerns of algorithmic marginalisation and discrimination - we need to focus on disability in its social and environmental context. This is no small task, and probably beyond the scope of this article. That said, more and more work is slowly being done to take steps in the right direction.
The AI Now paper cited in this article is a great resource for an overview of the challenges that face the disability community with respect to AI. It is also particularly constructive in the way it draws our attention to the vital need to include disabled persons in the design stage of tech development.
The resounding message from AI Now is that we should be designing “with, not for” disabled persons - echoing the mantra of the disability movement: “Nothing About Us Without Us”.