Study Reveals Gaps in Autism Inclusivity Among Computer Science Researchers
Nearly 90% of researchers who develop robots for autistic people didn’t bother to ask autistic people whether they need the technologies, says Naba Rizvi, a computer science Ph.D. student in the University of California San Diego Jacobs School of Engineering and a self-identified autistic woman.
Rizvi, who is with the Department of Computer Science and Engineering, raises this point in an impassioned video and in her work that investigates the stereotypes about neurodiversity perpetuated by computer science research. She is the first author of a new study, “Are Robots Ready to Deliver Autism Inclusion?: A Critical Review,” presented recently at the Association of Computing Machinery (ACM) CHI conference on Human Factors in Computing Systems.
In the qualitative study, Rizvi and her colleagues analyzed 142 human-robot interaction (HRI) papers published between 2016 and 2022 that explicitly identified autistic people as the end-users. They sought to determine whether autism is stigmatized in HRI research and to pinpoint how these papers might reproduce systemic social inequalities.
The team concluded that the HRI research papers in their main corpus stigmatize autism and exclude the perspectives of autistic people. Roughly 93.5% of the research in the period studied applied a model that pathologizes autism, focusing on “treating” it as an illness, while many of the papers perpetuated gender and age biases as well as power imbalances. Meanwhile, fewer than 10% of the papers included a representative sample of autistic women.
“The marginalization of autistic people in our society today is multi-faceted,” the authors noted. “It is rooted in the dehumanization, infantilization and masculinization of autistic people and pervasive even in contemporary research studies that continue to echo ableist ideologies from the past.”
The Divergence Between Two Autism Models
Traditionally, studies in HRI research have centered around the medical model view of autism. The medical model, which for decades has been the prevailing model of autism in psychology and medical research, treats neurodiversity as a disorder that needs to be cured, a perspective elucidated by the diagnostic name: Autism Spectrum Disorder, or ASD.
Not surprisingly, this widely accepted, deficit-based understanding of autism has also broadly shaped research in other fields, including robotics. Guided by the medical model, a common goal for robotics researchers is to introduce technologies to help autistic people cease less-desirable behaviors and conform to neurotypical social norms.
In contrast, a growing neurodiversity movement views autism as a difference rather than a deficiency. According to this framework, autism is a distinct neurotype, or type of brain, and autistic individuals offer a valid, albeit unique, way of thinking and expressing themselves.
This viewpoint, known as the social model, promotes accommodations and equality over the pathologization of disabilities. Additionally, it engages the end-users, those in the autism community, to develop appropriate supports. When applied to HRI research, the social model yields robots designed for companionship, entertainment, and other forms of engagement.
It is time for us to stop perpetuating stereotypes on who autistic people are and who they ought to be.
Naba Rizvi
The Prevailing Model in HRI Research
According to the study, only about 6.5% of the HRI research papers in the main corpus applied the social model. The remaining 93.5% applied the medical model, following the lead of psychology research. Consequently, the majority of HRI research papers excluded the perspectives of autistic people and focused on using robotics to treat autism.
“Researchers are out there designing technologies that will help us control our ‘aggression.’ What we really need is to just unpack the trauma of being autistic in a society that just can’t seem to accept that we exist,” said Rizvi.
The data shows 76 studies in the corpus used anthropomorphic and humanoid robots to teach social skills. Another 15 studies used robots that look like animals for the same purpose. One study employed a robot to diagnose “abnormal” social interactions.
Rizvi and her co-authors argue that using robots in mentor roles perpetuates the belief that autistic people are deficient in their humanity and suggests robots are well-suited to help autistic people become more human.
The team also identified 11 papers employing deficit-based language and 27 papers that contrasted “typical” versus “abnormal” development to posit non-autistic people as the norm and their autistic peers as a deviation from it. More than 85% of these papers placed the burden of overcoming communication difficulties entirely on autistic people.
“Fellow scientists, I am speaking to you now. It is time for us to stop perpetuating stereotypes on who autistic people are and who they ought to be. It’s definitely possible for us to start promoting autism inclusion in our work. So why aren’t we doing it?” challenged Rizvi in her video.
Proposing an Inclusive Model for Robotics
Rizvi and her coauthors aim to move HRI research in a more inclusive direction – one that mirrors the autism community’s own perspective. In their paper, the team proposes a series of ethical questions to help HRI researchers avoid common harmful stereotypes of autism and historical misrepresentations. Specifically, they ask researchers to consider the following:
Are autistic people accurately represented in your research team?
Were any assumptions made about the autistic user’s autonomy that would not have been made for neurotypical users?
Is input from non-autistic third parties given more weight than input from the autistic end-users in the design process?
Were the needs of the participants taken into consideration in the research methodologies?
Is your work inadvertently promoting harmful stereotypes?
They also encourage researchers to report participant demographics to help contextualize findings.
Rizvi’s work has been recognized by the National Center for Women & Information Technology, Google, Amazon and CSEdWeek (which prompted a personal note of congratulations from Vice President Kamala Harris), and she has spoken on multiple neurodiversity panels. Rizvi is advised by UC San Diego Department of Computer Science and Engineering Associate Professor Imani Munyaka, the paper’s senior author.