What could the AI boom mean for neurodivergent people?
Artificial intelligence has cemented itself as the latest world-altering technology. It follows the same lineage as electricity, the television, the internet — the list goes on.
Depending on who you talk to, AI is the greatest thing ever or the thing that will destroy the world, with lots of room in the middle for discussion and experimentation.
But within the bigger AI fascination, there’s a smaller yet critical conversation about how this technology might help or harm people who learn and think differently. That’s why we brought in Dr. Amy Gaeta for this week’s Hyperfocus. She’s an AI ethicist and researcher who is also autistic, giving her a unique perspective that she shares on our latest episode.
Timestamps
(03:40) AI in the workplace
(10:16) Disability justice and technology
(13:45) AI’s built-in bias against disability
(17:15) How to find helpful, safe AI uses
(21:08) Chatbot therapy
(25:14) What does the future hold for AI and neurodivergence?
We love hearing from our listeners! Email us at hyperfocus@understood.org.
Episode transcript
Dr. Amy Gaeta: There has been a big turn to many people using chatbots as a form of therapy. These models are very dynamic, they're volatile. So this becomes a question of AI literacy, ultimately.
Rae Jacobson: That's Amy Gaeta. She's a researcher and AI ethicist at the University of Cambridge. She's also autistic, and in her work, she thinks a lot about how AI intersects with the disability community at large.
Amy: Actually, to prepare for this show, I tried out a few AI apps that are designed for people with ADHD. So one of them, which was a highly, highly rated app in the App Store, made me take a quiz to be able to look at the application. So I took this quiz, and this quiz was so long. I was like, nobody with ADHD is finishing this quiz. It was so long. I don't even have ADHD and I was like forgetting about it every two minutes. So I finally finished this quiz and then it was like, 98% you have ADHD. And I was like, I don't think I do.
Rae: It feels like every day we hear something new about AI. It's great. It's terrible. It'll do your homework, or maybe just end up giving you a recipe for glue-sauce pizza. And this is the thing with every big tech boom since the beginning of time: a cycle of anxiety and excitement, of warnings and exultations about how wonderful or dangerous it could be for how we think, act, feel, and learn.
And you know what? Unlike your average panic, these are pretty reasonable, because these big waves of tech do change our culture and behavior, albeit in ways that we don't always immediately understand. Radio changed everything. So did the phone, television, the internet, and social media. And now it's AI.
But the general AI hype and fascination, amply covered elsewhere, isn't why I wanted to talk to Amy. There's a smaller, more pointed version of the conversation playing out around how this new tech might help or harm people who learn and think differently. And Amy has a front-row seat. So today on "Hyperfocus," Dr. Amy Gaeta is here to help us with a big question: what could the AI boom mean for neurodivergent people?
Rae: I have to admit that I think my idea of AI ethics is based around the movie "Terminator." So I was wondering if you could tell me a little bit about what an AI ethicist does.
Amy: So an AI ethicist takes a couple different forms. Some are much more tuned into the technical side of AI, examining questions like: How do we have code that is explainable? How do we have LLMs that can be transparently explained to, say, policymakers? And then there are AI ethicists who are much more interested, I would say, in application. So they might work with policymakers or with designers to ask: Is this a good, ethical use case of AI, and what are the scales of harm or risk that could be done?
And then there's another category, which I would say I fall into, which is more interested in social justice, ultimately. So that's positioning AI into a longer trajectory or history of digital technologies and technologies more broadly that have been used in ways that have harmed primarily marginalized communities. So we're trying to think about how do we design better AI or use it differently or perhaps not use it at all.
(03:40) AI in the workplace
Rae: So I want to talk about AI in the workplace first. I know a lot of people have employers who are very AI-focused, and I'd really like to know more about how you see AI playing out in the workplace for people with disabilities.
Amy: Yes, absolutely. I think the workplace is such a good place to start because so many narratives about disability ultimately come from disabled people's relationship to labor. Many people come to understand themselves as disabled primarily because they cannot do certain tasks associated with a job.
So in the workplace, you know, there are various economic incentives right now for companies of all scales. AI is increasingly being shoved down our throats as the future. And there's this narrative that the only way you're going to stay ahead, the only way you're going to be a viable business, is if you incorporate AI in meaningful ways. And what that means is up for debate.
Rae: When Amy says "up for debate," she means it. Because what does it mean to "incorporate AI in meaningful ways"? Who gets to decide what "meaningful" means? In many cases, Amy says, what it means in a corporate context is using AI to boost productivity at all costs, often in ways that are actively harmful to employees, especially employees who are neurodivergent.
Amy: For instance, there is a lot of work on AI-based surveillance, which has definitely impacted all workers, but I can see it specifically impacting disabled workers. So this is used in a lot of manual labor jobs. So it's security cameras, basically, that can track employees' productivity to make sure they're meeting certain quotas, to make sure they are not so-called slacking off or taking too much time. So it kind of becomes a computerized version of your boss just standing there watching you work all day.
Rae: Amy says this also happens in white-collar jobs with things like mouse or screen trackers that record how consistently or quickly employees are working. It's data that, from a corporate perspective, could be couched under "employee productivity," but that, as a worker, feels invasive and pretty unnerving. And these, like many other productivity-focused tracking technologies, can be devastating for neurodivergent people for so many reasons.
Amy: Why I can see this directly impacting disabled people in particular is that we're already hyper-scrutinized at work, both by external people and by ourselves. I don't know about you, but even though I know all this stuff about disability theory and anti-ableism, I'm still always pushing myself to do more and to be better, to make sure nobody, you know, counts me as less valuable because I'm disabled.
Rae: Yeah, you still sort of feel the need to mask and to be the best person that you can be in the context without anyone seeing that you're being held back in any way.
Amy: Absolutely, right? So these AI-powered surveillance tools just amplify that. But there's also the fact that disabled people experience different temporalities, right? I can't work eight hours a day. It would be catastrophic for me. I need to work in like two-hour shifts, take a break, have some wind-down time, sit in a room with a pillow on my eyes and experience no light or noise, and then I can work again. A linear shift doesn't work for me. So how does that work in a manual service job? How does that work for disabled workers if they are held to these really ableist productivity standards that are forced upon them? So I find that quite concerning.
(10:16) Disability justice and technology
Rae: Amy isn't only concerned with AI's limited idea of what intelligence is. She's concerned with what AI's limited idea of what humanity might be.
Amy: So a lot of my perspectives on disability come from a framework called "disability justice." This is a foundational framework that tries to understand disability not just as an identity or a medical condition, but as something that's intertwined with capitalism, with the environment, and with race and class and gender.
So its creators sought to make disability justice something truly transformational. Instead of saying something like, disability justice is when someone can do better at their job and feel like they're flourishing, a disability justice framework would step back, look at the entire AI supply chain, and ask: OK, what's going on here? How is disability really entangled with this technology?
So at its core, right, you have to think about artificial intelligence as built on a very limited understanding of intelligence. Numerous studies have shown how much knowledge about AI comes from the West. Many models are developed primarily in America or a handful of European countries. So there's a very limited idea of intelligence at work here, one that's directly at odds with what we talk about in neurodivergent communities, which is embracing and trying to support flourishing ideas of what an intelligence, a mind, a person may be.
So I always try to hold those in mind when I'm thinking about what is a good application of AI for somebody or what's a bad application.
(13:45) AI’s built-in bias against disability
Rae: It is really easy, and not just as a neurodivergent person seeing these technologies infused into our daily lives, to see this as a massive good or a massive bad: there's so much potential and so much potential for harm. Where do you see bias against disability built into these systems?
Amy: So increasingly, there's been more and more scholarship on disability and how it's represented. I worked with "Wired" a few months ago on a piece about this, about Sora, one of OpenAI's video-generation models. We were interested in creating a framework to understand, OK, how does it produce a representation of a disabled person? And how does it do that if we put in descriptors like "woman" or "Black" or "gay"? How does it handle that complexity of identity? And it turns out it doesn't. Very often it will do something selective, like ignore part of the prompt, or it will produce a very limited understanding: almost every time you put in "disabled person," it would give you an image of someone in a wheelchair.
Rae: Ah.
Amy: Which is the only way it could describe disability.
Rae: So these models that we're turning to because, like you said, they can remove a barrier, like making writing an email easier, are also trained on datasets that reduce us to one image, one thing, one way. And that is very limiting. Or can be, anyway.
Amy: Absolutely.
(17:15) How to find helpful, safe AI uses
Rae: I know so many people, people with learning disabilities, people in the neurodiversity community, who are choosing to adopt AI for reasons that are very personal, that feel very helpful to them. I know somebody who mentioned using it to do meal planning. I have a friend who uses it to make herself a schedule every day. These little things go back to that kind of chore-helper robot mode. There are these highly personalized applications that people are creating for themselves. Are those something you see as having a benefit, because people are able to repurpose this tech in a way that is individualized, personalized, and helpful regardless of the intent behind it? Or do those also bring concerns for you?
Amy: I think it's a mixed bag, where I do think it's a benefit for some people. And, you know, I'm not interested in shaming by any means. These large models are built for personalization, right, especially general-purpose models like ChatGPT. They're meant to be used in all these kind of unique ways, and I think there's something really inventive about people finding ways to personalize these tools. So if someone is using it for meal planning, I would say that's fine, as long as that person has thought about what that means for them. Because there's a question of agency there.
As with any assistive technology, there's a question of consent. You're giving away a bit of your agency to say, "OK, I'm going to let this technology translate all of my speech into text," say, or "fix all of my typos," whatever it might be. You're handing a task that you might do yourself over to this technology. So if people want to use it for these small-scale tasks, I would say: think about what that means long-term, when you're giving something up. And if someone says, "I don't care, I just need to have food," then that's their choice, as long as they understand it. So this becomes a question of AI literacy, ultimately, and of critical thinking skills, when you're considering what this tool means in a long-term capacity.
Rae: When you mention critical thinking skills, that brings me to another big question that I've heard bandied around about AI, which is in the LD community, we do a lot of skill building. It's like you have to really practice skills that come very easily to other people. For example, I have a terrible time with time management. I struggle a lot with prioritization. I have all of these sort of challenges that are very executive functioning-based. I have a lot of systems that I use, which could be sort of similar to assistive tech, to manage them: timers, I use reminders on my phone, all of these things that are sort of AI-ish. I find those very helpful because those are things that I couldn't train my brain to do. It's just a gap in my ability. I have dyscalculia; I use a calculator. My brain is not going to suddenly learn to do math, right? And I'm okay with that. That for me is something where I'm willing to cede that amount of agency, like you said, to this tech because it makes my life genuinely easier.
The other side of that, which I've been hearing a lot about, is that when you cede things to tech that maybe could be trained, or where the practice itself could be helpful, it can have a negative impact on critical thinking skills. So if you're trying to find ways AI can be helpful to you, like my calculators and emails and reminders, while still trying to be thoughtful, careful, and conscious of the places where using your brain in a very active way is really important, how do you find that balance?
Amy: Ultimately, what we need to ask ourselves is: How important is doing the task itself to your actual learning or to your lifestyle? So I'm an academic. I read a lot, and reading for me is fundamentally important. It is the bedrock of scholarship, right? To be able to thoughtfully engage with someone's words, produce your own original ideas about it, and then respond. So for me, one of the things that concerns me most is academics who are using AI to summarize research papers for them.
This is particularly concerning because, for me, reading is a process task, where the summary doesn't really matter, right? What I learn is what I learn from reading it. The same way that, for me, writing an essay is how I learn. I figure out my ideas as I'm writing. So I could never ask AI to do that task for me, because then I'd learn nothing. The same can apply to writing an email. If writing an email to a loved one is a process you need, if it makes you feel closer to them, then you wouldn't want to outsource that task to AI.
But if you're writing a boring email because you want to return a shirt to a clothing store and you're like, "I'm really tired, I'm just going to use AI to do it," that's not a process-oriented task. That's an output task. That's: I just need this done. It doesn't matter if I learn anything from writing this email. You can send that one off. So I think that's how I'd draw the distinction: Is this process-oriented, or is it output-oriented?
(21:08) Chatbot therapy
Rae: One thing that I think about a lot with AI, and I'm very curious about your take, is that there's a big overlap between people who are neurodivergent, especially with learning disabilities or autism, and people who have mental health challenges, whether quite serious or ones that come and go: anxiety, depression, bipolar disorder. We know there's a high correlation. I keep hearing these genuinely unnerving stories about people using AI for therapy, and I know it's being integrated into a lot of online talk therapy and telehealth models. I'd be interested to know what you think about that, and how it might affect neurodivergent people differently than the general population, if that's something you feel you can speak to.
Amy: So this has been kind of a big point of controversy in a lot of AI communities, and I think it has to do with kind of short versus long-term harms or benefits, but also thinking kind of smaller scale and larger scale. Right? So on the small scale, you know, if someone's having a really bad day and they want to go vent to a chatbot instead of their wife, for instance, you know, that doesn't sound like the worst thing in the world to me.
It can also be the case that for vulnerable people, people who are already experiencing some kind of distress, whether formally recognized in a diagnosis like anxiety or depression, or informally, say someone's passed away and they're grieving, it can be initially quite beneficial to just have some positive reinforcement, to have this other kind of voice speak to you. In the long term, though, we know that AI is really good at reinforcing. Unless you directly tell an AI model, "Hey, argue with me about this topic," it's generally going to keep agreeing with you and keep reinforcing things.
So if you're saying something like, "Oh, I'm having a really horrible day, my girlfriend broke up with me, where can I go to cry?" it'll say, "Oh, I'm really sorry, that seems so difficult. Here are 10 places near you where you can cry alone," or something, right? It's still ultimately very task-oriented. You're giving it an input, and it's designed to give you the output it thinks you want. So critical dialogue with a chatbot just doesn't happen.
So there sadly have been many cases of vulnerable people using these chatbots for an extended period of time and then increasingly having feelings of suicidal ideation. It's difficult to say, "Oh, this chatbot is causing it." I think that would be unfair. But there does seem to be some kind of relationship between people who are already in a difficult space then using these chatbots and then experiencing an amplification of those difficult feelings. So for neurodiverse people who are already susceptible to experiencing, let's say like more difficult or distressing mental health conditions...
Rae: And impulsivity.
Amy: And impulsivity, absolutely. I think that's quite concerning for me. One case was quite personal to me. I have a history of eating disorders, which is deeply related to my own learning disabilities. So the National Eating Disorders Association, was it in 2023?
Rae: Oh, I heard about this.
Amy: Yes. So they integrated a chatbot into their website because they were receiving so many visitors, and they thought, "Oh, this is a great way to support more people." At first the chatbot was just giving advice and general support: "Here's what you do if you're having difficult feelings around your body, around your food behaviors." And then very quickly the chatbot reverted to certain biases in its training data and started telling people, "Oh, here's how you can lose weight," which is, of course, one of the last things someone with an eating disorder ever needs to hear. So these models are very dynamic, they're volatile. They can very quickly churn out information that is surprising and just very wrong. We need to be quite careful when we're using them, especially if we're part of communities that are already vulnerable.
(25:14) What does the future hold for AI and neurodivergence?
Rae: I imagine the truth is neither of those extremes, or maybe both, but something with a middle path. And I'm wondering, from your perspective, where does this go for neurodivergent people? Give me the least hysterical prediction you can.
Amy: You know, with any emerging technology, there are always these kinds of extreme ends that get touted so heavily in the media: AI is the end of the world, or AI is the only future of the world we could possibly have. And the reality is often quite different. And like you said, sometimes there is a middle path.
I think we're at a bit of a turning point now, where AI is heavily integrated into a lot of places, but it's not yet everywhere. And it's still the case that a lot of people have a choice. That's unlike something like a smartphone, where increasingly you don't have a choice: you need an application to pay for parking; I need an application to show my immigration status when I enter the U.K., for instance. The smartphone has become compulsory for moving through daily life.
So we're at this turning point where people can still make a choice about whether or not they want to try out this tool and make it part of their life. I encourage people, especially neurodivergent people, to think carefully at this turning point about how they're going to use it.
For me, one of the most important promises of neurodiversity, which is just the fact that everybody has a different brain, is that difference itself. And I'm concerned about AI flattening that difference the more people use these tools for cognitive tasks that, again, really should be thinking tasks.
So I think the middle path is that AI will hopefully give rise to a lot of interesting, new, and innovative accessibility tools that ultimately help a lot of people. And I think it will also inspire, and I definitely already see this, a lot of disabled or otherwise neurodivergent designers, computer scientists, and coders thinking more carefully about how to design models and datasets that actually have the best interests of their communities at heart. So we will see, and we already are seeing, smaller-scale AI models being built in opposition to the large-scale models run by companies that may not always view disabled people in the best or most flattering light, we could say. So the middle path is ultimately about thinking carefully about your needs, about what you want out of these tools, and about what you really want to maintain for yourself.
I think this is ultimately about what matters to you, right? If your creativity is really important to you, then you may not want to use AI-generated art, for instance. But if you're building a website and you really just need some new images, then maybe you'll use AI to produce them. So it's ultimately going to come down to what people care about.
What I would really encourage neurodivergent people to do is also think of the collective here. It's easy to revert to this idea of, "Oh, everybody will just use AI in their own individual ways." But we are all ultimately a community. The choices we make will impact other people in many ways, especially when you're using AI to communicate with people or to replace labor. So I'd encourage us to think about how our relationship to AI may affect our relationships to other people.
Rae: That's a beautiful sentiment to end on, because it really is what you've been saying this whole time: this isn't all good or all bad, but it is something that demands an enormous amount of consideration, not just from a personal perspective but from the collective perspective. AI is here, but what it becomes is, at least for now, up to us.
Amy: Absolutely. We're at a turning point, right? It is still kind of up for grabs in some ways. And I love that there's been a bigger push in academia and disability studies scholarship to really center the voices of disabled people. So there is still a lot of good that can be done here. And it's important that we don't treat it as if all hope is lost or all futures with AI are bad. It's not about technophobia; it's a turning point for critical thinking about what we want this technology to be.
Rae: If you want to learn more about Amy's research, and there is a lot of great stuff there, we'll have links to her research portfolio and other work in the show notes.
"Hyperfocus" is made by me, Rae Jacobson, and Cody Nelson.
Our music comes from Blue Dot Sessions. Our research correspondent is Dr. KJ Wynne. Video is produced by Calvin Knie and edited by Alyssa Shea.
Briana Berry is our production director. Neil Drumming is our editorial director. Production support provided by Andrew Rector.
If you have any questions for us or ideas for future episodes, write me an email or send a voice memo to hyperfocus@understood.org.
This show is brought to you by Understood.org. Our executive directors are Laura Key, Scott Cocchiere, and Jordan Davidson.
Host

Rae Jacobson, MS
is the lead of insight at Understood and host of the podcast “Hyperfocus with Rae Jacobson.”