Who owns your brain data? Neurotechnology and neural privacy (Conversations from Davos)

Neurotechnology is moving rapidly from hospitals to everyday consumer devices. Headsets that promise better sleep, focus, or productivity may also collect neural data that reveals cognition, emotion, and mental health.

In this conversation, Stephen Damianos, executive director of the NeuroRights Foundation, talks to host Dr. Maureen Dunne about the emerging field of neurorights and the urgent policy questions surrounding brain data, privacy, and consent. Drawing on work with neuroscientists, policymakers, and industry leaders, he explores how new technologies can decode brain signals — and what that means for personal freedom.

What happens when technology can decode thoughts? And how can safeguards ensure that innovation benefits everyone, including neurodivergent users?

This special season of Minds at Work is brought to you by Understood.org and the Davos Neurodiversity Summit.

Maureen Dunne: Welcome to "Conversations from Davos", a special season of "Minds at Work" brought to you by Understood.org and the Davos Neurodiversity Summit. Here we are unpacking the most important themes from this year's summit. I am Maureen Dunne. I am a cognitive scientist, author, and the founder of the Davos Neurodiversity Summit.

Today it is an absolute pleasure to have Stephen Damianos as my guest. Stephen is the executive director of the NeuroRights Foundation, where he leads efforts with neuroscientists, policymakers, and industry leaders to ensure that neurotechnologies are leveraged for social good and that we put safeguards in place to prevent misuse.

This, of course, is something of particular interest when it comes to protecting and empowering users who are neurodivergent. Stephen is also a research affiliate at the Neurotechnology Center at Columbia University. He has a doctorate from the University of Oxford, and we both have a passionate interest in where the future is going and in issues pertaining to ethical neurodata frameworks. There is so much to discuss on this topic, and I am really looking forward to this conversation. Stephen, welcome to the podcast.

Stephen Damianos: Thank you very much for having me, Maureen. It's a pleasure.

Maureen: To get started, I think it might be good to define some of the terms we're using, because many people might not be familiar with them. So to start us off, could you explain: what is neurodata?

Stephen: Absolutely. Well, I would even suggest taking a step back and saying what is neurotechnology? So when we're talking about neurotechnology, we're talking about any device that is capable of monitoring or manipulating the brain.

So if a technology is capable of recording the brain or in some way altering the brain, then we're in the world of neurotechnologies. And even within that, we can distinguish two classes: invasive ones, which require implantation or neurosurgery, and wearable, non-invasive ones that you can take on and off.

So neurodata, the data collected by neurotechnologies, refers to measurements of neural activity.

Maureen: One thing that could be helpful along those lines is to clarify what kinds of things neurodata can reveal. Attention, emotion, intention — what can we actually infer from this kind of data?

Stephen: Absolutely. All of the states you've just mentioned can be inferred from neurodata. And I should note that when we're talking about neurotechnology, we're really talking about the neuro-AI frontier, the convergence of neurotechnologies and artificial intelligence.

These technologies utilize AI systems to decode and make meaning of brain scans and neurodata. And so already, using wearable neurotechnology, it is possible to access information about people's cognition, their mental health status, and their neurological health, and there is a growing ability to decode even inner speech and thought using AI technologies.

Essentially, you can think of these devices as being able to access information about your cognition, about the way your brain works, about your emotional states, and, increasingly, about what you may be thinking.

(05:51) The potential benefits and significant risks neurotechnology poses for neurodivergent individuals.

Maureen: Something I'd want to ask you, then: could you share a concrete example of how, say, your or my neurodata might be used in ways that we might not want it to be?

Stephen: I absolutely can. So these are magnificent scientific breakthroughs that have real potential to transform the way that we relate to ourselves and to others. But, of course, like with any technological advance, this brings significant risk. And one of them is mental privacy.

To some, that might sound redundant — mental privacy — because for most of human history, it's been assumed and taken as fact that the contents of your brain are, in fact, private.

Sure, with social media and our engagement with online platforms, there can be trace-based inferences from your digital exhaust and from the footprints you leave of yourself online. But for someone to actually have access to your emotional states, and potentially to what you are thinking, is new terrain for us.

So a concrete example I can give: say you're wearing a device that is marketed as a sleep aid, something you use to help you fall asleep faster and wake up feeling more rested. I would love that, by the way. I'm on my fifth cup of black tea of the day, and I'm not proud of it, so if I could wake up feeling less tired, I would leap at that.

So users may be using the device for that explicit purpose. Meanwhile, the company is, of course, scanning the brain to be able to offer that service. And brain scans are overbroad, meaning that although only a portion of the scan might be required to deliver the service, the company will scan the entire brain.

The company can then use AI algorithms to make inferences about your mental health status. Say, for example, they determine that you are someone with high levels of aggression, or that you have a heightened susceptibility to addiction or alcoholism, or that you have chronic depression.

This information could be shared with your insurance provider, for example, and suddenly you might see a change in your insurance premiums.

Maureen: And so then, given my work, I'm curious about your views on how this could make some neurodivergent people particularly vulnerable to this kind of cognitive data collection. There could obviously also be positive use cases, and it could be both, so I'm curious what your thoughts are on that. Think about someone with ADHD, for example.

Stephen: Sure. So neurotechnologies present enormous benefits to many kinds of users, and individuals who are neurodivergent are one of the sub-demographics that really could have life-changing opportunities with these devices.

This ranges from treatment opportunities, for those who choose them, to what you were mentioning, Maureen: anyone who has ADHD or challenges with focus or attention. Many of these devices are neurofeedback devices, which means they are designed and intended to help people train their brains around a specific function or a specific way of performing. And so certainly, there are many ways in which this could benefit individuals.

But similarly, there are many risks. One of them we've already talked about involves the sale and sharing of data and the ways in which this could be used to discriminate against individuals.

We could think about how this data could inform hiring pipelines, or how it could lead to social stigma or higher insurance premiums. We could also consider the fact that many AI systems will be trained on data from neurotypical brains, which means they might misread neurodivergent brains and lead to harmful downstream effects.

And then I'd also note that we said at the beginning of this conversation that neurotechnologies can do one of two things: they can monitor the brain, but they can also manipulate the brain.

And a real risk here is that some individuals or institutions could pathologize neurodivergent brains and present them as brains that should be fixed, as if there were one type of brain we should all have or one specific way we should think. And so in thinking about preserving diversity of brains, of cognition, of human experience, this is also a potential risk.

(11:00) The current lack of legal protections for neural data and the efforts to pass the "MIND Act".

Maureen: Right. And do you have any ideas about how we move in a direction where all kinds of minds, including people who think and learn differently, are part of these policy and innovation conversations, included in ways that might de-risk that issue?

Stephen: Sure. So at the NeuroRights Foundation, we believe in the power of the technology and we want it to be in the hands of as many people as possible, as safely as possible. But beyond that, we want to make sure that those who have a stake in the outcomes have a voice in the design of the rules and regulations.

At the NeuroRights Foundation, we're at the forefront of neurotechnology governance. We have been behind the first-ever laws to protect consumers of neurotechnology and to treat their neurodata as sensitive and requiring special protections and safeguards.

And we've found growing momentum for this movement; support has been bipartisan and unanimous or near-unanimous. But we're also finding that these conversations are not always happening within the broadest tent.

End users of the technologies and people who are neurodivergent are not often brought to the table and asked what their use cases could be, what their concerns are, and what type of world they want to create rather than just inherit. And so at the NeuroRights Foundation, we're really interested in finding partner institutions and individuals who can be thought leaders with us in creating that together.

Maureen: I know you've been heavily involved in influencing policy and legislation, both at the state level in Colorado and California and with the "MIND Act". For listeners who may not be familiar with that legislation, maybe you can explain a little about what you hope to accomplish there and why it's important.

Stephen: Absolutely. I should note some important background here: the technology we're talking about is not new, but the applications are. Neurotechnologies have existed in medical settings for decades and have pioneered treatments for epilepsy, Parkinson's, stroke, and many other conditions.

And what's new is the proliferation of neurotechnologies outside of laboratories, outside of hospitals, and onto the consumer market. In medical settings, neurotechnologies have many protections and regulations thanks to HIPAA, the FDA, and a number of other regulatory regimes.

But when the technology is taken out of medical settings, even if it is the same technology, those regulations fall away. And so our starting point with our policy efforts at the NeuroRights Foundation was asking: how can we build safeguards into the consumer neurotechnology space, as a way of de-risking engagement with these technologies, but also de-risking the wider space?

Right? I mean, we want the technology to be perfected and to take off and benefit people, but that also requires making it safer and more transparent. And so we engaged with lawmakers in the state of California to close a loophole that we're seeing in a lot of places.

And this loophole is very simple: there are state privacy laws that stipulate what needs to happen when handling particularly sensitive data, but neurodata is not mentioned. That wasn't really a deliberate omission; rather, as we mentioned before, it was not previously conceivable that neurodata would ever be accessed outside of medical settings.

And so now we've been able to make the very narrowly tailored, common-sense argument that if you would want your credit card information protected, and if you would want your geolocation information protected, wouldn't you also want to protect the information that can reveal your mental health, your cognition, your bodily health, and potentially your thoughts and emotional states as well?

Maureen: Right. And if we now think about this topic in a more optimistic direction: if we're able to get this right, what does that future look like in terms of ethical neurodata governance? Can you give some concrete examples?

Stephen: I definitely think that world can exist, and I wouldn't be doing this job if I didn't think it could, because we're trying to build it. It's a world in which people who want access to these technologies can have access to them, and people who don't want to use them are not compelled to.

And I think that's a really important point, especially as employers may increasingly try to integrate neural monitoring into the workplace to determine, for example, who's the most focused or who's the most productive, or as students come to feel that these devices give a competitive edge to their studying or memory retention.

There will be real pressure to use these devices even if you have serious concerns about what they might do, what they might mean, or how they might pull you into some sort of surveillance pipeline.

And so I start there by saying that those who want to and maybe even need to use these devices should be able to, but no one should be compelled to. I think that element of bodily and mental autonomy is really, really central here. We're talking about a technology and a type of data that is so potentially invasive that we need to ensure that consent is at the forefront of these conversations.

The second part of your question was about data governance, and that's a much bigger topic; we could go on for hours mapping out what it actually looks like. But it begins with the user of the device being able to make meaningful decisions and exercise meaningful power over what happens to the data, who has access to it, and where it goes.

And so that also means being able to delete the data, being able to make requests about it, and having it in a portable format that can move with you if you switch companies. It comes down to centering the user's control over the data, not the company's business model.

Maureen: Right. Exactly. And it sounds like the concept of agency becomes central. That is, of course, a really important issue for neurodivergent people: being able to advocate for oneself, but also being informed enough to make these decisions for oneself and have agency.

Stephen: Absolutely. And your work shows this, and we've discussed this before: the community of neurodivergent individuals is not a monolith. This is an umbrella that includes so many different people with different kinds of brains and different kinds of experiences.

And so, as we touched upon a little bit before, there certainly is the risk that some people might say these should be used as treatment, whereas other people might not want or feel the need for any form of treatment. Others might say, okay, there's a need to fix or homogenize the brain in some way. Some might be excited about that. Others might feel that that's absolutely erasive and violent.

And so again, it's getting to the heart of: can you choose what happens to your body? At the NeuroRights Foundation, our work focuses on five pillars, the five neurorights. These are mental privacy, the right to agency, the right to identity, equal access to the technologies, and non-discrimination as a result of using those technologies. Those are the pillars of the world that we are trying to build.

(18:57) A landmark report on the privacy practices of consumer neurotechnology companies.

Maureen: I have to ask, because I know a lot of listeners of this podcast are business leaders, and I think many of them would be very interested in the research you did for the "Safeguarding Brain Data" report. I remember when I read it myself, I was astonished by some of the results. Could you briefly summarize some of the statistics you found? I think they could be useful to CEOs and business leaders who may be thinking about entering this space.

Stephen: Absolutely. Just a bit of context for the report Maureen is talking about: this was a report my team published in 2024 after analyzing the privacy policies and user-facing documents, like terms of use and terms of service, of 30 direct-to-consumer neurotechnology products. So we're looking at products that do not require any type of medical intermediary to access.

One of our main findings was that 29 of the 30 companies, so nearly 97%, appear to have total access to the consumer's neurodata, with no meaningful limitations on that access. Many of the companies did not even mention neural data at all, so consumers really did not have adequate information about the companies' data practices or about their own rights as users.

We also found — and this is a bit more of a technical point — widespread ambiguity about whether companies consider neural data a form of personal data. Only 13 of the companies, so about 43%, even mention neural data in their policies. So there was enormous ambiguity about storage and collection practices.

And we found that over 50% of the companies have explicit provisions that allow for the sharing of this data. I could go on and on, but essentially, the picture that emerged was that the data can be scraped, collected, and retained almost indefinitely, and also shared and sold.

And I do want to quickly mention this element of retention and why it's really important. Unlike many other types of data, the sensitivity of neural data increases over time. As neurodata sets grow, and as AI algorithms grow in sophistication in parallel, the ability to decode a brain scan grows enormously.

So what you can decode from a brain scan today, in 2026, is just a fraction of what you will be able to decode from it in a year, in several years, in many years. And this really complicates notions of consent: do you actually know what you are handing over when you use this device?

One of the examples I like to use is that if I were transiting through an airport and the TSA agent asked me to open up my carry-on bag, I would know exactly what they would and would not find in that bag because I packed it and I know what's in it.

But if a company is analyzing and selling and sharing and retaining indefinitely my brain scan, I have absolutely no idea what they can and cannot access today, in five years, in 10 years, and who will be able to access whatever they find in there and infer from it. And so this is really concerning and destabilizes some of the ways that we traditionally think about consent.

Maureen: Right. And as you mentioned, I think it won't be long before this topic is front and center. So it's good for the business leaders listening right now to get some early insight into why this is such a critical topic.

I know we're starting to run out of time here, so I just want to give you the opportunity to add anything that hasn't come up in our discussion so far, anything else you think is really important for listeners to understand.

Stephen: I do have to put in a little plug here: this is a fast-approaching issue that in many ways is already here, and I find that it has relevance to everyone. Which is actually something beautiful about it, right? I mean, we all have brains. We can start there.

And then sometimes I also ask people: does anyone in this room, either you or a loved one, have a history of any mental health disorder, neurodivergence, or memory loss? Do you perhaps have an aging parent who is experiencing dementia or Alzheimer's?

And almost 100% of the room can raise their hand. And so, again, because we are cognitive creatures, we all have a stake in getting this right.

And if that's of interest to you, I'm very happy to hear from you and explore potential collaboration. We are growing our team, and we're looking for new partnerships, new donors, and new research projects. This is a really exciting and necessary field of work, but it's one that needs a lot of hands on deck.

Maureen: Yeah. The work you're doing is really exciting. And as we talked about, this is just really important to the neurodiversity community; it's a topic that hasn't reached our community as much as I think it will in the coming months and years. So I really appreciate the opportunity to have this conversation with you, and I want to thank you so much for joining us.

Stephen: Oh, well, thank you very much. It's been my pleasure and I'm looking forward to continuing the conversation as well.

Maureen: As we close this special "Conversations from Davos" series, one message has become clear: Neurodiversity is not a niche issue. It is foundational to the future of leadership, innovation, and even global competitiveness across enterprise, higher education, culture, and policy.

One pattern emerged from these conversations: neurodiversity is not merely an accommodation strategy; it's an infrastructure question. How we design learning environments, develop talent, cultivate neuro-inclusive leadership, and steward emerging neurotechnologies and artificial intelligence are all systems decisions.

And in an era of accelerating technological change, institutions are being redesigned in real time. And while this special series comes to a close, the conversation does not. You can continue hearing powerful insights from business leaders and changemakers on "Minds at Work" brought to you by Understood.org and through the Davos Neurodiversity Summit community.

These dialogues continue year-round, connecting enterprise, education, policy, research, youth leadership, and lived experience in an ongoing global leadership circle. What began as conversations in Davos is becoming sustained systems change. Thank you so much for being part of this journey, and the work continues.

Nathan Friedman: You've been listening to "Conversations from Davos", a special season of "Minds at Work" hosted by Dr. Maureen Dunne and brought to you by Understood.org and the Davos Neurodiversity Summit.

If you want to know more about our guest today, please check out the show notes. For those looking for resources to better advocate for themselves and others at work, visit u.org/work. And to learn more about the summit and Dr. Maureen Dunne, visit davosneurodiversitysummit.com.

"Minds at Work" is brought to you by Understood.org. Understood.org is a non-profit organization dedicated to empowering people with learning and thinking differences like ADHD and dyslexia. If you want to support our work, please donate at understood.org/give.

This show is produced by Julia Subra, Allison Haklander, Max McKenzie, and me, Nathan Friedman. The show is mixed by Justin D. Wright. Briana Barry is our production director and Laura Key is our executive director.

"Conversations from Davos" was produced in collaboration with the Davos Neurodiversity Summit. Each year alongside the World Economic Forum, leaders gather at DNS to explore how empathy and human-centered design can reshape work, education, and our key institutions. To support DNS 2027, go to davosneurodiversitysummit.com. And thank you so much for listening.

Host

  • Nathan Friedman leads the multifaceted brand strategy, product marketing, consumer engagement, communications, creative, and production functions.
