A behind-the-scenes look: 3 ways Understood.org makes sure we get the facts right

Every day, more than 40 million people turn to tools like ChatGPT for health questions. And most Americans think AI-generated health information is accurate.
But research from Stanford and Harvard found that leading AI models can generate harmful or incorrect medical advice, with errors appearing in a large share of responses.
AI-powered chatbots are a common way to search for answers — at home, at work, and at school. This technology isn’t going anywhere. That makes one thing clear: Accuracy is critical.
At Understood.org, we want to be sure you can trust the information we share. Our AI-powered chatbot, the Understood Assistant, draws on expert-vetted resources to deliver tailored answers. It offers reliable information, whether you’re learning the basics, trying to build better routines, or navigating tough conversations with teachers or doctors.
Here are three ways we make sure we’re offering information you can rely on:
1. The Understood Assistant draws only from our own content, not the open internet.
The Understood Assistant uses the same underlying technology as tools like ChatGPT or Claude. But there’s an important difference: It’s intentionally constrained.
Unlike tools that pull from across the web, the Understood Assistant draws only from our own expert-vetted resources. Answers aren’t crowdsourced or scraped from public message boards. Expertise is in the assistant’s DNA.
The assistant also flags what it can and cannot do. It doesn’t give medical advice. And it can’t replace personalized, professional guidance. Being upfront about its limits reduces the risk of misinterpretation.
2. The content it draws from is vetted by our own experts.
Other chatbots generate answers from text gathered anywhere on the internet. The Understood Assistant is built to pull only from Understood’s own materials.
What’s so special about our content?
Our articles, podcasts, tools, and digital experiences are reviewed by experts across fields like education, behavioral health, and public policy. These reviewers include medical doctors, psychologists, public health specialists, educators, and more. Just last year, they contributed almost 700 hours to content creation, review, and consultation.
3. Experts — real, human ones — continuously update it.
Accuracy isn’t a one-time check. It’s an ongoing process. Research on learning and thinking differences is constantly evolving, so Understood’s experts regularly update our content library to reflect the latest knowledge and best practices.
That same attention also applies to checking and updating how the Understood Assistant works. We’ve developed a rigorous process in which experts routinely review the assistant’s responses to make sure they’re accurate, relevant, and designed with both helpfulness and safety in mind. User feedback plays a critical role, helping flag answers that need improvement.
This human-in-the-loop approach means the assistant is built to get smarter and more reliable over time.
When it comes to information you need to navigate learning and thinking differences, “close” isn’t good enough. You deserve answers you can trust.