In a groundbreaking study published in Science Advances (Horikawa, 2025), neuroscientists have unveiled a technology that edges us closer to reading minds — quite literally. The study, titled “Mind Captioning: Evolving Descriptive Text of Mental Content from Human Brain Activity,” introduces an artificial intelligence model that can generate descriptive text of what a person is viewing or recalling, by decoding patterns of brain activity captured through fMRI scans.
From brain signals to sentences
The system, developed by Dr. Tomoyasu Horikawa and his team at NTT Communication Science Laboratories in Japan, works in two stages. First, linear models decode the fMRI signals associated with visual perception into semantic features, the brain's internal representation of a scene's meaning. Then, a second AI model, trained through masked language modeling, turns those decoded features into coherent text descriptions.
In simple terms, the AI reads your brain’s responses to what you see or imagine, and turns them into sentences like “A person jumps over a deep waterfall on a mountain ridge.”
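To make the two-stage idea concrete, here is a minimal, purely illustrative sketch in Python. It is not the authors' code: the data are random, `feature_of` is a placeholder standing in for a real text encoder, and the single masked-word fill stands in for the much longer iterative text optimization described in the paper.

```python
# Illustrative sketch only -- toy data, placeholder models, assumed shapes.
import numpy as np
from sklearn.linear_model import Ridge
from transformers import pipeline

# Stage 1: linear decoding. Map fMRI voxel patterns to semantic feature vectors
# (in the study, features derived from text descriptions of the viewed videos).
X_train = np.random.randn(200, 5000)   # 200 scans x 5000 voxels (toy data)
Y_train = np.random.randn(200, 768)    # 200 scans x 768-dim caption features (toy data)
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)

X_new = np.random.randn(1, 5000)       # brain activity while viewing or recalling a scene
decoded_features = decoder.predict(X_new).ravel()

# Stage 2: text generation. A masked language model proposes wordings; we keep the
# proposal whose features best match the decoded features. The real method repeats
# this kind of edit many times; here we do a single fill for illustration.
unmasker = pipeline("fill-mask", model="roberta-base")

def feature_of(sentence):
    # Placeholder for a real text encoder (hash-seeded random vector, illustration only).
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(768)

candidate = "A person <mask> over a waterfall."
best = max(unmasker(candidate),
           key=lambda c: np.dot(feature_of(c["sequence"]), decoded_features))
print(best["sequence"])
```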
The accuracy is startling. When participants watched videos, the generated descriptions identified the correct clip out of 100 candidates nearly 50% of the time, far beyond random chance. Even more remarkably, the system described recalled scenes with up to 40% accuracy, demonstrating that it taps into mental imagery and memory, not just perception.
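For context on what "far beyond random chance" means here: with 100 candidates, guessing would pick the right one only about 1% of the time. The snippet below is a hypothetical illustration of that kind of identification scoring, not the paper's evaluation code.

```python
# Hypothetical illustration of identification scoring among 100 candidates.
import numpy as np

def identification_accuracy(similarity):
    # similarity[i, j]: how well generated description i matches candidate scene j;
    # the true pairings lie on the diagonal.
    return (similarity.argmax(axis=1) == np.arange(similarity.shape[0])).mean()

rng = np.random.default_rng(0)
accs = [identification_accuracy(rng.random((100, 100))) for _ in range(200)]
print(np.mean(accs))  # ~0.01: the 1% chance level the ~50% result is measured against
```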
Reading thoughts without reading minds
Despite the sci-fi implications, Dr. Horikawa clarifies this isn’t “mind reading” in the dystopian sense. “The method doesn’t decode private thoughts,” he explained in the Science Advances report. “It interprets the brain’s semantic features — the way we internally represent meaning — to generate linguistic descriptions.”
Interestingly, the model achieved coherent text generation without depending on traditional language areas of the brain. This suggests that structured visual semantics — the relationships between objects, actions, and scenes — are distributed across multiple brain regions, even in non-language zones.
Why it matters: A new voice for the voiceless
Beyond its futuristic intrigue, the breakthrough carries profound humanitarian potential. The researchers believe “mind captioning” could one day enable communication for people with conditions like aphasia or paralysis — those who can think but not speak. By turning neural patterns into text, the system could act as an interpreter for minds locked within silent bodies.
“Our framework provides an interpretive interface between mental representations and language,” the study notes, “offering an alternative communication pathway for individuals with expression difficulties.”
The ethical frontier
However, with great potential comes great ethical complexity. The authors caution that as brain decoding becomes more sophisticated, protecting "mental privacy" will be critical. Unwanted interpretation of private thoughts, even if unintentional, could pose new risks in an era already dominated by data surveillance.
“The decoded content should be viewed as an interpretation,” Horikawa’s team emphasizes, “not as a pure reconstruction of the brain’s language.”
While the current system relies on large MRI machines and detailed brain imaging, researchers envision a future where smaller, implantable or wearable brain–AI interfaces could make this kind of decoding more practical. The team is also exploring extensions beyond visual thought — potentially decoding imagined speech, emotions, or abstract reasoning.
As AI grows better at understanding the brain’s complex code, the boundary between thought and technology blurs further. For now, this study stands as one of the most striking demonstrations yet that machines may not just process our data — they may soon begin to understand our minds.