Machines That Think Like Us: Converging Principles in Biological and Artificial Intelligence


In recent years, the boundaries between neuroscience and artificial intelligence (AI) have started to blur. As we build machines that simulate human reasoning and cognition, it's becoming increasingly clear that understanding how the brain works can guide AI—and vice versa. Dr. Michael Halassa, a psychiatrist and systems neuroscientist at Tufts University, has been at the forefront of this intersection, advocating for a computationally grounded approach to mental health through his Substack platform, Algorithmic Psychiatry.


Halassa's central thesis is that both brains and machines operate through computational principles—algorithms that manage perception, prediction, learning, and decision-making. The key difference lies in the medium. While machines rely on silicon and binary logic, the brain uses networks of neurons, synaptic weights, and neurotransmitters. But at a higher level of abstraction, both are solving similar problems: How do we represent uncertainty? How do we update beliefs based on new evidence? How do we flexibly shift attention and goals?
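One of those shared problems, updating a belief as evidence arrives, can be written down in a few lines. The sketch below is purely illustrative (the priors, likelihoods, and observations are made-up numbers, not anything from Halassa's work); it simply shows Bayes' rule applied repeatedly to a single binary hypothesis, the kind of computation both brains and machines are thought to approximate.

```python
# Minimal sketch of Bayesian belief updating: one binary hypothesis
# ("the stimulus is present") is revised as noisy observations arrive.
# All numbers here are illustrative, not experimental values.

def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Apply Bayes' rule for a single observation and return the posterior."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return (prior * likelihood_if_true) / evidence

belief = 0.5  # start maximally uncertain
# Each pair is P(observation | hypothesis true), P(observation | hypothesis false)
observations = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]

for p_true, p_false in observations:
    belief = update_belief(belief, p_true, p_false)
    print(f"updated belief: {belief:.3f}")
```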

One of the most compelling arguments Halassa makes is that psychiatric illness may result from the breakdown of these core computational processes. Rather than simply viewing mental disorders as “chemical imbalances,” he suggests they could stem from specific algorithmic disruptions—malfunctions in how the brain updates predictions, evaluates rewards, or switches cognitive states. This is where the link to AI becomes particularly relevant.

Just as AI models like reinforcement learners use reward prediction errors to adjust behavior, the human brain uses dopamine-mediated signals to perform similar updates. When these signals go awry—as they might in depression or schizophrenia—the result is maladaptive behavior. In this way, algorithmic flaws in both artificial and biological systems can produce strikingly parallel outcomes: bias, rigidity, overfitting, hallucination.
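To make that parallel concrete, here is a minimal temporal-difference-style sketch of a reward prediction error update, the kind of signal dopamine neurons are thought to carry. The learning rate and reward sequence are arbitrary illustrative values, and the code is a toy, not a model from the reinforcement learning or neuroscience literature in any specific form.

```python
# Minimal sketch of a reward-prediction-error (RPE) update from simple
# reinforcement learning: the agent keeps a value estimate for one action
# and nudges it toward each observed reward. Dopamine is thought to carry
# an analogous error signal. Numbers are illustrative only.

learning_rate = 0.1
value_estimate = 0.0            # expected reward for the action
rewards = [1.0, 1.0, 0.0, 1.0]  # observed outcomes over repeated trials

for reward in rewards:
    prediction_error = reward - value_estimate      # RPE: outcome minus expectation
    value_estimate += learning_rate * prediction_error
    print(f"RPE = {prediction_error:+.2f}, new estimate = {value_estimate:.2f}")
```

When the error signal is systematically biased or miscalibrated, the value estimates it produces drift away from reality, which is one way to phrase the "algorithmic disruption" framing in computational terms.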


Michael Halassa also emphasizes that modern neuroscience tools now allow us to test these ideas directly. Using technologies like optogenetics and high-density electrophysiology, his lab dissects how thalamocortical circuits implement core computations such as working memory, attention shifting, and cognitive flexibility—functions often mirrored in AI architectures. For instance, his research on the mediodorsal thalamus suggests it plays a key role in updating prefrontal cortical states, a biological analogue of the gating or control logic found in artificial neural networks.
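As a cartoon of what such gating looks like in an artificial system, the sketch below routes the same input through one of two readouts depending on a context signal. The variable names, weights, and the two "rules" are invented for illustration; this is a generic gating sketch, not a model from Halassa's lab.

```python
import numpy as np

# Cartoon of context-dependent gating, loosely analogous to the idea that
# a thalamic signal helps switch which prefrontal "rule" is in control.
# A context label selects which fixed readout is applied to the same input.
# Everything here is illustrative.

rng = np.random.default_rng(0)
stimulus = rng.normal(size=4)            # shared sensory input
rule_weights = {
    "attend_color": rng.normal(size=4),  # readout used under rule A
    "attend_shape": rng.normal(size=4),  # readout used under rule B
}

def gated_response(stimulus, context):
    """Route the same input through the readout chosen by the context signal."""
    return float(rule_weights[context] @ stimulus)

for context in ("attend_color", "attend_shape"):
    print(context, gated_response(stimulus, context))
```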


What emerges from Halassa’s writings is a vision of convergence. The insights from neuroscience are informing new generations of AI models—ones that are more dynamic, adaptive, and context-sensitive. Meanwhile, AI offers a formal language for describing and simulating the computations that brains perform, helping researchers better frame psychiatric dysfunction.


This is not about creating robots with emotions or consciousness. It's about understanding the brain as a computational system—and using that understanding to improve mental health care. For Michael Halassa, the future of psychiatry lies in this algorithmic approach. And for anyone fascinated by the overlaps between machine intelligence and the human mind, his Substack offers one of the most thoughtful and clinically grounded explorations available.
