Decoding the brain

Nicholas Hatsopoulos, PhD, professor in the Department of Organismal Biology and Anatomy, in his Culver Hall lab Thursday, Feb. 12, 2015, on the University of Chicago campus. (Photo by Jean Lachat)

Enabled by tools with power, sophistication and flexibility reaching once unimaginable levels, research into virtually every aspect of the brain is accelerating. Across the University of Chicago, neuroscientists are taking advantage. Innumerable questions remain, but deciphering the brain’s language and structures, functions and malfunctions, appears to be a question of when, not if. In the first of a four-part series excerpted from the spring issue of Medicine on the Midway, Science Life looks at the efforts of Nicho Hatsopoulos, PhD, professor of organismal biology and anatomy and chair of the Committee on Computational Neuroscience, as he works to decode the brain.

Decoding the Brain

In an office enclosed by the gothic limestone walls of Culver Hall on the University of Chicago Quadrangle, Nicho Hatsopoulos, PhD, puts on a pair of broken glasses (he accidentally left his good ones at home) and opens a program on his computer. A grid appears, 16 columns by eight rows. In almost every square, multiple lines trace a short wave with a single dip and peak, resembling individual heartbeats from an electrocardiogram superimposed on one another.

“What you’re looking at is a recording of our acquisition system. Each panel here represents the action potential, the firing, of a single neuron,” says Hatsopoulos, a professor of organismal biology and anatomy and chair of the Committee on Computational Neuroscience. “We’re recording directly from the brain. This is how we give a subject control over a robot arm or a cursor on a screen with their thoughts.”

He plays a video.  A woman with quadriplegia is sitting in a wheelchair, a few feet away from where a robotic arm stands on a table. She concentrates and imagines moving her arm, activating a flood of neural activity. A microchip implanted in her motor cortex picks up the electrical signal of a few select neurons. It relays the information to a device attached to her scalp. These data — visualized in the grids of the neural interface software — are processed and translated by a computer into commands for the robotic arm. The robotic arm moves, reaches for a water bottle on the table, picks it up and brings it to her mouth. She takes a sip.

Hatsopoulos helped develop the technology that allowed this woman to move a robotic arm with her mind, and has dedicated his career to better understanding how the brain controls motor functions. Through the use of state-of-the-art tools, he and his colleagues have turned a basic understanding of individual neurons into commands for a robotic arm.

The human brain is perhaps the most complex structure in the universe. It contains something in the neighborhood of 100 billion neurons organized in vast and intricate networks. To put this in perspective, if 100 billion people were to stand shoulder to shoulder (picture a crowded concert), they would occupy an area the combined size of Rhode Island and Connecticut. But neurons are also wired to each other, communicating and interacting to perform all the brain’s functions. So imagine every single one of those 100 billion people having phone calls with 1,000 to 10,000 other people. All at once. From these trillions upon trillions of neuronal conversations arises every breath, thought and sensation; everything that every human has ever done.

A microelectrode array, small enough to rest on a fingertip.

The brain makes the Gordian Knot seem like a shape-sorting toy for toddlers, but there are ways to approach the problem without swords. Back in his office, Hatsopoulos displays a microchip used to collect neural data for brain-machine interfaces. Dozens of them could fit on a postage stamp. Up close, the chip looks like a tiny bed of nails, with sharp spines lined up in neat rows. Each of these spikes is a single electrode that, when implanted into the motor cortex of the brain, measures the electrical activity of individual neurons.

“We record anywhere from 30 to 160 neurons,” he says. “This is obviously a tiny fraction of the neurons in the motor cortex, but it’s enough to give control. It’s remarkable.”

Reaching out with an arm involves movement in different directions, Hatsopoulos explains. It turns out that different groups of neurons are tuned to movements in different directions. A certain population might only fire in response to a signal to move an arm left. Another group for down, others for up, and so on. When a person is told to imagine moving their arm in a certain direction, a unique and measurable pattern of neurons fires. And how many times those neurons fire over a certain period of time is the key piece of information.
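
One standard way to describe this kind of tuning is a "cosine" curve, in which a neuron fires fastest for its preferred direction and progressively less as the movement rotates away from it. A minimal, purely illustrative Python sketch (the model choice and all numbers here are assumptions, not taken from Hatsopoulos's recordings):

```python
import numpy as np

def firing_rate(movement_angle_deg, preferred_angle_deg,
                baseline_hz=10.0, modulation_hz=20.0):
    """Illustrative cosine-tuned firing rate (spikes/s) for one neuron."""
    angle_diff = np.deg2rad(movement_angle_deg - preferred_angle_deg)
    rate = baseline_hz + modulation_hz * np.cos(angle_diff)
    return max(0.0, rate)  # a neuron cannot fire at a negative rate

# A neuron tuned to leftward movement (180 degrees) fires most for "left"...
print(firing_rate(180, preferred_angle_deg=180))  # 30.0 spikes/s
# ...and is nearly silent for the opposite, rightward movement.
print(firing_rate(0, preferred_angle_deg=180))    # 0.0 spikes/s
```

Recording many such neurons, each with a different preferred direction, is what makes the direction of an imagined movement readable from the population.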

“If we look at one cell and its firing rate, it doesn’t tell us much,” he says. “But if we look at the firing rates of many cells simultaneously, we can actually predict the direction in which an arm will be moved with a mathematical model. It’s actually kind of simple.”

He walks to a whiteboard and scribbles down an equation.

The decoding equation Hatsopoulos sketches on his whiteboard.

Different people have different definitions of simple, but he elucidates. The formula describes how to calculate a direction of movement over time, d, from a thought. The microchip records the firing rates, f, of individual neurons as a subject imagines moving their arm. Each neuron’s firing rate is weighted by a coefficient, a, which is unique and calibrated to the individual, and the weighted rates are combined. The resulting value, d, is the directional instruction that, when fed to a robotic arm, moves it. Other factors need to be considered, such as electrode placement (the array must sit in a region of the motor cortex that controls the arm) and spike sorting (signals from electrodes that pick up multiple neurons have to be teased apart), but this formula is essential.
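
In symbols, the formula amounts to a linear decoder of roughly the form d = a1·f1 + a2·f2 + … + aN·fN, with one coefficient per recorded neuron. Below is a minimal sketch of how such a decoder might be calibrated and then used, assuming an ordinary least-squares fit and simulated data; neither detail comes from Hatsopoulos's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration session: firing rates of N neurons recorded while the
# subject imagines movements whose direction (here a single number, e.g. the
# arm's x-velocity) is known on every trial.
n_neurons, n_trials = 50, 200
true_coeffs = rng.normal(size=n_neurons)                         # the "a" we hope to recover
firing_rates = rng.poisson(lam=10, size=(n_trials, n_neurons))   # f, one row per trial
direction = firing_rates @ true_coeffs + rng.normal(scale=2.0, size=n_trials)

# Calibration: fit one coefficient a_i per neuron so that d = sum_i a_i * f_i
# best matches the known directions (ordinary least squares).
a, *_ = np.linalg.lstsq(firing_rates, direction, rcond=None)

# Decoding: a new vector of firing rates becomes a directional command that
# could be fed to a robotic arm or a cursor.
new_rates = rng.poisson(lam=10, size=n_neurons)
decoded_direction = float(new_rates @ a)
print(decoded_direction)
```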

“This is how we decode the brain,” Hatsopoulos says. “This is the basic underlying language of motion and movement.”

The method, for now, won’t help someone play tennis or the violin. But functional motor control, enough to help someone feed themselves, for example, is within sight. To work toward the next sea change for this technology, research groups are developing better ways to record neurons and improved robotic devices. More important, however, is augmenting these systems with sensory feedback, Hatsopoulos says. Almost all current technologies rely on visual feedback to correct errors. But real limbs rely on the sense of touch and proprioception – the ability to sense the relative position of body parts.

Sliman Bensmaia demonstrates a rotating drum used to study how to impart sensory feedback into robotic prosthetics. It is covered with materials with different textures. Human subjects hold their finger against it and report different sensations as different materials touch their skin.

Fortunately, the same techniques used to create robotic prosthetics also allow scientists to impart sensory feedback into them. Mathematical models can decode the differences in activity of neurons that respond to the touch of silk or corduroy, for example; computer algorithms can recreate those patterns; and microchips can relay them back to the brain, directly stimulating neurons with the recreated pattern to generate a sense of touch. There are many complexities, but Hatsopoulos, along with his colleague in the Department of Organismal Biology and Anatomy, assistant professor Sliman Bensmaia, PhD, and other groups around the country, is working to combine these technologies into what will someday soon be a truly functional robotic prosthetic.
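
One way to picture the decoding step is as a simple pattern classifier over population firing rates: each texture evokes a characteristic response, and a new response is matched against the stored patterns. A toy nearest-neighbor sketch with simulated data (the textures, numbers and matching rule are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 40

# Pretend "signature" population responses for two textures. In practice these
# would be averaged firing-rate patterns recorded while the skin touches each material.
signatures = {
    "silk":     rng.poisson(lam=8,  size=n_neurons).astype(float),
    "corduroy": rng.poisson(lam=15, size=n_neurons).astype(float),
}

def decode_texture(response):
    """Return the stored texture whose signature is closest to the new response."""
    return min(signatures, key=lambda name: np.linalg.norm(response - signatures[name]))

# A new, noisy response to corduroy should decode as "corduroy".
new_response = signatures["corduroy"] + rng.normal(scale=2.0, size=n_neurons)
print(decode_texture(new_response))
```

The encoding side runs in the other direction: given a target texture, a stimulation pattern approximating its signature is delivered through the implanted electrodes to evoke the corresponding sensation.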

However, deeper questions still loom. Sophisticated brain-machine interfaces rely on understanding and utilizing the basic “language” of the brain, and have achieved remarkable results – more than enough for the Department of Defense to invest millions of dollars into research and development. But they do so by interpreting the activity of a few dozen neurons at once, akin to eavesdropping on a random conversation at Soldier Field to recreate a football game. And though they can move robotic arms, no computational model can identify how the brain decides to move. How neural patterns work across millions or billions of cells — how they create behavior, how decisions are made, how they form a single, unified experience — is still largely unknown.
