Biomedicine’s Next Top Model

There are many today who argue that the future of medicine is in data mining. Massive computational efforts are underway to collect mountains of data from multiple sources – genomic sequencing, clinical trial results, laboratory experiments – and put them to work in the unbiased, abstract mind of the high-throughput supercomputer. The biological world is too complicated in many places for us to make significant therapeutic advances, the argument goes, so only a computer can sniff out the intricate patterns that can be exploited to fight disease.

But eventually, this new approach to science will come up against an old obstacle: the best ideas, whether born in a computer or on a laboratory chalkboard, don’t always work. Coming up with a hypothesis – intervention x will affect disease y – is only the first part of the scientific method. Testing that hypothesis is where the real advances are made, and where many new drugs and therapies have crashed upon the shores of reality. In a new article this week in Science Translational Medicine, a University of Chicago surgeon argues that computers also have a role to play in that second part of the equation.

Gary An, a brand new member of the surgical faculty at the University of Chicago Medical Center, observed the “translational dilemma” firsthand early in his career. Working in the critical care unit at Cook County Hospital, An saw patient after patient succumb to sepsis, the body’s overwhelming, runaway response to infection, which leads to organ failure and often death. An enormous body of research has described the biological steps that underlie sepsis, but almost every intervention proposed by that research has failed in clinical trials.

“None of those things tried at that time worked; in fact, some of them were even detrimental,” An said. “The critical care literature was filled with editorials – ‘What are we doing wrong? What’s the problem?’”

Those failures led An to the study of complexity, systems where the overall behavior is not explained simply by the underlying rules. An reasoned that the only way to properly study such a system is to generate computer models capable of testing hypotheses, not just creating them. With computer modeling software that was relatively easy to learn – “designed to teach elementary school students about birds and traffic,” he said – An created an in silico model of sepsis.

The model drew upon published research to create a biological network of immune system factors that simulated a real patient experiencing sepsis. Then An re-ran the strategies tested in the failed clinical trials – and found results (published in 2004) that could have saved drug companies and researchers a lot of time and money.

“As it turns out, none of them worked, and some of them actually hurt simulated patients, which is what the trials showed,” An said. “The result of the simulated trial itself is not novel, we knew that was how it would happen. But if you had these means of testing the plausibility of the interventions that you had in mind [before the trial], it would have caused you to think about whether this is actually a good idea before you spend millions on clinical trials.”
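To give a flavor of what an “in silico trial” looks like in practice, here is a minimal sketch – written in Python rather than the educational modeling software An actually used – of a cohort of simulated patients stepping through a simple infection-and-inflammation rule set, with and without a hypothetical mediator-blocking intervention. Every rule, parameter, and threshold below is invented for illustration and bears no relation to An’s published model.

```python
# Illustrative sketch only: a toy "in silico trial" in the spirit of An's
# agent-based approach, not his actual sepsis model. Every rule, parameter,
# and threshold below is invented for demonstration.
import random

def simulate_patient(rng, block_mediator=False, steps=200):
    """Run one simulated patient through a crude infection/inflammation loop."""
    pathogen = rng.uniform(3.0, 8.0)   # hypothetical initial pathogen load
    inflammation = 1.0                 # hypothetical systemic inflammation level
    damage = 0.0                       # accumulated tissue damage
    for _ in range(steps):
        pathogen *= rng.uniform(1.02, 1.08)                 # infection grows, with noise
        pathogen = max(0.0, pathogen - 0.4 * inflammation)  # immune response clears pathogen
        inflammation += 0.1 * pathogen                      # infection drives more inflammation
        if block_mediator:
            inflammation *= 0.7                             # the "intervention": damp a mediator
        damage += 0.02 * inflammation                       # inflammation itself harms tissue
        inflammation *= 0.95                                # natural decay
        if damage > 20.0:
            return False                                    # simulated death
        if pathogen < 0.1 and inflammation < 1.0:
            return True                                     # simulated recovery
    return damage <= 20.0

def simulated_trial(n=500, seed=42):
    """Compare survival in matched control and treated cohorts."""
    for arm, treated in (("control", False), ("treated", True)):
        survived = sum(
            simulate_patient(random.Random(seed + i), treated)  # patient i sees the same draws in both arms
            for i in range(n)
        )
        print(f"{arm}: {survived / n:.1%} survival")

if __name__ == "__main__":
    simulated_trial()
```

The point is structural rather than clinical: once the biological rules are encoded, swapping a proposed intervention in or out and re-running the whole virtual cohort costs seconds, not the millions of dollars a real trial does.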

In the Science Translational Medicine article, An argues for wider acceptance of that approach – not rational drug design, but rational drug evaluation. Potential new treatments suggested by computational data mining or laboratory animal studies can be tested on computer models before they are tested on patients, he suggests. The idea is similar to what engineers do for the design of bombs, airplanes or microchips, where the end product is extensively tested virtually before the first prototype is built.

The problem is that scientists’ understanding of the rules of biology lags far behind their understanding of the laws of physics, which inform most engineering models. But An argues (and demonstrates with his sepsis model) that one doesn’t always need a complete model to test a medical hypothesis, merely a sufficient one. And by lowering the technological bar to creating those models, both in terms of programming skill and computational power, different researchers can pit their own models against each other in an evolutionary process to determine which are most fit – or sufficient.
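As a rough illustration of that selection idea, the toy snippet below scores two invented candidate models against a single hypothetical observed outcome and keeps whichever comes closer. The observed value, the parameter, and both rule sets are placeholders, not anything drawn from An’s work.

```python
# Toy sketch of "model competition": score rival rule sets against the same
# observed outcome and keep whichever is more sufficient for the question at hand.
# The observed value and both candidate models are invented placeholders.

OBSERVED_MORTALITY = 0.30   # hypothetical outcome a sufficient model should reproduce

def model_a(inflammation_gain):
    """Candidate 1: mortality rises linearly with an inflammation parameter."""
    return min(1.0, 0.1 + 0.5 * inflammation_gain)

def model_b(inflammation_gain):
    """Candidate 2: mortality saturates as the same parameter rises."""
    return inflammation_gain / (1.0 + inflammation_gain)

def sufficiency_error(model, parameter=0.4):
    """Smaller is better: distance from the observed outcome."""
    return abs(model(parameter) - OBSERVED_MORTALITY)

candidates = {"model_a": model_a, "model_b": model_b}
winner = min(candidates, key=lambda name: sufficiency_error(candidates[name]))
print("more sufficient for this question:", winner)
```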

“The models themselves become a means of communication,” An said. “But in order for that to happen you have to dramatically decrease the technological threshold to be able to create those models.”

With the world’s fastest supercomputers recently breaking the “petaflop” barrier of 1,000,000,000,000,000 floating-point operations per second, the power to do ever more complicated computations is at hand. But An’s research and opinion piece remind scientists that such power is only useful if invested wisely, and that generating questions without generating answers won’t help translate theoretical treatments into reality.

About Rob Mitchum

Rob Mitchum is communications manager at the Computation Institute, a joint initiative between The University of Chicago and Argonne National Laboratory.