We talk about precision medicine all the time. Say you come in, and you are going to have a test. The test will match you to a diagnosis, and that will help us understand the disease. We will understand the pathophysiology, which will be matched to a medicine. You will get the medicine. You will go home happy. Everyone is happy; that's the end of the story.
But in reality, medicine has not worked out so simply...
You go to a street fair, and someone is tossing coins. The coin lands heads the first time, heads again the second time, third time, fourth time, and fifth time. If you ask a pure statistician, "What is the chance that the coin will land heads the next time?" the pure statistician says "50/50." But if you ask a child, the child will say "Stupid, the coin is rigged." Right? That's why it's landing heads all the time. The child knows more than the pure statistician because the child understands that prior probability dictates posterior probabilities.
That idea has been difficult even for doctors to understand in medicine: that with the things we do that seem so complicated (genomic sequencing, epigenomic sequencing, complex family history mapping, and so on), all we are really trying to do is take tests and shift their prior probability. That's the real message that we are trying to understand.
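The child's intuition can be made precise with Bayes' rule. As a sketch, with hypothetical numbers: suppose we walk up to the coin-tosser with a 1% prior that the coin is rigged to always land heads. Five heads in a row shift that prior substantially.

```python
# Bayes' rule applied to the street-fair coin.
# Assumed (hypothetical) numbers: a 1% prior that the coin is rigged
# to always land heads, and a 99% prior that it is fair.

def posterior_rigged(prior_rigged: float, n_heads: int) -> float:
    """Posterior probability the coin is rigged, after n_heads heads in a row."""
    p_data_given_rigged = 1.0           # a rigged coin always shows heads
    p_data_given_fair = 0.5 ** n_heads  # a fair coin shows n heads with probability (1/2)^n
    numerator = prior_rigged * p_data_given_rigged
    denominator = numerator + (1 - prior_rigged) * p_data_given_fair
    return numerator / denominator

print(round(posterior_rigged(0.01, 5), 3))  # -> 0.244
```

Five heads move a 1% prior to roughly 24%; a few more heads and the "rigged" hypothesis dominates. The pure statistician's "50/50" answer is what you get only if you refuse to let the evidence update the prior at all.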
We cannot interpret new data without context. One thing I conclude is that medicine exists because you need that prior context. When you come as a patient to a physician's office, one of the first things they do is take a history. That's why the medical record begins with the history and physical, not with the tests, and not with the conclusion.
That's the first law...
I'll begin with a simple analogy. A lot of what is happening in medicine today is that we have solved quite effectively what I call the "inlier problem." Inliers is a word I made up. The inlier problem is that we have a relatively well-demarcated idea of the normal range of physiology. There is a normal range of blood pressure, height, weight, etc, based on large population studies.
Once in a while, however, we have people who lie outside of that. There are people, for instance, who have very high blood pressure but who have never had a stroke or heart attack. In the past, that was dismissed as coincidence, and surely some of it is coincidence, the result of random effects. In a way, this is like the person who comes and says, "I smoked all my life and didn't get lung cancer. Therefore, cigarettes cannot possibly cause lung cancer." And you can say, "Well, that is obviously not true, because we know from large amounts of data [that they do]." But the converse is very interesting, which is to ask the question, "Why is it that some people who have had long histories of smoking don't have lung cancer?"
Some fraction of that is clearly because of chance, but some fraction is not. And when you identify those people, you are going to get new insights into the pathophysiology of cancer. You could say the same thing about heart disease or strokes. The point about the second law is that we have spent a lot of time creating this understanding of the inlier problem. But what is really interesting is to find the outliers and figure out what they tell us about the deeper structure of the pathophysiology of a disease...
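The "inlier problem" described above can be sketched in a few lines: estimate the normal range from population data, and flag whoever falls outside it. The blood-pressure values below are invented for illustration, and the two-standard-deviation cutoff is one common (assumed) convention, not the only one.

```python
# A minimal sketch of the inlier/outlier idea: derive a "normal range"
# from population measurements, then flag individuals outside it.
# The systolic readings here are hypothetical.

from statistics import mean, stdev

systolic_bp = [118, 122, 130, 125, 115, 128, 121, 119, 190, 124]  # mmHg

mu, sigma = mean(systolic_bp), stdev(systolic_bp)
# Flag anyone more than 2 sample standard deviations from the mean
outliers = [x for x in systolic_bp if abs(x - mu) > 2 * sigma]

print(outliers)  # -> [190]
```

The point of the second law is that once this flagging step is done, the interesting work begins: the 190 mmHg patient who never had a stroke is not noise to discard but a lead to investigate.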
I thought to myself, if the doctors of the 1930s or 1950s were microbe hunters or cause hunters, what are we doing today? What are we doing today when we look at all of these clinical trials? I realized that one thing we are doing (not the only thing) is hunting bias. So much information comes to us from clinical trials, and the popular press is full of information.
As physicians, we have to figure out how to think about these trials in a critical way and be skeptical about them and say, "Look, that's an important piece of information, but let me tell you what the bias in that information is. I can now take the larger body of information and apply it to a single human being, to a single patient." I will give you one example. A large randomized trial is run on patients and clearly shows that tamoxifen works extremely well for preventing breast cancer in high-risk populations.
Ten days later, an African-American woman comes to your clinic, meets the profile criteria, and says, "Doc, should I take tamoxifen? I took it for 2 months and had a terrible reaction to it. I had terrible side effects. I really don't want to do it. It made my life miserable. But if you think it's useful, I'll do it." A stupid thing to say is, "Here's the trial. It says that women who take this drug have a benefit. You're a woman. Let me give you the drug."
The much more interesting and much more important way is to integrate all of this information and say, "Well, wait a second. That trial was run on white women in Kansas. What are the chances that the findings of this randomized trial—although it's a powerful, focused trial—are going to apply to you, a 36-year-old who was not in the initial group and who has a very different racial and genetic background?"
Our job right now is to interpret very complex pieces of data, some randomized, some nonrandomized, and other pieces of information and incorporate all of them into the treatment of an individual who is sitting in your office and needs help. That's basically the third law.