Predicting Human Health with AI

Published Jul 6, 2023, 4:05 AM

Charles Fisher is the co-founder and CEO of Unlearn AI. Charles’ problem is this: How do you build an AI model that can predict human health?

Charles and his colleagues have built a predictive model of health that is already being used in clinical trials, and might one day be deployed to predict individuals’ health outcomes.

Pushkin. Imagine something that is sort of like ChatGPT, but for the human body. ChatGPT looks at a sentence and predicts what words are likely to come next. This thing would look at a human body and predict what diseases are likely to come next. The body is wildly complex and unpredictable. This seems like a very, very hard problem, but it is a problem people are working on, and at least in some circumstances, they're figuring out how to make predictions that are truly useful. I'm Jacob Goldstein, and this is What's Your Problem, the show where I talk to people who are trying to make technological progress. My guest today is Charles Fisher, co-founder and CEO of Unlearn. Charles's problem is this: how do you build an AI model that can predict human health? Charles and his colleagues have built a predictive model of human health that's already being used in clinical trials for new drugs and new medical devices. But we started out talking about the big picture, about the very idea of trying to predict what's going to happen to a human body.

It's funny. When I talk about trying to quantify biology and make it predictable, I often get hit with this critique that biology isn't physics. Biology is complex, biology is not physics; we're not going to be able to do that.

Less deterministic.

Right. So, for physics: people started working on physics in ancient Greece, and for two thousand years, physics wasn't physics. Physics was unpredictable. Physics was too complex to understand, until something was invented. And that thing was calculus.

Until Newton, right.

Yeah. So once calculus was invented, all of a sudden we had a new language. This new kind of mathematics allowed us to really easily describe lots of physical phenomena. And so now physics has become this thing that's very predictable and well understood. And that's what we've been waiting for in biology. We've been waiting for a new tool, a new language, a new mathematics that will allow us to understand these complex systems. And that's really what I think these new tools are.

So your hope, your hope is that machine learning, generative AI, will do for medicine and biology what calculus did for physics.

Exactly. That is big, big, but it's exactly what I hope. That's exactly what I hope.

So okay, so this is your hope. You're starting this company to test your hypothesis. Uh, what do you do?

What do you mean, what do I do? What did I do on day one? Or, like, what are we doing?

No, no. We're back in twenty seventeen. You have this big, up-in-the-sky, two-thousand-year, thirty-thousand-foot idea. But you've got to make a thing that somebody is going to pay you for, that will hopefully use AI in medicine in some way. So what do you do?

So we didn't know what would work, so we focused on two different problems at the time. One problem is: let's imagine we're going to have a bunch of data from some big collection of patients. We're going to have this data over time, so the symptoms that a patient might have every week for a year, or something like that. And our goal is to be able to create a simulator of a patient's future health. So, given what I know about a patient in the past, can I simulate what will happen to them in the future?

And presumably that is sort of probabilistic. I mean, with what we know about health, you can say there's an X percent chance that in Y years this person will have a heart attack, something like that.

Exactly. Yeah, yes, because so many things are undetermined.

Right, and it's just the nature of the world, right, one hundred percent.

Yeah.

So okay, so you have this idea of basically, where ChatGPT, which didn't exist yet, predicts the next word with some probability, you want to predict the next health outcome.

Exactly, that is the big idea. Yeah. So that was one of them; it was not the only one, but it's the one that is what we do. The one that we didn't do: so that first one is at a very macroscopic scale, the scale of the person, whereas the other thing we were interested in was whether we could potentially go to the micro scale and look at what's happening inside individual cells. We were interested in this at the beginning. Basically, the way we figured this out is we signed a few deals with pharma companies to try these things, and we found that the technology worked really well at simulating health outcomes, and it didn't work very well when it comes down to simulating what's inside the cell. And I think this comes down to data, which is that we get a ton of data on human health outcomes; literally every time you go to the doctor, there's data there on your health outcomes. But the data on the things inside the cell, there is a lot of it, but it's much more difficult to work with. So a lot of what drove us in this direction is really a focus on the problems we think we have the data to solve.

So, okay, you go in the direction of simulating health outcomes for patients, and in particular, sort of where you get to is working with companies that are running clinical trials. And I know eventually you get to a point where companies can use your model, use your software, to run clinical trials with fewer patients. So just tell me about that arc. Tell me how you get there.

Clinical trials, well, they take forever, and they're really, really expensive. It might take like five years and cost one hundred million dollars to run a clinical trial. And these are hundreds or thousands of patients, right? Oh, thousands of patients typically, yeah. And typically half of the patients in a clinical trial are receiving a placebo. You're going to randomly assign half to receive an experimental treatment, half to receive a placebo. And the reason is that every clinical trial is ultimately just doing a comparison: you're comparing how a patient responds to the new treatment to how they respond if they don't get that treatment.

And let me just give a shout out to the randomized controlled trial as, like, a really beautiful construct. And not that old, not that old. I learned that preparing for this interview: it's less than one hundred years old, amazingly. But it's a perfect way, not perfect, it's a very, very good way, to assess causality. It's really elegant.

It is an elegant idea. But if you're a patient, why are you participating in a clinical trial at all? What's the number one reason people participate in clinical trials? They participate in clinical trials because they want access to this experimental treatment that you can't get any other way. That's the number one reason why patients are participating in clinical trials. Number one.

They don't want to be randomized to the placebo.

No, no, no, they don't.

I can certainly understand that. It is the case, right, that most trials fail, meaning the drug is not helping you and possibly hurting you, meaning on average, you're better off being in the placebo arm. Like, that is true, right?

Yeah, there's a principle of equipoise. But that's an academic, ivory-tower principle.

I mean, it also is true. It's just true.

That's fine, that's fine. But in the end, patients choose not to participate in clinical trials because they don't want to get a placebo. Patients drop out of clinical trials when they think they are getting a placebo. Those are also true; that's the number one reason those things happen. Fair?

Okay?

Right. And in fact, twenty percent of clinical trials fail not because the drug didn't work, but because they just couldn't find enough people to participate, okay. And what we realized is that there was a way for us not to try to replace the randomized controlled trial, but to make it better. What we are doing is, we can take what we call digital twins of the patients, so these are simulations of their future outcomes, and we can incorporate those data directly into RCTs, randomized controlled trials. We call it kind of a reimagining of RCTs. You're going to have an RCT that is more accurate, that requires fewer patients, and as a result you get a lot of the benefits: faster trials, things that are better for the patients, we can talk about that in a minute, but you keep all of the same scientific rigor.

So specifically, okay, that's a good, like, big picture. Specifically, how does it work?

Right now, we build one model per disease. So, for example, we have a model for patients with Alzheimer's disease. We have a separate model for patients with ALS, we have a separate model for multiple sclerosis, et cetera.

Let's pick one model and talk about it. What's the one that's farthest along, the one that works the best?

Yeah. Our Alzheimer's disease model was our first one, the one we've published scientific papers on and things like this, so that one's our most well known.

Okay, so you're setting out to build a model that will predict what's going to happen, presumably, to a patient who has the early stages of Alzheimer's disease. How will their disease progress? A hard thing to know in the real world. How do you build that? What do you do?

So the first thing is that you need data to learn from. Yeah, it's kind of obvious. So our first step was, okay, we want to have data sets where we get a ton of information about each patient. What's that mean? That means that at any individual time, I want to have a lot of different measurements made on that patient.

So presumably you want to have a lot of moments when you capture lots of information. Exactly.

You also want to have lots of...

Lots of times, over a long period of time. Over a long period.

Yeah. And so, you know, for Alzheimer's, you're looking at a bunch of things related to the patient's cognitive performance on different assessments. There's also things about their daily life: how are they able to function in their daily life? There's things related to their caregivers, actually, like, how does their caregiver rate their behavior? Brain imaging, blood tests, all that kind of information. You want to have as much of it about each patient, and you want to have it as many times as possible. Sure. And we'll try to get that for, you know, like fifty thousand people. And that's the kind of data set that we're starting with.

And, like, is there one repository where, when you get that, you're like, jackpot? Or what?

No, we have to aggregate data from lots and lots of different places to be able to build a big enough data set.

Okay, so now you got the data, what do you do next?

Then we've got to train a model to be able to learn from those data how to simulate things. And now, actually, what we do...

In particular, in this case, how to predict, given some set of inputs for a patient, what's going to happen next? Exactly.

And so this does look like, you were using that analogy of, like, a language model predicts the next word. So given these words I've seen before, predict the next word. And that is similar to how our models in these diseases work. We're going to say: given that I've observed these things in the past about a patient, what will happen to them next? That is very analogous to what we're doing.
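
To make the analogy concrete, here is a minimal sketch of that kind of next-visit prediction. All of the names here (Visit, predict_next_visit, model.sample) are hypothetical, not Unlearn's actual API; it just shows the shape of the idea: condition on a patient's history, sample what comes next.

```python
# Hypothetical sketch of next-visit prediction, analogous to next-token prediction.
# None of these names are Unlearn's real API; they only illustrate the structure.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Visit:
    months_from_baseline: float
    measurements: Dict[str, float]  # e.g. {"cognitive_score": 24.0, "bmi": 27.1}

def predict_next_visit(history: List[Visit], model) -> Visit:
    """p(next visit | past visits), just as a language model does
    p(next token | previous tokens)."""
    return model.sample(condition_on=history)  # one simulated future visit
```

Run repeatedly, feeding each sampled visit back into the history, this would roll a patient's record forward in time, which is the "simulator of a patient's future health" Charles describes.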

Okay, so you build the model. How does it work? How does it work in a clinical trial, specifically, so that, you know, the people running the trial can do it with fewer patients?

Sure. So in a typical case, we're involved at the beginning of the clinical trial, in the design of the protocol. So there's a question of how many patients you should randomize to your control group: how many patients do you need overall, how many should be in the treatment group, and how many should be in the control group.

It's not always fifty-fifty?

It's not always fifty-fifty in our studies. Our typical goal is to try to minimize the number of people that you need to put in the control group. And so we're involved in helping to do that calculation, to say, here's how big your trial should be. And then, as patients enroll in the study, we take data from their first visit, before they receive whatever new treatment they're going to receive, and we input those data into our pre-trained model. So, I like to think about ChatGPT: you give it a prompt and it gives you output. Same thing. We take the data from the patient, we prompt the model, and it outputs its predictions for what will happen.

And to be clear, you do that for all of the patients in both arms, the treatment and the control.

Yes, yeah. And we don't know, right, it's blinded. It's blinded to us; we don't know who got what. So we do that for one hundred percent of the patients, and then we give those data to the customer, to the pharma company.

So then what happens next?

We wait around for a while, yeah. And then when the study is actually completed and they do unblind the data, we help to say, here's how you can now incorporate these predicted outcomes into the analysis.

So this is it. Now we're at the moment when the thing you have built is useful. They have done the study; they have the outcomes for the real human beings, and they have the predicted outcomes from your model. How is your model useful?

So the very first thing that we're basically going to do is, what I'm going to say is, we're going to recalibrate our model. Recalibrate: you're going to figure out a relationship between your predicted outcomes and your observed outcomes for the patients who really received the placebo.

For the patients in the placebo group. And basically you're going to see how you did, how'd we do?

Yes. And in particular, you're going to find out not just a measure of whether it was good or bad; you're going to find out exactly how they're related. And then you can take that information and adjust your predictions for everybody. So let's imagine I find out that, on average, I'm underestimating how much a patient will progress by one point per year. Well, then I'll go through and take my predictions and add one point for everyone. And now I've taken the model and I've been able to fix these mistakes by looking at the actual patients who got placebo. Now I apply that model to the patients in the treatment group: I look at the difference between the patients in the treatment group and their predictions from the model, I average that, and I get an estimate for the treatment effect. Now, I've described that as a two-stage procedure. It's not actually a two-stage procedure; it's one mathematical analysis that you do. But the thing that's really quite amazing, actually, is that this has a bunch of mathematical guarantees to it. We can actually prove that the estimate you get for how effective the treatment is is still unbiased. It's not an overestimate, it's not an underestimate; it's on average correct. We can prove that if you compute a P value from the analysis, like you would typically do, it has exactly the right properties, just as it does out of a regular RCT.
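
As a rough sketch of that intuition in code, assuming a simple additive recalibration: Charles is clear that the real method is a single pre-specified statistical analysis with formal guarantees, so treat this two-step version as illustrative only.

```python
# Illustrative two-step version of the analysis Charles describes. The actual
# method is one pre-specified statistical procedure, not this literal recipe.
import numpy as np

def estimate_treatment_effect(pred_placebo, obs_placebo, pred_treated, obs_treated):
    # Step 1: in the placebo arm, learn how predictions relate to reality,
    # here just an average offset ("we underestimate by one point per year").
    bias = np.mean(np.asarray(obs_placebo) - np.asarray(pred_placebo))

    # Step 2: correct the treated patients' predicted no-treatment outcomes,
    # then average observed-minus-predicted to estimate the treatment effect.
    counterfactual = np.asarray(pred_treated) + bias
    return np.mean(np.asarray(obs_treated) - counterfactual)
```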

A P value is, roughly, the probability that the finding was a fluke.

Yeah, right. If you compute an error bar, the error bar you get from our analysis and the error bar you would get from a normal RCT, they have exactly identical statistics.

This is not intuitive, but you're saying the mathematical fact is that it works. Yes. And just to be clear, what this allows you, or the people running the trial, to do is to enroll fewer people in the placebo arm, not none, but fewer than they otherwise would have, to get the same amount of statistical power. Right? That is the bottom-line thing that you are delivering. Yes, that's correct. And it's something like a quarter or a third less, is that right?

So it depends on how accurate our models are. The more accurate the model is, the fewer patients you need in your placebo group. Typically, right now, it's somewhere between, like, a quarter and fifty percent. It depends on the specific details.
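
For a rough sense of how model accuracy maps to control-group size, here is a back-of-the-envelope sketch. It assumes the standard variance-reduction relationship for adjusting on a prognostic score whose correlation with the outcome is rho; rho and the specific numbers are this sketch's assumptions, not figures from the interview, and real trial-design calculations involve much more.

```python
# Back-of-the-envelope sketch only. Assumes the standard result that adjusting
# for a prognostic score with correlation rho to the outcome reduces outcome
# variance, and hence required sample size, by a factor of (1 - rho**2).
def reduced_control_size(n_control: int, rho: float) -> int:
    return round(n_control * (1 - rho ** 2))

print(reduced_control_size(500, 0.5))  # -> 375, a 25% reduction
print(reduced_control_size(500, 0.7))  # -> 255, roughly a 50% reduction
```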

So tell me, what is the effect of that at a macro scale? What does it mean to say a drug company can get the same statistical power by enrolling twenty-five percent fewer people in their study, specifically in the placebo arm?

Well, I think there are two things. First, I think people don't always understand how expensive clinical trials are. Companies are paying one hundred, sometimes two hundred thousand dollars per patient in one of their clinical trials. So finding and enrolling and monitoring a patient for all that time is very, very expensive. It also just takes a long time to find people who are willing to participate. And so if you're talking about a large phase three trial, reducing the size of the control group by twenty-five percent might mean, like, one hundred fewer patients that you need to actually recruit and enroll in your study, and that could save you six months to a year off of your total clinical trial timeline. That means a lot, both for patients, if the drug is actually successful, that's a year faster it gets to market, and for the pharma company. That's obviously a big value proposition, being able to get the drug to market a year faster.

In a minute, moving from clinical trials to individual patients. Now, back to the show. What's the big picture? Where are you trying to get to, you know, in the medium term and in the long term?

The ability to understand what a person's health outcomes are going to be under different scenarios, this is, I think, what's really important. It's not just, hey, given that they would get a placebo, what's going to happen to their health outcomes? That's nice for clinical trials. But we want to know: hey, there are ten different treatment options for this patient, and if I were to give them each one of these different treatment options, what would their health outcomes look like in those different scenarios?

So there you're also moving out of the clinical trial into the realm of, like, a doctor seeing a patient. Let's just be very clear: that's a huge leap, and that's what you're talking about.

I think that there's a really good pathway to being able to build these things and make them useful for problems that are at the individual patient level.

And is the way to think about it, like, before you get to the magical computer that can predict everything for everybody, you get to a very, very good model that can predict, for individuals in certain circumstances, a certain set of outcomes? So, for example, you might have a very, very good Alzheimer's model for certain patients at a certain stage of disease, and this model is very powerful at the level of the individual. Is that the way to think about it?

Yeah, I'll tell you the way I think about it. I think that the most important thing these models can do, which actually things like ChatGPT are not good at, is that they can give you really well-calibrated estimates of their own confidence. That's the most important thing a model can do, because, like we said earlier, health is stochastic; there are all kinds of things that happen, fundamentally, exactly right. And so, you know, we're going to make a prediction about somebody's future, and sometimes we're going to be really confident in that prediction, and then it's actionable. But sometimes you're not confident, and maybe it's not actionable because you're really unconfident. We're never going to get to the point where it's going to say, hey, you're going to have a heart attack on July seventeenth of twenty thirty-seven. It's never going to be that detailed. But the key question is: can you believe the model's estimates of its own confidence? And if you can, then when it is confident, you can act on it, and when it's not confident, you can do other things. So it's actually a really key technical thing, and we know what we need to work on.
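
As a sketch of what "well-calibrated confidence" means in practice: of the cases where a model says "seventy percent chance," roughly seventy percent should actually happen. A minimal check, assuming binary event predictions (the function name and binning are illustrative, not anyone's actual tooling):

```python
# Minimal calibration check, assuming binary event predictions
# (e.g. "will this outcome occur within a year?"). Well calibrated means
# stated probabilities match observed frequencies in each bin.
import numpy as np

def calibration_table(predicted_probs: np.ndarray, outcomes: np.ndarray, n_bins: int = 10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (predicted_probs >= lo) & (predicted_probs < hi)
        if mask.any():
            print(f"model said {lo:.1f}-{hi:.1f}: event occurred "
                  f"{outcomes[mask].mean():.2f} of the time ({mask.sum()} patients)")
```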

If I were going to anthropomorphize it, I'd be like, it's like a humility. It's like an epistemic humility. Like, it knows what it doesn't know.

It knows what it doesn't know, and it will tell you: yeah, here's my prediction, but... Exactly. So if you can get it to that point where it's well calibrated that way, then these models become really, really useful for a whole bunch of things.

They probably become useful if they can have a relatively high degree of certainty about at least some things, right?

Yeah, exactly. So I think that's the most important thing for these applications of AI in medicine: to have models that are going to be able to do that effectively.

If everything goes well, what problem will you be trying to solve in five years?

In five years, I hope that we are rolling out something that is a model for everything. That's what we want to be rolling out: not this one-disease-at-a-time thing, but one model for all diseases. And the reason I really want to do this is because if it's one model per disease, I need a ton of data on that disease, a ton. So we can work on these areas like Alzheimer's, where I can get data from fifty thousand patients. But how do I work on the disease where I have fifty patients, fifty patients in the world who have this rare disease? Those are really, really important things. And the only way we're going to be able to do that is to unlock a new kind of capability in our models: to learn from a handful of examples. So this is, to me, the next frontier for our work, figuring out how we can do that and then bring it to market, because it opens up the ability to work on rare diseases that are really, really important and very difficult to develop drugs for. And again, you know, as a scientist, I'm drawn to the technical challenges. Those are the things that...

It seems so hard, right? I mean, it seems like the really basic insight about generative models is that you feed them gigantic amounts of data. You know, for a language model, feeding it the whole internet is the way to get it to understand how language works. So how can you do something for fifty people?

Like?

How do you do that in five years?

Yeah, it's really hard. But the analogy is actually perfect. Okay: what we've learned is that if you want to build a really amazing language model that's really specific to some domain, say you want a language model that's really good at biophysics, that knows biophysics really well, would you be better off trying to find as much biophysics as you can and training a model on that, or just training a model on the entire internet? And what we've learned is it's much better to train a model on the entire internet, that there are a lot of things that transfer from one domain to another. And so what we can do now is say: we train the model on the whole internet, and we have one biophysics paper, and we give it that one or two papers against the background of all the knowledge from everywhere else, and that's much better than trying to get lots and lots of biophysics papers. So the analogy works perfectly, in the exact same direction. That's the whole point. Imagine taking a model that has all of the world's health data, putting all of that into one model, so it's seen everything, and it can now draw analogies. Because there are a lot of things, you think about, like, Parkinson's and Alzheimer's, they have a lot of similarities; Huntington's, a lot of similarities. So why aren't we drawing information or knowledge from one disease area and using it to inform another, since they are similar? Allowing a model to have access to all of the data and figure out how to do that, I think, is the right path forward.
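
In machine-learning terms, this is the pretraining-plus-fine-tuning pattern. A minimal PyTorch sketch of the shape of it, where everything (the adapt function, the tensors, the hyperparameters) is illustrative rather than Unlearn's actual training code:

```python
# Illustrative only: briefly adapt a broadly pretrained model to a tiny
# rare-disease dataset, letting the broad pretraining supply most of the knowledge.
import torch
from torch.utils.data import DataLoader, TensorDataset

def adapt(pretrained: torch.nn.Module,
          x_rare: torch.Tensor, y_rare: torch.Tensor,
          steps: int = 200, lr: float = 1e-5) -> torch.nn.Module:
    loader = DataLoader(TensorDataset(x_rare, y_rare), batch_size=8, shuffle=True)
    opt = torch.optim.AdamW(pretrained.parameters(), lr=lr)  # small LR: keep what pretraining learned
    loss_fn = torch.nn.MSELoss()
    done = 0
    while done < steps:
        for xb, yb in loader:        # tiny dataset, reused many times
            opt.zero_grad()
            loss_fn(pretrained(xb), yb).backward()
            opt.step()
            done += 1
            if done >= steps:
                break
    return pretrained
```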

So is that wildly capital intensive? Like, what do you actually do to do that? Do you just get all the health data about all the people you can and say to the machine, figure it out? Like, what do you do?

Yeah, yes. I mean, the first step for us is you need to get a lot of data. But the biggest thing is that we need to figure out a way to have the model map all of those data to the same representation.

What does that mean, map all of those data to the same representation.

So let's imagine that there is some unobservable state of a person which just describes their health. We can't actually observe it directly; we don't exactly know what it is, but we can make measurements that tell us something about that underlying state. I can measure BMI, I can measure heart rate, I can measure all of these different things. And what we want to be able to do is, instead of working in the world of measurements, which is where we currently work, we want to work at that underlying, unobservable state. Because if you could reach through into that underlying state, you could answer any question about any patient's health.

Patient's health, like like like a number like this one state that is just like one.

High-dimensional, right. Well, okay, yeah. Basically, what we're talking about is: is there some vector, some really high-dimensional space, where we're able to take all diseases and look at how they're related to each other in this really high-dimensional space? That is the way language models work. That's exactly how...
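
A hypothetical sketch of that shared-representation idea: an encoder maps whatever measurements happen to be available into one fixed-size latent vector, so different diseases and data sources live in the same space. The architecture and names (HealthEncoder, the mask trick, the dimensions) are illustrative assumptions, not Unlearn's actual model.

```python
# Illustrative encoder from heterogeneous measurements to a shared latent state.
import torch
import torch.nn as nn

class HealthEncoder(nn.Module):
    def __init__(self, n_measurements: int, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_measurements * 2, 512),  # values plus a "was it observed?" mask
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, values: torch.Tensor, observed_mask: torch.Tensor) -> torch.Tensor:
        # Zero out missing labs/scores and pass the mask along, so patients
        # measured in different ways still map into the same latent space.
        return self.net(torch.cat([values * observed_mask, observed_mask], dim=-1))
```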

Love And that's intense, Like, that's pretty far out right. Doesn't that feel far out too?

You would say I talk like a hippie, but if I describe this to a machine learning researcher, they're like, that sounds exactly like what you should do. So it doesn't seem far out to me. It seems very clear that that's the direction we should be taking things.

And does five years seem like, like you might actually do it in five years?

Yeah, we're hoping to have a version next year that's a pan-neuroscience model. So we're starting with something more tractable, building a more tractable thing first. So right now we're working on a neuroscience model. And, to be totally honest, this might not work. This is a research idea, right? So it may work, it might not work. But you asked where I would hope to be, and that's where I hope to be: that we're able to solve those problems.

So we'll be back in a minute with the Lightning Round, including what Charles learned when he worked as an ice hockey ref. Back to the show. I'm going to finish with the Lightning Round. It will just be a few more minutes.

Okay.

As the name suggests. I've heard you say that you read academic preprints, which are basically studies that are about to be published, that you read them every day. What's one you read recently that you found particularly interesting?

Recently, there have been a number of papers I've been reading around different ways of training the kind of neural networks that we use. All of them use a particular algorithm that people call Adam. It's been used for a really long time, like, everyone uses it now, and it has, I don't know, it has some problems. There's a paper from just recently on a new algorithm people call Lion. I don't know what it stands for; L-I-O-N stands for something. And this was discovered: they used a machine learning, a reinforcement learning algorithm, to discover a new kind of optimizer.

So if this works, if Lion is better than Adam, will it be like machine learning figuring out a better way to build machine learning? Is that what's happening here?

Yeah, that's what people are working on exactly.

This is like the takeoff. This is like the moment when GPT-5 builds GPT-6 or whatever.

I think the claim is it's, like, five percent better or something. It's not, it's not...

Yes, Lion couldn't find the...

...time to do another thing yet, yeah. So yeah, that was a paper I read really recently.
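
For reference, a hedged sketch of the Lion update rule as published (Chen et al., 2023, "Symbolic Discovery of Optimization Algorithms"); check the paper before relying on details. Unlike Adam, the update uses only the sign of an interpolated momentum, so every parameter moves by the same magnitude per step.

```python
# Sketch of one Lion optimizer step, per the published paper (details hedged).
import numpy as np

def lion_step(theta, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    update = np.sign(beta1 * m + (1 - beta1) * grad)  # sign of interpolated momentum
    theta = theta - lr * (update + weight_decay * theta)
    m = beta2 * m + (1 - beta2) * grad                # momentum refreshed afterwards
    return theta, m
```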

If you couldn't work in AI, what field would you work in?

If I couldn't work in AI? I guess I would probably try to work in energy, maybe climate change, something related to that.

You seem bummed at the prospect of not being able to work in AI. I appreciate that. I don't want to make it...

I'm very bummed. Yeah, you know, I think it's the most exciting thing that's happened on Earth since the Industrial Revolution. So it's a new industrial revolution. Yeah.

Weirdly, you used to work at a virtual reality hardware company. I feel like VR is always about to break through, you know? Like, Apple just had this big announcement, as Facebook did a while ago, but yet it never quite happens. Why not? Like, why are we not doing this interview in the metaverse?

So I only worked at that company for a few months. I spent my whole career working in biophysics. I was working at Pfizer, and then I just thought, I'm gonna try something totally different, and I went and tried this work at the VR company. I was interested in it because of the underlying technical problems, the research that I had to do, not because I was drawn to the product. I have only ever used a virtual reality headset twice in my entire life. Once was in the interview for that job, and once was testing something while I was working at that job. I'm not interested in it; I was interested in the engineering. You want to know why I don't think it's taken off? Because most people don't have a compelling reason to use it. Neither do I. Yeah. What'd you learn working as an ice hockey referee? Ice hockey referee, oh, that was, like, my super, super young job. I would say that I learned it's best not to call penalties on little children. That's what I learned. You know, people would just, like, run into each other and they'd fall down, and you're like, is that a penalty? Was it on purpose? Not on purpose? If you call a penalty, the parents are going to be real upset at you. So you just let them play.

Good early experience in cost-benefit analysis.

Just let them play.

Charles Fisher is the co-founder and CEO of Unlearn. Today's show was edited by Sarah Nis, produced by Gabriel Hunter Chang and Edith Russlo, and engineered by Amanda K. Wong. I'm Jacob Goldstein. One last note: the show is going to be off for the next several weeks, and we'll be back with new episodes in August. Have a rad summer.
