The Bioinformatics CRO Podcast

Episode 83 with Jenny Yang

Jenny Yang, co-founder and CEO of Outpost Bio, discusses her work at the intersection of biology, machine learning, and precision medicine.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Jenny Yang

Dr. Jenny Yang is the co-founder and CEO of Outpost Bio, which is focused on building a platform that pairs automated experimentation with machine learning to better understand how microbial communities shape drug response, nutrition, and health.

Transcript of Episode 83: Jenny Yang

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to the Bioinformatics CRO Podcast. I’m Grant Belgard. Today I’m joined by Dr. Jenny Yang, co-founder and CEO of Outpost Bio. Jenny works at the intersection of biology, machine learning and precision medicine. At Outpost, she and her team are building a platform that pairs automated experimentation with machine learning to better understand how microbial communities shape drug response, nutrition and health.

Grant Belgard: Before starting Outpost, Jenny studied engineering, physics and bioinformatics at UBC and completed her doctorate at Oxford, where her work focused on clinical machine learning. We’ll talk about what she’s building now, the path that brought her there, and what she’s learned about building important things at the edge of science and computation.

Grant Belgard: Jenny, welcome.

Jenny Yang: Thank you Grant, excited to be here.

Grant Belgard: I’m excited to have you. So for listeners just meeting you, what are you spending your time on right now and what are you trying to build?

Jenny Yang: So really we’re trying to make human microbiology computable. What that looks like day to day is we have team members in the wet lab and team members on the dry lab side working on machine learning models. So in the wet lab we’re actually sourcing human-derived microbial communities, running perturbation assays, and taking multi-omic measurements.

Jenny Yang: So think metagenomics, sometimes transcriptomics, definitely metabolomics. We’re really trying to understand communities of bacteria before and after perturbation with different chemical compounds, and how those microbial communities actually break down these compounds. For example, microbial communities from the gut.

Jenny Yang: How do those break down a drug chemical or a food molecule? And then in the dry lab, we’re building machine learning models on top of this to help create tools that will allow other people to do their own analyses on these microbial communities.

Grant Belgard: What problem keeps you most intellectually stimulated right now?

Jenny Yang: I think I’m definitely motivated by the question of how we can make meaningful pushes forward in the field of personalized health. I’ve spent a lot of the last 10 years of my career looking at human genomics, and I’ve seen incredible advancements in that field. We can identify exact mutations that lead to certain subtypes of disease and even choose targeted therapies toward that disease. And I really want to understand why, despite all this progress, we still can’t push forward improvements in clinical trials and drug development. So the metric I like to show is that 90% of drugs failed 40 years ago and 90% of drugs are failing during clinical trials today. Why have we not pushed forward that metric despite all these huge advancements in the field? That definitely does keep me up. And I feel like the next addition of biological information that needs to be added to this picture is really all the microbes that are in and on our body that affect how we all independently experience health and disease.

Grant Belgard: So for what you’re doing, what is the most difficult aspect of the problem from the biology side, and what’s the most difficult from the data or modeling side?

Jenny Yang: Definitely on the biology side it is reproducibility and scalability of these experiments. So when people have been doing microbiome-based analyses in their own respective labs, there’s bias that’s going to exist. So it’s been really hard to reproduce findings from one lab in another. And then scaling up these experiments is also hard.

Jenny Yang: So if we specifically focus on the gut microbiome, these are anaerobes that we’re dealing with. They have to be in an oxygen-free or very low-oxygen environment to keep them alive and actually perform experiments. So scaling that process up is very difficult. So when we first started the company, we knew we wanted to focus both on developing a high-throughput method that could perform these perturbation experiments and on making sure we cross-validate findings in the lab.

Jenny Yang: So not only do we generate our own data in our lab, but we’re making sure to generate external data sets with other partners just so we can demonstrate that our findings in our lab can be reproduced elsewhere. And on the computational side, we’re dealing with a very high dimensional space.

Jenny Yang: So even compared to human genomics, where there are only about 20,000 genes, the gut microbiome can have 150 to 250 times more genetic diversity. So it’s already a much more complex space that we’re dealing with. And again, bringing it back to human genomics: between you and me and everyone else in the world, we share 99.9% the exact same human genome.

Jenny Yang: But if we look at our gut microbiomes, we can be up to 90% different. So not only is there huge individual variability, but that, multiplied by the diversity of bugs that exist in our guts, is a huge computational challenge. Without the sophisticated machine learning models that we now finally have, as well as the cloud compute infrastructure that we now have, this is just such a huge problem that we couldn’t have tackled before. So that level of complexity, I think, is what makes it very challenging on the actual computational side.

Grant Belgard: And can you tell us more about the stage you’re at right now with Outpost Bio and what your role actually looks like week to week at this stage?

Jenny Yang: Yeah, absolutely. We have been a company for one year. We raised our pre-seed last July, a 3.5 million round, and really the focus of the past year, and the rest of this year as well, has been validating our wet lab method, starting to build initial machine learning models, and benchmarking against current state-of-the-art models on specific problems, just to demonstrate the usability and accuracy of both the data we’re generating and the models we’re building. My day to day has changed significantly from when we first started the company to now. So really when we first started the company, a lot of it was making sure we build up a vision for the company that could inspire people, but also that we could see ourselves growing into over the course of the company.

Jenny Yang: And that really is trying to make human microbiology computable so we can improve personalized health. And then it moved into building up our values. So when we hired, we really wanted to make sure that we built up the culture that we were looking for. So not just look for the technical skillset, but also just the cultural fit for the company we envision having.

Jenny Yang: ’cause when you have such a small team, at the beginning, culture is really everything. And I still believe that one year down the road, and now it’s really transitioned into. Just a little bit of everything. We have so many different things going on at the same time. So we have a wet lab in-house that we just opened last week. We have multiple models being trained. A lot of partnership conversations with potential customers, so I have my toes in everything. But the team, is fantastic on, on staying organized, so it’s been very exciting.

Grant Belgard: What decisions do you feel you still need to hold closely, and which ones are you happy to let others own?

Jenny Yang: I very much believe in this quote that I think Steve Jobs and probably a lot of people have said, which is, basically you hire incredible people and you shouldn’t be telling them exactly what to do because you hired them to tell you what the right thing to do is.

Jenny Yang: And I, I really believe that. So we wanted to be very rigorous on the scientific side. So we hired a VP of RD with a lot of experience in microbiology. This is Heidi. She’s trained in incredible labs and she’s just so good at what she does, and I would never want. Want it to be me telling her how to do good science.

Jenny Yang: Like I can point her in the direction that we’d love to go as a company, but I really trust her to be the expert in what the position that we’ve hired her for. Same with Seth and Nathan on the data engineering and machine learning side of things. I’m very happy to let them do the tasks that they think are right to reach the milestones we want to reach. And I think that’s really important because especially at a startup where there’s so many things that need to get done as one person, you just can’t do it all. And you’re gonna have to have trust in your team to, to do the tasks that they’re the best at.

Grant Belgard: What tradeoffs are you navigating between scientific depth and execution speed?

Jenny Yang: I think a big one is the trade-off between generating a lot of data really quickly and really broadly, or generating a bit less data but focusing more in depth on specific verticals. And that is a trade-off because we are trying to build a foundation model, and that term in itself suggests that you have a lot of data to build this large machine learning model that’s going to be generalizable. But I do believe, because we’re tackling such an important problem that has to do with pushing forward human health and our understanding of it, we have to be scientifically rigorous, especially if the tools we’re creating are meant to be used by other scientists. There’s a level of validation and scientific rigor that’s really needed for people to trust that you’ve built something usable and accurate enough to implement in their own work, and to decide to use our tooling. So especially at this stage of the company, where we are trying to prove that we can do what we say we can do, we are making sure that we focus really rigorously on validating specific verticals. We won’t go hugely broad and generalizable at the beginning. We’ll demonstrate that we have a method that can lead us there down the road, but we need to be more rigorous on proving the science in specific niches first.

Grant Belgard: On that note, how do you think about robustness, bias, and generalizability when the biology itself is so variable and complicated?

Jenny Yang: Yeah, so I think about this a lot. When I did my PhD, I specifically focused on bias mitigation and machine learning generalizability. So from the wet lab side, we are generating data in-house. But we’re making sure to have additional partners generate subsets of data for us. So we’re working with a West coast US institution as well as a CRO, and we’re making sure that they generate buckets of data so we can validate findings externally across sites.

Jenny Yang: And that is really to demonstrate the reproducibility of the wet lab methods we’re using. We’re also. We’re also making sure that we validate machine learning models, so having external data sets from different sites can then demonstrate the generalizability of machine learning models. And we are making sure that we evaluate both metrics of accuracy and metrics of bias because we wouldn’t want to see model performance being dependent on where data is generated.

Jenny Yang: Ideally, it would be based on the actual science itself.

Grant Belgard: What would a truly meaningful win look like for you over the next few years?

Jenny Yang: I think something meaningful would really be demonstrating that the systems we’ve built, both the in vitro systems in the wet lab and the in silico systems in the dry lab, translate to actual human outcomes. So another form of validation that we really think about is how we can validate on actual human cohorts or clinical cohorts, and that is top of mind. So we are making sure we’re working with scientific advisors that are actually running clinical trials or working with groups of patients. We have a project coming up in the summer specifically focused on a cohort of patients that we’ll collect samples from, and then making sure that we validate both our in vitro and in silico methods again.

Jenny Yang: So I think a huge win would be demonstrating that translatability into the real world. Another win that I think a lot of the team is really excited about is demonstrating that we can actually generalize between, a community of bacteria interacting with a molecule and leading to a certain biological outcome.

Jenny Yang: I think that would be incredible to see that microbiology does follow, some sort of pattern that can be predicted.

Grant Belgard: What turns a biological data set from merely interesting to decision-grade?

Jenny Yang: I think that’s a super good question. And it might depend a lot on the question you’re looking at and the industry that you’re trying to implement this in. For example, there’s a lot more rigorous testing that needs to be done within drug testing versus maybe developing a consumer good. But I think for us it’s making sure that we do focus on certain verticals and then validate and calculate the metrics and proof points that are needed for the specific problem itself. So there is a little bespoke tailoring needed for some problems, but I feel like in general, across all spaces, we would wanna be able to generalize across multiple data sets that are coming from the field, and multiple problems related to those data sets, and demonstrate it with external partners. Because we can do a lot of testing ourselves, but when you actually bring it into the real world and it’s in someone else’s hands and not really in your control, that’s where you would wanna be able to demonstrate usability and a certain level of accuracy.

Grant Belgard: What does a strong feedback loop between wet lab work and computational work look like in practice in this space?

Jenny Yang: This is super important to us. So when we first started the company, a lot of people told us: make sure you keep your wet lab and dry lab close together and make sure communication is good between the two, ’cause it’s like completely different languages that these scientists would be used to speaking. So for us, we very much believe in having this quote-unquote lab in the loop. So for the experiments we generate in the wet lab, we make sure people on the computational side understand these experiments and are part of the decision making process, because they need to understand the data that they’re getting and what kind of problems the data can actually solve.

Jenny Yang: Like what kind of outcomes are coming from this wet lab data that they can build a model towards. And then the findings in the wet lab will inform a lot to the to the scientists in the wet lab because. The people from the dry lab will be able to communicate the quality of the data that’s coming out the accuracy of the models where bias exists.

Jenny Yang: So if bias exists in one section, maybe we need to be generating specific data sets targeting that. So we, we definitely believe in just really good communication all across and making sure everyone’s involved in understanding the overall experiment.

Grant Belgard: When you evaluate a possible collaborator or a partner, what signals make you excited?

Jenny Yang: I think people who are really interested in getting to a mechanistic or causal understanding of their food product or their offering. So one example is really any ingredient manufacturer: they wanna understand how their ingredient or molecule will perform differently across different populations based on their microbial makeup. And also people who are trying to add another level of personalization and provide more of a personalized understanding to their clients. So you can think of people trying to come up with personalized diets for different individuals, or companies that have an app that helps you understand your gut microbiome and how that affects what you should eat. So really anybody who’s trying to put a molecule in or on you and wanting to understand the effects.

Grant Belgard: Where do you think the field still confuses correlation with something more actionable?

Jenny Yang: I think there’s definitely a lot of difficulty getting to that causal or mechanistic level. So I think a lot of the field previously has been focused just on correlations. They’ve been saying these communities of bacteria exist and they might lead to better or worse health, or they might suggest that you should have these foods versus others. And I think it’s still very much at that level. We’re really just at the cutting edge of being able to start getting down to the mechanistic level at a scale we wouldn’t have been able to reach before. So I think there is a little bit of an onus on us to demonstrate that this is scalable.

Jenny Yang: ’cause I don’t think people will necessarily be able to implement this high throughput until they’ve seen it demonstrated to a level that they believe they can trust it. So I think there is a lot of work to be done there.

Grant Belgard: What do people building tools for life science often misunderstand about adoption?

Jenny Yang: I think something that I’ve realized over my time working in machine learning is that a lot of people immediately expect machine learning models from one place to work really well when they’re brought to an external setting. And I think the attitude with foundation models is one that I believe in a bit more, which is that a foundation model is a starting point to help you fine-tune toward that really high quality model for your own application or your own purpose.

Jenny Yang: I think. Getting to generalizability in such a complex and diverse field, like anything related to biology or microbiology especially. It’s gonna be really difficult to assume that there’s one machine learning model that’s going to work for all. It’s analogous to what we’ve said about the one size fits all model in, in medicine as well.

Jenny Yang: It’s hard to believe that there’s one medicine that’s just gonna work across. Everybody and work every time. I don’t think we should have that belief for machine learning as well. So I think the foundation model space has really opened up the understanding that you start with a model that’s going to bring you to 80% of the way of solving your problem.

Jenny Yang: But if you have your own data sets that are built in the context of your own setting and the problem you’re trying to solve, if you fine tune on that data set, then you’ll be able to build a really strong machine learning model for your own purposes. So I think bringing tools into the life sciences, should not be like just flat out expecting that there’s one tool that’s going to work perfectly across every every team, every problem, every setting.

Jenny Yang: I think there needs to be care into fine tuning these models.

Grant Belgard: Couldn’t agree more. If you could instantly fix one part of the big data ecosystem, what would you change first?

Jenny Yang: I think language modeling, so ChatGPT and all these different large language models that have come out, has demonstrated that you can scrape the web of its data and build really strong models that are general purpose and can be used by a lot of people. I think that is a really effective approach and it works for language modeling, but that’s not something that necessarily translates to biology.

Jenny Yang: So I think you see a lot of companies coming out and people building models on all the biological data that they can scrape from the web. And I’ve definitely seen this in the field we’re in right now with the microbiome, but there’s so much bias that exists depending on where you get this data from, that I think it would be very hard to say that you could scrape the web of all this biological data and get to a general purpose model that will understand the problem. Because a lot of these data sets have their own scientific processes. The measuring tools are different. There’s just so much bias that is not controlled that I think it’s a hard attitude to bring into the life sciences. I think you can leverage public data, but I think at the end of the day you need control and confidence over the biological process that you use to generate this data. I very much believe that, especially in machine learning, it is garbage in, garbage out. So as much as you can leverage the public resources, you should, but you should take care into, again, fine tuning, using high quality data that you genuinely are very confident in for the problem you’re solving.

Grant Belgard: What first pulled you toward biology, computation, and the overlap between them?

Jenny Yang: I’m honestly not too sure. When I started undergrad I was in engineering physics and really working on robotics and systems engineering. And this was around the time when machine learning was really starting to ramp up and there was a lot of hype around it. And I think naturally I’m just attracted to novelty and learning new things. So I wanted to try machine learning, and when I first stepped into it, I was essentially just scraping a genomics database from the web and building a small program that would analyze it, like a really simple analysis. And then I cold emailed 20 different professors at UBC asking if I could demonstrate what I’d built. And Steve Jones actually just offered me a job after he saw that program. He runs a research lab at the Genome Sciences Center and works very closely with clinicians, all working on personalized medicine for oncology. So I spent a summer there and just fell in love with the intersection of getting to actually work with clinicians to develop something that’s usable by them and would benefit patients, while also working with bioinformatics and machine learning teams on big data challenges. And ever since then, I basically asked Steve for a different position every year. I was still in school at this point, so I would ask to come back in the summer or do a directed studies program with him, and I just kept coming back to the problem and falling more and more in love with it.

Jenny Yang: So that’s just what started me on the journey. It felt like just a positive flywheel that I’ve never jumped off of.

Grant Belgard: Looking back, what early experiences most shaped the way you go about choosing the problems you work on?

Jenny Yang: I, I think I always start with the clinical significance. So when I was at the Genome Sciences Center, the clinicians would always tell us about the problems that they or like the way that tooling could help their processes. There were models that were being built that would try to take genomics information and then predict the subtype of cancer. There was, so that’s like a decision support tool. There were teams that were working on creating annotation systems for automatically annotating histology images and marking out where tumor cells were. And that was part of my Master’s project as well. And I really believe that translational AI can be so effective.

Jenny Yang: You do have to make sure that it works with the clinicians or the people that would actually be using these tools like you. It’s if you come in as just a machine learning researcher, create a tool that you think is useful, it might not actually work with the workflow of a clinician that would be using it.

Jenny Yang: So I think I’m always driven by, let’s start with what can help the people that would be the end user of this tool. And as I’ve moved forward, like through my PhD. And then now with Outpost. I think another thing that really interests me is where are we starting to generate new data sets that we couldn’t have generated before that can unlock insights for more people, more broadly.

Jenny Yang: So I, I definitely have seen kind of the AI for drug discovery and all other biotech fields move towards data-driven analysis. So I know high- quality data that could be used for a lot of different downstream tasks is very valuable because that’s what that’s really where I believe the field is headed. so the microbiome field was really an interest to me at this point because the field of, for example, metabolomics has allowed us to actually study biochemical reactions and get down to a mechanistic level of what’s happening in the microbiome. So that means we can potentially unlock new data sets that we wouldn’t have access to before, at least not at the scale that we could have now which can help solve a lot of problems in a lot of different fields. So I think that’s been a more recent excitement for me.

Grant Belgard: What were some forks in the road that turned out to matter a lot more than they seemed they would at the time?

Jenny Yang: Ooh, this is a great question. I think one decision that we really had to make was at what point do we open our own wet lab. When we first raised our pre-seed funding, we thought we would just outsource all the experimentation; part of that is cost savings and just efficiency.

Jenny Yang: We assumed that CROs would have the capabilities to just do the experiments we wanted to them to do, and we ended up opening our wet lab t his year. And that’s not something we planned on doing for the first fundraise. And it ended up being really an important decision because what we learned is that what we were trying to do is really specialized.

Jenny Yang: I think we talked to over 60 different CROs and just couldn’t find one that could do what we did. And we had to really work with the CRO that we ended up working with really closely to develop these experiments. And yeah, bringing it into the wet lab, I think it’s really allowed us to speed up. Both data generation ended up becoming a cost saving because we could just move faster. We’re not paying for like external labor of doing the work as well, and we can just test a lot of different variables that we wouldn’t be able to really do with a CRO without adding costs and like extending the time of experiments. So that ended up being a really good decision for us. I think there is something to say about having control over, just what gets done and at what time and what speed.

Grant Belgard: What made you decide that building a company was the right vehicle for the problem you cared about?

Jenny Yang: I had I, so I’ve always been someone who’s been very motivated by the mission that I’m working on, so I’m one of those lucky people where I found a field that I liked really early on in undergrad and have been through that throughout my entire adult career. And I also am someone who had a really good experience during my PhD, so I came outta my PhD still with a positive view of academia.

Jenny Yang: So I was very fortunate for that. I think I would’ve been very happy, doing this type of work in academia, doing it in industry, or doing it in doing it at a startup scale. What I really cared about when it came to making the decision was, how much ability would I have to create vision that I see because I’ve spent a long time building up the skillset and the confidence that I have, I ideas that I can actually follow through on, which meant I really wanted to make sure I was working on a problem where I could have some autonomy over decisions that were being made. wanted to make sure I worked with a team that I really I’m really passionate about what I do, and it’s really made much more enjoyable when you’re around a team that inspires you, that you admire and you trust and are also similarly motivated. And I think if I found, an academic lab that met all those criteria, I would’ve been very happy. And if I, it just happens that I got dragged into the startup world. So definitely a decision that I’m happy I made ’cause I’m having so much fun doing it and I’m working with people that are incredible and on a problem that I think is very worthwhile. But I think one part that maybe I didn’t appreciate earlier on that there are certain constraints within being in an academic lab that I think we don’t have necess, I haven’t felt the same being in the startup world. So for example, like academic labs can really focus on a specific theme. But when you jump into the startup world, I think it really is just what you choose to do as. As a company. Of course there’s other disadvantages, but it’s been a great setting for what we wanna do right now.

Grant Belgard: What parts of your earlier training or work turned out to be unexpectedly useful once you started building?

Jenny Yang: I think one of the biggest, I think one of the biggest advantages of some of my early work that’s helped me now was actually seeing machine learning translated into the real world. So when I worked at the Genome Sciences Center, I could actually see people building the models and then clinicians using them literally the next week. And that was very rewarding. I saw people developing automated tools for annotation, and then clinicians were actually using those tools immediately. So seeing that translation was incredible. Even through my PhD, I worked on a project really closely with clinicians and we actually deployed that AI tool across hospitals and saw clinical validation of that tool in real time.

Jenny Yang: So as people came into the emergency departments, we saw them being triaged using AI and that I think is an incredible experience. ’cause I do think there are a lot of people who build AI models, they publish a paper on that model, and that’s the end of you hearing about that model or seeing the progress of it. And I think that’s a shame because I do feel like are so many ways that AI can help people and support people in their jobs. So by actually having seen multiple examples of that, especially in a field like healthcare and medicine. Gives me like a certain mindset of what kind of rigor needs to go into building AI models that we want to deploy at Outpost.

Grant Belgard: As your responsibilities have changed over time, what did you have to unlearn?

Jenny Yang: That’s a great question. And I think a big thing that I had to unlearn was, I can’t do everything on my own. So I had a very independent

Grant Belgard: PhD

Jenny Yang: where I really got to choose the projects I worked on, drive those projects, and I had to do them from beginning to end. I did all the coding, I did all the research on it.

Jenny Yang: I wrote all the papers, et cetera. I did all of that quite independently, and I of course I had my supervisor and like clinicians and other people on the projects that I could talk to, but I really made all those decisions and something that I’ve had to. Not necessarily unlearned, but like a new skill that I had to learn was delegating, being more organized and combining different people’s outcomes to create the final outcome. It’s very different than being an academia where you’re like much more of an independent researcher. Now we have a team with people working on different aspects, on a really big So That’s been a change.

Grant Belgard: Who or what most shaped your taste in science, leadership, or risk?

Jenny Yang: I think it’s a mix of two people. So Professor Steven Jones at the Genome Sciences Center. I feel like he leads with a lot of trust. He’s also someone who lets the work speak. He doesn’t say a lot necessarily, but everything he does say is very inspiring, has purpose, has weight, and I think that’s a really thoughtful leadership approach.

Jenny Yang: I think it’s also a personality thing as well, but I really like how he leads with trust. When I was at Oxford doing my PhD, my professor David Clifton, I also think he leads with a lot of trust, which I appreciated. He also was just so efficient. If I needed anything and I emailed him, I would get an email back within probably 10 minutes, unless he was in the middle of a meeting. And that just made me realize how important it is to give people the things they need to do their work really well, especially if you’re in a leadership position. You don’t wanna be the person blocking your teammates from moving quickly or doing their job to the best of their abilities.

Jenny Yang: So being present, trying your best to stay organized, and answering emails. Even now, when I see my inbox fill up, it can be a little anxiety inducing and I would love to procrastinate on it, but I know as a leader especially, I gotta just be on top of that, because there are gonna be certain things that I just have to answer to keep the machine running smoothly.

Jenny Yang: The other thing about David Clifton that’s really inspired me is that he’s a very good listener. And I think Steve is like this too. I think they really enjoy listening to what other people have to say, because if you can understand the people around you, you can make better decisions, especially in a leadership position.

Jenny Yang: I think one of your main jobs is just taking the information you have and making the best decision you can at that time, and that does require more listening than speaking.

Grant Belgard: Was there a point at which your definition of success changed, and if so, what caused the shift?

Jenny Yang: I think my definition of success has not necessarily changed. And what I mean by that is I’ve always been someone with a growth mindset. I like setting small goals that are achievable along the way, and celebrating those goals, but there’s always something exciting to look forward to next.

Jenny Yang: So I think my definition of success, when it comes to completing tasks, really is just: finish what you start. Even if you get an outcome that isn’t great, wrap it up, and you can celebrate those small wins. And I think that’s been nice. I’m not of the mindset that because you always have something else to look forward to, you should never feel like you’re finished.

Jenny Yang: I think you should celebrate the small wins along the way, but I do define success as knowing that you can finish something from start to end, even if it’s not necessarily the preferred outcome. You can wrap it up and learn from it and then move forward. And I think now, working with a team, my definition of success probably has changed a bit more, because now it’s also about keeping team morale up, seeing how motivated other people are, seeing other people’s ability to do their tasks from beginning to end, and hopefully being part of the support system that gets them there. So if anything, the definition of success has just become more of a we thing than just an independent thing.

Grant Belgard: What advice would you give to someone who wants to build at the interface of biology and computation?

Jenny Yang: I think one piece of advice would be making sure that you bring on experts in the respective parts, because I really think if you build at the intersection of AI and biology, healthcare, et cetera, there are a lot of important components to it, and it’s not gonna be tackled by just machine learning engineers or just clinicians or just biologists. I really think it is going to be a mix of all three. And you need to tightly couple the understanding between everyone: high level enough that people understand the overall goal and how the components connect, but in depth enough that you have the depth of knowledge for each individual component.

Jenny Yang: Because it’s not a single-faceted field. And every one of those components will be just as important.

Grant Belgard: How should early career people think about depth versus breadth?

Jenny Yang: I do believe, if you’re going to take the leap into building something yourself, that you should build up mastery of at least a skill set that will contribute to that. When you’re building up that mastery, it’s okay to be very focused on a specific niche, but understand where it fits in the general context of the field, because as one person, you can really only tackle one specific component of something so big really well. Understanding where it fits can help you, or at least what I’ve found is it helps you, decide what the next step could be, or what the next application of what you’ve done is, because you’re really building modularly in that sense, rather than immediately having to pivot to a new area. I really wanted everything I did over the course of my career to build on one another, so it always felt like the skill set I was building, and the effect it could have within the greater space, were compounding.

Grant Belgard: What mistakes do smart people make when they first enter this world?

Jenny Yang: I think entering specifically the AI and biology space and probably other adjacent spaces, but entering it from a startup perspective, I think a lot of people hear that you have to maybe cut corners or take shortcuts and just move fast and break things in order to survive in the startup world.

Jenny Yang: And I think there is some truth to that. You only have a certain amount of runway, and you don’t wanna waste time in one area and not learn enough to know the correct direction to go. But especially with biology, where experiments matter so much, and a milliliter off in an experiment will absolutely derail you, I think there is some care that needs to be taken in doing things in an organized fashion at a more steady pace, because at the end of the day, that experiment is going to do its own thing whether you did it quickly or not. And you don’t wanna come out of, for example, the first year of company building saying, okay, we have data and that’s all we have.

Jenny Yang: I wanna be able to say we have meaningful data that’s high quality. So I think there’s more caution that needs to be taken coming into this field from a startup perspective than what you typically hear for a purely software product.

Grant Belgard: What habits or practices have helped you keep learning while the field keeps moving, seemingly faster and faster as time goes on?

Jenny Yang: That’s such a good question, especially when we seem to see new AI developments every day, every hour. I think for me it’s listening, so definitely listening to the people around me. I have incredible AI scientists and microbiologists around me. And my co-founder Alex, our COO, is also just very interested in all the novel advancements coming up, so being able to listen to the people around me really keeps me up to speed.

Jenny Yang: I also like reading. I have news on my phone that I’ll read every morning, I get an update on the news every afternoon, and I just personally like to keep up when I have a little coffee break. The other thing is, even though I’m more in a business position now, whereas before I was doing hands-on coding, I’m still very involved with the experiment planning and reading some of the new articles coming out. I think having some sort of active component of my day, actually being a little bit more hands-on, not to the extent that I was before, keeps me really up to date on what’s happening.

Grant Belgard: How do you think about credibility when the vision is bigger than the proof you can show today?

Jenny Yang: I think it’s very much about how you present the work you’re doing and how you communicate it. So I very much want people to believe our vision, and it’s a huge vision. I would love to see a world where we can have personalized health outcomes based on our microbial communities: in our gut, which is where we’re starting, on our skin, in our mouth, et cetera.

Jenny Yang: I think we’re all so unique in that sense that our personalized health recommendations should consider that. And that’s a huge vision. Saying I’m gonna create some kind of generalizable model to be able to make these recommendations? Huge vision. But I think people believe in that vision.

Jenny Yang: But when we communicate the science we’re doing, we communicate it at the level that we’re achieving it at, which is: we’re starting with a focus on the gut microbiome, we’re starting with these classes of drug molecules and these types of food molecules, and we’re running these types of experiments. We are going to open source things and we are gonna publish, so we’re gonna be very open with the field to allow other people to evaluate us in peer review. I think credibility is all in how you communicate what you do have. And I think you have to be very responsible there, because I would hate for anyone in the field to put out false statements about the truth of the science, especially in a field that’s so important for human health.

Grant Belgard: What question do you wish more people asked you?

Jenny Yang: Ooh. I’m not too sure. I think a lot of what motivates me is the people that I work with, and I feel like I would love to have more opportunities to highlight them. Especially as a founder of the company, I naturally tend to have to be the face of the company and be the person in interviews or answering questions, but I think we wouldn’t be where we’re at without the team we have, and everyone is so individual in what they’ve brought to the team that it’s always nice to have more opportunities to highlight them, because then naturally I can even go more in depth about certain aspects of the company. ’Cause, like I mentioned, being at the intersection of AI and biology, there are many components to it.

Grant Belgard: What part of your work still gives you the most curiosity or wonder?

Jenny Yang: I think it’s the translation of real biology into digital form. Because to me it’s a very interesting problem to see whether or not we can really digitize, or make computable, certain aspects of biology. I think we’ve seen a lot of bodies of evidence that suggest we can. And it’s just really a fascinating problem to me, ’cause it feels like you’re taking something that’s, this is a poor analogy, but the only one I can think of right now.

Jenny Yang: But you’re taking something that is very 3D and trying to binarize it, and there are only so many patterns you can represent. Although I do feel like, with the rate that AI is moving, maybe this will be really possible. But getting there, I think it’s exciting to watch that progress.

Grant Belgard: What belief about the future of biology do you think more people should wrestle with?

Jenny Yang: I think it really is that a lot of these biological processes can eventually be automated in the way that we research them or generate data from them. I think there is gonna be a very tight interplay in how biological experiments are done, the transition from humans manually doing them to some automated approach, and then looping that in with AI. It is something that I do believe in. I’m not sure to what extent it’ll be fully automated, but I think people should really wrestle with the idea of which parts we can start automating to make things a lot more efficient, and then where we still need a human in the loop, and how that kind of evaluation works. And definitely, when I think about where Outpost is going in the future, I would love some aspect of automation to really scale things up quicker and make certain processes more efficient. It is something I wrestle with: where do we make sure we have a human in the loop, and what shouldn’t we automate just yet because we’re not gonna be there?

Grant Belgard: What would you say to your younger self at the very start of this journey?

Jenny Yang: I think one thing I would say to myself is to trust my gut and make decisions, make the best decision you can, even if you don’t have a full understanding of what the outcome will be. I think that’s very important. I think just naturally in a lot of different leadership roles, you don’t have time to get all the information you need, but the quicker you can make a decision and act on it, the quicker you can pivot if you’ve made a mistake.

Jenny Yang: The worst thing that could happen is you don’t make a decision at all and there’s no progress being made, because it’s better to make the wrong decision, learn something from it, and then immediately move to the next solution. So I would say: just move quicker on some decisions, even if you don’t have full information.

Grant Belgard: And lastly, what do you hope listeners remember from this conversation a week from now?

Jenny Yang: I think one thing I would love people to really take from this conversation is maybe an additional spark of curiosity about how, as humans, we’re not just human DNA, we’re actually these ecosystems. There are trillions of microbes in and on us, and they really affect how we experience health and disease. Not to creep anyone out or anything, but maybe it’s just curiosity around how you eat: if you eat something, all these microbes are also eating it. When you use a skin intervention, there’s a bunch of microbes dealing with it as well. So maybe it’s just thinking about us more on the systems and ecosystem level, rather than as just a static human being.

Jenny Yang: We’re actually very interesting in that sense.

Grant Belgard: Jenny, thank you so much for joining us today.

Jenny Yang: Thank you so much, Grant. Really appreciate the time.

The Bioinformatics CRO Podcast

Episode 82 with Manuel Corpas

Dr. Manuel Corpas, founder of Cambridge Precision Medicine and originator of ClawBio, discusses his experience as a genomicist, entrepreneur and educator working at the intersection of genomics, AI, and health data science.


Manuel Corpas is a Senior Lecturer at the University of Westminster, founder of Cambridge Precision Medicine, and originator of ClawBio.

Transcript of Episode 82: Manuel Corpas

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to the Bioinformatics CRO podcast. I’m your host, Grant Belgard, and joining me today is Dr. Manuel Corpas, a genomicist, entrepreneur, and educator working at the intersection of genomics, AI, and health data science. Manuel is a senior lecturer at the University of Westminster, founder of Cambridge Precision Medicine and the originator of ClawBio. His work has also included contributions to efforts and tools such as Decipher and BioJS. Manuel, welcome to the podcast.

Manuel Corpas: Thank you, Grant.

Grant Belgard: So for listeners meeting you for the first time, how do you describe what you do right now?

Manuel Corpas: I have been doing bioinformatics since 2000, first as an MSc student at the University of Manchester, then a PhD in bioinformatics. Then I moved to the Sanger Institute, where I developed one of the leading databases for diagnosis of rare genomic disorders. I was there at Cambridge for about four years. Then I started the company Cambridge Precision Medicine, which was part of the Cambridge University incubator, and did that for a few years. Then COVID came and my wife said, oh, we need to move to London because I want to be closer to my family, and so on. So I moved to London, where I got back into academia, now as a senior lecturer in genomics at the University of Westminster. I lead a new MSc program on AI and digital health, which is in its second year, with 20 students. So I think that gives you a little bit of the initial setup.

Grant Belgard: What led you to create ClawBio?

Manuel Corpas: So I think the prelude for ClawBio was the BioJS project, an open source community I led around 2014. The idea at that time was basically to come up with a way to stop reinventing the wheel, in terms of reusable components that people would want to use for biological data on the web. Fast forward to where we are today, where, at least from where I see it, the future looks agentic. Knowledge, certainly in the biological domain, but you could say the same for many other domains, though I am based in the biomedical domain, tends to be captured in publications, PDFs, which are non-native for discovery by agents. And by agents, I mean AI software that runs continuously. You’ve heard of Alexa, right?

Manuel Corpas: It talks to you, you can ask her questions, it’s able to do specific tasks, and it’s on all the time. So the extension of that is that we now have the possibility to use AI-powered agents, or robots, which leverage the power of the frontier models underneath, such as ChatGPT, Claude, Perplexity, Gemini, you name it. The idea around ClawBio is to come back to really the same situation. We have bricks of knowledge which currently are not discoverable, not reusable, not reproducible. So how can we make a sort of central repository, a registry, where anyone who is developing these sorts of discrete skills can find them in a way that there’s no restriction: MIT license, so anyone can see the code, can push code. Obviously there are some guardrails; I look at the code and make sure that it’s audited.

Manuel Corpas: And at the end of the day, ClawBio stands on the idea of using this incredibly widely adopted new tool called OpenClaw, and I’m going to explain what OpenClaw is. OpenClaw really is the planning layer between the large language model and the communication that you have via chatbot, via Telegram or WhatsApp or Discord. The planning in between is done with OpenClaw. And OpenClaw has been gaining a huge amount of traction: in something like three months, it’s been the most starred and downloaded project in the history of GitHub. And right now, it seems like it’s the sort of base technology on which agentic AI is going to happen. So if you are in the AI domain, you will have thought about agents and agentics all the time.

Manuel Corpas: But if you are not in that domain, the hype, or the heat, around agents is that for the first time we have enough capacity, via some of the big providers, GPT, Anthropic, and so on, whereby they give you the possibility to develop code in a way that is basically hands-free. And when you transform that code development into specific tasks and chain them into specific skills, problems, whatsoever, you can very easily develop a library of repetitive tools which you can automate without having to be there. So a lot of the vision is: how can I automate as much as possible of the tasks that we normally had to do ourselves? Now we can actually have your agent running them without you having to be on the computer all the time.

Manuel Corpas: And you can talk to the agent, because it will already have pre-programmed the skills necessary to chat with you. You can see it via your chat, you can ask it questions, and it can write code and change things just with plain English. So I think that’s really the key of ClawBio: it’s the application of that paradigm to the biological domain, specifically around reproducibility, and also this open source community aspect, to be able to share and to be able to reuse.

Grant Belgard: So noting that it’s early March 2026, and this was really just very recently launched, what would you say are currently the most mature skills ClawBio has?

Manuel Corpas: None. I wouldn’t say that any of them are mature. The project is only a few weeks old, so I cannot claim that. What we do have are some minimal viable products: some minimal functionality that actually proves the point that you can really do something useful. And I guess for me, the coolest application is a robot which is currently accessible, called RoboTerri. RoboTerri’s soul is inspired by my PhD supervisor, Professor Teresa Attwood, with whom I did my PhD in bioinformatics at the University of Manchester. And you can basically talk to RoboTerri, which already has my personal genome preloaded.

Manuel Corpas: One of the things I tend to demo now is that I can take a picture of a drug pack. Say you go to the pharmacy having been prescribed a drug, and you would like to know whether that particular drug is compatible with your genetic metabolism for processing it. We know that the pharmacogenes significantly influence your ability to respond well or adversely under the normal prescription guidelines. We know that most prescriptions and dosages are based on the average population. And if it happens that you are not a white, northern European male, then the further you are from that reference population, the more likely you are to have an adverse drug reaction. Many of those adverse drug reactions are rooted in your genetics.

Manuel Corpas: So you just take a picture of your drug, and RoboTerri, which has your genome preloaded, is basically able to identify the active principle in the drug, in the medicine, compare it against FDA-approved guidelines for dosages based on genetics, and then, against your current genetic variation, work out the appropriate dosage based on those guidelines. And then it gives you back a report right there on your telephone, specifically saying avoid, or yes, this is good for you. So that’s the minimum viable product I currently have and demo the most.
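
The lookup step of the workflow described here, active ingredient plus metabolizer phenotype in, dosing advice out, can be sketched roughly as follows. This is an illustration only: the guideline table, the `check_drug` helper, and the advice strings are all hypothetical, not ClawBio's actual code and not real clinical guidance (real systems consult curated FDA/CPIC pharmacogenomic tables).

```python
# Toy sketch of a pharmacogenomic dosing check. All data below is
# hypothetical illustration, NOT clinical guidance.

# Hypothetical guideline table: active ingredient -> gene -> phenotype -> advice.
GUIDELINES = {
    "codeine": {
        "CYP2D6": {
            "poor_metabolizer": "avoid: insufficient analgesia likely",
            "ultrarapid_metabolizer": "avoid: toxicity risk",
            "normal_metabolizer": "standard dosing",
        }
    }
}

def check_drug(active_ingredient, genotype):
    """Return per-gene advice for a patient's metabolizer phenotypes."""
    gene_table = GUIDELINES.get(active_ingredient.lower())
    if gene_table is None:
        return {"status": "no pharmacogenomic guideline on file"}
    report = {}
    for gene, phenotype in genotype.items():
        advice = gene_table.get(gene, {}).get(phenotype)
        if advice:
            report[gene] = advice
    return report or {"status": "no matching variant guidance"}

# The agent would first extract the active ingredient from the photo,
# then run a lookup like this against the preloaded genome:
patient = {"CYP2D6": "ultrarapid_metabolizer"}
print(check_drug("Codeine", patient))  # {'CYP2D6': 'avoid: toxicity risk'}
```

In a real pipeline, the image-to-ingredient step and the genome-to-phenotype step would each be their own skills; the table lookup shown here is just the final, simplest stage.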

Grant Belgard: And looking forward to say the end of the year, what would your best guess be about where this project or successors of it go?

Manuel Corpas: Yeah, the reason why I’m now in this place, I’m now in an incubator building, is that I’ve been invited by a number of potential investors who are interested in taking this project further. And so the priority for me at the moment is that we’re talking not about a normal type of data; we’re talking about very highly sensitive data, your health data, your genome data. This is data which is incredibly sensitive. So if you are going to reuse any of those components, and you have no idea who the heck has developed them, there’s a trust problem. Having a library is very beautiful, yeah. But when you are talking about your own personal data, or data which is highly sensitive, like patient data, you’re just not going to reuse that.

Manuel Corpas: So what I want to build now is a little bit like what you have seen, for instance, in some social media profiles, where you have this sort of blue badge showing that a person has been certified. We could have some kind of certification aspect where, if we give that blue badge to any given skill, we can guarantee, obviously for a fee, that the skill has been audited against regulatory systems, the European regulatory system, and that it has compliance with, for instance, a standard like ISO 27001. That way, by building that trust for specific skills, which people will pay for, we can add that extra layer of certainty and trust, which I think would enable much more trusted resources. So I think for me, it’s not a question of how big this is; we’ve already had at least 40 contributions, and we can see that the project is really very popular.

Manuel Corpas: But I think the issue now is how do we make this into something that can be trusted? That’s really where my current thinking is.

Grant Belgard: What do you find people most frequently misunderstand about AI within the bioinformatics community?

Manuel Corpas: That they have no idea how quickly things are changing, and the power of new versions. From 5.2 to 5.3, people think there’s a small incremental change. No, this is 10 times better. People don’t understand two things. One is the exponential growth in capability that we’re experiencing every three months. I don’t know about you, but I work completely differently from how I was working three months ago. If you are like me, you’re running agents at the same time, at any given time, all these instances. I don’t look at my emails first anymore; now I look at where I left things with that agent last night and see where we’re at. So that’s one thing. And so I gave a presentation at the London Bioinformatics Meetup on the 26th of February. Yeah, that’s nearly a month ago. And I raised it. It’s meant to be a community of bioinformatics practitioners, right?

Manuel Corpas: You have people from major institutions, people whose names you know. I’m not going to say any names. I asked them, does anybody know what Claude Code is? No one. And I was like, oh my God. And these are meant to be the top people in my field. I think that, or maybe I’m just mad. That’s why I said I was mad: because the relentlessness, the pace, the acceleration of how things are changing, at least as I live it on a daily basis, is just absolute. I don’t even sleep, because I see this coming. And to be honest, I made a very conscious decision. I have two options. Either I stay still and somebody else makes the decisions for me, as is happening right now, and I’ll give you an example in a minute, or I actually invent something and somehow shape, in my own very small niche, what the future is going to be.

Manuel Corpas: Because otherwise, unless I invent it, I’m going to be out, as simple as that. And me being out, I can’t live with that. So, the example. I was doing a benchmark of the main frontier models: Gemini 3.0, Claude Opus 4.6, Sonnet 4.6, GPT 5.3, DeepSeek 3.1, or I think it was 3.0. And I was basically benchmarking, for each of the main World Health Organization diseases that put the greatest burden on humanity, how well each of these models is able to query back some sample research output I had taken from the biomedical database called PubMed. Basically, PubMed has all of the biomedical literature indexed, and you can query PubMed for Ebola, let’s say, or for type 2 diabetes or ischemic disease or whatever.

Manuel Corpas: I was actually looking at the World Health Organization’s global burden of disease list, around 170 diseases, which include Zika virus, Ebola, I don’t know. And Claude wouldn’t let me query for some of those diseases. I know they think they’re doing it for the right reasons. Okay, these are guardrails. But I’m like, who are some anonymous software developers in Silicon Valley to decide what I should be doing? Because, I’m sorry to say, I’m a bona fide researcher, and you cannot, or should not, dictate my freedom. I know this is a very silly example, but I think it’s just a taste of what could happen very soon. We’re not talking about two years; I’m talking about months down the line, because one month is a year at AI speed.
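
For listeners curious about the mechanics, PubMed queries like the ones described here can be made programmatically through NCBI's E-utilities. The `esearch` endpoint below is the real service; the helper function is just a minimal sketch that constructs the request URL without actually sending it.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (real public service).
# We only build the request URL here; fetching it returns PubMed IDs.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query_url(term, retmax=20):
    """Build an esearch URL that returns up to `retmax` PubMed IDs for `term`."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    return EUTILS + "?" + urlencode(params)

# One query per disease in a benchmark list:
for disease in ["Ebola", "type 2 diabetes", "ischemic heart disease"]:
    print(pubmed_query_url(disease))
```

A benchmark like the one described would fetch these URLs, then compare each model's answers against the abstracts the IDs resolve to.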

Grant Belgard: So in this very rapidly changing environment, how do you decide when a tool or workflow is trustworthy? What do you look for?

Manuel Corpas: So the first thing is transparency. Can I actually look at the code? Are there any black boxes, basically? If there are black boxes, then I don’t trust it, because there’s no transparency. And unfortunately, most of the LLMs I use are black boxes. So how can I trust something which is a black box? Transparency is one. Secondly, I want to see the faces of the people. I don’t trust a project where it’s just an anonymous community. I want to see the people behind it: who is developing it, who is making the decisions. Because if I have a problem, I don’t want your LLM-powered chatbot to give me one of its arbitrary sycophantic answers. Third, I want to see: is this scientifically grounded? What are the thoroughness criteria? I want to see the track record of the people. I want to see their LinkedIn profiles.

Manuel Corpas: And I guess the irony of this is that suddenly the human side of a project becomes so much more important. I was having this conversation earlier with one person here, and suddenly trust becomes paramount for anything. It’s all about trust now, not about technical prowess. For me, the non-negotiable asset, the core, is trust.

Grant Belgard: How do you think that will evolve over the next few years? Because the technical side of things, yeah, I feel like the pace of this is massive for anyone paying attention.

Manuel Corpas: To be honest, I have a lot of difficulty even predicting, even understanding, the present, let alone the future. I don’t know about you, but I think we are all drowning, and it’s humanly impossible to keep up with everything that is happening. So I can’t predict the technology, but what I can see is what will help me be better prepared for the future, which is something I live on a daily basis. And to be honest, these are things I had not paid too much attention to until now, but the principles that I know will help me are the following. One, don’t get overwhelmed. If you can just do 1% of catching up every day, you are doing fine. I’ve seen it, because in three months’ time, that is not 1%. It compounds.

Manuel Corpas: And I have seen that just by, every day, listening to podcasts like this one, where you have trusted people, whoever you feel you can trust. And you keep your own environment, because right now it’s becoming less and less important where you are, as long as you have access to the right information channels and you keep connecting on a daily basis. The second thing is that I’m training this sort of gut feeling, my personal intuition. Whenever I get a result now from an agent, from an LLM, I don’t even necessarily read it all; it’s more to do with the gut feeling. So I am now making decisions a lot more with my gut than my brain, because I feel that the logical part is sorted now, and it’s not so much about the logic of intelligence, but about how it feels.

Manuel Corpas: And then the other aspect that I have now accepted, and I think it gives me a tremendous competitive advantage in terms of the principles for the future, for how things will evolve, is that I feel unstoppable, in the sense that for me, intelligence is not the limit anymore; it’s my capacity to ask questions. For the first time ever, I can ask, and this is already a cliché, Peter Steinberger, the creator of OpenClaw, says it: you have an infinitely patient machine that will be able to explain things at your own level. There’s no excuse now not to want to learn. And another thing that is becoming a cliché as well: some people say, oh, this is going to atrophy your brain, because it’s making all these decisions for you, and they’re thinking, well, that’s a valid concern.

Manuel Corpas: But I can tell you that you can flip that around and harness it to take on problems that you would never have thought you would be able to tackle before. Because now, if there’s something you don’t understand, you can always ask. And so that’s my strategy. I cannot predict the future, I cannot really understand the present, but I have this sort of inner guiding compass, which is now more important than ever, and I think it keeps me, to be honest, ahead of the curve. And at the end of the day, the adoption curve is here. So as long as I’m ahead of the adoption curve, I’m going to be fine.

Grant Belgard: And how do you think about maintaining scientific rigor in this world? It seems like it takes more time to consume and digest outputs than it does to create them.

Manuel Corpas: Yeah. So obviously we’re being swamped, so my approach is very simple. Have you heard of the dragonfly method? Dragonflies are the most terrifying predators you could ever think of. If they were human-sized, we would all be dead; there wouldn’t be humans. They are so precise. Why? Because they have 10,000 lenses. In other words, they have 10,000 different perspectives from which they can really calibrate their environment. The way to survive this sort of lack of validation or verifiability of your work is that you can train your LLM to judge it as a reviewer. Then you can add another lens that comes in, which is the funder, another lens that is the patient, another lens which is the general public.

Manuel Corpas: And so I’m going to give you complimentary perspectives that allow you to, for the first time have really, if you trust your model, I guess that for me, the trust comes from what model do you trust? So if you have a model that you trust, and the only way to trust that model is by using it and by tinkering, by testing it, by seeing, oh, this one is better for emotional empathy. Oh, this one is better for generating figures. This one is better for integration with this particular tool. And that’s the kind of knowledge that you can only guess if you are invested into embracing all these different technologies. And you need to stop being a user. And now if you really want to ride the wave, you must become a builder. That mentality has to change and that will give you the necessary confidence to be able to understand what the shortcomings are. What’s the best way to prompt?

Manuel Corpas: Because as I said, every tool is going to have its own quirks, and not even the providers of these tools understand them. They can only have some kind of general frameworks, and they have their guardrails, but there’s a huge space in between that remains unexplored. And that is where the right strategies lie, because all of these models are going to be generalists. But if you have very specific expertise, in my case bioinformatics, let’s say, then I’m going to win in bioinformatics, for these very specific genomics tasks, whatever. I’m not going to win at the many other general things that are out there. And that’s where I see, for myself anyway, the opportunity.

Grant Belgard: How can others get involved in ClawBio?

Manuel Corpas: So just go to ClawBio.ai, and it will have pointers to the repository where the code is, and then just tell your favorite LLM or tool to help you understand what the code is. And so you have your own personal tutor, and that will be the way. The other thing is that obviously we’re now developing channels. We’re doing hackathons, one here next week in London, and we’ll develop ways for people to become a community through email lists, through Discord. As I said, we have not even had time to set up all of these different options, but I think that the best way is simply to go to the website, tell your favorite chatbot to walk you through it, and then start questioning. Break it. If you break it, I’ll buy you a gift. Stop worrying about messing up. Please mess up. That’s the only way you’re going to learn.

Grant Belgard: So changing track here to talk a bit about you and your own career. Was there a moment when your work stopped feeling like a series of separate projects and started feeling more like a mission?

Manuel Corpas: It’s always felt like a mission, ever since it became very clear to me that I wanted to become a scientist, and the current projects are simply an expression of this relentless need to somehow express all of this energy and all of this feeling I have for the world. I think it really comes down to my own temperament. I must say that I’m a little bit on the autism spectrum, so I get quite obsessed with things. And right now I’m obsessed with AI, I’ve been obsessed with AI for several years now, and with ClawBio and really harnessing the power of agentic AI. Because as I said, I see a lot of potential, I see a lot of danger, and I see a lot of people who, in a well-meaning way, are trying to impose their values, which I may not necessarily agree with. And some people talk about superintelligence as the end goal.

Manuel Corpas: I don’t think that’s one that really excites me, because superintelligence, for me, sounds a little bit like you exclude some people. What if those people are disabled? Some other people talk about superabundance as the end goal of this revolution. I don’t agree with that; it sounds selfish and materialistic. For me, the purest purpose for pursuing this obsession is what I call super-enlightenment. If we take history into account, the Industrial Revolution is the closest thing I can think of to what’s happening today. You had people like Nikola Tesla, Immanuel Kant, or even slightly earlier, Isaac Newton. These were people who saw advancement as a way for growth, not in terms of wealth or power, but as wisdom, a better understanding of yourself and the world around you. And I know that there are real dangers.

Manuel Corpas: And there is the fact that you could have superintelligence ruling the world. I understand that there have to be people thinking about those problems; for me, frankly, it’s a bit of a distraction to focus on the negative. I prefer to focus on the things that are meaningful to me. It’s more of an integration of the technology, even with the spirit, if I can say that. And it’s weird, because you have these new agents that are so [?]. It’s becoming somehow spiritual, even if we don’t mean it to be. Not that I’m necessarily a religious guy, but I think this meaning aspect, this need for me to be authentic and value-oriented, which sounds obvious, is now more important than ever before.

Grant Belgard: Looking back, what risk are you glad you took?

Manuel Corpas: I don’t think I have risked enough. I wish I had. I’m a risk taker, but I don’t think it’s enough. I guess I have this sort of pathological state of mind where I’m never satisfied. So I have a constant sense of dissatisfaction, and that drives me to want to improve, I guess, combined with my obsessive behavior toward things I feel really passionate about. I am not taking more risks because of the people I love, whom I care about. I would be taking more risks if I was on my own, but I think I would probably be dead by now.

Grant Belgard: What skills matter most now that AI is changing how technical work gets done?

Manuel Corpas: Some people say curiosity and inquisitiveness and asking questions. I’m going to go one level beyond that and say it’s about your attitude, about your own grounding, and about having a very, very clear sense of compass in terms of what matters to you and why. Your internal compass: why should you be doing what you are doing? Finding the right motivation, the drive that gives you that inner fire that keeps you going, is for me the most important thing, and not just from now on. This moment is just a new incarnation, but I think that is something which will never change, regardless of what technology we’re surrounded by.

Grant Belgard: What makes someone genuinely strong at interdisciplinary work?

Manuel Corpas: Really, not being afraid of showing your vulnerabilities, and being prepared to swallow your ego again and again around people who know more than you. And I think an essential appetite for learning is also key.

Grant Belgard: Speaking of learning, there are lots of ways to learn. What specific habits in that space have compounded the most for you over time?

Manuel Corpas: Studying. So for me, there’s no day I don’t spend studying. And I don’t mean studying on the computer. I need my physical book, the smell of it. It may sound a little bit old school, but if I am on the computer, I get distracted a lot. So having that sort of discrete physical medium I can touch, there’s this connection, a physical connection, a spiritual connection with that work. One of my constant habits, one of my vices, is that I keep buying books from Amazon which I never read, but it’s one of the things that gives me the most pleasure, just buying books. I know that I’m going to read maybe 50% of them, but I have them everywhere. I have one book in my car, so if it happens that I’m on the underground and I don’t have anything to read, I have another book with me; a book here, a book there. So I always have a book around me, which brings me back to my origins.

Manuel Corpas: And the origin is scholarship, at the end of the day. And that scholarship, for me, is not given by the computer. It’s given by that quiet, calm, noise-free place. In my case, the early morning, 5 or 6 AM, when everyone is sleeping and everything is quiet. I have my little lights and my pile of beloved books there, and I just enjoy that moment of solace and connection with learnings new, old, and ancient, which, to be honest, for me is the essence of civilization, of our civilization. That’s why I don’t think AI is necessarily going to make us less intelligent or less able. Obviously, some skills change: like with a calculator, if I want to do a complex sum now, I use a calculator. Okay. But that doesn’t mean I’m going to become less intelligent, because there are other things, as I said, like this gut feeling, which I’m constantly training.

Manuel Corpas: So you harness that technology and internalize it as a new Swiss Army knife, an artifact which now becomes part of you.

Grant Belgard: And lastly, what advice would you give your younger self?

Manuel Corpas: Don’t doubt yourself so much.

Grant Belgard: So Manuel, this has been a great conversation.

Manuel Corpas: Yeah, absolutely.

Grant Belgard: Thanks.

Manuel Corpas: Thank you.

The Bioinformatics CRO Podcast

Episode 81 with John Connolly

John Connolly, CSO at the Parker Institute for Cancer Immunotherapy, discusses immuno-oncology and PICI’s approach to funding groundbreaking cancer immunotherapy research.


John Connolly is CSO of the Parker Institute for Cancer Immunotherapy. PICI’s mission focuses on accelerating breakthrough immune therapies by bringing researchers, tools, and collaboration structures together to move faster from scientific ideas to patient impact.

Transcript of Episode 81: John Connolly

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome back to the Bioinformatics CRO podcast. I’m Grant Belgard. Today, I’m joined by John Connolly, Chief Scientific Officer at the Parker Institute for Cancer Immunotherapy, also known as PICI. PICI’s mission focuses on accelerating breakthrough immune therapies by bringing researchers, tools, and collaboration structures together to move faster from scientific ideas to patient impact. In this conversation, we’ll cover what John’s working on right now, how his career path shaped the way he leads, and practical advice for anyone building in translational science, especially where data, bioinformatics, and real-world complexity collide. Let’s jump in. So, John, for listeners meeting you for the first time, how would you introduce yourself and what you focus on these days?

John Connolly: Yeah, first of all, I just want to say thanks, Grant. It’s a real thrill to be on the podcast with you and to catch up. John Connolly, as you heard, CSO of the Parker Institute for Cancer Immunotherapy. I guess the way I’d introduce the Parker Institute is, first of all, by saying it’s a 501(c)(3) nonprofit. We’re a cancer charity. As you heard, our mission is really to make all cancers curable diseases, and the organization really lives that mission. We’re very much focused on funding fundamental research in the field of cancer immunotherapy, and then really creating what is the highest concentration of immuno-oncology expertise in the world: pulling all of these experts together, betting not on projects but on the people and what they do, and creating these 14 centers around the US. They’re full centers or what are known as EMR centers.

John Connolly: And this really gives a lot of autonomy to those sites to do that fundamental research. But the Parker Institute is not just a grant agency that gives money out. What we’ve done is really knit these organizations together, a sort of network of networks above each of the individual institutes. What we’re trying to do is break down those barriers, break the silos, so you’re able to exchange data, information, projects, and people between the different institutions. We pre-negotiate confidentiality agreements, material transfer agreements; everything is there. So you can just stand up in the middle of a presentation and send a vector to the next lab immediately, or open up and really have these substantive discussions about what’s next in the cancer space.

John Connolly: And that’s really a major part of the institute: that funding, that research, finding those young investigators, building out those centers around the US. And then the next part is translating that into real clinical trials in the real world. To do that, I think there’s no better place for blue-sky ideation than academia. This is what it does well. But one thing it doesn’t do well is prioritize programs and execute in the clinic. So what we’ve done is create a venture philanthropy arm that spins out biotech companies based on the technology coming out of the network: getting best-in-class science from Stanford, MD Anderson, and Dana-Farber, pulling it all into one company, and then putting that under project management. We’ll fund those companies and actually staff them.

John Connolly: We’ve got a new co-build team that can jump into operational roles and then push that technology out and really test it in the clinic under project management, so you actually see it executed. And we have a business development arm that can then work with pharma to get this out to a much broader community. I think you really do have to have that: if you want to accomplish an ambitious goal like making all cancers curable diseases, you need a line of sight from the blue-sky science all the way to commercialization. And when I say line of sight, you need operational efficiencies in each of those areas. That’s really what the Parker Institute does. It’s been around now for 10 years; we started it in 2016. Really, everything I just described was the idea of Sean Parker.

John Connolly: So Sean really saw this vision that these silos are preventing immunotherapy from reaching its full potential, and that there has to be a way to break down these silos so that the best people at MD Anderson can be working shoulder to shoulder with the best people at Cornell or Memorial Sloan Kettering.

Grant Belgard: What’s the problem you find yourself thinking about repeatedly right now?

John Connolly: One of the biggest problems is that we’re in the middle of a strategy review for the next 10 years, and it’s thrilling to talk about that. I’m working with Ira Mellman, who recently joined the Institute from Genentech, to think about what that 10-year strategy really looks like. And it’s funny to be on this podcast, because what I’m thinking about is how we recruit more AI-native people into the immunotherapy space. The approach that I think works, and that we’re taking, is not to train immunologists to become proficient in this AI-first way of thinking, but really to grab people who are in the tech space, here in San Francisco or in Asia or elsewhere, and turn them into immunotherapists. I think that’s the biggest thing for me.

John Connolly: So we’re excited, and because of the way we’re thinking about strategy, it’s something we’re really pushing forward. So that’s one big one we’re thinking about.

Grant Belgard: What’s something about cancer immunotherapy that’s widely misunderstood outside the field?

John Connolly: Yeah, I think one of the things with cancer immunotherapy is that there’s a lot of expectation around it, this idea that you just simply induce an immune response against the cancer and the cancer will suddenly go away. It’s a difficult thing to do, as much as we’ve created incredible successes in the field. And I really mean that: I think the advent of checkpoint blockade is arguably the biggest advance in cancer in 5,000 years, since the advent of surgery. Really. It works broadly across multiple tumors. It’s curative in late-stage metastatic disease. There’s just nothing else like that. Chemo doesn’t do that; nothing else does. So it’s phenomenal, and when it works, it works spectacularly. The downside is that checkpoint blockade works in 20 to 30% of people, and that’s it. In certain cancers, it works very well.

John Connolly: In others, it doesn’t. Getting a better understanding of why that is, being able to predict it, requires a deeper understanding of how the immune system really works. I think that’s a big one. The other is this idea that it’s a free ride. With any kind of novel therapy that overactivates your immune system, there are obviously side effects, and mitigating those side effects while amplifying the anti-tumor effect is totally essential. And that’s another big area of research.

Grant Belgard: What’s a recent moment where you thought, this is why we do the work?

John Connolly: Oh man, luckily for me, this happens a lot. A lot of effort in immunotherapy has gone into tackling the big cancers: lung cancer, breast cancer, and others. We’ve got tons of programs and projects in those, and they’ve made true progress, phenomenal progress. But what I’ve recently done is take a step back and challenge the network to live the mission, which, as I said, is all cancers curable diseases. Not all cancers merely treatable diseases, okay? Let’s see what we can cure. So we’re actually doing a scoping exercise, looking at cancers that are highly responsive to immunotherapy and saying, let’s move beyond just treating those cancers. Like rare subtypes of melanoma with Antoni Ribas at UCLA, and Merkel cell carcinoma with Paul Nghiem up at the Hutch.

John Connolly: Actually saying, guys, what if we took all of these incredible tools that have been developed for the very common cancers, like lung cancer or breast cancer, and instead focused on rare cancers that we know are going to be responsive, and actually cured these things? Let’s tick some boxes; let’s get these things off the list. And so, phenomenal work again from Tony Ribas: a recent publication on rare subtypes of melanoma showing outright cures. It’s really spectacular. And I’m proud to say we’re putting a clinical trial together at Dana-Farber now, a combination cancer vaccine for Merkel cell carcinoma. We really dove deep into whether we can induce a really potent immune response against this thing that’s highly responsive to checkpoint blockade.

John Connolly: Can we then clone the T cell receptors, ship those to the Hutch, and have cells ready for those patients so that if they do recur, they can be treated immediately and cured? So eliminating cancers, truly curing a whole class of cancer, is something that is really near and dear to my heart. I think it’s just on the horizon.

Grant Belgard: How do you translate such a big mission into a concrete research strategy?

John Connolly: Well, the first thing is some humility, right? Acknowledging that there are so many things we just don’t know yet about the immune system and about how cancer interacts with the immune system, and going out to find the people who are really focused on those questions. Building that network-based foundation is an essential part of it; it’s really the value of the Parker Institute, the people and the incredible investigators. And then asking them: what is it about the current funding mechanisms, or the current mechanisms within your university, that is holding you back? You talk about all these projects, but it’s been 10 years and we still haven’t seen something come out. And a lot of this has to do with grant funding, the review process, bureaucracy, the slow nature of all of this.

John Connolly: And you and I both know that the way the concentration of expertise works is that these centers attract good people, who attract good young people. So what we’ll do is come in and bet on those people and say, look, for your area of focus, let’s give you a block gift, largely discretionary funding, with obviously some project guidance through a steering committee. But that’s it; it’s really up to you. How do you want to spend this money? What do you want to do? And we do it in perpetuity, so they know they’re never going to have to worry about that money again, because every three to five years there’s another gift coming. That takes the constant grant writing off the table for these academics. The second thing is that when we do have steering committees, at the end of the committee meeting the money immediately goes out.

John Connolly: So it’s just a quick review, then yes, no, or modify, and the money’s out the door at the end of the meeting, immediately, for all the funded projects. That kind of speed just changes things. I can speak from experience: if you’re a PI in a lab, you’re thinking about your areas of focus constantly, 24 hours a day. And you want to be able to go out on a bike ride, come up with an idea, and immediately hire two postdocs to work on it. This is what the institute’s funding brings, and it’s essential; it just changes the dynamic.

Grant Belgard: How do you decide what not to do, even if it’s scientifically exciting?

John Connolly: So, the focus on immunotherapy: even if immunotherapy is falling out of favor in the investor community or the pharma community, staying on mission is really extremely helpful. No matter whether it’s a project we’re funding, a company we’re building, an investment opportunity we’re looking at, or a partnership with pharma, the first thing I always ask is: is this on mission? That’s all. Is this on mission? And if it is, then we’re aligned. We can work together as companies, we can move forward, we can invest in that company or build that company, we can invest in that project or that person. So that really is a North Star for us: this idea of all cancers curable diseases, and a focus on using the immune system to get there.

John Connolly: The other thing is staying in touch with the network, doing regular site visits, getting out there. It’s really essential to understand where the momentum in the field is going, where the early results from early-stage clinical trials are pointing, and where the innovation in this space is going. That also helps give an idea of which direction funding should go. Beyond that, I think some of the best ideas out there come from outside our space. I mentioned AI and other things, but there are those inventions that come in sideways to solve problems people have been banging their heads on for a long time. I look for those all the time. I also look for contrarian troublemakers. I love those people who will jump in there, because the worst thing you can have in a network is everyone saying the same thing.

John Connolly: You definitely want people that’ll go against the grain and shake things up.

Grant Belgard: What are the biggest friction points that slow progress in multi-team science and how do you try to reduce them?

John Connolly: That’s a great question. I think the biggest hurdle you have to overcome, and this is probably true for any organization of our size, is the bureaucracy I talked about. Worst-case-scenario thinking: this idea that you have to constantly worry that another investigator you’re working with is going to take your ideas or your IP, or that a university is going to have a better position in a new company build. That kind of worst-case thinking breeds mistrust, and it eliminates opportunities, because you end up just taking a defensive position. You see this across a lot of organizations. This is why it’s really essential.

John Connolly: When you understand the Parker Institute, you understand that it’s a network of people who trust each other, people who get together twice a year in these really intense retreats that we do, sharing unpublished data, just getting it out there and really trying to show best-in-class work. That trust within the network is kind of the secret sauce. It’s almost impossible to recreate just with money. These guys see each other as Parker Institute investigators before they see themselves as Harvard professors. And I think that’s an important thing that came with the great work of Jeff Bluestone when he built the network as the first CEO of the Parker Institute, with Sean’s vision of building this and pushing it out, and with the work that my team and I have done pushing it forward.

Grant Belgard: Yeah, how do you design incentives so that collaboration is real and not just a slogan?

John Connolly: So it’s really matching, kind of impedance matching, on capability. I think that’s really what it is, because I don’t think collaboration is always such a great idea. We fund projects in individual labs that are very similar to each other, real competitive projects, and I’m happy to fund them if we think we need to double and triple down on an idea; one lab is not going to solve the problem. So even with very similar ideas, in things like AI-based protein design or TCR-T therapy, there are programs in extremely competitive labs. These guys have been competing for decades, and the Parker Institute isn’t going to resolve that. But this impedance-matching concept I was getting to is the real deal. Take someone like Chris Garcia at Stanford. Chris is a genius, and he was successful long before the Parker Institute.

John Connolly: But one thing the Parker Institute brings is this: Chris is building these incredible proteins, these incredible systems, and they’re immediately getting transferred over to Carl June at Penn, who then takes them into the clinic and moves them forward, so you really see Chris’s work applied to medicine. That kind of matching of capability, where the people who live in structural biology pass it off to a development expert, who passes it off to a clinical execution expert, and they do so seamlessly because it’s exciting, is how those projects really move forward. And again, always going back to that touchstone of mission: is this something we want to work on?

Grant Belgard: How do you measure if your strategy’s working, what metrics matter and what metrics can be misleading?

John Connolly: Sure, for sure. One metric that can be misleading, I think, is publications. Nothing against them, but if you get the absolute top people in the field, you’re going to get a lot of publications. That’s a metric for universities, and it’s a good one for universities. It’s very important to get out there to communicate, to spread the word, and to excite and attract other people. But for us, I think the key metrics are about the full circle I mentioned: moving blue-sky science and paradigm-changing ideas into execution in the clinic, and then into distribution, through pharmaceutical companies or large biotechs, to the greater patient community.

John Connolly: So for us, the first thing really is to go back and ask how many patients are actually benefiting from the research happening at the Parker Institute. For me, that’s a big one. We have hundreds of clinical trials with ideas and therapies that have come out of fundamental research at the institute; we keep really good track of that and see how things are going. For me, that’s probably the biggest one. Unfortunately, it takes a lot of time to get something that someone at City of Hope or UCLA comes up with, translate it into a phase one clinical trial, execute that in the clinic, then push it forward, benchmark it in a phase two or three trial, and actually see it applied to patients. But this is really what it’s all about.

John Connolly: There are wonderful organizations that exist to fund just fundamental research, and the NIH is clearly the global leader and the actual backbone of all research funding and the progress we’re making in the field. That’s an important and incredibly amazing thing, but it’s not what the Parker Institute is doing. The mission is all cancers curable diseases, and we do what it takes to get there. So when we go back and check how well we’re doing, we ask that question by looking at what kind of impact we’re making in the patient population.

Grant Belgard: What are the failure modes of cross-sector collaborations and what guardrails help?

John Connolly: Yeah, I think some of the failure modes come back to that trust idea I mentioned. When it’s working, people are openly sharing data with an expectation and understanding that working together is better than working in silos, so anything that comes in and threatens that is problematic. I mentioned that I love contrarian troublemakers, but I don’t want a room full of them. For me, the big take-home is making sure you maintain that network effect. And to do that, you have to listen to the network too.

John Connolly: So one of the most important things about the job of CSO is getting out to the sites, meeting regularly with the center directors, meeting regularly with the young investigators that are coming up, and talking to them about what their hopes and dreams are, but most importantly, what the problems are. Part of this job is representing not necessarily PICI or the sites, it’s representing the network, this effect that’s happening. So if there’s a problem with something we’re doing back at Central, then I need to know, so we can go back in there and fix it and really maintain this network-based effect.

Grant Belgard: What role does bioinformatics play in your strategy?

John Connolly: Oh, it’s huge. One of the big opportunities we have at the Parker Institute is really to become kind of a central database for all of this scientific and clinical data across the network. And importantly, that’s written into the master collaborative agreement that knits these centers together. In doing so, we’ve collected some amazing cohorts. We have the world’s largest cohort on adverse events in checkpoint blockade, just looking longitudinally at thousands of patients treated with checkpoint blockade. We have our RADIOHEAD cohort. And we just published very recently with Mike Angell at Stanford, a large consortium on the BRUCE cohort, which is the largest collection of brain cancer spatial data out there.

John Connolly: And all of these things are analyzed, and we work at Central to make sure that’s accessible to the network. There are many other cohorts like this across the network. What that gives the network is an ability to dive in and ask questions, whether it’s a target discovery question in the BRUCE cohort, a better understanding of myeloid responsiveness in glioblastoma, for instance, or something like predicting adverse events or response to checkpoint blockade across multiple different clinical indications. All of this is available, and they can dive in and really work closely with the informatics group at Central, as well as their own informatics teams. We really are talking about the top centers in the world, with some of the brightest informatics groups internally. So I think that’s one of the major areas.

John Connolly: The other, from this seat as CSO, is to push and encourage, to get up at these retreats and say what I was saying to you, which is that we need to start turning from small questions to big questions and applying both informatics and large language models to the data that we’re generating. And if there’s something that’s missing, come and tell us. What is that data set? What is the problem that you want to solve, and what’s the data set that’s needed to solve it? At the Parker Institute, we can actually pull that together very quickly, put together a data strike team to try to build that cohort. We can immediately fund large projects like that to get them off the ground. And these projects can be multi-year projects in order to answer really important questions.

Grant Belgard: When you’re working across multiple sites, what are the hardest data standardization problems you face?

John Connolly: Oh, for sure. My informatics team is pulling their hair out every day, right? The hardest standardization problems? A lot of it’s probably, first of all, the simple stuff. If we’re doing clinical studies across real-world hospitals, then it’s going to be data entry and coding, what you call this chemo versus that chemo. It’s actually kind of hard; you have to go in and standardize that. The others are real batch-effect things like blood processing effects. So we try to control all of that by providing templates to the hospitals ahead of time for data entry.

John Connolly: We have our own red cap system and some of these cohorts they can enter immediately into so they’re familiar with the interface and they can put it in and it standardizes how they’re calling things like the over-the-counter meds that the patients have prior to therapy or the outcomes. The other is we use a lot of centralized core facilities when we put together these big cohorts and that’s, I think, really important. And it was a decision made really by Fred Ramsdell when he was CSO to do this. And I’m just completely benefiting from that when I’m looking at the quality of the single cell RNA sequencing on these cohorts is spectacular. It’s really, really good.

John Connolly: So being able to take that first step, build the infrastructure to standardize that, and say, guys, for the proteomic analysis everything is going to one site, or for the single-cell sequencing, one group is running all of it, and the samples are centrally archived and curated, and we make sure that’s taken care of. So yeah, we try to tackle those problems. They’re the same problems I think others out there have. But it’s important to do, really.

Grant Belgard: How do you think about analysis readiness in real clinical trial data sets? What must be true before you trust downstream conclusions?

John Connolly: So one of the terms I just used is real-world. We’ve got a lot of cohorts, and you and I have been involved in clinical translational studies where you’re doing deep analysis of, say, a phase two clinical trial, this longitudinal data analysis where you get multimodal proteomics, flow cytometry, tumor biopsy, spatial, all of it, and you get this incredible data set around each of those patients. The downside is that the patients in the academic medical centers that have the resources to do that are really highly selected patient populations. So when you then take those biomarkers out into the real world, they don’t translate well into real-world settings. And so it matters to start the large-scale studies that you want to train these models on in the real world.

John Connolly: So go set up in community hospitals and put the staff in there to actually pull those samples out. There you’re getting people that are checkpoint-naive, they’ve got a job, they’ve got real-world problems. They are not the highly selected patient population that’ll make it into an academic medical center’s clinical trials. We’ve done that across 50 different hospitals for one of our cohorts now, and the data just translates much better into the real world. That’s a big one. But you do have data readiness problems when you start to do that, because these hospitals are not staffed to do any of this stuff, nor do they want to. So there’s not even an enthusiasm to do it.

John Connolly: The other thing you won’t get is, you know, advanced things like multiple biopsies that an academic medical center would do. Your local hospital, if it doesn’t have to do with care, they’re not doing it. And they’re not funded to do it. So this is where we’re really putting resources in place at those hospitals to acquire that data. The other is to recognize them, too. When we publish these papers, we make sure that those physicians are authors on those papers, that this is moving forward. When we present the data, it’s always with those 50 hospitals in mind and the work that they did, even through COVID, to collect a lot of the data on that cohort. And then pulling that forward into PICI Central, where we do a lot of QC and QA on the data sets themselves. So far, that’s been a heavy process.

John Connolly: You know what I mean? This is what we hope things like AI algorithms can help with, and they have to a degree, certainly for the QA. The other, of course, is sample archiving and biobanking. Always challenging when you’re doing that at scale. It’s a full-time job. We subcontract a lot of that out to CROs that I think do a pretty good job at tracking samples. But there’s always the spurious sample that’s mislabeled or whatever. So keeping on top of that, making sure you’re project-managing all of it, these are challenges. But I can’t emphasize this enough: the value of real-world data from real hospitals is enormous for translating to something that’s actually effective in a clinical setting.

Grant Belgard: What data types tend to drive the most useful decisions right now? Genomics, transcriptomics, proteomics, imaging, clinical notes, something else.

John Connolly: There’s no doubt: in the clinic right now, genomics does, of course, because we’re entering this era of genomic medicine and targeted therapies. So anything that has a matching therapy that’s easily available, that’s approved in the US and can be used in a doublet combination. This is where genomics comes in, because everybody is getting a Caris report or a Foundation Medicine report or, you name it. Or if you’re at some of the big academic medical centers, they have their own, like MSK-IMPACT, which does similar things. And with that information, you get a bunch of really nice genomic data that can be used to guide care. I think that’s hugely valuable, particularly in combination with immunotherapy.

John Connolly: Right now, we’ve got tests like PD-L1 positivity in the tumor microenvironment. So, okay, you’ve got lung cancer and it’s PD-L1 positive; we know you’ve got a high likelihood of responding to checkpoint blockade. That’s why it’s first line. But the questions we want to answer are not just what’s right for the median of the population, but what’s right for you personally, right? What is the genomic workup that says, hey, maybe I should start with another therapy, like a MEK inhibitor or platinum-based chemo plus something, you know? Personalizing that journey is really valuable. So far, that’s really come from genomics and just sequencing for driver mutations.

John Connolly: As we move forward, we’re starting to see things like, as I mentioned, what does that tumor neighborhood look like? Is it PD-L1 positive? Is it rich in leukocytes? Is it highly fibrotic? What does it actually look like? All of that can guide care. And what I’m most excited about is not so much the research-grade, highly multiplexed analysis of the tumor, where we get huge amounts of information back. I’m really excited by the large language models that are going in and looking at just H&E stains, trained on outcomes data and the genomics or transcriptomics data that’s already there, to actually predict, hey, this is a KRAS G12V mutant, just by looking at the H&E stain. These models are getting better and better at giving us more and more information.

John Connolly: And for me, I think I need that for immunotherapy. You’ve got to move beyond just PD-L1-high, which is really just an interferon signature, from “is checkpoint blockade going to work, yes or no?” to “checkpoint blockade won’t work, but NK cell therapy will, or a targeted cytokine will, or an innate immune activator will.” Being able to use these models to begin to predict the quality of response to immunotherapy would be absolutely thrilling. And then from the transcriptomic side, I think there’s a lot there at the research stage, but it’s just not yet applied to medicine. The biggest thing I run into is that you can get a huge amount of information about these patients, but then you go and look at what actionable therapies are actually available to move on that information.

John Connolly: And there’s nothing there, or maybe just one thing that’s available. So at the Parker Institute, if you look at the companies we spin off, the vast majority are therapeutics companies. There are very few in the diagnostic space. And I think that’s really because that’s where the need is. We just need more options and opportunities based on the huge amount of data that’s coming out and a deeper understanding of that tumor.

Grant Belgard: What’s a common trap in translational interpretation where people overreach from interesting biology to clinical claims?

John Connolly: So, one is the last thing you said: interesting biology. You become enamored with the cool idea, right? These ideas, I don’t want to bash on the NK cell guys, but I’m just going to use that as an example: this really works incredibly well in the mouse models we’re working on, and even in some of the tumor organoid systems or PDX models. But then we go to apply it to someone who’s actually gone through three rounds of chemo, whose immune system is beat up pretty badly, and whose tumor has been immunoselected to resist the immune response for the past seven years it’s been growing inside of them. This is a totally different world than a transplanted mouse or a cell line that you’re trying to kill.

John Connolly: And so this kind of dependency on preclinical data, this belief in it, where you become enamored with the mechanism and your understanding of how it works, leads you to misinterpret responses on your phase one trial. You’re coming in doing a safety study in phase one, you see two partial responses, you jump up and down and think this is the best thing in the world. What you missed was the eight non-responses. And it’s really because the mechanism works so well in preclinical data. So you push to the phase two, and those patients are heavily selected at your academic clinical trial, so you’re just picking the patients you kind of think will work. And then, once it hits phase three, it’s 50-50. On every clinical trial, it’s a coin flip.

John Connolly: And it shouldn’t be, because you’ve been through so much to get to that point that you should have seen this thing was going to miss. So, again, this is a great place for AI to come in and remove a lot of that subjectivity. I think one of the most exciting things about the path I just described is that even if the drug doesn’t work, you learn a lot. There is something to be said for experimental medicine. And I think some of the best innovation in this space is really coming from clinical trials that didn’t work the way we expected them to. Throughout the history of science, serendipity has been an essential part of it. And it continues to be incredibly valuable when it happens in a clinical trial, where you really say, hey, hold on a second, I’m not getting what I want, but I’m getting something here.

John Connolly: It really teaches you a lot that mice and in vitro organoids just don’t.

Grant Belgard: Switching tracks now, what did you originally think your career would look like and how did reality diverge?

John Connolly: That’s a funny question. From a career standpoint, I’ve always wanted to be in science, obviously. Someone asked me about this a couple of years ago, and I don’t remember a time when I didn’t want to be a scientist. Never, even when I was a little kid. I had my test tubes and things like that, chemistry kits and a microscope and everything in the bedroom, even when I was six years old. So it’s always kind of gone in that direction. I think the traditional track, as I was coming up, would have just been an academic professor doing great things, but very siloed, where you’re doing your thing, you’re kind of in the corner of the lab, you’ve got the main project that’s driving, but you’ve got these side projects that keep you excited. For me, that was always the track you’d go on.

John Connolly: I think most people that came up, even now, have that in mind as a potential. The difference really came, as always, with the mentors you have as you’re coming up. And I’d just point out some great ones, like Mike Fanger at Dartmouth. Mike was head of the department, but he also started a company called Medarex and a number of other companies as well. Mike had this entrepreneurial spirit, and I think that was highly inspiring. So just to watch him run the department, do his thing, and also run this big company at the same time, almost seamlessly. When he’s thinking about it, he’s thinking the same way. He’s the same guy in both places. He’s just interested in solving the problem. The company, the academic position, the friends, the networks, the service on nonprofits, all of that was toward one thing.

John Connolly: And each of these different things, like Dartmouth, the medical school, and the company, these were all ways to get there, in service of his greater vision, what he wanted to accomplish in the cancer space. And so he went on to develop CTLA-4, PD-1, all of these great breakthroughs, identified them and then internally developed them within Medarex, which was then acquired by BMS. The way he worked and the way he thought was really important for me. And then, moving on, I think working with Jacques Banchereau at the Baylor Institute for Immunology Research was big for me, because Jacques was very much a high-energy company person. You know what I mean?

John Connolly: Getting into an environment where teamwork was essential, where you had to work together to get anything done, and where you think purely about the human disease, not so much about the mouse models, about really getting things into the clinic and testing them there. That was a change that moved me away from this idea of just the lone professor in the corner, toward team-based biology. So I think those two people were highly influential. That and everybody else you meet as you go through life. But it definitely changed the way I was thinking and the direction. You come to realize that to get anything really substantive done, you can’t do it alone. There are only 24 hours in a day. You might be the best at everything, but there are still only 24 hours in a day. So you really need a team of experts and collaborators and networks.

John Connolly: And that really led me to what I think is just a great network in the Parker Institute.

Grant Belgard: What skills turned out to be career compounding, the kind that kept paying dividends?

John Connolly: Oh, it’s mostly just the ability to work with people, you know what I mean? And to enjoy it. For me personally, one of the true joys in life is working with people to build things. And that’s probably true across most careers. If you like to work with really smart people, you hire people that are smarter than you and empower them and recognize them. This is really essential to building effective teams. And I don’t say that from some business book. I only say it looking back on my life and what continues to be a really successful formula. The ability to take joy in the successes of others and the rest of your team is really what drives this and has worked out.

Grant Belgard: What did you have to unlearn as you moved into broader leadership?

John Connolly: I think you have to unlearn the idea that you could be the best CFO, legal guy, CEO, whatever. It just doesn’t matter, because again, there are 24 hours in a day. If I can only put four hours on a project, then it doesn’t matter if you’re the smartest guy in the room; you only put four hours on it, and it requires 12. And so building a team of effective people that you trust and work well with is something academia does not teach you at all. I tell this story, but in my academic lab, there’s kind of no problem that’s too small for me to have an opinion on. Somebody would ask me, what color paperclips? And I’d be like, blue. Everything. It’s your lab, so you’re running the whole thing.

John Connolly: And that’s great because it allows you to deeply explore big ideas. But going into companies, and I think one of the good examples is when we built Tessa Therapeutics and worked with a really effective team there, I knew I could walk out of a room and the right decision would be made, because the people that were there do their jobs as well as I do mine. They are just as effective; they’re really good. And once you have that trust within the team, it just amplifies everything you can do. Again, that’s something academia does not teach you. You really have to learn it in the real world, or certainly in the biotech setting.

Grant Belgard: How do you maintain scientific depth while taking on more organizational responsibility?

John Connolly: You just have to go and talk. First of all, you have to have enthusiasm for the science. You’ve got to love it, right? Because you’re going to be hit with a lot of it. It’s really talking to the investigators, going out. Part of this job is to travel around to the sites to talk to them about their projects. But I don’t really want to talk to them about their projects as they relate to funding. We already gave them the funding; they have the money and they know the money’s coming. It’s about how cool the science is. I want to catch that excitement. And you might say it’s the young people that are going to give you the excitement. That’s not always true. There are people who have been at this for 50 years who still jump out of bed and get super excited about a cool new idea. So it’s just looking for that.

John Connolly: And then, from the standpoint of my own personal excitement, it’s also talking to all of those people who are ultra-focused on their own projects and then knitting together those ideas. Like, all right, that was cool at Dana-Farber, but these guys at UCSF have this other thing; maybe we should get them to talk and work together. These are observations coming at the problem from two different angles. So being there, talking to the people, this is what keeps you aligned.

Grant Belgard: For people who have worked across different environments, academia, industry, nonprofits, what mindset shift helps them to adapt quickly?

John Connolly: I think in academia, it’s this sort of mini-monarchy that you get in your own lab. You can shift anytime you want. Somebody once told me this years ago, when I was at Baylor thinking about moving to a company: a guy named Don Capra. So shout out to Don; he’s passed away, but he started so many great things. He was an amazing, amazing guy. I had the good fortune to have lunch with Don, and I was asking for his advice on this company. And he said, look, in academia there’s almost no better job in the world. As long as you’re publishing papers, getting things out there, publishing good papers every few years, and getting grants in, you can study anything you want. You can wake up in the morning and suddenly decide to study turtles, and that’s what you’re going to do. And as long as it keeps going, there’s no job in this world that has that kind of freedom.

John Connolly: And it allows you to truly explore deep ideas. There’s huge value in that. The lessons from companies, as I talked about earlier, are very much teamwork lessons. You know what I mean? I couldn’t have done a global pivotal phase three trial in multiple countries alone. It’s ridiculous. But we could do that at Tessa, with 1,500 shipping lanes and centralized cell therapy manufacturing, because of the expertise we built together and the trust we had in each other. So I think that’s another one. In the nonprofit space, I think the lesson is that concept of mission. The lab and the company are very much about what’s going on today. You know what I mean? You’re always focused on today, kind of putting fires out or keeping things moving.

John Connolly: And you’re enjoying the growth when you look back on where we were and where we are now. But at PICI, and I’m sure in other nonprofits, it’s really about where we want to get to. This vision, this mission, and how close are we? It’s right around the corner. And that belief is really quite essential to being effective in a nonprofit. So yeah, I think that’s the takeaway: what the nonprofit really taught me is to have this longer-term vision, to talk about mission, and to constantly check whether everything is getting us one step closer to that mission.

Grant Belgard: What does good taste look like in choosing problems?

John Connolly: Oh, wow. That’s a cool question. When it has to do with cancer immunotherapy, I can tell you it has to do with how close this is to actually curing someone. There is something else, though, you know what I mean? And for me, it’s a little bit of that contrarian nature: how unique is this idea? Sometimes everyone’s doing the same thing. So on that 1% chance that everyone is wrong, let’s take a deep look the other way. Let’s assume the sun doesn’t come up in the East; it’s coming up in the West tomorrow, and that’s just it. What would that mean? How can we explore these kinds of ideas? It’s absurd, and the vast majority of the time it doesn’t work, but it gets you to think a little bit of a different way. So for me, ideas that are audacious have value.

John Connolly: And then you lead your way through to the ultimate end point you’re going to get to with those ideas, and if it’s leading to real, effective impact in the clinic, like, if this is right, then we’ve got something totally different here. Then it’s going back and checking the data, not your data, but all the data, everything that’s out there in the field, diving deep and saying, well, I can’t be right because this is going another way. So for me, it’s not following along with what everyone else is doing. Those ideas are not that exciting, because my assumption is there are smart people working 24/7 on them, and they’re going to get to wherever that path ultimately leads. But if you’re going in a different direction, you’re thinking differently, you take a big swing. That, I think, is an exciting idea.

Grant Belgard: How do you recommend building credibility across disciplines, especially for computational people working with wet lab and clinical teams?

John Connolly: Yeah, sure. So this team idea is essential, right? But from the computational side, and this is actually true even from the wet lab side, the credibility comes from the end result, of course, and the track record is the end result. But to get there, and I think you know this even better than I, Grant, you’ve got to have an understanding. You’ve got to get in there and understand the biological question that you’re trying to solve. Even if you’ve never done this before, you say, all right guys, walk me through this, let’s talk, teach me.

John Connolly: And so constantly learning, constantly engaging the wet lab scientists and learning what are the key, why is this an important question, not just what it is, to the same degree to be able to communicate, your solution to that has to be explained to the wet lab scientists as well so that they understand where this one’s coming from. It’s like, okay. And then you can really start working together, mostly because they’re gonna ask you what you think is a stupid question, but then you’re like, all right, hold on a second. There might be something there. And to the same degree, you’re gonna have this wealth of experience working with so many wet lab scientists that you’re gonna bring excellent questions to the table there too. But ultimately that credibility comes from the relationships you’re forging with these people, but also the outcome from that collaboration.

Grant Belgard: If someone wants to move towards scientific leadership, what experiences should they actively seek out?

John Connolly: So I mentioned there’s a big difference between this sort of leadership role and, say, just being a PI in your own lab; there are so many different kinds of leadership. But one thing is just to be a good leader, right? So focus on yourself. That’s the biggest thing, honestly. I mentioned this a little earlier: finding good people that are extremely talented, engaged, energized, that share your enthusiasm for what you’re doing, that are better than you in certain areas, absolutely hire those people. And then, as I said, recognize and build. To be a good leader, work on yourself. That’s a huge thing. And once that happens, you’ll begin to create incredibly effective teams. And the product of those teams’ output is really going to launch you in the right direction.

John Connolly: I think that translates whether you’re running a small lab somewhere, or you’re moving up to dean of a medical school, or you’re head of a pharmaceutical company, or even just a division head within one. Being a good leader matters, because people select good leaders to become good leaders. That’s kind of how it works. And a lot of that is just focusing on your own behavior and expectations. There’s another aspect of this; it sounds a little Pollyanna to say, but honestly, you also have to filter out the noise. All the organizations I mentioned have issues and problems and real-world stresses. You just have to learn to filter that stuff out and focus on what that ultimate goal is. And this is why having a mission, knowing what you want to do, is really important.

John Connolly: Whatever barriers come up in front of you, you just keep your eye on the mission all the time and weave your way around those barriers, and you do that with your team. That’s really the best advice, and it applies from a smallish leadership role all the way to head of the NIH.

Grant Belgard: John, this has been fantastic. Thanks for walking us through how you think about strategy, data, and translation, and for sharing the career lessons you’ve gained along the way. For listeners who want to follow your work, where’s the best place for them to do that?

John Connolly: It’s definitely the Parker Institute website, but we also have really active social media. So please follow ParkerICI, the Parker Institute for Cancer Immunotherapy, on X and on LinkedIn. There’s a good comms team, so they’re always putting out great output from the network. And keep an eye on the immunotherapy space too.

Grant Belgard: And is there anything you’d like to leave the audience with as a final thought?

John Connolly: There is one thing. I mentioned this idea of believing in the mission. I think there are certain times in the history of science, when you look back, where science doesn’t just work progressively to help society. You know what I mean? It’s not like you see steady increases in lifespan over years and years. The way science works is that inventions, people, and the adoption of new procedures and technologies launch the field forward. There was a time, pre-antibiotics, pre-vaccines, pre-germ theory of disease, where half of kids died before the age of 11. Half. And this was terrible and everybody hated it. They felt just as bad then as we do now. But you couldn’t imagine a world where that didn’t happen. Well, we live in a world now where we’re living with cancer and everyone’s dying of cancer.

John Connolly: Everyone’s life and family is touched by it, and it has a huge, huge, huge impact on society. Well, with the advances that are happening, we’re entering a world soon where cancer will look the way childhood mortality does now. You’ll look back and say, how was it possible that we lived in a world where everyone died of cancer? It just makes no sense. And that comes with funding this research and pushing it forward, but it is just around the corner. It truly, truly is.

Grant Belgard: John, thanks.

John Connolly: Thank you, Grant. Appreciate you.

The Bioinformatics CRO Podcast

Episode 80 with Diane Shao

Dr. Diane Shao, an attending neurologist at Boston Children’s Hospital and instructor of neurology at Harvard Medical School, discusses her work as a physician scientist focusing on genetic causes of childhood neurodevelopmental conditions.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Diane Shao

Dr. Diane Shao is an attending neurologist at Boston Children’s Hospital, an instructor of neurology at Harvard Medical School, and an investor with Legacy Venture Capital.

Transcript of Episode 80: Diane Shao

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard, and today I’m joined by Dr. Diane Shao, an attending neurologist at Boston Children’s Hospital and instructor of neurology at Harvard Medical School. Dr. Shao is a physician scientist whose work focuses on understanding the genetic causes of childhood neurodevelopmental conditions, including how newer single-cell approaches can help answer questions we couldn’t address before. She’s also an investor. Diane, welcome to the show.

Diane Shao: Thank you so much, Grant. And it’s such an honor to be here and also reconnect with you after our long-time friendship from college.

Grant Belgard: Indeed. So what’s been most energizing for you lately in your work?

Diane Shao: Well, something I really like to think about in my work is how to span across disciplines. So, based on your introduction, I think the listeners can understand I do some very fundamental research in genomics. I also really think about how that research applies to patient translation. I actually see the patients, and sometimes I have to weigh not-yet-solid data in making a firm clinical decision.

And then as an investor, thinking about how to assess that landscape. And so all of these, I would say, require a different vision and goal in mind. And so I think a lot about for any given application, what is my vision of translation or patient care or understanding fundamentals, et cetera, and how to generate the data, work with the data, apply that to really further that vision, kind of like big picture goals.

Grant Belgard: What’s a question you’re hearing more often now than you were a few years ago?

Diane Shao: The question I’m hearing more and more: people want to translate. From the time a PhD student starts working in the lab, to postdocs thinking about what’s next in their careers, wondering about industry versus academia, there is such a strong focus on translation, on making that impact on humans. Ten years ago, when I was still going through training, a lot more students doing a PhD were thinking about an academic path, fundamental biology.

And, you know, I don’t know if this shift is good or bad, but it certainly brings new questions to the table. In the way far past, academia had this idea that going to industry is maybe a sellout, that you’re not asking as interesting questions. But I think there is a growing realization that those questions are also extremely interesting, extremely impactful, and need really, really smart people to be involved.

Grant Belgard: What does a good week look like for you? What kinds of activities make you feel like you made progress?

Diane Shao: Yeah, I do think it’s stepping back and seeing where things have gone. Maybe you get some data that is really uncertain and murky, and then this week it’s, hey, we can draw one small conclusion from that. Or it’s thinking, hey, this data, which was generated for fundamental biology, might be reanalyzed in some small way to give us an insight with an actionable clinical impact. Or thinking, could this have implications for which companies could be really strong in their market spaces? Even one small 1% insight is a good success, because it’s building on that 1% in all different directions that ultimately leads to where we’re going.

Grant Belgard: So since you have this kind of dual role of physician scientist, how do you think about differentiating between this is something interesting and this is something actionable, right? Because for your patients in the real world, you have to make decisions now, you can’t wait five, 10 years for something to maybe be firmed up. So how do you approach that?

Diane Shao: Yeah, those are really, I think, questions that the field of let’s say genetics is always constantly grappling with. So I’ll just give you an example from my clinic from this week just to make it a little more pertinent.

So this week, I saw a case in my neurogenetics clinic at Boston Children’s Hospital: a two-year-old with a condition called lissencephaly, in which the brain’s surface is smooth. Their clinical sequencing came back with a rare homozygous variant, meaning it’s on both the maternally and paternally inherited alleles of a particular gene, and the lab reports it as a variant of uncertain significance, which means on a clinical lab basis they cannot provide a diagnosis. And if you search the literature for this gene, there are exactly four patients reported worldwide who have other variants, not this exact variant, in this new gene, but their features are all extremely similar to this patient’s.

And so there is a practical matter of how many people need to exist in the world for you to have confidence that the fifth case, with variants on both maternally and paternally inherited alleles of the gene, is now diagnostic. That’s a clinical lab question, and on their end they said, hey, we can’t call this disease-causing, we have to give it this variant-of-uncertain-significance label. And then there’s a whole other decision to be made at the clinical level. So the patient comes to see me and I’m like, okay, the lab is not going to be able to change the classification, but I can tell you your child’s brain looks almost exactly like the four other brains that are out there. Your kid is manifesting all the symptoms those other four are. We have a very good sense that we should be worried about the things those other patients have.

So, for example, those other patients had problems with their eyes, so I’m saying, hey, we’ve got to check this child’s eyes, we’ve got to check their hearing. These are the things I can act on now. Those are really practical clinical decision-making matters. And then there’s a whole interesting aspect of, well, what do we do in this gray zone? Those are the boundary-pushing research questions. So I then spoke with one of the residents, who was very excited about this potentially new gene and this new presentation we’re seeing, and they said, hey, can we write this up? I’m like, yeah, it would be great.

And it would be great if we found ten other people, so that we would have the statistical, informatic confidence to provide this diagnosis. We could then go back to the clinical lab and change the classification, which would in turn change the classification for other patients who come in with a similar presentation and new variants in that gene, which otherwise wouldn’t be diagnostic. So we can go back through the research realm and really make a difference. I hope that showcases how the different elements interact: clinical decision-making, the gray zones in what a diagnostic laboratory can call, and what can be brought back into the research side.
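The proband-counting logic she describes, accumulating unrelated cases with matching phenotypes until a gene-disease link becomes credible, can be sketched roughly in code. This is a toy illustration only, not the real ACMG/ClinGen curation framework; the five-proband threshold, the gene name, and all field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Proband:
    """One unrelated reported patient with variants in the candidate gene."""
    gene: str
    biallelic: bool        # variant on both maternally and paternally inherited alleles
    phenotype_match: bool  # features match the shared presentation (e.g., lissencephaly)

def sufficient_case_evidence(probands, gene, min_probands=5):
    """Count independent, phenotype-matched biallelic cases for `gene`.

    Returns True once the (invented) threshold of unrelated probands is met,
    the point at which, in the anecdote, a lab might revisit a VUS call.
    """
    matching = [p for p in probands
                if p.gene == gene and p.biallelic and p.phenotype_match]
    return len(matching) >= min_probands

# Four published patients plus the new clinic case = five unrelated probands.
cases = [Proband("GENE_X", True, True) for _ in range(4)]
cases.append(Proband("GENE_X", True, True))  # the new patient
print(sufficient_case_evidence(cases, "GENE_X"))  # True
```

Real-world curation weighs far more evidence types (segregation, functional data, variant spectrum); the point here is just the thresholded tally she alludes to.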

Grant Belgard: Oh, that’s great. What kinds of uncertainty are you most comfortable with and which kinds do you work hardest to reduce?

Diane Shao: Yeah, so we can talk about a couple of different settings here. Continuing with this clinical example, the kinds of decisions we can make are interventional. Does this child need therapy? That’s a pretty certain yes, and I can probably give a good sense of how much therapy they need. Do they need screening of certain organ systems based on what I know? The answer is yes. And the risk that I’m wrong, or that the screening will simply be negative if they don’t totally need it? Okay, those are tolerable risks.

But other risks are not so tolerable. For example, if I am wrong about the variant interpretation and the family is doing prenatal genetic testing or embryo selection for their next child, at that level I may stop short of a confident assessment that this is absolutely the disease-causing gene until I have more of the statistical research evidence that I’m going to gather with my resident, let’s say. So those are various arenas where I may or may not be able to make a solid decision. And then, stepping back into the research space, the confidence in a research diagnosis is a little more clear.

Because on a research basis, you don’t need an assessment that holds for any patient with any variant in that gene. You just need a sense of whether that particular variant causes a functional change. And on a research basis, there are a lot of other modalities that can give us confidence. You can look at the RNA changes. You can see how the variant affects gene function structurally. You can do other types of statistical testing if you enroll patient cohorts, for example linkage analysis or other confidence-building metrics. So in different settings, there are different ways to increase confidence in different types of interpretations.

Grant Belgard: How do you think about measuring success when outcomes can take years to show up?

Diane Shao: Yeah, yeah, that’s a great question. You’re kind of thinking about as we, let’s say, push forward our research agenda on a given genetic condition, what the success is.

Grant Belgard: Well, or I guess, or in the case of investment, right?

Diane Shao: Yeah, okay, I think that’s a great question. So why don’t we jump to investment for just a moment? Right now, for example, not all rare disease genes are good targets for investment, even for starting a company. You’ll see this mentality where people will invest in only a handful, let’s say, of rare diseases that are broadly of interest. And partly it’s because those are the diseases that we know the most about.

There are a lot more research dollars and a lot more research interest. Maybe the patient advocacy groups have been really focused on getting a therapeutic out, and there’s enough support and interest that finally there’s enough data and understanding that an initial startup can even be conceptualized and investors can be interested. So not every genetic condition at this moment in time is ready for research translation.

And so pushing that long-term envelope, from the fundamental discovery of a gene, to when it’s ready to even be considered a therapeutic target, to actually spinning out the company, to then assessing the market landscape and seeing whether it’s worth funding, et cetera, is a really, really long pipeline, as you’re suggesting. And at any given moment there are many different people involved, from investigators pushing their visions and agendas, to the NIH pushing its research agenda, to the business development people and the investors pushing theirs.

And I really see that progress for each individual needs to be unique. As an investor, I am really interested in pushing the investments we make into rare diseases more broadly, but that doesn’t mean every rare disease presented to me with a potential therapeutic target is a good investment to make. So progress on the investment front means having grasped, let’s say, the landscape of a particular genetic condition further, grasped the market space, the FDA regulations.

Those things are progress in the investment space. Whereas in a research setting, I may be a little more agnostic to which disease I’m looking at and promoting, and there progress may mean advancing new techniques for gene discovery, or figuring out how I can collaborate better with others. So for people in any given phase of all these intersecting sectors, I think progress at the end of the day is very, very individual. And I hope that collectively, across everyone, this will really push the boundaries of treatment for any disorders, rare or common.

Grant Belgard: So across all the domains in which you operate, what is your expectation for the impacts AI will have in the near term, right? Looking out over the next one to two years?

Diane Shao: Yeah, after this conversation, I’d love to hear your thoughts on that too, Grant. But for me, AI has touched every aspect of both what I do and how I assess the research spaces I want to go into, as well as the investment spaces I’m considering. At a high level right now, I would describe my usage of AI as increases in efficiency. So increases in data sourcing, say helping me find relevant papers and subject matter and people and spaces. I also feel it as efficiency in helping me integrate across different perspectives.

Right now I have all this data that describes this biology; now I want to understand how to turn it into a clinical risk assessment model, et cetera. These are still what I would consider efficiency gains. That being said, I know the AI field really wants to do new discovery, pushing the envelope, idea creation from AI. I don’t feel it’s there right now, and I’m not engaged enough in tool development to know how close we are to that. But I think pushing efficiency and data interpretation, management, et cetera, is already a really, really large task.

It takes so much off my plate to be able to outsource those tasks to AI, and to have it hold information for me across, say, the grants I’m writing: here’s all the data I need you to store for me, and as I re-synthesize a new grant with a slightly different focus, how can you help me shape that? It lessens my work a lot, and I’ve found it tremendously beneficial. To me that’s really important because it means I can keep my mind space open for big-vision problems. I can be the one leading idea generation, and then use AI to curate these spaces. So I would say that’s my perspective. I think AI is transformative, but I don’t feel it’s transformative in the sense of taking the place of human creativity and pushing the boundaries of, let’s say, the unknown unknowns in the world.

Grant Belgard: When you’re designing a study, what decisions early on have you found most impact downstream data quality and interpretability?

Diane Shao: Yeah, so the types of study that I design most are in the realm of human genetics. So I do some human gene discovery research for which I would say the pipelines for that are probably pretty well described. And then I also do single cell technology development for the purpose of understanding how mutations arise and in particular, understanding the variation in the DNA within or between individual cells of an individual, what we call somatic mosaicism.

I’m part of a pretty large NIH consortium called the Somatic Mosaicism across Human Tissues (SMaHT) Network, where, analogous to other large consortium efforts, one of the most notable being the Human Genome Project back in the 2000s, the idea is that by characterizing the full intra-individual variability in genetics, that resource can be extremely useful across many, many different areas of biology and the life sciences. And so for single-cell technology development, the experimental design really affects everything downstream.

So, for example, I have been working on understanding human brain development and the single-cell copy number landscape. Copy number changes are structural changes in the DNA where whole regions of chromosomes get amplified or lost. Detecting structural copy number changes uses fundamentally different techniques than detecting other types of variation, such as single nucleotide variants, where you’re changing, say, a C to a G at a single position, and different techniques again than identifying, say, repeat expansions in single cells, which are also highly mosaic across an individual.

So the study-design choice of tool becomes really critical: is it even possible to analyze my genetic change of interest? That decision comes down to the goal of the project, with some practical considerations of cost, and some technical considerations: do I have the informatic support to analyze the type of variation I’m interested in?

Grant Belgard: How do you communicate uncertainty to different audiences, scientists, clinicians, families, leadership?

Diane Shao: That’s a great question. Depending on the audience, I try to do things differently. Most people do a lot better with the things that are certain than the things that are uncertain. For me, always portraying first what we clearly do know can be really, really helpful in giving a framework for all the things we still don’t know or are still exploring.

So just to give a concrete example: in my work in mosaicism, I really think there will be totally new possibilities for genomic biomarkers, or different possibilities for precision medicine, related to the genetic landscape when we look across all the cells in the body. But of course, we’re still in the early days of that on a research basis, so I don’t know whether that’s true. What I do know to be true is that we have, for example, in our neurons, hundreds of single nucleotide variants per neuron, times six billion neurons in our brain by the time we are born.

So that is a biological fact. I can hang on that certainty, share that certainty with people, and then describe what I think we can do with that level of genomic data. And so the audience understands where I’m going with this thought: think about the difference between when the Human Genome Project first came to light and they sequenced one human, versus what we can do now that we’re sequencing hundreds of thousands of humans across different countries and different disease modalities. That type of data, while we don’t know yet what it will reveal about ourselves and our tissues and how they all work together from a DNA perspective, I think will inevitably shift how we think about disease and about diagnostic possibilities.
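As a back-of-envelope check on the scale she cites, hundreds of SNVs per neuron times billions of neurons multiplies out to somewhere in the trillions of somatic variants per brain. The 100 to 1,000 bracket for "hundreds" is my assumption for the sketch:

```python
# Scale cited in the conversation: hundreds of somatic single-nucleotide
# variants (SNVs) per neuron, times billions of neurons.
snvs_per_neuron_low, snvs_per_neuron_high = 100, 1000  # bracketing "hundreds"
neurons = 6_000_000_000  # the six billion figure used above

low = snvs_per_neuron_low * neurons
high = snvs_per_neuron_high * neurons
print(f"{low:.0e} to {high:.0e} somatic SNVs across the brain's neurons")
# 6e+11 to 6e+12
```

That order of magnitude is why she frames it as a "saturating mutagenesis experiment within an individual" later in the conversation.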

Grant Belgard: What is the current state of the evidence on the impact of mosaicism on clinically relevant phenotypes, and on its prevalence?

Diane Shao: Yeah, that’s a great question, Grant. In certain diseases it is actually fairly commonplace now to think about mosaicism. There are a number of disorders where it’s pretty common to look for mosaic genetic causes. For example, in epilepsy, there’s a subset of patients who will get surgical removal of the epileptic lesion, and somatic mutations are often found in those lesions. They follow particular biological pathway principles, and so those are pretty clear.

Another realm where it’s pretty common to think about now is vascular disorders. For localized cavernous malformations, there’s pretty common precedent. Vascular disorders like Sturge-Weber syndrome, a capillary malformation over just one part of the body, are now pretty commonplace examples. So there are certain disorders where it’s common to think of somatic mutations as the primary cause. And there are other disorders where it’s coming to light that, even though they can have causes both in the germline and at a mosaic level, many of those individuals are actually mosaics.

Just think about generating a human: how many cell divisions you went through to go from an egg and a sperm meeting each other to a huge five-to-seven-foot human being. There’s just a lot of mosaicism to be had, and that causes disease, and sometimes it looks like a germline presentation even if the person is genetically mixed. And then there’s a whole realm of things we don’t know, which may be a subject of research, such as diseases where certain cell types are lost.

For example, in Hirschsprung’s disease, a very particular neuronal cell type is lost from the gut. To me, that’s a place with a high likelihood of a somatic, localized, cell-specific component. But when the cell type is lost, how do we use genetics to determine what was lost to begin with? So, a lot of questions, but I hope that answers your question on the areas we currently do know, which I would say are the tip of the iceberg.

Grant Belgard: So how do you think about future development of precision medicine and so on in a mosaic condition?

Diane Shao: Yeah, so I’m really excited about a couple of different areas. One is simply leveraging the power of what I would describe as essentially a saturating mutagenesis experiment within an individual. So think about what we’ve learned from human populations. When we sequence hundreds of thousands of people, we can see, hey, these genes never have a mutation, while other genes have mutations scattered all across the genome.

And those genes that never have a mutation are actually important to humans in some way. The reason we never see a mutation there is usually that it was embryonic lethal or affected reproductive fitness in some way. So a huge part of gene discovery currently is comparing to population databases and saying, hey, this region is constrained, this may be an important disease gene. And similarly, you can imagine there are lots and lots of disorders that don’t have a strong reproductive fitness component.

Think about cancer, for example: in old age, it’s not necessarily going to be selected against at the population level. Think about eye conditions like strabismus, where you’re not really going to have a strong reproductive fitness signal. Or autism, even: nowadays many people are getting diagnosed when they’ve already lived full lives. So while some forms of autism will have reproductive fitness constraints, others will not.

And so then the question to me starts to be: if we can now get information on genomic constraint in particular cell types, like in neurons or in lung cells, is that new information on which genes are really critical for biology, and does that tell us something about disease? So that’s one area I’m really excited about.
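The constraint idea she builds on, genes depleted of mutations in population databases being likely important, is commonly quantified as an observed-over-expected ratio (gnomAD’s o/e metric works this way). A minimal sketch, with made-up variant counts for illustration:

```python
def constraint_ratio(observed, expected):
    """Observed / expected count of loss-of-function variants for a gene.

    Near 1: the gene tolerates mutation at the background rate.
    Near 0: variants are depleted (constrained), hinting the gene matters
    for survival or reproductive fitness.
    """
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return observed / expected

# Made-up counts for illustration: a constrained gene vs. a tolerant one.
print(round(constraint_ratio(2, 40), 2))   # 0.05 -> highly constrained
print(round(constraint_ratio(38, 40), 2))  # 0.95 -> mutation-tolerant
```

Her proposal extends the same ratio to somatic data: computing it per cell type rather than per population, to find genes constrained only in, say, neurons.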

You can also think about it the same way in terms of modulation: how do the genetics of an individual cell either drive a phenotype or get selected against by a phenotype? For example, take a person with a neurodegenerative disorder, where some of their neurons will die with age. Well, those neurons don’t die uniformly. Some die earlier, others die later, and there’s genetic variation between them.

So can we leverage that to somehow understand what it is, genetically, about those individual cells that survive longer? In the past, the view was just, oh, it’s stochastic. Some are going to die sooner, some later. And yes, probably it is stochastic, but stochastic doesn’t necessarily mean random. Stochastic means a distribution that reflects some underlying biology. So these are open questions about the genetics that drives these stochastic processes. These are some of the areas I’m interested in, and I think they have really strong translational and therapeutic potential.

Grant Belgard: When did you first realize you wanted a career at the intersection of medicine and research?

Diane Shao: This is a great question, Grant. I think it actually goes back to our college days. When we were at Rice University, I started working for a PI, who has since left that university, and it was my first significant research experience. I realized he was kind of a remarkable person: a trained astrophysicist who became an HHMI investigator, a very prestigious funded-investigator position, studying slime molds, Dictyostelium.

And by the time I was in the lab, he was going into human immunology and had created a compound to treat fibrosis, which is what I was working on in the laboratory. He had one postdoc and, I guess, me, the undergraduate, working on this at the time. And he then turned that into a company that was ultimately sold for $1.4 billion, for trials in fibrosis. So that mindset, that the fundamentals of science can be leveraged across astrophysics to slime molds to human immunology to translational medicine, maybe came to me by osmosis as an undergraduate through that experience.

And I think that mindset really resonates with me: at the core, science is science, and those principles apply no matter what realm you’re looking into. Also, as scientists, or as people engaging with the life sciences in any of the ways many people do, you don’t have to be limited to the one dimension you were trained in; all of these realms are possible. For me, that’s also what made me think an MD/PhD career path would be the one for me, because it let me see the research perspective, the translational perspective, and the clinical perspective, and then in the past couple of years I’ve added this investment and market-space perspective as well.

And while some people feel these are really disparate, and indeed they tackle very different problems, if you go to core principles, there’s a commonality.

Grant Belgard: What’s something you intentionally didn’t do or stopped doing that made your path a little easier?

Diane Shao: Oh, that’s interesting. It does sound like I’m just accruing things, but to be honest, I drop things constantly. I actually think that’s critical, maybe going back to your question about what drives progress. Progress means constantly cutting out everything that is not leading to your vision of progress.

So even in my work on single-cell technology: I was developing some technology and applying it to a number of different settings, and when I found one that seemed particularly interesting in terms of the biology, where we could really gain traction with the tool, it meant I just dropped everything else, and I don’t have any intention of picking those things back up unless they further my vision in a given direction. There’s always this fear, the sunk-cost fear of, oh, I’ve invested all this time, I’ve got to finish it, it’s got to be a thing. But I don’t buy into that at all.

So I actually need to constantly drop things along the way, and to me that’s a huge driver of success, because it means we focus our energy on the things that go toward a vision.

Grant Belgard: If you could go back and give your earlier self one piece of advice, what would it be?

Diane Shao: Oh, probably: don’t stress so much. Especially as a trainee or a student, it was so easy to worry about how things would unfold and try to control them, simply because we didn’t know. For example, writing a first paper: you don’t actually know what it takes to even write a paper, what all the steps are, what all the pitfalls are, what you wouldn’t even think to think about.

So I think I spent a lot of time stressing and strategizing, but the reality is you’ve just got to do it, and then you’ll learn from it. Nothing needs to be perfect the first time. If you allow for that, allow for the learning process, you’re going to get more out of it than trying to make it go a particular way each time.

Grant Belgard: I guess on that note, how do you avoid burnout?

Diane Shao: Drop everything else that you don’t feel like doing. But in some ways, I really believe that burnout is a combination of everything we’re holding, all the different aspects, and how we feel about it. If we’re aligned, if I’m holding a lot of things and those are the things that get me up in the morning, that I’m so excited about, that I can’t wait to discuss with people and share with the world, that’s not burnout.

That’s just me holding a lot of things I like to do. Burnout is having a particular interest but feeling obligated to do all these things you don’t want to do: I’m supposed to be finishing X, Y, Z in this other realm I’ve sunk all this time and effort into. So to me, preventing burnout is pretty continual, a renewal every few weeks of: what is my actual vision, what is actually driving me, and am I doing the things aligned with that? Because if you’re not, and you carry that internal conflict long-term, that’s what burnout is. If you are aligned, then it will feel good. Everything will feel like fun and flow.

Grant Belgard: What’s an effective way to build competence across disciplines?

Diane Shao: Competence or confidence?

Grant Belgard: Competence.

Diane Shao: Competence. Oh, that’s a great question, Grant. The biggest thing is to not be afraid, and to not be afraid of not knowing. There’s no reason you would know. And I find that what people really orient around is a strong vision. So, for example, with my own interest in mosaicism and how it can push our boundaries in precision medicine: I work in child neurology, pediatrics, brain development, et cetera. I’m very interested in maternal influences on childhood brain development, but that’s not a space I know at all.

I don’t know a single OB; I know nearly nothing about obstetrics or the actual biological principles of pregnancy. So as I delve into that space, it’s a totally new space for me. But what I do orient around is how important I think understanding this phenomenon is. And if I can share my vision, and what I know, in a very clear way, others are going to want to help me, and that will build my competence. I don’t go in pretending I know anything about these other spaces where I don’t.

And that’s actually where true collaboration lives. It’s not that we both know everything about the other’s field; it’s knowing exactly what I know that’s valuable between us and exactly what you know that’s valuable between us, and then putting those together. And competence is not always getting to know everything in a different space. Competence is sometimes being able to know where the gaps are and knowing how to ask questions and get help.

Grant Belgard: What mistakes do you see smart people make when they try to do interdisciplinary work?

Diane Shao: Oh, that’s a really good question, Grant. You’re full of good questions. So one thing I do think is really important to recognize is that there’s always a difference in culture, no matter what. Research culture, medical culture, even as I’m talking about neurology research versus obstetrical research, there’s a difference in culture. And if you are not recognizing that and respecting those cultures, it’s just not going to work out. So for example, in the biological space, samples are really critical. I work with post-mortem tissues, those are really important. And PhD scientists are also really interested in studying human tissues.

But why do PhD scientists have a lot of trouble integrating with MDs? It’s because they kind of speak different languages, right? The way they’re talking about the samples is different. The PhDs are talking about the samples as a biological utility. The MDs are talking about them like the boy they took care of for 10 years who then, for some reason, passed away. And so understanding that culture is critical. If you go to the MD and say, hey, I’m looking for samples for X, they might say, oh, okay, I have some. And then you’re going to say something like, okay, well, I want to study proteins A and B and how they interact and blah, blah, blah.

The MD is not going to connect with that, right? So thinking about, well, protein A could be a therapeutic if it interacts with protein B in this way is a much more viable start. And then also, it’s easy to start thinking, okay, the doctor is just the one who’s going to be retrieving the sample, et cetera. And the minute you start reducing some other person’s role to just a task-oriented sample retrieval role, you’ve totally lost the collaborative interdisciplinary engagement there. And so I think about these things a lot, and I encounter them constantly.

For example, even in my example of, what do I do as a neurologist who wants to think about obstetrical tissue? Well, when I started, I’m used to paying $0 for my tissue because I get it from biobanks. I get it from patient groups that are really trying to get people to utilize the tissue for studies, et cetera. But obstetrical tissues are different. They pay people, healthy pregnant women, money to provide samples and be part of studies, et cetera. And so even engaging on costs, what is value?

I was running the risk of devaluing all of their tissues simply because I’m used to paying $0 for my tissues. And so these are all cultural nuances between disciplines, the same way going to a different country, you really have to consider those cultural nuances. Understanding them is non-trivial. I do rely on saying things like, hey, I don’t know what the typical way things are done in your field is, this is what I’m used to. And having that humility upfront allows people to also share with you their culture and being open to that, whatever that culture is and not just judging it as unreasonable or too hard just because that’s not the culture you’re used to.

Grant Belgard: What’s a good habit you find most strongly compounds over time?

Diane Shao: Oh, good habits. I find that, I think this may be going to your burnout question, finding the things that are going to make you feel passionate and excited every day. And sometimes they’re not always scientific questions. Like for example, I find a good habit that I have is taking a break at 2:30 PM every day. Either that break could be taking my 2:30 meeting and asking the person if they’d rather take a walk around and have a discussion instead of sitting at a Zoom screen, or that break could be meditating for 10 minutes by myself in a quiet space.

And so I guess I mean that as in, not to say that everyone needs to take a break at 2:30, but just if that is something that you need and will make you feel good about your day, that’s something you need to do for yourself. Similarly, if there’s a particular question you need to answer to feel excited, engaged in science, you just need to go down that route regardless of if it’s exactly the right time or if you have 10 other things you need to finish first or whatever it is, because it’s doing those things for you that is really gonna make everything worthwhile.

Grant Belgard: And where can our listeners follow your various threads of work?

Diane Shao: Oh, that’s a wonderful question. I am in the middle of building my own lab website, but for now you can find me through Boston Children’s Hospital. I have a research page there. I’ll provide the link for your notes. And then also my venture capital firm is at LegacyVentureCapital.com.


Grant Belgard: Well, Diane, thank you so much for joining us. It’s been lovely.

Diane Shao: Thank you so much, Grant, so lovely to be here.

The Bioinformatics CRO Podcast

Episode 79 with Yang Li

Yang Li, an Associate Professor at the University of Chicago, discusses applying computational genomics to the intersection of genetics, gene regulation, and disease, as well as the impact of new AI tools.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Yang Li

Yang Li is an Associate Professor at the University of Chicago, where his lab investigates the genetics and genomics of RNA splicing.

Transcript of Episode 79: Yang Li

Disclaimer: Transcripts are automated and may contain errors.

Intro: We are conducting our first listener survey. If you enjoy the podcast, please follow the link in the description to a 60-second multiple choice survey. This helps us understand what kind of guests you’re most interested in and keep the podcast sustainable. The survey is anonymous, but you can choose to provide your email to receive a summary of the aggregate results after the survey period is over. Go take the survey at bioinformaticscro.com/survey.

Grant Belgard: Welcome back to the Bioinformatics CRO podcast. I’m your host, Grant Belgard. Today, we’re joined by Professor Yang Li from the University of Chicago, a computational genomics researcher working at the intersection of genetics, gene regulation and disease. Yang, welcome.

Yang Li: Hi, Grant. Nice to see you.

Grant Belgard: Good to see you again. So what’s been energizing you most recently in your work, scientifically or operationally?

Yang Li: Well, since the New Year, I’ve been playing a lot with Claude. I mean, everyone’s, I think, playing with Claude. And I think both in terms of the science that he can help me produce and also, you know, just managing my schedule, that has been a game changer. And I’m still exploring what he can do. But yeah, I think that’s basically what I’ve been thinking about most of the time.

Grant Belgard: What have you put into practice so far? Like what’s kind of, quote unquote, in production?

Yang Li: Yeah, we’ve been writing the revisions for one of our papers. And I’ve been using it extensively both to help me write some of the responses, making them a little bit friendlier, but also rewriting some of my old code and checking for bugs and things like that. And it’s amazing. The number of things that I can do in just an hour far exceeds what I could do within a day at this point. So things like producing a plot in a slightly different way. As you know, it’s very difficult to rerun your code, especially if it’s not best practice in the sense of software engineering. I’ve been mostly self-trained in terms of programming, and so the comments are not necessarily the best. But with Claude, it helps me comment, it helps me name my variables, right?

Or at least improve the naming of my variables, and then produce plots very, very fast, right? And so as you know, a lot of the way we check that the code is doing its job is to visualize the underlying data in many different ways. And so Claude helps me do that. You know, as soon as I have an idea, I can just ask it to do it. And then I would see the visualization, and sometimes I would find errors. But more often than not, it gives me exactly what I expect.

Grant Belgard: When someone asks you what you do, what’s your favorite way to describe it without using jargon?

Yang Li: Well, lately, I’ve been trying to steer away from that because I’ve been doing things that are pretty technical. But in just a few sentences, I think I would just describe it as I’m trying to understand how proteins are expressed. There are many different ways by which we can control the expression of these proteins, and I’m focusing on this regulatory mechanism called RNA splicing. And this is highly regulated. And I want to understand what its function is in different systems and how to modulate it using drugs.

Grant Belgard: What makes this the right time for that?

Yang Li: Well, I think the reason why I chose this, and I’ve stuck to this ever since I was in grad school, really, is because almost nobody talks about genes in terms of how many proteins each gene can be producing. And it was clear, even in the things I was researching in grad school, which was, as you might remember, the cichlids, that every single gene produces many proteins or many isoforms. And to me, it felt like this had to mean something, right? And my perspective has changed slightly since then. But because of my earlier work and the fact that almost no one was really researching that, I became really interested in that topic.

Grant Belgard: So what is your current perspective on splicing?

Yang Li: Well, when you read the textbook, it basically tells us that every single human gene can produce many different proteins and many different protein isoforms. So these are isoforms that are essentially the same, but with slight differences. So it could be one protein domain that is included in one isoform, and in another isoform, this same protein domain is excluded. And often in textbooks or in the literature, it would be described as something intentional, as in the two versions of the protein have very different functions. So one would be performing function A and the other would be performing function B. And both are very important for the survival or the proper function of the cell or the organism.

But what I think now is that the vast majority, and by vast majority, I mean really over 90% of these different isoforms, are not really there to have a different function, but really act as a regulatory sort of switch. So again, to fine-tune, very similar to gene expression levels, right? So when you regulate gene expression levels through enhancers and promoters, you’re not changing the final output or the function of the gene. You’re just changing the activity by a little bit. And I think splicing most of the time is doing exactly the same thing. The regulatory input is a little bit different, but the outcome is very similar.

So it’s able to change the protein and give it a different function, but those are really the minority of the cases rather than the majority, as is taught in the literature or the textbook.

Grant Belgard: How do you decide if a problem is method worthy or just something you’ll apply existing tools and move quickly on?

Yang Li: So do you mean in terms of developing a tool or just using a tool to solve a problem? Right. So I think it takes me a long time to convince myself that I need to develop a method for something. And so in general, I try to use methods that exist already or previous methods that I or my lab have developed. In some very rare cases, I think, hey, we need to develop a method because there’s really something that hasn’t been done, and we really need to do it, and also we can do it. So all of these checkboxes have to be checked in order for me to move on to method development.

And I should say that I don’t think my lab is particularly good at developing methods, but we’re pretty good at identifying problems that can be solved by an older method that wasn’t necessarily developed for the specific question.

Grant Belgard: What are the most common bottlenecks you run into today? Is it a matter of data, compute, annotations, study design, interpretation, something else?

Yang Li: Yeah, that’s a pretty good question. I would say for me, it’s my time and getting a sense of what to focus on when there are just so many people that I think need my attention, so many projects that need my attention. One thing a friend told me is a good example that I often talk about: this context switching time. I’ve heard through the grapevine that Terry Tao, the famous mathematician, is extremely good at context switching. So he basically could switch from one problem to the next within seconds. And for others like me, we need more time to context switch. And our schedule, when you become faculty, is spread into blocks of one hour. And I find it pretty hard to switch context from one hour to the next.

So I try to block more time, but then there are fewer of those longer blocks of time. And so I think that’s somewhat of a bottleneck for me: finding a longer block of time so I can have the time to context switch and then do deep work instead of just trivial work in order to make progress. A lot of the time it feels like I’m just trying to keep afloat, and that doesn’t give me enough time to do deep work, which is the thing that I think I’m good at and also the happiest doing. Yeah.

Grant Belgard: Have you found using tools like Claude impacts that in any way?

Yang Li: Yeah, yeah. So previously, I had a lot of questions about a data set or some topic, and it just never felt like I had the time to pursue them. And with Claude, all of a sudden, things that would take a few hours just take a few minutes, because it remembers the context in which you’re asking, and then it would just do it. So for example, plotting a figure about a data set: it remembers where the file, where the raw data was. If I came back to a specific project after a week, it would have taken me maybe 10, 20 minutes to even recall where the file I was using was and what exactly I was doing, essentially.

I can ask Claude to summarize what I was doing, or just scroll up a little bit and then ask questions, and he would give me the answer within a few minutes, and that would get me back on track much more rapidly than I would by looking at my own code and browsing and recalling. So that has been extremely useful. I think also Claude might be able to help me manage better. I haven’t implemented this, but I’ve sort of joked around that I would have my trainees talk to an agent or Claude, and then Claude would, you know, summarize all of their updates for me. And then I would only have to read through the summarized version.

Grant Belgard: So you could be like the nurse at a doctor’s appointment before you see the doctor, right?

Yang Li: Yeah, exactly. And then five minutes before I meet them, I would review that and think a little bit to get into context. And then it would be, I think, a lot more productive, right? So, yeah, I often tell my students to prepare some slides or some notes before I meet with them so that it helps me get into context, because oftentimes I hear about their problem on the spot when they come to me during the half hour or the hour period. And then I have to think about it. And when I think about it, it’s not really awkward, but there’s still some pressure to answer, right? I can’t just think in silence for five minutes; even two minutes feels a little bit long, right? Let alone 10, 15 minutes.

But oftentimes that’s the time that you need to bring you back into context, to recall all of this different information, right? To have a very effective conversation. But the reality is that it’s also hard on them to come up every time with a few bullet points. Or at least, you know, I don’t know if it’s hard for them, but they don’t do it, essentially. And this would, I think, speed up our meetings tremendously, or at least make them extremely productive, because everyone’s on the same page.

Grant Belgard: What’s a recent result or direction that surprised you and how did you respond to the surprise?

Yang Li: I can’t say that there’s a recent direction that really surprised me. I think I plan my projects long in advance, and I can see points of failure pretty early on. And oftentimes the project or the direction does indeed fail. But then I often have a backup plan. And so I don’t think there’s any direction that surprised me, I would say. And unfortunately, there hasn’t been anything like a sudden discovery that changed everything. So I’m either a very good planner or just not super lucky in terms of unexpected findings.

Grant Belgard: How do you think about reproducibility in practice? What’s good enough versus gold standard?

Yang Li: Yeah, I think there’s a lot that can be improved in terms of reproducibility. Unfortunately, I think there is some amount of pressure to understand the system, the biology. I mean, there’s a speed component, right? You want to dig into the biology more rapidly. And oftentimes the solution to that is to do what you know best. And we’re not trained as software engineers. We don’t do these kinds of unit tests. And so there are reproducibility issues, and there are bugs, right? I developed LeafCutter many years ago and I still find bugs there. So in that sense, these things can be improved drastically. On the flip side, I don’t think any of these bugs or these issues affect our results, our biological interpretation of things.

Very rarely would there be a very important result that is affected by these. It does happen that it affects a very minor result, right? Or the interpretation of a minor result. And to prevent these bugs or this lack of reproducible findings, we essentially try to poke holes at what we call our major discoveries. So the things that would, for example, break a paper, or the main finding that we think we made. We would look at many different data sets, and we would design tests that would essentially try to break it in one way or the other. So we have very orthogonal ways of trying to confirm a result. That would include, for example, looking at a completely different data set or deriving some corollary. So, if this were true, then this other thing must be true.

And so we would do more tests on whether this downstream result is indeed true. So we do a lot of these types of analyses. And then at some point, everything makes sense. And if something doesn’t make sense, then we have to explain it, right? So I think this is the scientific process. And I’m not going to claim that this is foolproof, as in I will never have anything that is later falsified. But from my track record, I think this has worked so far.

Grant Belgard: How do you decide what to delegate and what you personally stay close to?

Yang Li: Right. So I try to delegate as much as possible. I try to delegate anything that I think a trainee or someone else or a collaborator can do. But I obviously weigh by importance. So for the things that are the most important, even though I also try to delegate those, depending on whether I think they can do it, I would pay attention to the outcome. For minor things, I would just trust them to do the correct thing. And sometimes, you know, we have to backtrack when we find a problem later on.

Grant Belgard: How do you help your trainees develop taste, knowing what to do and what’s not worth the effort?

Yang Li: Yeah, that’s a very good question. It’s a little bit like asking me, how do I teach creativity, how do I teach someone to be creative? And yeah, I hate to have this fixed-mindset view, but I think it’s something that’s very difficult to teach, right? I think we can encourage creativity, but it has a lot to do with personality. I’ve noticed some types of personality that are, I would say, not as creative or don’t have as much taste and rely more on other indicators. So, for example, sometimes I notice, not just in my trainees but in general, that when a paper is published in Nature, right, or in a high-impact-factor journal, they sort of rely on that as a measure of what’s exciting and what’s good.

And others don’t rely on this; they have an internal perception of what’s exciting and what’s not. I think the one way that you can help is to read a lot, right? I always tell my trainees to read a lot. I don’t know if you remember, but in grad school, I tried to read at least one paper a day, and I would go through my RSS feed with, you know, hundreds of abstracts every day. I mean, now it’s getting even harder, much harder, because there are just a lot more papers being published. But at least back then I had all my journals that I generally read in an RSS feed, and I would go through all of the titles and abstracts. I would do this every morning, and I would read at least one paper that interested me.

And so I think that helped a lot in terms of creativity. I mean, creativity is not just, you know, whether you can come up with new things, right? You can come up with things that are new to you, but someone might have already done them. So you also have to know what’s out there, right? And taste, I think, is somewhat similar to creativity. If you like a paper or a project just because it sounds good to you and you don’t know that much, then maybe someone might not call that good taste, right? So I think these are linked together.

So the more you know, the more likely you are to have good taste, and you have to have your own sense of what’s worthwhile and what’s valuable and not just use some kind of, you know, external signal, I mean, what someone tells you, right? Obviously, at some point you have to rely on someone, right? So if someone that you respect, someone that you know has good taste, likes it, then you can maybe upweight something a little bit. But at the end of the day, you need to build your own, you know, scoring function.

Grant Belgard: So let’s talk about your own career track. In your own words, how did you get here?

Yang Li: That’s a very interesting question. I did mention that I like to plan ahead in terms of my research projects, but my trajectory, I think, yeah, I’m reminded of a quote by Bertrand Russell. I don’t remember the exact quote, but essentially it goes like, you know, my life has been like great waves, right? Or great winds, blowing me here and there. And I really feel that way: there are periods of my life that changed me a lot, right? And that has depended a lot on luck, or maybe we shouldn’t call it luck, just circumstances that I guess I viewed favourably and therefore called luck. But it could have also been misfortune, right, if it hadn’t ended up very well. And these few periods are what took me here.

And the first period was during the last few years after high school, right before college. I grew up in Montreal, in Quebec, and there’s this period called CEGEP, which is two years after high school but before university. And at that point, I met some very good friends who introduced me to coding, but also hacker culture, not in terms of, you know, black hat, but more just, you know, coding hacker culture. And I remember that I also started to become really interested in philosophy and asking, you know, bigger questions such as what is the meaning of life, obviously, what is consciousness and all that. But really asking the question of, you know, what am I doing here, right? And at that point, I started to code and to read essays.

I think many of us have been influenced by essays from Paul Graham about, you know, being a little bit more intentional about who your friends are, who you hang out with. And I think this really started this trajectory, right, being very humble, always trying to look for people who are smarter, who are more knowledgeable than me. And so without that perspective, I don’t think I would be where I am right now. I mean, grades don’t really matter, but my grades were very average. I wasn’t particularly interested in anything other than video games, obviously. But at that point, something switched in me, right? Being a lot more intentional about how I use my time, who I’m friends with, who I hang out with. And I think that worked out, right?

Immediately during university, I just identified people who were really excited about their work, excited about their craft. It didn’t have to be anything in particular. At that point, I majored in mathematics and computer science. And I met very good friends, again, who were really excited about the work that they did. And they were really passionate about something, right? You can be passionate about video games, which I was, right? But it’s very easy to be passionate about video games. It’s a lot harder to be passionate about something that’s very difficult and that no one cares about. And so I was looking for this sort of phenotype of people who really cared about mathematics, right? Something that I didn’t particularly care about; I enjoyed it, but I didn’t particularly care about it. But then, you know, I started to model that, to apply their intensity, their, I guess, passion, to things that I cared about, right? And so just recognizing the fact that real, useful things like mathematics or coding, these things that are hard and tedious, can be something that you really enjoy, right? That was something quite new to me. And then, as you know, I got really interested, still through my philosophical angle, in aging, the aging process. And so I followed these passions, and essentially all through that experience, changing from mathematics and computer science to biology, my main goal was to follow what I was passionate about and to do things as rigorously as I could.

And so essentially that led me to where I am right now, which is not studying what I initially set myself up to do, which is aging. And there’s plenty of reasons for that. But essentially following a passion that not everyone cares about, right? But I care about and applying the same fundamental values to these problems.

Grant Belgard: What’s something you learned early on that still pays dividends today?

Yang Li: Early on, as in how early on? I think one thing that I mention a lot to people is doing a degree in mathematics. And I don’t think that you have to do a degree in mathematics to get this, but this is where I got it: this sense that what you don’t understand is actually very important, right? It’s definitely not a muscle, right? But it does feel like a muscle that you can use to spot gaps in logic. And I do think that extremely good scientists have this muscle, but I would say many, many trainees, and many faculty and leaders, still struggle with some gaps in logic. It’s very easy to jump, to have this logical jump.

And that impacts a lot of things. It impacts the science, but also the writing a lot. That’s what I observe. I think that’s the aspect that is most apparent when I read someone’s writing and observe that there’s a gap in logic. First of all, you assume that everyone knows what you know, but you also assume that the next sentence follows the previous sentence. And I often see that there’s a gap in reasoning that I think is pretty hard to fix, right? Essentially, you have to say, well, why does it follow? And then a student might say, well, it follows because it’s obvious, but it’s actually not obvious. But how do you know that something is not obvious?

How do you distinguish something that’s not obvious from something that’s obvious?

Grant Belgard: And especially when you’re talking about biology, for something to be obvious, there’s often a stack of unstated assumptions.

Yang Li: Exactly, exactly. And in mathematics, when you do a lot of proofs, you’re sort of trained to always question every single step. And so I think doing this has really taught me, or at least made me, extremely careful about these steps. And in biology especially, I thought it was extremely useful because, well, sometimes you just can’t overcome it, right? You cannot prove every single thing, right? In fact, the first few years when I transitioned from mathematics to biology were extremely difficult, because I was hung up on the simplest things, right?

But then I found utility in this because you can stash it, right? You can stash this gap in logic. So you notice them, and then you have to convince yourself that, well, it’s true that I can’t prove it, but it’s probably right in this system, right? And then you can move forward. But also at the same time, you understand what this gap is, right? And by understanding this gap or this condition, right, that it works only in one system, I think you start to understand the system a little bit better, and you start to understand how this information that supposedly applies only to this specific system can also apply to another system. And so it helps me transfer some of my understanding from one paper, for example, to another paper, right?

So, how might these be similar across papers or across cell types, right? Transferred to another cell type or another disease? So I think this part helped me a lot in my thinking and in how I transfer knowledge across diseases, cell types, or anything really. And I guess this was a little bit unexpected. I use very little mathematics right now, very, very little of the things I actually learned during undergrad. But this obviously has stuck with me.

Grant Belgard: What are some things you had to unlearn when transitioning between stages, you know, student to postdoc to faculty?

Yang Li: Right. I mean, I wouldn’t say that it’s unlearning, but changing, definitely, very much so. So when you’re a student and a postdoc, you’re very self-centered. You drive your project forward, and there’s some sense that the truth is the only thing that matters. The results are the only thing that matters. There’s less, I would say there’s some, but much less of a personal touch, right? There’s some collaboration, obviously, but you’re really focused on your own project. At least that was my experience. And whatever I did was focused a lot on just obtaining the truth, understanding the way things work. What I had to, I would say, unlearn is maybe being less obsessed with the truth and how things should be done versus, you know, how they will be done by someone.

It’s hard to force your way of doing things, even if you still believe it’s correct, onto someone else who might not work the same way as you. And as you know, we’re not taught to manage as faculty. This is something you learn because you see students struggle, or you see other people struggle, and then you notice that, hey, this is not productive. You cannot tell someone to work the same way you did, even if you strongly believe that this is how you would do things — even if you could prove it’s the more efficient or better way. So this is something I think about.

Everyone’s different, and some personalities are more likely to accept certain ways of doing things, while others are unlikely to perform well if you tell them to do it a certain way.

Grant Belgard: What’s something a great mentor did for you that you try to replicate for others?

Yang Li: Well, I think all my mentors have been extremely kind. At no point did I feel like a mentor was just using me to get a paper out, for example. And I mention this because I have witnessed mentors who essentially treat trainees as a means to an end — even if they think it’s justified. I always think of the trainee as a person who is here to grow, in terms of their ability and their knowledge. So that’s something I’m very careful about: I never try to have a student do something that is not beneficial to them.

Grant Belgard: Given the rapid changes in the field driven by AI, what advice do you typically give to early career bioinformaticians in navigating that?

Yang Li: Yeah, I think that’s a great question. And it really depends on your own personality. I think one aspect is to understand yourself — what kind of personality you have. I truly believe that personality matters a lot. Some might say, oh, well, you have to change your personality, but I find that extremely hard. There are some personality traits that I know I should change, or that if I changed I would be happier or even more productive, but it’s very difficult to change. So the way I try to guide my trainees, for example, is to first get a broad sense of what type of personality that person has. I think it was Ray Dalio, in his book — he used to run Bridgewater — who developed these tests.

I don’t remember the specifics exactly, but I think for every member of the company he had a test that would classify them by what they’re good at and what their personality is. And one thing I keep thinking about is doers and thinkers. Oftentimes you can characterize someone as a doer or a thinker. The thinkers are those who like to think and are less inclined to do, while the doers have a higher affinity to just start doing things before even thinking. So one thing that’s helpful — and it’s only potentially related to personality — is to figure out whether you’re more of a thinker or more of a doer. And maybe you’re both, and that’s great.

But figuring out these sorts of traits will help you determine what you should focus on. One piece of advice: if you think you’re a doer, maybe you should team up with a thinker, and vice versa — if you’re a thinker, team up with a doer — and be very intentional about it. Don’t let chance decide. If you have two doers, odds are you’re just going to build a lot of things that might not be very useful or very good. If you have two thinkers, nothing gets done. And it’s the same with personality more broadly. Sometimes I also think about diversity. Lots of people say, oh, diversity is good, is good, is good.

But when pressed about exactly how diversity is good, they offer a blanket statement like, well, with diversity you have different ways of thinking about things. I agree with that, but I think you need a little more than that to really build a good, diverse team — for example, this diversity of thinkers and doers, and other personality traits I forgot to mention. There are also traits like being very pessimistic. I would classify myself as a very pessimistic person — I’m trying to improve that, obviously. And then there are some people who are extremely optimistic: you have an idea and they’re immediately on board, like, OK, yeah, it could work because X, Y, Z.

I’m more of the “but it won’t work because X, Y, Z” type. But you need both on the team, I think. If everyone is optimistic, it’s going to be an echo chamber of “yeah, it’s going to work,” and everyone gets hyped up and feels great — but you’re not going to have a good product, because you never consider the negatives. And if you’re all pessimistic like me — if you have a room full of me — then nothing’s going to work. So I think you need to figure out who you are and then team up with people who are diverse in that sense. And again, these are just two axes of variation.

There are many more axes of variation that you can optimize to build a very strong team.

Grant Belgard: What separates great collaboration partners from frustrating ones in CompBio projects?

Yang Li: Do you mean two CompBio teams, or one biological and one computational?

Grant Belgard: Yeah, probably a computational and a wet lab.

Yang Li: And a wet lab, I see. I think there needs to be respect — respect for each other’s craft. If one side is using the other without any respect — that seems obvious, but it’s actually not, and it can go both ways, and often does. We as computational people can’t treat the experimentalists as just, you know, a pipette — like, oh, you’re going to be replaced by robots soon. In the same way, the wet lab experimentalists can’t treat us as, you know, Claude Code. And in fact, I see it happen — not every day, but I know who these people are. It’s a lot more prevalent than you might expect, I think.

A little bit of effort in understanding the other side is, I think, the bare minimum — while accepting the fact that you’re not going to be as good as your experimental or dry lab counterpart at their craft. Another thing that is extremely important is that you have to enjoy working with them. Sometimes it can be tempting to work with someone just because they’re very good and you need the resource. But personally, I just don’t think it’s worth it if you really don’t enjoy working with someone. The other thing is energy level. I think it’s very important to have the same amount of energy. If one of you is a lot more excited, you end up really annoyed that the other one is slacking off.

Or vice versa — they’re going to be annoyed at you, or find you pushy, if you don’t have the same energy level. So I think those are the main things. I’ve had very good collaborators and pretty bad ones, and those three aspects — respect, enjoyment, and energy level — always separate them perfectly.

Grant Belgard: What frameworks do you use when helping trainees decide on career paths?

Yang Li: Yeah, I think it also has to do with personality. Anyone who’s very curious, very open-minded, and maybe very idealistic, I would try to push toward academia. Anyone who is very practical — and I don’t mean to say that one is better than the other — anyone who is very practical, has a very good sense of what they want in life, and doesn’t want to deviate too much from that, I would steer toward industry. And I don’t tell them that outright. I tell everyone who goes through my lab that I think they could become good academics. But the fact of the matter is that academia right now is not super welcoming, in the sense that it’s just very difficult — very difficult to get a tenure track position.

That being said, there are a lot of positions that are not tenure track, and if you’re OK with that — and I think you should totally be OK with that — there are a lot of possibilities, and I would encourage that too. But obviously, if you’re a very creative, idealistic kind of person, and you really want to change the world or research something you’re deeply passionate about that not many people might care about, then I still think academia — the tenure track, having your own lab — is the right place, or the right path.

Grant Belgard: Final question. If you could give just one piece of advice to your earlier self, what would it be and why?

Yang Li: Other than buy Bitcoin? Yeah, I think communication is very important to focus on, and being more open-minded about what to improve. When I was young, I was really into doing hard, technical things — you can call it hard skills versus soft skills — and I didn’t think at all about improving soft skills, or interpersonal skills. I would give that advice to my past self, even though I strongly suspect I wouldn’t listen to myself. Interpersonal skills are, I think, more and more important, especially with AI, which I think can replace a lot of the hard skills, to be honest.

And so the ones who I can see succeeding a lot more than I will are the ones who have the soft skills and know how to get AI to help them with the hard skills.

Grant Belgard: Well, Yang, this has been fantastic. Thank you so much for joining us.

Yang Li: Great. Thanks for having me, Grant.

The Bioinformatics CRO Podcast

Episode 78 with Sun-Gou Ji

Dr. Sun-Gou Ji, statistical geneticist and VP of Computational Genomics at BridgeBio, discusses his career in genetics and genomics and BridgeBio’s approach to target validation and novel target discovery.

Sun-Gou Ji

Sun-Gou Ji is VP of Computational Genomics at BridgeBio, supporting target validation and novel target discovery for drug development. 

Transcript of Episode 78: Sun-Gou Ji

Disclaimer: Transcript is automated and may contain errors.

Grant Belgard: Welcome to the Bioinformatics CRO Podcast. I’m Grant Belgard, and joining us today is Sun-Gou Ji. Sun-Gou is a statistical geneticist at BridgeBio, where he drives scientific decision making based on human genetics. As VP of Computational Genomics, he leads a team of statistical geneticists and data engineers focused on target validation and novel target discovery. Previously, he was at Seven Bridges, where he collaborated with the Million Veteran Program to validate and uncover genetic factors influencing human traits in a highly diverse and admixed population. Welcome to the show.

Sun-Gou Ji: Thanks, Grant, for having me.

Grant Belgard: So how did you first become interested in genetics and drug development and what drew you into the field?

Sun-Gou Ji: Sure, sure. I’m sure everyone has this time where you think about what impact you want to make in this world while you’re here. And the type of lasting impact that really struck me was that a drug I could develop could help people even when I’m gone — it would stick around and still help people in perpetuity. So once I thought about those things, I was actually lucky enough to do a Ph.D. at the Sanger Institute at a time when human genetics was showing a pretty meaningful impact on the success of drug programs. And here I am now. I feel like I just happened to be at the right place at the right time, things aligned, and I’m really happy to be contributing to something that will outlive me.

Grant Belgard: So the Sanger Institute, of course, is an epicenter of human genetics. How did your Ph.D. work there shape the way you think about it?

Sun-Gou Ji: I would say it basically shaped who I am now. If I had to choose one time in the past to go back to, it would be doing my Ph.D. at the Sanger, which I think is pretty rare for people who have done Ph.D.s. Its history started with sequencing the human genome, and for density of world-class human geneticists there’s just no comparison out there. Especially the scientific rigor and the collaborativeness I learned at Sanger are still the basis of how I operate today, and I would strongly recommend it to anyone considering this field.

Sun-Gou Ji: And many of my friends remember the time at Sanger as the best time of our lives — not only the scientific achievements (people at Sanger do publish a lot, and in pretty high-impact journals), but also the diverse culture and its inclusiveness. Being part of the Cambridge culture is a very exceptional experience.

Grant Belgard: What did you take away from your time at Seven Bridges, especially working on the Million Veteran program and the Graph Genome Project?

Sun-Gou Ji: Yeah, sure, sure. I joined Seven Bridges around 2015, 2016. At that time data science and big data were the hype, before AI. And back then it actually took ages to perform imputation on HPC clusters or run a GWAS using LMMs — I’m sure you remember that time too, Grant — and being able to run large compute jobs and knowing some stats qualified me as a data scientist. Seven Bridges had occupied this niche where it was almost impossible to orchestrate complicated genomic workflows on AWS directly. And although everyone knew things would move from HPCs to the cloud, I think there was a time when people were scared of having their precious data in the cloud. I was in the R&D team working on the Graph Genome Project, and I met the smartest people I’ve ever met there.

Sun-Gou Ji: It was very different from the crew at Sanger, in that it was a completely different group of folks — PhDs in quantum physics and mathematics, and software engineers, some with 20-plus years of experience. And this focused team of a dozen-plus worked on a single project to create this graph genome ecosystem. The name Seven Bridges comes from the seven bridges of Königsberg, the problem Euler famously proved unsolvable, laying the foundation of graph theory. From that you can understand what Seven Bridges was trying to do: use graph genomes to actually revolutionize how we do genomic analysis. My experience there really opened my eyes to the difference between academia and industry, because usually when you have this type of project, you have one PhD student or postdoc working on it.
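As an aside, the Königsberg result Sun-Gou mentions is easy to verify computationally. A minimal sketch of Euler’s criterion (a standard graph-theory result, nothing specific to Seven Bridges’ actual platform):

```python
# The seven bridges of Königsberg as a multigraph. Euler showed that a
# walk crossing every bridge exactly once exists only if 0 or 2 land
# masses touch an odd number of bridges; in Königsberg all four do.
from collections import Counter

# Land masses A, B, C, D; each tuple is one bridge.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = sorted(n for n, d in degree.items() if d % 2)
print(odd)                  # ['A', 'B', 'C', 'D'] — all four are odd
print(len(odd) in (0, 2))   # False -> no walk crossing each bridge once
```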

Sun-Gou Ji: Whereas here you had dozens of people with vast experience working on a single project to get one thing done. I mainly focused on the structural variant aspect of the project, which led to a nice paper back in 2018 or so. I believe it’s still part of the offering of Velsera, which absorbed Seven Bridges. And it’s really great to see that these graph genome and pangenome approaches are picking up more recently. Actually, I feel this really shows how difficult it is to commercialize a completely novel bioinformatics tool, even one that could revolutionize the whole field. As for the MVP work, I was also working with many others at the VA to QC the initial tranche of the genotyping data and imputation. I really learned a lot from all these experiences at Seven Bridges, especially being the only human geneticist in the group.

Sun-Gou Ji: It took me some time to understand that Sanger was sort of a bubble, right? Where everyone understands human genetics. Here I quickly had to get comfortable basically defending the whole field of human genetics in front of mathematicians, physicists, and engineers who would listen to how variant calling is done, alignment is done, association testing is done, and say: oh, this is irrational, this is inefficient, these are very old statistical tools — there are these novel things you could use, why are you using this? But actually, having to defend the field in front of these really smart people helped me explain concepts of human genetics from first principles.

Sun-Gou Ji: Why the human genetics field uses these older statistical techniques rather than very complicated non-linear models a lot of the time. And this practice of explaining from first principles how human genetics is done turned out to be very useful at BridgeBio.

Grant Belgard: What do you consider to be the most impactful outcomes of the million veteran program?

Sun-Gou Ji: Well, the data itself. With the Million Veteran Program, it’s amazing that veterans are contributing their health information and genomic information for research to advance veteran care. A dataset reaching a million people in a single hospital system — there’s still no comparison. And the Million Veteran data is really special in how the ancestry proportions are distributed: it has a much higher proportion of African Americans as well as Hispanic Americans compared to other databases, which skew toward European ancestry. So the analyses and knowledge coming out of the MVP data are very orthogonal to what we get from other databases or biobanks.

Grant Belgard: So what led you to then join BridgeBio?

Sun-Gou Ji: Yeah, so honestly there was, of course, a lot of serendipity. While I was working on these bioinformatics tools and QCing data for others to use, the only thing I was sure I wanted was to move closer to patient impact through developing drugs. Like I said at the beginning, I felt I was ready to move closer to actually making a drug — one I could feel I made, or contributed significantly to making. The choices back then were the big pharmas: thanks to the Nelson et al. paper from GSK and the King et al. paper from AbbVie, many pharma companies were building huge genomics teams, so there were a lot of choices from a lot of these places. But looking back and trying to justify my choice to join BridgeBio instead, it was definitely the people I met during the interview.

Sun-Gou Ji: I was really impressed by the team. They were super smart in very different ways. A lot of the people at Seven Bridges were really scientifically smart — very academic smart — whereas the BridgeBio folks felt a bit more street smart: they would just get things done somehow, without dwelling too much on the detail, but going just deep enough to actually get things done in a very efficient way. And of course the other part was the opportunity to be interviewed by world experts like Richard Scheller, as well as getting personal calls from the CEO — you wouldn’t really get that joining a big pharma. It felt like these people could really do something. And the hub-and-spoke model for rare disease really resonated with me too.

Grant Belgard: So speaking of the hub-and-spoke model, that’s pretty uncommon in biotech. Can you explain how it works and why it’s effective in rare disease drug development?

Sun-Gou Ji: Yeah, so I’ll start with the ‘effective’ part, because I don’t think a lot of people appreciate it. One metric I really like to highlight about BridgeBio: we’ve been around for 10 years now, and within that time we’ve delivered 19 INDs and three NDAs. We had two positive phase three trials read out in the last year, and we’re waiting for one more that will read out within this quarter. This efficiency is really rare. And it starts with actually picking the right programs and having a balanced view of the portfolio. So how do we choose? The majority of rare diseases happen to be genetic, and we know that targets with genetic support have a higher chance of success. That’s why BridgeBio develops therapies that target the source of these genetic disorders, or get very close to it. All of our targets technically have genetic support.

Sun-Gou Ji: But, you know, everyone knows there’s a roughly twofold increased success rate if you have genetic support. The chance of any single program succeeding is still very low. But if you bundle enough programs together — each with a low but slightly increased probability of success because of genetic support — it just becomes a mathematical problem: how many programs do you have to try to reach a certain probability of the portfolio making it? This comes from a paper by Andrew Lo, one of our founders, who came up with the concept, with our CEO Neil Kumar executing on it. And it becomes a very mathematical problem that a lot of investors and bankers get.

Sun-Gou Ji: And it’s very hard to raise funding for a single rare disease program that has a low success rate and an outcome that would not be that huge. So it’s actually very difficult to raise for a single program. But because of the higher probability of success of single rare disorder programs, if you bundle them together, your portfolio risk becomes much lower. There are investors with an appetite for that kind of lower-risk investment under this model, so BridgeBio was able to raise from non-traditional biotech investors. And beyond how we raise funding, the model also allows funding to flow toward smaller indications with smaller upside, which would not be funded individually if this model weren’t there.
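The bundling argument can be sketched in a few lines — with purely hypothetical probabilities, not BridgeBio’s actual figures:

```python
# Sketch of the portfolio-bundling argument: a single program has a low
# probability of success (PoS), but the chance that at least one of n
# independent programs succeeds grows quickly with n.
# All numbers below are hypothetical, for illustration only.

def portfolio_success_prob(p_single: float, n_programs: int) -> float:
    """P(at least one of n independent programs succeeds)."""
    return 1.0 - (1.0 - p_single) ** n_programs

base_pos = 0.05                # hypothetical baseline PoS for one program
genetic_pos = 2 * base_pos     # ~2x uplift from genetic support

for n in (1, 5, 10, 20):
    print(f"{n:2d} programs -> {portfolio_success_prob(genetic_pos, n):.2f}")
```

With these made-up numbers, a 10% per-program chance becomes roughly a two-in-three chance across ten programs, which is the investor-facing math Sun-Gou describes.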

Grant Belgard: So for a company, aside from a successful launch, the best outcome is failing as early as possible, not going as far as possible. What does that mean in practice to fail early in rare disease development? And how do you operationalize that mindset within BridgeBio where you have multiple shots on goal, you know, kind of in principle uncorrelated risk basket of programs?

Sun-Gou Ji: Yeah, that’s actually a very important aspect of our portfolio. We’re not trying to make every program a success; we try to optimize for the portfolio. Usually this is not possible, because if you have one company working on one program and that program fails, you’re done. Whereas at BridgeBio, if a program fails, there are always new programs we’re starting. So for people working on a certain program, even if it fails, it doesn’t mean they’ll lose their job — they can actually be transferred over to programs that are newly created or that need support, because everything moves, and all these uncorrelated programs are at different stages of development with different problems.

Sun-Gou Ji: That’s how you can at least incentivize people to make the right decision, rather than the decision that makes the program live longer. And this kind of shutting down of programs happens in very different circumstances. Sometimes it happens because of external factors — the market is shrinking, and you have to figure out which programs to keep, which is similar to what all other biotechs and pharma companies go through. But we also do it very intentionally: we review our programs, especially the early-stage ones, and when we start a program we develop clear decision points. If we hit a milestone, it’s a go — but we also very clearly lay out what a no-go would be for each milestone, and we try to make those harsh decisions.

Sun-Gou Ji: These are definitely some of the hardest decisions we have to make, but we always try to push ourselves to make them before the market makes them for us.

Grant Belgard: And how do you approach risk-adjusted net present value modeling in rare diseases? And why do you think that’s a better framework than focusing on peak sales?

Sun-Gou Ji: Yes. So we actually released a white paper on this last October, called “The Feasibility of Rare Disease Drug Development.” Risk-adjusted NPV is the net present value of a program: what is the present value of a drug development program today, considering all the potential paths the program could take — aggregating across all the potential outcomes, from failure risk to success, together with the costs, and taking time into account. That’s what makes it risk-adjusted. Then you have a single number: if it’s positive, the program is worth investing in because you’ll get something out of it; if it’s negative, it’s just not economically or financially viable to invest in the program.

Sun-Gou Ji: And I’m sure people have heard of this herding in rare disease drug development, where everyone works on a select few more common rare diseases and most of the other rare diseases get no interest. That, I think, is what happens if you focus on peak sales: there are just a few rare diseases that make sense if you only think about peak sales and the biology of the disorder is understood. If you focus on peak sales alone, I feel there’s just no way to avoid herding on select rare diseases. Peak sales only considers the potential outcome and ignores the potential cost to get there. For common diseases like IBD or, you know, autism, rNPV is less relevant, because whatever you spend would be negligible in the context of the large outcome — the large fruit at the end.

Sun-Gou Ji: But for rare diseases, comparing the size of the fruit the program will bear with some probability against the expected cost — and whether that comes out positive — is critical. A lot of our drugs would not have been interesting under the traditional way of just thinking about peak sales. But some of our teams are so lean and efficient that they’ve pulled off some of the cheapest drug development programs ever run to reach phase three. If you only focus on peak sales, none of that matters. So if anyone’s interested, I would really encourage people to check out our white paper. There’s actually a toy model you can play with.

Sun-Gou Ji: You can change how much you think the program is going to cost, how long your trial is going to last, and so on, and figure out what you need to optimize in order to turn your program NPV-positive.
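For intuition, here is a minimal, hypothetical sketch of that kind of rNPV toy model — simplified phases with made-up costs, probabilities, and payoff, not the white paper’s actual model:

```python
# Hypothetical sketch of a risk-adjusted NPV (rNPV) calculation for a
# drug program. Phases, costs (in $M), probabilities, and the payoff
# are all made-up illustrative numbers, not BridgeBio's actual model.

def rnpv(phases, payoff, payoff_year, discount_rate=0.10):
    """Expected discounted payoff minus expected discounted costs.

    phases: list of (cost, start_year, prob_of_success) per stage;
    a stage's cost is only incurred if all prior stages succeeded.
    """
    value = 0.0
    p_reach = 1.0  # probability of reaching the current stage
    for cost, year, pos in phases:
        value -= p_reach * cost / (1 + discount_rate) ** year
        p_reach *= pos
    value += p_reach * payoff / (1 + discount_rate) ** payoff_year
    return value

phases = [
    (10, 0, 0.6),   # preclinical: $10M now, 60% PoS
    (20, 2, 0.5),   # phase 1/2: $20M in year 2, 50% PoS
    (60, 4, 0.6),   # phase 3: $60M in year 4, 60% PoS
]
val = rnpv(phases, payoff=600, payoff_year=7)
print(round(val, 1))  # positive -> worth investing under these inputs
```

Lowering the assumed payoff or stretching the timeline flips the sign, which is exactly the "what do I need to optimize to turn NPV-positive" exercise the toy model enables.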

Grant Belgard: In broad strokes, how would you define computational genetics for the work that you lead?

Sun-Gou Ji: In broad strokes: any analysis that cannot be done in an Excel spreadsheet and that is not directly related to clinical trials.

Grant Belgard: I like that definition. Yeah. I haven’t heard that before. That’s a good one. Where in the life cycle from target ID to validation, candidate selection, trial design, post-marketing, is your involvement the heaviest and why?

Sun-Gou Ji: It will be in the earlier stages. Once the target is selected and the drug program gets going, there’s not much computational genetics can do to make or break the program — it can help decision-making by generating different kinds of biological support for the pathway and the target, and we do all of that; we actually work across all of those stages. But the heaviest effort goes into selecting the right target and actually validating it. That’s the type of decision where, once you make it, there’s no turning back. You only find out after phase three, after spending a lot of money, time, and resources that could have been spent trying to help other rare disease patients. Once you pick the target, there’s no way to change that.

Sun-Gou Ji: That’s where we put a lot of our effort, and that’s also where there is tried-and-tested proof that incorporating a lot of genetics data at that stage significantly improves your success.

Grant Belgard: What data sets are most actionable for your work right now and what makes them actionable?

Sun-Gou Ji: There are multiple databases. Of course, like everyone, we work with the UK Biobank and All of Us. They’re very useful and somewhat actionable because of their general-population representation: you can ask, if you go after a certain rare disorder, what are the more common expressions of that disorder that could be observed in more common patients?

Sun-Gou Ji: And can we actually build an allelic series around the target based on more common variants that are not directly causing the monogenic disorder? But because the UK Biobank and All of Us are usually devoid of a lot of severe rare monogenic disorders, you have to complement them with databases that have a higher enrichment of these more severe rare monogenic disorders. That includes databases like Genomics England, which we work closely with, and also a lot of the genetic testing providers like Invitae and GeneDx, where you get tested because you have a specific concern about a genetic disorder. Those databases are enriched for the type of patients we’re trying to treat. So in the end, there’s not a single database, because they all have different ascertainment biases.

Sun-Gou Ji: And if you just keep sampling from the general population, you would basically have to sample the whole of the US to get enough sample size to do anything for any of these rare disorders. We’ll get there, but that would take too long. From the other end, the genetic testing vendors are biased toward people who actually have a reason to be tested, so you’re missing a lot of people in the middle — people with slightly less severe forms of the disorders who would not get tested. So a lot of the insights you get from those databases will be biased toward more severe expressions of the phenotype.

Sun-Gou Ji: So in the end, you have to merge the two together and make sure that what we find in one database can be replicated — or, if it’s not replicated, that we can explain why we don’t see it in the other databases. And of course it doesn’t end with the genomics data. One of the best things about the UK Biobank now is that they provide all these proteomics data, and a lot of other multi-omics data sets are becoming more readily available, so layering that on top of the genetics is becoming more and more important. But again, a lot of these monogenic disorders don’t have a large enough sample size for multi-omics. So how you use the multi-omics from a general-population database to incorporate that layer of information and help de-risk our targets or our programs moving forward — it’s always case by case.

Grant Belgard: So the calcium-sensing receptor has been described as a system-level node for calcium homeostasis. Can you explain why it's an interesting target?

Sun-Gou Ji: Yeah, so the CASR gene is, like you said, the calcium-sensing receptor. It senses the calcium level in your blood and tries to make sure your calcium levels are kept in check. One of our programs that read out last year was an inhibitor of this calcium-sensing receptor to treat autosomal dominant hypocalcemia, a monogenic disorder where the calcium-sensing receptor is overactive, too sensitive to calcium. That's why it thinks the body has more calcium than needed and keeps the calcium level lower. So the hypocalcemia is the symptom of this monogenic disorder. And why CASR as a gene is super important and interesting is that it's a genetic target with an allelic series.

Sun-Gou Ji: And what an allelic series is, simply put, is nature's dose-response curve, where the dosage of the gene correlates with disease outcome. That means if you have low dosage, meaning a loss-of-function CASR, you have hypercalcemia, where you have too much calcium. Then you have your wild type in the middle, where you're okay. And then you have your gain of function in CASR, which actually causes the disorder we're trying to treat, autosomal dominant hypocalcemia. So you have a human phenotypic outcome that correlates with the dose, and a dose-response curve is what you want to see in a clinical trial; it proves that you're actually hitting the target correctly.
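The dose-response logic of an allelic series can be sketched as a toy mapping from CaSR activity to the expected calcium phenotype. The activity scale and thresholds below are invented for illustration; only the ordering, more receptor activity meaning lower serum calcium, comes from the discussion:

```python
# Toy sketch of an allelic series as "nature's dose-response curve".
# The numeric activity scale and cutoffs are illustrative, not clinical.
def casr_phenotype(receptor_activity: float) -> str:
    """Map relative CaSR activity (1.0 = wild type) to the expected
    calcium phenotype. An overactive CaSR suppresses serum calcium."""
    if receptor_activity < 0.8:          # loss of function
        return "hypercalcemia"
    elif receptor_activity <= 1.2:       # roughly wild type
        return "normocalcemia"
    else:                                # gain of function (ADH1)
        return "hypocalcemia"

# The dose-response ordering: more receptor activity -> lower serum calcium.
for activity in (0.5, 1.0, 1.5):
    print(activity, casr_phenotype(activity))
```

Severe and mild variants at each end of this scale are what make the series a graded "dose-response" rather than a single binary observation.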

Sun-Gou Ji: And having this allelic series of different types of mutation, where you have a very severe loss of function or a weak loss of function, a very strong gain of function and a weak gain of function, all correlating with the human phenotype, that's the perfect genetic support for a target. And usually when you talk about allelic series, everyone talks about PCSK9 for lipid metabolism. PCSK9 has been a beautiful story where you have gain-of-function and loss-of-function individuals, and the loss-of-function individuals are protected from high lipids and coronary artery disease. That's why PCSK9 inhibitors are not only used for monogenic hyperlipidemia; they're used for the general population. And that's the analogy we could use for these CaSR inhibitors: it's not just for the autosomal dominant hypocalcemia type 1 monogenic disorder.

Sun-Gou Ji: If you have this imbalance in calcium, it also leads to an imbalance in parathyroid hormone. And usually when that happens, what you get prescribed is a calcium tablet, so that you get more calcium and increase your blood calcium. Normalizing your blood calcium gets rid of a lot of the brain fog, neurological effects, tingling, tetany, or even seizures. But what it actually does is increase the amount of calcium that has to go through your kidneys, and that can end up leading to kidney damage. So a lot of ADH1 patients are actually struggling to control their serum calcium with calcium supplements while their kidneys are gradually breaking down. And that could also happen to other people who may be using calcium supplements wrongly.

Sun-Gou Ji: And the allelic series we see in CASR indicates that CaSR inhibition as a therapeutic could be expanded from the rare ADH1 disorder to more complex phenotypes associated with the calcium-sensing receptor, especially anything influenced by calcium balance.

Grant Belgard: Many companies cluster around the same common rare diseases, while ultra-rare conditions are left to non-profits. How do you decide which diseases to pursue, especially when patient populations or trial feasibility are unknown?

Sun-Gou Ji: That's always a moving target, as you can expect. But one of the things we really focus on is letting the science speak. Meaning, can we really get into the science of understanding the patient, beyond the need, and the biology of the disorder? We call that connecting the dots from the genetic perturbation to the human phenotype, and asking where the proposed treatment intervenes in that whole pathway. As I alluded to with the CASR example, for genetic support the allelic series is the best; that's the ultimate genetic support, a dose-response curve, and it's super rare. Interestingly, we either find things that are obvious and everyone is working on, or stumble upon ones that no one is working on. If the rare monogenic disorder is too hard to make a drug for, it sometimes makes sense to go straight to the complex disorder. But usually that's not for us.

Sun-Gou Ji: And we look for partners who are willing to take it on together for these larger indications that require significantly longer and more complicated trials.

Grant Belgard: So as we sequence more of the population, what are you seeing about prevalence, penetrance and variable expressivity of monogenic variants?

Sun-Gou Ji: Definitely a higher genetic prevalence, but lower penetrance and a wider phenotypic spectrum of expressivity. And this is definitely not new, right? Pathogenic variants were observed in ExAC a long time ago, and those carriers were even called superhumans at some point. That led to the search for modifiers in these pathogenic monogenic variant carriers, and that still goes on today. Preceding our work on ADH1, Hugh Markus's work on monogenic stroke, Karen Wright's work on neurodevelopmental disorders, and many others consistently show that there are very many people, a lot more than expected, who carry pathogenic variants, but the penetrance is much lower than we traditionally thought.

Grant Belgard: How do those findings complicate the way we define patients and measure unmet need in rare diseases?

Sun-Gou Ji: Yes. Because of the much wider variability in expressivity that we've been talking about, it's very important to capture all the phenotypes, not just the classical ones. Treatment starts from diagnosis, but diagnosis is often based on genetic testing, and there are just too many rare diseases out there. If the symptoms observed in a patient don't align with the classical symptoms of the genetic disease, genetic testing often won't be recommended, and may only be considered once symptoms become too severe.

Sun-Gou Ji: So we're learning that the unmet need in rare diseases today is actually harder to quantify properly, for two reasons that come back to the ascertainment bias we were talking about in the databases: the testing vendors' data will be severely biased towards classical presentations with severe phenotypes, whereas the general population cohorts just won't pick up enough of these rare, severe monogenic disorders to make sense of them. So reconciling those two is still going to be hard.

Sun-Gou Ji: And because of the variability in phenotypic expressivity, understanding its full spectrum matters. We should actually start from the genetics: get everyone who carries a pathogenic variant, try to identify even new phenotypes that are not classically associated with the traditional monogenic disorder, and expand and define the phenotypic spectrum through a genetics-first approach. That would be important.

Grant Belgard: So how do you think this will change the definition of a monogenic patient and impact clinical trial inclusion/exclusion criteria, for deciding who should be part of the trial and, later on, who should be treated?

Sun-Gou Ji: Well, it's all going to be part of a continuum, right? You'll have variants, and that's a very difficult line to draw. It's pretty clear when you ask, okay, do you carry a variant in a gene that has been pathogenic before? But there are a bunch of VUSs. Whether you're a pathogenic, likely pathogenic, or VUS carrier may tell you that you have a mutation, but whether you have the disorder may be a very different thing. You may be a monogenic patient because you have the pathogenic variant, but do you have the monogenic disease? Maybe not, but then how do you say no? In the case of CASR, if you have a pathogenic monogenic variant in CASR and you have hypocalcemia, then technically you have ADH1, but when do you start treatment? That's a different question too, right?

Sun-Gou Ji: Because when does it actually warrant treatment? That will differ by the disorder and the safety profile of the drug. And that's sort of the start of personalized medicine, right? You start understanding the genetics and the phenotype you're seeing in that patient, and then when do you actually start treatment?

Grant Belgard: So you've talked about the importance of genetic support in drug development. What makes it such a powerful tool compared to other methods of validation?

Sun-Gou Ji: Yes, I would say genetic support is the only tool with predictive validity for clinical success. I don't know of anything else that has shown this reproducibly: a two- to four-fold increase in success, replicated across so many different groups. I wouldn't really say it's more powerful than other tools, but it does provide an orthogonal point of validation of the therapeutic hypothesis that's just not possible through models. Even the best models are just models, right? And although we have to be careful, because the effect of a lifelong perturbation, the variant that you carry, is different from a therapeutic intervention, which is a sudden change, it still provides a completely different kind of validation for the target.

Sun-Gou Ji: However, despite genetic support showing roughly two-fold increased odds of success, whether genetic support alone provides any predictive validity is unclear. That increase in odds is conditional: given that a target had been tested in the clinic, independent of any genetic support, genetic support gives you increased odds of success. A lot of these drugs were tested without anyone knowing there was genetic support, and when you condition on that set of genes, you see the increased odds. But if you only have genetic support, does it actually give you any increase? We just don't know, because there hasn't been a drug tested purely on the basis of genetic support.
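The conditional-odds point can be illustrated with a toy calculation: among targets that have already been tested in the clinic, compare the odds of success with and without genetic support. All counts below are invented purely for the example:

```python
# Toy illustration of "increased odds, conditional on having been tested".
# These counts are made up; they are not real clinical statistics.
supported   = {"success": 20, "failure": 80}    # tested targets with genetic support
unsupported = {"success": 10, "failure": 90}    # tested targets without support

odds_supported   = supported["success"] / supported["failure"]      # 0.25
odds_unsupported = unsupported["success"] / unsupported["failure"]  # ~0.111

# Odds ratio among *tested* targets; says nothing about untested ones.
odds_ratio = odds_supported / odds_unsupported
print(round(odds_ratio, 2))
```

The catch described above is that both columns are drawn only from targets that reached the clinic; the odds ratio cannot, by itself, tell you what happens if you advance a target on genetic support alone.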

Sun-Gou Ji: So it's very powerful, and we are actively working on it, but it should not be a replacement for target prioritization and target validation.

Grant Belgard: And final question on the future of precision medicine. So in what way would routine newborn sequencing transform precision medicine?

Sun-Gou Ji: Yeah, this comes with quite a personal story too, because I have a one-year-old daughter who was recently diagnosed with a rare genetic disorder. And we were lucky enough to be living in Boston, where our pediatrician knew to refer us to a specialist, who quickly sent us to Boston Children's, where she was diagnosed within a couple of days and treatment started right away. The nurses and doctors were so helpful, super supportive, full of empathy, and we're so grateful for our care team. This is what US medical care should be, right? It's the best medical care. Of course, it's best not to have a rare disorder at all, but as things went, we were lucky. There is one thing I regret, though: this is a genetic disorder.

Sun-Gou Ji: And I actually convinced myself that I didn't want to get her sequenced when she was born. I used the exact same logic people use against newborn sequencing to convince myself: that I'd be overwhelmed with the information, that you'll find pathogenic variants and VUSs, and am I going to be worried about them? But looking back, I feel like it was quite lazy on my end. If I had actually looked at her genome and had the information on the handful of genes with potentially bad variants, I would have reduced the search space for what to prioritize. Is it possible that I would have picked up her symptoms earlier? With the benefit of hindsight, I do feel it would have been possible for me to catch this a bit earlier and get her treated sooner.

Sun-Gou Ji: Technically, all of this is possible now, right? The technology is there, and the assays are as accurate as they can be. The interpretation still needs some improvement, but the only way interpretation gets better is by doing more of it. And that's what the various newborn sequencing efforts are doing, with the UK leading, and the Guardian and Beacon studies along with others in the US.

Grant Belgard: Well, what are your thoughts on whole genome sequencing versus whole exome versus targeted sequencing for newborns?

Sun-Gou Ji: I feel we should future-proof ourselves. Even for the UK Biobank, which released the whole-genome set last year, they showed an improvement in identifying these pathogenic and likely pathogenic variants, even within coding exons, over whole exomes. And I just feel there's no reason to use targeted approaches, especially for data generation. For interpretation, there could be a case to make, but we should just do whole genomes to future-proof ourselves and get the highest yield. The interpretation can then help, and the data sets themselves could be very useful. It's the first step. It would really help cases like my daughter's a bit earlier on by reducing, or at least prioritizing, the search space, because when you have a baby, you're worried about everything. But if you know she has something and you see signs of it, you'd be a bit more careful.

Sun-Gou Ji: And I feel like just for that, it should be worth it. But going back to your question about whole genomes, whole exomes, and targeted panels: in addition, I think the more exciting piece, thinking traditionally as a scientist, is the data generated, because it will be hugely valuable for genetic research and drug discovery and development. This is truly unbiased information about the population.

Sun-Gou Ji: I was telling you about the ascertainment biases across the different biobanks and cohorts, but newborn sequencing would be the ultimate unbiased sampling of the population. That would open the first door for precision medicine and really help us understand differences not just in monogenic prevalence, penetrance, and expressivity, but even in common and complex disorders, and really expand how we think about human health through genetics. And you would carry that information throughout your life, so whenever something happens, you have that background information at hand, rather than waiting until something goes wrong and then figuring it out.

Grant Belgard: Yeah, it’s interesting. You know, we’ve heard for years that this is coming and certainly at this point, it’s not a barrier of price, right? I mean, getting a whole genome sequence is a pretty negligible cost in the American healthcare system these days compared to everything else, but it’s still not routine. I wonder when that will finally flip.

Sun-Gou Ji: Yeah, it's interesting. And I guess there are also questions about privacy: who owns the data, who actually gets to analyze it, and how do we make that equitable and maximize patient benefit over anything else?

Grant Belgard: Well, I guess that’s another challenge, particularly in the US healthcare system, right, is although there’s a ton of money spent, it is very fragmented from a data perspective, many different systems, et cetera, right? So that will be a challenge.

Sun-Gou Ji: This is an operational problem now rather than a technical or scientific one. And yes, there are a lot of sensitivities and issues around it, but there are pioneers trying to run these pilots across different institutes in different countries. Hopefully those will change the minds of governments.

Grant Belgard: Thank you so much for joining us. It’s been great.

Sun-Gou Ji: Thank you for having me.

The Bioinformatics CRO Podcast

Episode 77 with Ewelina Kurtys

Dr. Ewelina Kurtys, a neuroscientist at FinalSpark, discusses her experience bridging AI, neurotech, and business development in industry, and FinalSpark’s mission to build a remotely accessible platform using living neural networks as a biocomputing substrate.


Ewelina Kurtys

Ewelina Kurtys is a neuroscientist at the biocomputing startup FinalSpark, which is working to create a bioprocessor from human neural organoids.

Transcript of Episode 77: Ewelina Kurtys

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m your host, Grant Belgard. Today we’re exploring wetware computing, living neural networks as computing substrates. Our guest, Dr. Ewelina Kurtys, works with FinalSpark, a Swiss biocomputing startup building a remotely accessible neural platform where researchers run experiments on human neural organoids connected to electronics and microfluidics. Ewelina’s background spans pharmacy, biotechnology, and a neuroscience PhD with postdoctoral work in brain imaging before moving into industry and startup work, bridging AI, neurotech, and business development. We’ll cover her current work, the path that led there, and advice for anyone curious about this new frontier. Welcome to the show.

Ewelina Kurtys: Thank you so much. Very happy to be here.

Grant Belgard: So for someone hearing about wetware computing for the first time, how do you explain what you work on and why it matters?

Ewelina Kurtys: So we are trying to build computers using living neurons, the same kind we have in our heads. And the reason we do this is that neurons are about a million times more energy efficient than digital computers. We want to solve a problem that is now emerging: artificial intelligence, the silicon, digital kind, is using exponentially increasing amounts of energy. This problem is growing, and many people are searching for solutions. There are basically two ways, either alternative energy sources or alternative computing, and we are working on the second option, alternative computing. So we try to program living neurons so that in the future we can build biocomputers that have living neurons as their heart, as their processor.

Grant Belgard: When you say programming living neural networks, what does that look like in practice today?

Ewelina Kurtys: So we know that neurons produce spikes, which can be measured by electrodes as a current, and this is how neurons communicate. In the lab, we can put them on electrodes, send them electrical signals, and measure the response. You can actually see the response from the neurons in real time on our website, finalspark.com; there is a section called Live. So you can really see how it looks: the spikes, the electrical activity of the neurons. We basically send them electrical signals and measure the response, and we would like there to be a meaningful relationship between this input and output. We would like to be able to program them just by sending them signals and measuring what they answer.

Grant Belgard: So what elements of that are feasible with today’s technology and what still feels out of reach?

Ewelina Kurtys: Well, it's relatively feasible to put neurons on electrodes and to measure their activity. Let's say that is already established in science, and the technology is ready for it. But we don't know how to program neurons. We don't know how to make sense of the signals we send to them and receive from them. That's the biggest challenge currently in biocomputing.

Grant Belgard: And so is there a way to tell if a neural culture has learned something or is that still in the future?

Ewelina Kurtys: Yes, it's difficult. At the moment we do really simple experiments, the basics. For example, we just want the neurons to increase their activity or decrease the number of spikes they produce. This is the most simple task you can give to a living neuron, and it's very easy to measure. If they behave as you wanted, that means they learned something, but this is still very difficult and not fully reproducible.

Grant Belgard: And as a readout, are you focused exclusively on spikes, or do you look at other phenotypes as well?

Ewelina Kurtys: On spikes, always. And you can actually measure them in many ways. You can measure just the occurrence, yes or no; this is called a spike train, where you have a series of dots over time and every dot represents one spike. Or you can measure the shape of the signal, in which case you sample more data and get exactly how the voltage changes over time. But we really only measure electrical signals from the neurons. We can also measure some other things, for example the color of the medium, the liquid the neurons are immersed in, but that is more for monitoring.
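The occurrence-only spike-train readout described here can be sketched as follows; the spike timestamps, recording window, and bin size are made up for the example:

```python
# Sketch of an occurrence-only spike train: spike times reduced to
# per-bin yes/no marks, plus a firing rate. All numbers are illustrative.
spike_times_ms = [12, 40, 41, 95, 180, 240]   # spikes within a 250 ms window

bin_ms = 50
n_bins = 5
spike_train = [0] * n_bins
for t in spike_times_ms:
    b = min(t // bin_ms, n_bins - 1)   # which 50 ms bin the spike falls in
    spike_train[b] = 1                 # mark occurrence only, not waveform

# Mean firing rate over the window: 6 spikes in 0.25 s.
firing_rate_hz = len(spike_times_ms) / 0.25

print(spike_train)       # bins that contained at least one spike
print(firing_rate_hz)
```

Capturing the full waveform instead would mean storing sampled voltages at each time step, which is the higher-data-rate alternative mentioned above.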

Grant Belgard: How do you structure input/output and what forms of reinforcement have you found meaningful so far?

Ewelina Kurtys: The most simple reinforcement is just sending an electrical impulse, but we have also developed other methods. We know that neurons in the brain also communicate via neurotransmitters, and we try to reproduce this in our lab. So today you can stimulate neurons with dopamine to reinforce a behavior; the dopamine signal is considered a reward. We do this by chemically neutralizing the dopamine, putting it in the medium, the liquid the neurons are immersed in, and then activating it with UV light, so the cells get an immediate dose of dopamine. This is also used to communicate with the neurons and to reinforce the behavior if they do what we wanted.

Grant Belgard: What are the biggest problems you’re focused on solving right now?

Ewelina Kurtys: Yes, there are many problems. One of the big challenges is how to keep neurons alive for a very long time, because we want this biocomputer to be robust. We know from nature that neurons can live up to a hundred years, because those we have in our brains are usually the same throughout our lifetime, especially during adulthood. For now we can keep them alive on electrodes for three months, which is quite a lot by industry standards, but still not enough for what we want. But the biggest challenge is actually programming the neurons: learning how to interact with them in a meaningful way. And the biggest problem there is that nobody really knows how neurons encode information. We know quite a lot about how they produce spikes and how they process them, but we do not know what the spikes really mean.

Grant Belgard: And is all this 2D or are you looking at 3D systems?

Ewelina Kurtys: So the data is 2D, voltage versus time, but the structure of the neurons is actually three-dimensional, because we are using neurospheres. These are round structures of neurons, around half a millimeter in diameter, so yes, the neurons are quite complex. However, the electrodes are only on the surface.

Grant Belgard: How do you think about reproducibility for something like this?

Ewelina Kurtys: Well, that's quite simple: you just have to do experiments many times, and you have reproducibility if you get the same results over time. But this is very challenging, because neurons are not a stable system; they are dynamic. That means the response to the same signal can change. So this is still challenging. But every time we say we have some result, it's only because we have repeated it many times. For example, we managed to store one bit of information in neurons, which means we have done this many times. We have also done a lot of things that worked maybe once or twice, and those we don't report.

Grant Belgard: When you were starting out, you were comparing energy use and efficiency to digital systems. What’s a good apples to apples way to compare energy usage of biological neurons to artificial neural nets?

Ewelina Kurtys: Well, it's still a bit tricky to compare, but we can get some idea of neuron efficiency by looking at the human brain. Actually, everything we assume about biocomputers today is based on our observation of the human brain. We can see that the human brain runs on about 20 watts, which is quite low energy consumption. But if you wanted to reproduce the workings of the human brain with digital computing, you would need a small nuclear plant. So all these ideas about the efficiency of neurons are based on what we see in the human brain.
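As a rough back-of-the-envelope version of this comparison: the brain's roughly 20 W is the figure mentioned above, while the digital-equivalent power below is a hypothetical placeholder chosen to match the "million times" claim from earlier in the conversation, not a measured number:

```python
# Back-of-the-envelope energy comparison. The ~20 W brain figure is a
# common estimate; the digital-equivalent wattage is an assumption
# picked for illustration, not a benchmark.
brain_power_w = 20
assumed_digital_equivalent_w = 20_000_000   # ~20 MW, hypothetical

efficiency_ratio = assumed_digital_equivalent_w / brain_power_w
print(efficiency_ratio)
```

Under these assumptions the ratio comes out to a million, but the real comparison depends heavily on what digital workload you treat as "brain-equivalent".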

Grant Belgard: What milestones would convince a skeptic that wetware’s more than a curiosity?

Ewelina Kurtys: Well, it's not only for the skeptic; I think it's also for us and for everyone following the field. Our first milestone covers the two to three years after we receive investment, because we are currently considering accepting an investor; we are looking for 50 million Swiss francs, which is around 50 million dollars, more or less. For now we are self-funded, so everything can take longer, but assuming the investment, we have a tight timeline: we would like to solve the problem of learning in vitro, the problem I just described, that nobody knows how to teach neurons something or how to encode information. We would like to have a basic algorithm within three years. The next roughly three years would be for advanced algorithms, because we would like to match the performance of digital computing.

Ewelina Kurtys: And the last milestone would be scaling, because we would of course like to be able to build huge structures of neurons, much bigger than the human brain, whatever is technically possible. We assume the biocomputer will be ready in around ten years. This will be a so-called bioserver: a computer available remotely, the way you can access cloud computing today. That's the idea we have in mind. The difference is that it will be much, much cheaper. So, for example, maybe you will be able to run ChatGPT or something similar on living neurons, but much, much cheaper because of the lower energy consumption.

Grant Belgard: I’m just thinking about how you typically staff a data center and what very different skills might be required for a wetware data center, right? Your DevOps engineer role would look very different if you’re having to care for living cells. How might that look in practice from the perspective of the engineers running the data center?

Ewelina Kurtys: Well, yes, a biocomputer will need somewhat different expertise, but we hope that everything will be automated. Right now we of course do a lot of things by hand, but in the future we hope it will be a fully automated facility, and I'm sure that will happen. What you need for running a biocomputer is definitely biology knowledge: you have to know how to keep living neurons alive for a very long time. Coding for digital computers is of course important, because everything is connected to digital computers, but you need to complement this with the biological knowledge of how to keep living neurons in the proper conditions, because they are very demanding. They are very fragile as living cells, so you have to keep the temperature, pH, everything perfect for them.

Grant Belgard: Where might wetware make the earliest real world impact?

Ewelina Kurtys: So we believe in generative AI, because it's very energy consuming, and also because we believe the human brain is very good at solving complex problems and generating ideas. If you use living neurons for that, we believe it will work much better.

Grant Belgard: Definitely more efficient. What collaborations are most valuable for you at this stage?

Ewelina Kurtys: For the moment, I don't know if you can call it collaboration. We do collaborate a little with some hardware providers, because we need, for example, electrode systems for living neurons. But more important is that we give access to our lab, free or paid. Free access we give to universities: we have accepted nine universities out of 34 applications, and we prioritize those with the biggest chance to publish. We also have, which was a surprise for us, we didn't plan for this, clients who pay us a subscription for remote access to our lab, because everything in our lab can also be done remotely. You don't have to be in the lab in Switzerland.

Ewelina Kurtys: And we have this because during COVID our engineers developed this whole remote system to access the lab when they couldn't go in physically. Later we decided to use this opportunity and invite universities to collaborate. We also got a lot of requests, so we opened paid subscriptions for private clients.

Grant Belgard: That’s really interesting. Yeah.

Ewelina Kurtys: So that's very important for us, because it gives us some revenue and also some recognition, maybe appreciation of our work, because this is an emerging field. Many people still don't know about biocomputing.

Grant Belgard: What surprised you the most since you started working with neuronal cultures as computing elements?

Ewelina Kurtys: I think the most surprising thing is how difficult it is to program neurons. People have tried for many years to figure this out on many models. There are also a lot of physical models that don't use living cells, as well as models using living neurons, and still nobody knows how neurons encode information. That's amazing; it's so difficult.

Grant Belgard: What do people outside the field most often misunderstand and how do you correct it?

Ewelina Kurtys: I think what people sometimes misunderstand is they say that we build a human brain in the lab, and that's not what we do. I think this is important from an ethical perspective: we don't try to reproduce the human brain in the lab, we just use the same building blocks as the human brain, which are living neurons. That's a big difference. Because of anthropomorphic bias, people often see human traits in everything. So of course, if we use human neurons, people ask: is it conscious, can it feel? These are actually important ethical questions, although I think they are raised more by the general public than by philosophers or ethicists. This requires some real thinking from philosophers. We are always happy to get suggestions, and we hope we can use the work of philosophers to help answer these difficult questions.

Ewelina Kurtys: But it’s normal that every new technology raises some concerns and some surprise in some people. So yes, it’s important to address this, but I think philosophers can do it much better. We actually try to encourage philosophers to work on biocomputing; we have put a lot of effort into that. Last year I was at a conference in the Netherlands about ethics in technology, where we tried to reach out to philosophers who could be interested in working on these topics. And I think at this stage it doesn’t matter that we are using human neurons. We use them because they’re the easiest to produce at the moment: today you can get commercially available stem cells derived from human skin, so we can produce huge amounts of neurons quite easily. We could absolutely also use animal neurons. At this stage of the project, it doesn’t matter.

Grant Belgard: If you suddenly had a tenfold increase in stable high quality cultures, what would you do that you can’t do now?

Ewelina Kurtys: Well, we would run experiments longer, because our lab is fully automated, so we can run experiments 24/7. But because neurons usually live up to three months, you can’t really run experiments longer than that. So first, it would be easier to do long-term experiments. And second, maintaining the lab would be easier, because every time neurons die, we have to replace them. It’s quite an efficient process, but it would still be easier if we didn’t have to do it so often.

Grant Belgard: How do you think about the balance between advancing the biology, so getting higher quality, more robust cultures and pushing the tooling that you’re using, electrodes and software and so on?

Ewelina Kurtys: I think both are important, but the second one is definitely much easier. Keeping cells alive, and making sure we have… there are a lot of open questions about how to culture neurons. So the biology is, I think, much more complex. Engineering is just a matter of time and, of course, resources. We are a very limited team, just six people, so we are constrained by that. But our engineers are excellent, so building things is a matter of time. Biology, however, is not just a matter of being good or not; it’s complex, and sometimes you just have to do a lot of trial and error. So that is, I think, much more difficult.

Grant Belgard: So when did you first get interested in this interface of biology and computing?

Ewelina Kurtys: I actually did my research in neuroscience, so that was a totally different field, pure biology. But I also did research in medical imaging, because I was mainly doing brain imaging, and my first industry job was in medical imaging services, so I had a little experience there. In medical imaging, you use a lot of AI. At that time it was hyped; it was a hot topic. That’s how I learned about AI and got interested in it. I was living in London at the time, so I had the chance to attend many different events and do networking. I was also doing business development, so I was interested in connecting with people. I attended the AI Summit in London, I think in 2019, and that’s where I met the founders of FinalSpark. I got interested, because it’s not easy to combine fields, or to go outside your own field.

Ewelina Kurtys: So I thought, okay, if they’re trying to build a computer from living neurons, but they’re engineers, then that must be interesting. I decided it was a cool project, because generally I always look at the people. Every topic can be interesting or not, but on a daily basis it all depends on what kind of people you work with. I think every topic can be good; it’s mostly about the people. And what I’ve noticed is that in very deep tech research, you usually have nice people to work with. So that’s why I’m in this field.

Grant Belgard: Looking back at your own degrees in training, what experiences most uniquely shape how you approach problems in this field since this field is so multidisciplinary?

Ewelina Kurtys: Well, I have to say the PhD experience for sure, because it gives you a chance to do independent research. But also before my PhD, I did some projects, and it always depended on how much autonomy I had in the lab. I learned a lot from that, and I also gained confidence. That’s important, because I realized that I really can solve problems and that what I do works. That confidence boost is important. And then when I left academia, setting up my own company in the UK gave me a lot of experience, because I work with FinalSpark as a consultant. It’s always an adventure when you can do things by yourself, even very small things. Trying to organize your life in your own way is the best you can do, at least in my experience.

Grant Belgard: What did you learn from the business facing roles that scientists often overlook?

Ewelina Kurtys: I think the biggest lesson I learned as a scientist who left academia is that it’s not so important to be smart; what matters most is likability, that people like you. Actually, every deal you make in your life depends on whether people like you, not on whether you are smart. I think this is a very big mistake that academics especially make, because they think it’s all about technical skills and being clever. Of course, there are some thresholds you need to pass, some minimum, but all the rest is about, I would say, likability. It’s a lot about talking to people, and things usually work if you build a good connection with the clients. I think that’s extremely important.

Ewelina Kurtys: Let’s say the mental part of the work, not so much the technical part, because after a PhD the technical part is the easy part.

Grant Belgard: How do you evaluate opportunities in emerging fields with high uncertainty?

Ewelina Kurtys: Well, you mean opportunities, what are the job opportunities or opportunities for us as FinalSpark?

Grant Belgard: Either.

Ewelina Kurtys: Either. I would say the job opportunities are quite slim at the moment. So if I were an engineer thinking about biocomputing, I wouldn’t focus only on this. I would think more broadly about emerging fields, because there are a lot of things growing at the intersection of neuroscience and engineering. There’s a lot out there, not only biocomputing but also, for example, brain-computer interfaces and other areas. So I think it’s good to look at this more broadly if someone is interested in combining biology and engineering. There are a lot of projects, but if you focus only on biocomputing, it’s quite difficult, because to our knowledge there are only three companies in the world doing this, and all of them have limited resources. So yeah, it’s quite difficult to focus only on this.

Ewelina Kurtys: But if you like biology, if you are fascinated with biocomputing, you can also do something similar, like brain-computer interfaces or maybe neuromorphic computing, depending on how much engineering versus how much biology you prefer. So that’s the job opportunities. We actually get a lot of questions from interested potential coworkers, but unfortunately we aren’t hiring at the moment. Once we get an investor, we will certainly be searching for more people. And when it comes to opportunities for us as FinalSpark, I think it’s quite interesting, because when you’re working on such a deep tech project, a lot of people are interested at least to hear what you do.

Ewelina Kurtys: So that makes the work easier, I think. When we try to promote the topic, for example by reaching out to journalists or podcasters like you, it’s… “easy” is maybe not the right word, but it’s not so difficult, because the topic is interesting in itself. It shows a totally different point of view on engineering, and I think it adds value to many discussions. So it’s quite easy to promote, let’s say, if I can say so.

Grant Belgard: How do you maintain credibility while crossing disciplines?

Ewelina Kurtys: Well, do you mean myself, when I crossed disciplines from biology to engineering, or as FinalSpark?

Grant Belgard: Well, for yourself, what kind of general lessons would be in there?

Ewelina Kurtys: Okay. I would say you always have to be prepared. I said that the mental part of the work is more important, but you still have to be technically prepared. You need to really know what you do; that gives you credibility, because you can easily answer questions. I think it’s very important to know your topic inside out. As a company, I think it’s important to be transparent. That’s also why we collaborate with universities: we want them to publish, and there is already one publication from our free users. It’s very important to be transparent, so people know exactly what we have, and to be open and explain it. And scientific collaborations also help build that credibility.

Grant Belgard: For a grad student or postdoc intrigued by wetware computing, what should they learn first?

Ewelina Kurtys: It depends whether they’re coming from biology or from engineering. If they come from engineering, they should learn about biology, and if they come from biology, they should learn coding and engineering. So it depends where you’re from, but in biocomputing it’s very important to combine knowledge of biology and engineering. That’s the key.

Grant Belgard: So if someone is strong on the computing side but new to wet lab biology, what’s a realistic path for them to quickly get hands-on competence that’s relevant for this space?

Ewelina Kurtys: Oh, just read about neurons, about how they process information; even some Wikipedia articles are usually enough for a start. I also highly recommend checking our website, FinalSpark.com. We have written a lot of blog posts, and there’s also our technical paper in Frontiers. It’s the only one we’ve published, so it’s easy to find. So checking our paper and our blog articles could be interesting and helpful for a beginner, just to see what is important. Yes.

Grant Belgard: When you do raise money and start hiring, what kinds of portfolio pieces or proofs of work would you be looking for from potential applicants?

Ewelina Kurtys: Well, most of the people we will hire will be on the engineering side, though maybe there will also be some biologists. Biologists will have to have extensive experience with in vitro cell culture and working with living neurons. For engineers, not necessarily: we look at the coding. They have to be people who like to code and who also like hardware, because in biocomputing you have both hardware and software, and we are changing this all the time. And also a lot of signal processing and data science, because we try to search for patterns in the signals. That’s also very important.
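The signal processing and pattern searching she describes often starts with detecting spiking events in noisy electrode recordings. As a toy illustration only (the sampling rate, spike shape, and thresholds below are invented for the example, not FinalSpark’s actual pipeline), here is a classic spike-detection sketch: threshold a trace at a multiple of a robust noise estimate, then merge nearby crossings into events.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                                  # hypothetical sampling rate, Hz
trace = rng.normal(scale=5e-6, size=fs)      # one second of ~5 µV baseline noise

# Inject a few synthetic "spikes" (negative deflections) at known times
spike_times = [1200, 4300, 7800]
for s in spike_times:
    trace[s:s + 20] -= 50e-6 * np.exp(-np.arange(20) / 5.0)

# Robust noise estimate from the median absolute deviation, then a 5-sigma threshold
sigma = np.median(np.abs(trace)) / 0.6745
threshold = -5 * sigma

# Downward threshold crossings
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))

# Merge crossings closer than 30 samples (a crude refractory period)
events = crossings[np.insert(np.diff(crossings) > 30, 0, True)]
print(len(events))  # should recover the injected spikes
```

Real systems layer much more on top of this (spike sorting, stimulation artifacts, drift), but the threshold-on-robust-noise step is a standard starting point.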

Grant Belgard: What underrated skill is a superpower in this area?

Ewelina Kurtys: Hard to say. I don’t know. It depends on the person, because the field is so diverse, so I wouldn’t say there is one thing for everyone. Maybe if you are coding, then it’s underrated that you have to know biology, for example. But it really depends where you come from.

Grant Belgard: What red flags should candidates watch for when they’re choosing a lab or startup in this field?

Ewelina Kurtys: Oh, red flags. This is difficult. I don’t know. One thing you can look at, and this is something I’ve learned from life experience, is the people, for example the coworkers: whether they are happy and relaxed. If they are not, then you should escape, because in a nice environment people are happy and relaxed, and if they are not, that means there is some pressure and maybe not a very nice environment. I think this is important, although I have to say from my experience that it’s very difficult to tell at the beginning, during the interview. It’s very, very difficult to spot, I would say, but maybe this. And of course, when you have an interview, you are also interviewing your future employer or project.

Ewelina Kurtys: So you have to look at it that way: it’s not only them checking you, but also you checking them. Another thing I’ve heard is that in an interview, people really want you to succeed, because they want to find someone. People are usually very stressed and think that an interview is just a search for their weaknesses, but that’s not really true, because everyone wants to find a great person; everyone wants it to be successful. I heard that from a friend who is a very experienced HR manager. She always told me that people usually misunderstand this, but it’s very generic; it’s not only about this field.

Grant Belgard: If you could go back in time and give your earlier self one piece of advice, what would it be?

Ewelina Kurtys: Be more confident, because when I was young I was not confident at all. I was always afraid that I would be wrong, which is not necessary. Yes.

Grant Belgard: Where can our listeners go to learn more about you and your work and about FinalSpark?

Ewelina Kurtys: We are very active on LinkedIn; we promote ourselves there as much as we can. And of course our website, finalspark.com. On the website, you can send us a request if you are interested in the project, and we also send some reading materials, so it’s very easy to get in touch with us. We are also on Discord, which is linked on our website, and we have a newsletter you can subscribe to there as well. So there are many ways to get in touch, learn more, and join the community, which is growing very fast.

Grant Belgard: Well, Ewelina, thank you so much for joining us. This was enlightening.

Ewelina Kurtys: Thank you so much. It was a pleasure.

The Bioinformatics CRO Podcast

Episode 76 with Christopher Woelk

Christopher Woelk, an External Innovation Partner at Astellas, discusses his background in multi-omics and AI/ML and what he looks for in his current search & evaluation role embedded within therapeutic oncology research.

Christopher Woelk

Christopher Woelk is an External Innovation Partner at Astellas, which focuses on developing and supporting transformative disease therapies.

Transcript of Episode 76: Christopher Woelk

Disclaimer: Transcripts may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard and joining me today is Christopher Woelk, aka Topher, from Astellas. We’ll explore what Topher is working on now, the path that led here, and practical advice for scientists and engineers charting their own course in biotech and pharma. Topher, thanks for joining us.

Christopher Woelk: Thanks, Grant. No, great intro. Thanks for pronouncing my nickname and my last name correctly. People stumble on that all the time.

Grant Belgard: What problems are you and your immediate team focused on solving right now?

Christopher Woelk: Yeah, so right now I work, as you mentioned, for a Japanese pharma called Astellas. I’ve had a bit of a career pivot, which I’m happy to explore, from running large technical groups at biotech and pharma companies into search and evaluation and BD. In my current group, I’m embedded in the therapeutic area of oncology, not in BD, so I’m really pushing the science first. I think the real sweet spot for me at the moment is trying to find interesting startups with a platform that preferably can spit out more than one asset, plus a preclinical data package around that asset showing some evidence that the therapy will be efficacious. So I’m using that template to search my network and meet new startups, to figure out whether those assets will plug and play with Astellas programs.

Grant Belgard: What criteria do you use to triage those?

Christopher Woelk: Yeah, that’s a great question. The strategic part behind that, and again, I’m fairly new to this particular role, is coming up with a template. There are internal programs ongoing at Astellas, and we’re looking to use a template where we can find backups for those programs out in today’s ecosystem of startups, hopefully things that don’t conflict with internal programs, so things that are maybe novel. Then it’s just going through that rubric, having worked with BD and ventures arms in previous roles, and interviewing these startups: what is their problem statement? What are they actually doing to correct that problem? Why are they different from everybody else? That competitive intelligence piece, who are your competitors and why are you different from them, is a series of questions I like to work through when I’m chatting with startups.

Grant Belgard: What therapeutic areas or technology platforms do you come across in your work most often?

Christopher Woelk: Yeah, that’s a great question. I think, again, being embedded in oncology, my primary focus is oncology. Astellas is also working in ophthalmology, so I keep my ear out for those disease areas. Then in terms of platforms that I come across, really thinking about target identification, target validation, generative AI for small molecule and biologics design are all at the forefront. I think Perturb-Seq is something that I’m focused particularly on at the moment, and I know you and I have had conversations in other contexts to that regard. But building these models of the cell with Perturb-Seq, finding new targets, validating targets, finding biomarkers, I think this platform is really starting to come into its own with respect to those outputs.

Grant Belgard: What does success look like for your group over the next 6-12 months?

Christopher Woelk: Yeah, that’s a great question. So I’ve been wrestling with this a little bit because in a traditional BD role, of course, success is a transaction. So meaning that you find a company, a startup, they have an interesting platform or an asset, and there is a collaboration or a partnership, or maybe even a merger and acquisition, that type of transaction, of course, is success. I don’t have a budget myself to do transactions, and so I’m trying to figure out what success looks like to your point. And what I think it is, exactly what you brought up earlier, what template can I use to go out there and assess academics and startups? How many things can I feed in the top of that funnel? And it’s probably going to be in the hundreds so that the really good opportunities trickle down and BD gets to transact on them.

Christopher Woelk: So I think success for me is probably getting out there in the world, meeting hundreds of startups, whittling those through that filtering criteria we were talking about, and being able to trickle really high-class opportunities into BD.

Grant Belgard: What have you found to be the biggest differences in your current role versus your previous roles in R&D, and how have you adjusted to those?

Christopher Woelk: Yeah, no, that’s a great question. Just to cover it briefly, I had a whole academic career at UCSD and at the University of Southampton in the UK, really using AI/ML and multi-omics, again, to get to target ID, biomarkers, and reverse translation of the mechanism of action of drugs. Then I transitioned into industry after academia. I ran an exploratory science center for Merck and built up a systems biology group for them, then I went through a couple of startups, and I even had my own consultancy business for a while before this current role. So my old jobs were running technical groups of 15 to 20 people, really focused on things like target ID and reverse translation, as I mentioned. That involved a lot of collaborations, bringing in a lot of data, and searching through that haystack for the needle that is really going to be a promising target.

Christopher Woelk: And then shifting over to this new role, sort of a search evaluation in a therapeutic area. I mean, I think one of the reasons I got hired was I did have that technical background. And so when I’m going out in the world and talking to startups, I can actually evaluate what they’re doing from an AI-ML or technical standpoint or causal inference, multiomics, data integration, I can actually dig in and figure that out. So the commonalities, I’m still using my technical background, but I’m using it now to evaluate companies as opposed to sort of solve problems in technical groups. And that’s a lot of fun. That’s a lot of going to conferences. It’s a lot of having coffee chats with startups, and it’s a really nice social aspect of this role.

Grant Belgard: So after the triage step for potential companies of interest, what questions do you ask as you get deeper with them and how does that process typically play out?

Christopher Woelk: Yeah, typically you’ll go under CDA so that you can have those deeper dive conversations. And normally at that stage, you’re pretty excited about the science. But as you go under CDA, presumably you get access to more data that’s not publicly available. So with the scientific hat on, you start to take a deeper dive into maybe its efficacy in a mouse model or a bunch of testing across in vitro cell lines that aren’t in the public domain. And so you can continue to convince yourself that the science is good at the particular startup that you’re vetting. But also going under CDA, you can start to explore what the company is looking for. So is it a fee for service type engagement? Is it a partnership with milestones? Is it more of a collaboration where you’re both going to put things into the pot and then maybe share the data at the end?

Christopher Woelk: And so you can start exploring what the relationship might look like, and then you can also start getting information around costs. And so is the startup just asking for too much and it’s never going to fit into the budget and what we’re trying to do? Or does it look like a good fit and quite a reasonable cost? And we can get a thumbs up from BD.

Grant Belgard: Maybe we can get into some questions that may draw more on some of your experience in some prior roles. But I hope will be interesting for our listeners. So how do you turn exploratory analyses into decision enabling work to inform programs?

Christopher Woelk: Yeah, I think that’s quite a challenge. So in previous roles, I’ve really been tasked with generating multiomics data sets, figuring out where the signal is in those data sets, and delivering targets. And so that sounds relatively easy, but in terms of generating the samples, you either need to find a biobank that has what you need or you need to work with your translational medicine colleagues, spin up a clinical study, which can take years, collect the right samples in the right way to get the data that you want. And then, of course, there’s the big question, well, what data am I going to generate from my samples? Maybe the question is disease versus health or treatment versus untreated. Which omics layers am I going to look at for that particular disease?

Christopher Woelk: The MRC, the Medical Research Council in the UK, always used to ask which tissue and which modality, meaning: are you sampling from the right place, and are you sure the modality you’re going to run on these samples, in terms of omics layer, is going to give you what you want? I had the privilege in some roles where we weren’t limited by omics modality, so we ran four or five layers. I came up through transcriptomics, so I always have a slight bias toward transcriptomics, but I was often surprised in studies that the metabolomics layer, for example, had more signal. So it’s about keeping an open mind about those omics layers as you’re crunching the data in a data integration project to try to get to target ID. I spent a lot of time with my groups thinking about quality. The last thing I wanted to do was take one noisy omics layer out of five and slam it together with the others.

Christopher Woelk: If it was a really noisy layer, it’s just going to diminish the signal from the other layers. So we made sure that each individual layer was quality controlled, and if there was anything really noisy, it was better to leave it out than smush it together with all the other omics layers. And then there are all these different ways to get to target ID, right, Grant, that I think a lot of places are wrestling with. Do you build some sort of correlation network across your modalities and then query that network for health and disease? Or do you query for health and disease and then build a network to try to figure out what the biology looks like? And then, of course, we always sort of fall into this trap, which I’m going to bring up tonight.

Christopher Woelk: I’m actually teaching it at Northeastern, where we hear it all the time: correlation is not causality, right? Ice cream sales are correlated with shark attacks, but it’s not that if you eat an ice cream, you’re going to get bitten by Jaws. So really, it’s trying to figure out what types of causal methodologies, as I’m combing through these multi-omics layers, can give me confidence that a target is involved with the disease and is not just responding to the disease. In that context, I’ve always loved the genomics layer. When you have a SNP or a mutation in the DNA, that’s something that’s static and built in. If it’s in a gene that’s related to the disease, or related to some co-expressed module in the protein or transcript sphere, then you’ve got a causal indicator pointing at some interesting pathway biology in the other layers.

Christopher Woelk: So that was a long answer, but hopefully what you’re looking for.
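The QC-then-integrate workflow he describes, score each omics layer’s internal coherence, drop the noisy layers, then build a cross-layer correlation network, can be sketched in a few lines. This is a minimal illustration on simulated data; the layer names, coherence score, and thresholds are invented for the example, not any particular pipeline’s defaults.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_samples = 40
shared = rng.normal(size=n_samples)  # latent disease-related signal

# Hypothetical layers: rows = samples, columns = features
layers = {
    "transcriptomics": pd.DataFrame(
        {f"tx_{i}": shared + rng.normal(scale=0.5, size=n_samples) for i in range(5)}),
    "metabolomics": pd.DataFrame(
        {f"mb_{i}": shared + rng.normal(scale=0.5, size=n_samples) for i in range(5)}),
    "noisy_layer": pd.DataFrame(
        {f"nz_{i}": rng.normal(size=n_samples) for i in range(5)}),
}

# 1) QC: drop layers whose features barely correlate with one another
def layer_coherence(df):
    c = df.corr().abs().values
    return c[np.triu_indices_from(c, k=1)].mean()

kept = {name: df for name, df in layers.items() if layer_coherence(df) > 0.3}

# 2) Build cross-layer edges where |correlation| exceeds a threshold
edges = []
names = list(kept)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        corr = pd.concat([kept[a], kept[b]], axis=1).corr().loc[
            kept[a].columns, kept[b].columns]
        for fa in corr.index:
            for fb in corr.columns:
                if abs(corr.loc[fa, fb]) > 0.6:
                    edges.append((fa, fb, round(float(corr.loc[fa, fb]), 2)))

print(sorted(kept))   # the noisy layer is filtered out before integration
print(len(edges))     # cross-layer edges linking the coherent features
```

The point of step 1 is exactly his argument: a layer that is pure noise contributes no coherent structure, so including it only dilutes the signal the other layers share.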

Grant Belgard: Yeah, well, what are your thoughts on methodologies like Mendelian randomization, structural equation modeling, and so on?

Christopher Woelk: Yes. I mean, I’m not an expert in the genetics and genomics space. I actually had a great colleague at a previous startup who used to spend a lot of time trying to explain Mendelian randomization to me. But I like the concept of these methodologies, where you can look at the data set in different ways and get outputs. The trick is always to look across those outputs and see whether they agree with each other. If a lot of different outputs are pointing at the same pathway or the same target, then I think you’re in good shape.
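The core idea of Mendelian randomization mentioned here is that a genetic variant, which is fixed at conception and unaffected by lifestyle confounders, can serve as a natural instrument for estimating a causal effect. A minimal sketch on simulated data (all effect sizes invented for illustration) is the Wald ratio: divide the SNP-outcome association by the SNP-exposure association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
g = rng.binomial(2, 0.3, n)            # instrument: SNP allele counts (0/1/2)
u = rng.normal(size=n)                 # unobserved confounder
exposure = 0.5 * g + u + rng.normal(size=n)
outcome = 0.8 * exposure + u + rng.normal(size=n)   # true causal effect = 0.8

# Naive regression is biased upward because u drives both variables
naive = np.cov(exposure, outcome)[0, 1] / np.var(exposure, ddof=1)

# Wald ratio: (SNP-outcome slope) / (SNP-exposure slope)
beta_gx = np.cov(g, exposure)[0, 1] / np.var(g, ddof=1)
beta_gy = np.cov(g, outcome)[0, 1] / np.var(g, ddof=1)
causal_estimate = beta_gy / beta_gx    # close to the true 0.8 despite confounding

print(round(naive, 2), round(causal_estimate, 2))
```

This only works when the instrument assumptions hold (the SNP affects the outcome solely through the exposure), which is why, as he says, agreement across multiple independent methods is what builds confidence in a target.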

Grant Belgard: How does effective cross-functional collaboration look to you?

Christopher Woelk: Yeah, that’s a great question. It’s interesting. I think biology has gotten very complex, right? There was the concept of a polymath probably a century ago, where as a scientist you could be an expert in every domain, but now, even just within biology, that’s impossible. So to tackle some of these really interesting questions, you need a diverse group: clinical, commercial, your AI/ML, your software engineer, your bioinformatician, your biologist. I’ve been in several collaborations where, in bigger pharmas, these people live in different departments, so you have to bring them together cross-functionally. It’s a little bit easier at smaller companies like startups, where you’re pretty much already all on the same team because the company is only 50 people.

Christopher Woelk: And so you can bring those folks together, build the psychological safety much faster, and tackle whatever the problem is. But at the end of the day, you want to bring those cross-functions together, again, build this environment of psychological safety where everybody feels heard, there are no stupid questions. And then I found it sometimes can take up to a year before everybody’s speaking everybody else’s language because the clinicians think one way, the software engineers think another way, the biologists think a third way. And I’ve been in rooms before where I’ve seen a clinician arguing with someone from IT. They’re actually agreeing, but because their terminology is so different, they think that they’re on different sides of the argument. And so I love being in those rooms and basically guiding the conversation to show that everybody’s in agreement.

Christopher Woelk: We’re just using different semantics.

Grant Belgard: What role, if any, do foundation models or LLMs play in your work right now?

Christopher Woelk: That’s a great question. I think, yeah, LLMs are becoming fairly pervasive. In my current role in search and evaluation, I’m starting to stumble across some interesting companies that have consolidated data across clinical trials, poster abstracts from conferences on those trials, and patent information. Once they’ve pulled all that information together, being able to search across it or ask questions through an LLM-type interface is starting to look really powerful. So that’s my current role. In previous lives, I got pretty interested in foundational models. I worked with a great company called Imugene, a client of mine when I was consulting, and they had built foundational models of histology images from cancer patients.

Christopher Woelk: And to cut a long story short, what they had been able to do is normally when you get cancer, they take a sample of that tumor, and it gets sent off for sequencing to figure out which biomarkers you have. And based on that biomarker profile, it can dictate which therapy you get. And what Imugene had done is they’d gone into the software as a medical device field, and they’d used the image data along with this molecular biomarker data on a subset of patients to build a foundational model that was a neural network that could basically recognize in the image data whether someone was biomarker positive or biomarker negative. And of course, why that’s important is that cancer patient has to sit around for a month and wait for their molecular data to come back, which is a long time in a cancer patient’s life.

Christopher Woelk: And at the time, around diagnosis when these histology images are coming back, if you can make that biomarker call right there and get the patient on the right treatment, you’ve saved four weeks of them not being on a treatment, which is huge. And so that’s a place where I really thought foundational models were having a big effect and a big impact on oncology patients.

Grant Belgard: And on the flip side, where have you seen AI methods under deliver and what tends to make them succeed?

Christopher Woelk: Yeah, I think this is a fascinating space. I’ve spent a bit of time thinking about this. Again, as a consultant, I would help out with strategic plans and platform initiatives for a number of clients, and a component of that was AI. So the story I have in my head, and I’ve sort of tested this out in the real world and I think it’s holding up, is that if you rewind the clock five years and sat in a couple of C-suites at a couple of large pharmas, I think you’d get the impression from the conversation that they thought AI was going to be the silver bullet. Let’s get some AI in, whatever that is; it’s going to speed up our drug discovery pipeline, it’s going to reduce our clinical failures, and it’s magically going to increase profits and everybody’s going to win. And I think there’s been a realization that it’s not a silver bullet, right?

Christopher Woelk: People have gotten educated in this domain over the last few years. And in fact, the way that I see AI/ML, especially around the drug discovery pipeline, is a series of accelerators, so modules that you can sort of plug in and they’ll speed up a bottleneck or a particular problem in that drug discovery pipeline. And so I think we’ve had big problems in implementation. You can imagine that if AI is a silver bullet and you’re just going to apply something everywhere regardless of whether it works or not, that’s a path to failure. Whereas I think people have gotten a lot smarter about how to implement AI.

Christopher Woelk: And again, the really successful templates I see are looking at the drug discovery pipeline, identifying a bottleneck in that pipeline, having a strong problem statement, ensuring it’s a fit for an AI/ML solution, building that solution and proving that use case on that single component in the drug discovery pipeline, and then figuring out where else it applies or building other AI/ML tools to accelerate different parts of the pipeline. And then, of course, when we put all of those things together and we’re not there yet, I think we’re still several years away, but you will start to see, especially in the larger companies that have the budget to do this, the ability to accelerate drug discovery, decrease clinical trial failures, and increase profits. But I think the implementation and approach is the real change that is happening right now.

Grant Belgard: What’s overhyped and what’s underhyped in your corner of R&D right now?

Christopher Woelk: Yes. That is a good question. I don’t know where you think we are on that hype cycle curve, but I feel like everything was overhyped, again, a couple of years ago. I feel like we’ve come down the backslope and we’re in that little valley of death. We’re coming up the other side.

Grant Belgard: Trough of disillusionment.

Christopher Woelk: Is that what it is? Yeah. Valley of death might be a little dramatic, but we’re coming up that slope where the hard work begins and these things might actually work. So I’ve been in meetings before where we’ve been trying to build an infrastructure to handle multi-omics data. And we start talking about patient privacy. We start talking about homogenizing across different array platforms for calling SNPs. And someone comes along with a sticky note with AI written on it, sticking it on every problem that we have, saying it’s going to fix that. So the danger, the hype, is what we were talking about earlier, that AI is going to fix absolutely every problem. I don’t think that’s true. I think there are problems that are suitable and problems that aren’t. So we move away from that fix-all hype to what’s the specific problem and what is the solution.

Christopher Woelk: And the solution just might be a database as opposed to a whole AI ML approach. But really finding those good use cases, I think, is important.

Grant Belgard: And a question that’s especially topical in light of the continued financing troubles in biotech. How do you keep institutional knowledge from getting lost, especially in the context of layoffs, downsizing, restructuring, et cetera?

Christopher Woelk: Yeah. So that’s a fascinating question. And I’ve actually wrestled with that question and tried to run projects in that space before. So you’re referring to knowledge loss. So what is knowledge loss? You’re right. It’s when somebody leaves a company and they take critical pieces of information with them in their head. And you can no longer do that thing because that person has left. And I used to think about how, especially in our field, again, over the last few years, there seems to be this two-, three-, four-year cycle of companies going boom and bust, or of people moving to get a better position at a different company. And that’s in stark contrast to the pharma companies of old, where people would go and spend their careers. They would work there for 25 or 30 years. And so if you’re in that environment, there is no knowledge loss.

Christopher Woelk: And you just go down the corridor and you ask the subject matter expert and you get your answer. But in the current landscape, where people are cycling every three or four years, you’ve got to really think about how you mitigate that knowledge loss. So one of the things that I did at a company is we built what’s akin to a Stack Overflow system, where anybody across the company could answer a question. And then the answer that was the best got upvoted and locked in as the correct answer to that particular question. And then as that data accumulated, you could start moving it into wikis and information pages at the company. And so again, I really found that those types of initiatives helped capture people’s knowledge that was in their heads, getting it into a database that was searchable, so that when those people left, you could still find the answer to that question.
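The upvote-and-lock workflow described here can be sketched in a few lines of Python; all class names, method names, and the example question are hypothetical, not taken from the actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    author: str
    text: str
    votes: int = 0

@dataclass
class Question:
    title: str
    answers: list = field(default_factory=list)
    accepted: Answer = None

    def add_answer(self, author, text):
        # Anyone across the company can contribute an answer.
        a = Answer(author, text)
        self.answers.append(a)
        return a

    def upvote(self, answer):
        answer.votes += 1

    def lock_best(self):
        # Accept the highest-voted answer as the canonical one.
        self.accepted = max(self.answers, key=lambda a: a.votes)
        return self.accepted

q = Question("How do we normalize across the two SNP array platforms?")
a1 = q.add_answer("alice", "Use the shared probe subset and quantile normalize.")
a2 = q.add_answer("bob", "Re-cluster genotypes jointly before merging.")
for _ in range(3):
    q.upvote(a1)
q.upvote(a2)
print(q.lock_best().author)  # the highest-voted answer wins
```

The searchable record is the point: once an answer is locked in as accepted, it persists even after its author leaves.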

Grant Belgard: What first drew you into computational biology and translational questions?

Christopher Woelk: Oh, that’s a great question. I think the honest answer is I was horrible on the bench. So I think this goes all the way back to my undergrad. I did a biochemistry and genetics degree at the University of Nottingham in the UK. And we had organic chemistry. We had biology labs, waiting for things to change color or to stop spinning or centrifuging. I enjoyed the coffee breaks, but I was always frustrated at how long things took. And so when I was at Nottingham, I did my third-year project in an evolutionary lab under a gentleman called Paul Sharp. And I realized pulling down sequence data, aligning it, drawing family trees of bacterial families, at this stage, it was all quite immediate. You could write code. You could run the software. I could get my answer in a day as opposed to several weeks.

Christopher Woelk: And I guess that speaks to me being quite an impatient person that opened up a whole world of computational biology for me.

Grant Belgard: What career move changed the way you think about drug discovery the most?

Christopher Woelk: Yeah, I think it’s that academic to industry transition. So I love my academic career. I did a lot of great projects. I was part of clinical studies. But I think in academia, and it’s understandable, people haven’t been inside a pharma company, so they don’t fully understand the drug discovery pipeline and all the steps and all the types of data and all the checkpoints that are required. And so when I moved into Merck, it’s a different language. It’s a different way of operating. It took me about a year to really understand the vocabulary and all the checkpoints and how a target gets all the way through to become a drug. And so that was a big transition for me. But then I really enjoyed it because you’re moving away from sort of the theoretical in academia to the real practical in industry.

Grant Belgard: What did you keep doing the same across these different environments and sectors, and what did you have to relearn in those key transitions?

Christopher Woelk: Yeah, that’s a good question. I think it really goes back to this concept of building happy groups and psychological safety. So in academia, my groups were like extended family. They’d come over for Thanksgiving. We’d go out for meals. It was a very close-knit group. And so when I moved into industry, I recreated that. And it works well with small groups, I think 10 or 15 people. I think it’s hard if you’re managing a group of 50 or 100. But I really enjoyed taking that personal element into industry and building those tight-knit groups and forging those relationships with my colleagues. And I found that when groups are happy, they’re very productive. When they’re having fun, they’re very, very productive. And so I like that part. It’s much more effective than going in and screaming at everyone every day to do their job.

Christopher Woelk: So I’ve always tried to maintain that through the jobs that I’ve had.

Grant Belgard: When you’ve considered new roles, what signals told you a team or culture would be a good fit?

Christopher Woelk: Oh, yeah, that’s another good question. I think, yeah, so my approach to interviewing, hopefully this will get at your question, is, of course, asking the same question to many different people. And if I get the same answer, that tells me that that team or that group is all on the same page and the objectives are clear. If I ask the same question, I start getting vastly different answers, especially from people in leadership. That tells me that team is not on the same page and that that’s a bit of a red flag and I need to be careful.

Grant Belgard: Interesting. Just to note, that was the same kind of answer we got from the NASA engineer turned organizational culture expert I was telling you about before we hit record. Whenever he goes in to assess an organization, that’s the first thing he does: ask the same questions to people across the organization, and particularly look for differences between the leadership and the people on the ground.

Christopher Woelk: Yes, yeah, because ultimately, if the objectives aren’t clear from top to bottom, then you’re not going to be an effective organization. But now you’ve got me thinking I might have missed a career on the space frontier at NASA, but we’ll leave that for another day.

Grant Belgard: What kinds of challenges have you found consistently energizing?

Christopher Woelk: Yeah, that’s a good question. So I think I am quite challenge orientated. So often I’ve been told, you know, you can’t get an NIH R01 before the age of 45, you’ll never become a full professor. These are sort of personal challenges that I’ve come across. I think from a scientific aspect, what I find quite motivating are these really complex questions. Like, again, we’ve generated five layers of multi-omics data in a longitudinal study, and we want to understand the mechanism of vaccine response. How do you put all those layers together across time in order to answer that question? And I find that motivating because it’s complicated. There is a literature that needs to be dived into to figure out what the solutions are.

Christopher Woelk: There are teams that need to be brought together to brainstorm where the gaps in existing solutions are and what we would do differently. There’s a strategic plan and an operational plan that needs to be pulled together to get that analysis done. And at the end of the day, there are results that start falling out of these studies. Some of them are what is already known, but especially when you hit those novel nuggets that people haven’t discovered before, I find that very motivating.

Grant Belgard: Who shaped your approach to science or leadership and what did you take from them?

Christopher Woelk: Yes, so there’s been a few people, quite a few great mentors over the years. I mean, I can go all the way back to high school biology. I had a great biology teacher, Mr. Williams, at a boarding school in the UK, who really excited me about biology and set me on a biology path. My PhD supervisor is a gentleman called Eddie Holmes, who’s down in Australia these days, but I met him at Oxford University, and he really taught me about managing groups. In an Oxford academic group, there were some very different personalities and traits, and I noticed what he would do is, he didn’t have one management style. He would adapt his management style to each individual to get them what they needed. And I always took that away with me in groups that I managed: really trying not to force my style on everyone, but to adapt to what each individual needed.

Christopher Woelk: And then I had another great mentor at UCSD, Douglas Richman. He really helped characterize HIV resistance and how to get over resistance with combination therapies. But he was a great academic mentor and sort of taught me about the HIV world and how to climb the academic ladder. And then transitioning into industry, there’s a wonderful scientist called Daria Hazuda, who was my boss when I was at the Exploratory Science Center, and she really helped me understand how industry functioned and educated me on the industry side.

Grant Belgard: What has changed most about the field since you started?

Christopher Woelk: Yeah, that’s great. So I started, you’re going to date me now, I started as a postdoc at UCSD in 2002, when U95A Version 2 Affymetrix arrays were in vogue, the latest array type. And so, again, I think sequencing technology has really opened up a lot of biology that we didn’t have, especially in the transcript arena. And then watching the Human Genome Project kick off, watching Craig Venter lambast academia that we should do this faster and better, and then proving that you could by parallelizing sequencers, seeing sequence technology get better and better. You know, I don’t know what the dollar amount is on a genome now, but it’s a lot less than back in the early 2000s.

Christopher Woelk: I think the, just the amount, the technology and the amount of data that we can get out of a human sample these days provides an incredible microscope to look at disease that we haven’t had before when I started my career.

Grant Belgard: Looking back, what did you underestimate about working at the interface of computational biology?

Christopher Woelk: Yeah, that’s a good question. I think you’re reminding me of a conversation I had with a machine learner at Southampton, [?]. And so it’s basically around this concept of trusting the data that you’re given and not being more curious and exploratory around it. So it’s a very specific answer to your question. If you looked at the old Affymetrix array data for expression analysis, it came with 14 decimal points. And so [Neurangin?] sat me down one day and said, is this data accurate to 14 decimal points? And I said, what do you mean? And he goes, do we need them? And I said, well, of course we need them. It’s the data, it’s coming off the machine. And he goes, well, let me show you something.

Christopher Woelk: And he’d binarized the data, basically zeros and ones, and showed that he could get the same answer that I did when I was using 14 decimal points. And so it’s just this concept of, that was a surprise to me, right? Oh, okay, there’s different ways to look at this data. I should be more curious about these 14 decimal points. And it always stuck in my head that he educated me that just because it’s coming off the machine doesn’t mean it’s useful.
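The binarization lesson can be illustrated with a small synthetic experiment; the data, group structure, and nearest-neighbor check below are our own illustrative assumptions, not the analysis described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 20 samples (10 per group), 500 genes, where each group
# has its own block of 50 up-regulated genes plus Gaussian noise.
n_genes, n_per_group = 500, 10
truth = np.array([0] * n_per_group + [1] * n_per_group)
expr = rng.normal(8.0, 1.0, size=(n_genes, 2 * n_per_group))
expr[:50, :n_per_group] += 3.0     # group-0 signature genes
expr[50:100, n_per_group:] += 3.0  # group-1 signature genes

def nearest_neighbor_labels(X):
    """Label each sample with the true group of its most correlated neighbor."""
    C = np.corrcoef(X.T)          # sample-by-sample correlation matrix
    np.fill_diagonal(C, -np.inf)  # exclude self-matches
    return truth[C.argmax(axis=1)]

# Binarize each gene at its median across samples: pure zeros and ones.
binary = (expr > np.median(expr, axis=1, keepdims=True)).astype(float)

labels_full = nearest_neighbor_labels(expr)
labels_binary = nearest_neighbor_labels(binary)

# Both representations recover the same group structure.
print((labels_full == truth).all(), (labels_binary == truth).all())
```

Because the group signal lives in which genes are high rather than in their precise values, thresholding each gene at its median loses almost nothing for this kind of question.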

Grant Belgard: For someone just finishing a degree or fellowship, what skills would you prioritize in their first year on the job?

Christopher Woelk: Yeah, I think that’s another good question. It’s an interesting landscape right now. So my girls are 16, they’re heading into college in a couple of years, and they’re saying this generation is going to change jobs six or seven times in their lifetime. So, you know, I used to hate this phrase, thriving in ambiguity, but really getting used to change, right? Because it’s coming, with all the sort of AI impact, greater efficiencies, increased technologies. I think you’re going to have to be very flexible in your career. And then I went to a career advice workshop when I was an undergrad, and the gentleman got on stage and said, don’t stress too much about where you are today starting your career, because when you finish your career, you’re going to be in a completely different place. And that didn’t appeal to me at the time at all.

Christopher Woelk: I thought he was speaking rubbish, but as I’ve looked at my own career, that’s exactly what has happened is that, you know, where you start and where you end up, I started, you know, in a very technical field, now I’m in sort of more of a research and evaluation role. And just being able to sort of go with the flow of that career and make sure that you’re always curious and you’re always doing something that you find interesting would be really rewarding.

Grant Belgard: How can scientists tell whether management is a good next step for them?

Christopher Woelk: Yes, I’ve had this conversation dozens of times in my career too, because there are these three tracks, right? There’s the management track, there’s the independent contributor track, and then there’s sort of a middle track where you’re an independent contributor, but you have a couple of reports. And I can tell you what really helped formulate my thinking in this space was that WorkLife podcast series by Adam Grant. Is it Adam Grant? Yeah, I think it is. And he’s like this workplace psychologist at Harvard who sort of gets out into groups and really tries to understand what makes innovative groups tick. But he has a particular episode exactly on your question of am I management or am I independent contributor?

Christopher Woelk: And the problem is that the management track is often the one that everybody thinks they should be going down, because it seems to come with these titles and salaries and increased responsibility, but it’s not a good fit for everyone. So there are cases where people leaped into the management track, they’re absolutely miserable, and then they end up in the independent contributor track. And so I think what you really need to do is sit down with a mentor, or sit down with a whiteboard, and try to figure out the things that really motivate you. Do you like coding? Do you like working directly on the data? Do you like solving problems? That feels more independent contributor. Versus, do you like mentoring people? Do you like helping other people solve their problems? That feels slightly more like going down that management track.

Christopher Woelk: And I think that, you know, to one of your earlier questions about how do you assess companies or organizations, this is another thing that you can do as you’re looking to onboard at a company. You know, what is their management track and what is their independent contributor track? And do they have an independent contributor track that has senior positions that are equivalent in status and in salary to the management track? And if that’s the case, then that company’s really thought about valuing both managers and independent contributors in a way that I would wanna work at that company.

Grant Belgard: What signs suggest it’s time to change roles?

Christopher Woelk: Yes, there’s a rubric that I worked through for that. I’ve worked through it with myself and I’ve worked through it with mentees. And again, it came from this gentleman, Adam Grant. So I do encourage you to listen to that. The first season of that podcast is fantastic. So it’s voice, loyalty and alternatives. And so if I’m at a job and there’s a problem or something that needs fixing, then the first thing to do is use my voice, right? So I highlight the problem, I talk to people, I try to make the change by following sort of change management procedures and speaking up. Now that doesn’t always work. Sometimes you’re ignored. And so then you move on to this loyalty bucket. So you’re at a company: are you still loyal to the mission of the company? Are you still loyal to the objectives? Are you still loyal to the people that you work with and that team? And maybe all of that still feels really strong.

Christopher Woelk: But if those loyalties start to get frayed, then I think you start looking at alternatives and those alternatives of course are, what else can I do with my skillset? Can I find a similar role at a company elsewhere? Could I find a different role with my skillset? And then you start exploring those alternatives. But I just found that quite a useful rubric, the voice loyalty alternative. You can work through that and it helps you sort of relax through a very stressful process.

Grant Belgard: What books, papers or resources would you suggest to someone entering this space today?

Christopher Woelk: That’s a good question. I think, again, scientifically, everybody’s pretty familiar with downloading and reading papers, staying up with the research. The thing, at least with my old manager hat on, that’s been harder to teach is around soft skills. And so what I’ve often done is, as I see people in my groups that could be going down that management track, or who are just really talented independent contributors, there’s some literature around soft skills that I’ll give them. So I used to give out a book called The One Minute Manager, which is a great quick read. And the take-home message is one minute objective setting. Everybody should know the objectives. There’s one minute praising. When people do something right, you should tell them they’re doing something right.

Christopher Woelk: And then one minute course corrections: don’t wait for things to go completely off the rails, but get people back on track early on when you see problems. And that’s just a nice little template to run a group. I’ve transitioned recently, again sticking with soft skills, to a book by a friend of mine called Gwen Acton. I think it’s Leadership for Scientists and Engineers. And it’s a very comprehensive manual explaining the soft skills that are needed in STEM to be successful. She’s got some great examples and role-playing exercises in that book, and then a series of things that you can do when you find yourself in certain situations. And so I’ll often give that book out as well. But to wrap up the answer to this question, giving out these types of materials to really help people develop their soft skills is something that I’ve found really important.

Grant Belgard: And last but not least, if you could go back and give just one piece of advice to your younger self, what would it be and why?

Christopher Woelk: Oh, wow. Yeah, I think there’s this phrase, this too shall pass. And so there’ve been fairly stressful parts of my career, trying to get grant funding, transitioning jobs in industry. And it feels sometimes like these periods are never gonna end, but this too shall pass. Hang in there, get the work done, try and show some strong deliveries and ultimately you’ll find yourself in a more productive place.

Grant Belgard: Great, Topher, thank you so much for joining us.

Christopher Woelk: Oh, it was my pleasure, great questions. You had me thinking there.

The Bioinformatics CRO Podcast

Episode 75 with Chris Yohn

Chris Yohn, leader of CompBio Bridge, discusses his current experience with computational biology contracting and consulting, what companies are doing with computational biology right now, and how to most effectively bridge the gap between data science and the wet lab. 

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Chris Yohn

Dr. Chris Yohn is a computational biologist who currently leads CompBio Bridge, which provides a fractional strategy and management practice to help biotech teams bridge data science with the wet lab.

Transcript of Episode 75: Chris Yohn

Disclaimer: Transcripts may contain errors.

Grant Belgard: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard. Today we’re joined by Dr. Chris Yohn, a biotechnology leader and computational biologist. He currently leads CompBio Bridge, a fractional strategy and management practice that helps biotech teams bridge data science with the wet lab. Previously, he headed computational biology at TRexBio and held discovery leadership roles at Unity Biotechnology, with earlier industry experience spanning platform buildouts and translational programs. He trained at Scripps Research and later completed postdoctoral work at the Skirball Institute in New York. Chris, welcome.

Chris Yohn: Thanks, Grant. It’s great to be here.

Grant Belgard: How do you describe the work you’re focused on right now?

Chris Yohn: So currently, I do computational biology contracting and consulting. Think of it as a fractional head of computational biology, typically for small companies that maybe can’t afford or aren’t ready to bring on a full-time head of Comp Bio.

Grant Belgard: What kinds of problems are showing up most often in your engagements?

Chris Yohn: I’d say there’s probably three main categories. First is early target identification and validation. Then, of course, once you have a program, there’s translational informatics. So in that I would include things like mechanism of action studies, biomarker selection and discovery, indication selection, even some tox flags that you might be able to point out for a program that’s headed towards the clinic. The third category that comes up pretty frequently, and that I think is important, is research informatics. So this is really, essentially, managing your data: making sure you capture your data well, and that once you capture it, you can use it and visualize it.

Grant Belgard: That’s been fun this week with the AWS outages.

Chris Yohn: Yeah, for sure. Yeah.

Grant Belgard: We’re recording this a good while before it comes out, just for our listeners. AWS hopefully did not go down the week you’re listening to this. So when a new group asks for help, what do you listen for in the first 15 minutes?

Chris Yohn: You know, so my original training is in molecular and cell biology. So, you know, I’m a biologist at heart. So really what I’m thinking about are what are the key biological questions that need to be answered? What’s going to help advance the company? What’s going to advance the programs they’re working on? What’s going to hit their goals? So what is the biology that’s underlying it and what are the questions that they need to really address for that?

Grant Belgard: And what does success look like in a typical project? How do you measure it?

Chris Yohn: Maybe it’s easiest if I give a couple of quick examples. So one company I’m working with, I’m helping them with some mechanism of action studies. And in this particular case, this is not typical for a lot of companies, but one of their major goals for this study is publication. You might think of that more for academics, but sometimes companies have that goal, too. So that’s a pretty concrete goal and metric that we can use. If the study helps lead to a publication, then that’s success. Another example is I’m working with a group basically to figure out, is there a company here? So actually, the company hasn’t even been formed yet. Is there enough here to actually get something off the ground? So in that case, I guess getting the company started would be the measure of success.

Chris Yohn: And frankly, you know, I think in that case, making a decision not to start the company could be just as good an outcome, right? So that’s a good decision, too.

Grant Belgard: Right. You have to know where to allocate resources.

Chris Yohn: That’s right.

Grant Belgard: So where do you see the biggest disconnects between data science and the bench today?

Chris Yohn: You know, many, including myself at some of my previous companies, would talk about this sort of design-build-test loop that really helps, once you get data, bring it back into your modeling. Unfortunately, in many cases, it’s not always a loop. It’s kind of a one-way trip, right? And I think that’s where we see some disconnects. The vision is there, but sometimes the execution to bring the data back into your modeling doesn’t always happen.

Grant Belgard: If you had to pick one capability that most accelerates discovery for your clients, what is it and why?

Chris Yohn: You know, this might be a little bit related to the last question, and I’m not going to pick a technical capability. I’m going to say communication. You know, I kind of consider myself because I’ve had a pretty diverse background. I call myself a multilingual scientist. I’ve worked in a lot of different areas, and because of that, I’m able to really translate between different disciplines. And I think that’s what could really accelerate discovery is that if you can increase communication, help different groups really understand each other and understand what they’re capable of, what their needs and goals are. And then how to move forward with that. I think that’s really what can help discovery move forward quickly.

Grant Belgard: When timelines are tight, how do you choose between depth of analysis and speed to decision?

Chris Yohn: You know, this is probably a common theme for our talk. You know, I really always go back to what are the key questions? Like, you really have to understand what’s the question that’s going to advance your program? What’s the question that when you get the answer, you’re going to make a decision based on it? And so if you can define what that key question is, then you go deep on that and you really dig in on that question. And kind of others that maybe are interesting but aren’t going to help you move forward fall by the wayside. At least when time and money is tight, you’ve got to do that.

Grant Belgard: What’s your framework for deciding build or buy?

Chris Yohn: I always lean towards buy, frankly. I want to rely on people who focus on building things, you know, focus on your expertise. Again, I’m going to focus on the biological questions, and if I need tools for that, I want to find somebody who focuses on building that tool and then use it, as opposed to trying to make it myself. Plus, frankly, software engineers are pretty expensive. So if you don’t really need to bring that capability in-house, then I’d rather rely on someone else who’s putting all their energy and effort into building a tool that I can then make use of.

Grant Belgard: Where do you see multi-omic analyses and single-cell or spatial data actually changing decisions?

Chris Yohn: Yeah, you know, sometimes you do see where it’s not peripheral, but it’s just not core to really making things move forward. You know, I’ve seen a few. I helped build a target identification platform based on primarily single-cell data and we use that for some of our translational work, but really to have a big impact, it’s got to be really baked into the core approach of what you’re doing. It can’t be kind of an add-on. I do think that, you know, one place, especially as you move towards translation and getting things closer to the clinic, that you can have a couple of places you can have a big impact there is in certainly mechanisms of action studies, right? That’s going to really get you a lot more insight.

Chris Yohn: And then perhaps I think we’re starting to see a little bit of traction even in biomarkers where people are starting to bring more multi-omics technology later into the clinic and I think that’s going to start to really help us with really understanding both markers that we can use for things like pharmacodynamics and outputs as well as hopefully eventually even like, you know, patient selection and stratification down the road.

Grant Belgard: How do you approach data readiness, metadata, QC and so on?

Chris Yohn: I think you really want to start with consistent, you know, semantics. You know, make sure your IDs, ontologies are all kind of in place. Make sure all parties both on the wet lab side and the dry side really agree ahead of time. And then, you know, I think including biological QC in addition to sort of statistical QC of your experiments, I think is important, like did the experiment even work, right? An example is recently I was working with a company and they did this in vivo experiment where we were doing, you know, some omics readouts on it and we were looking at the data and let’s just say we didn’t see the effect we expected. Some cases we did, so there was like some old and young animals and you could definitely see differences there, but they had a compound treatment and they just didn’t see anything.

Chris Yohn: And so I went back and we talked about the experiment and unfortunately in that case they didn’t have any biological readout from the animals that we used for that study. So we didn’t know like did they see the effect they normally would see with their drug? Maybe somebody misdosed them, maybe like somebody left the drug out on the bench the night before and it was no longer effective and we just had no information. So having that biological QC would have made a huge difference for that experiment.
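The biological QC Chris describes, confirming a known effect before interpreting the interesting contrast, can be sketched as a simple pre-analysis gate. Everything here (function name, values, the effect-size threshold) is hypothetical, and a real study would use a proper statistical test:

```python
from statistics import mean, variance

def positive_control_check(control_vals, treated_vals, min_effect=1.0):
    """Gate downstream analysis on a known biological effect.

    Before interpreting the contrast you care about (e.g. drug vs
    vehicle), confirm the positive control you *know* should work
    (e.g. old vs young animals) actually shows up in the data.
    Uses a crude Cohen's-d-style effect size; threshold is arbitrary.
    """
    effect = mean(treated_vals) - mean(control_vals)
    pooled_sd = ((variance(control_vals) + variance(treated_vals)) / 2) ** 0.5
    return abs(effect / pooled_sd) >= min_effect

# Hypothetical readout: the age effect is known to be real, so it
# should pass QC; if it doesn't, don't interpret the treatment arm.
young = [1.0, 1.2, 0.9, 1.1]
old = [2.1, 2.4, 2.0, 2.3]
if not positive_control_check(young, old):
    raise RuntimeError("Positive control failed: do not interpret treatment contrast")
```

Had the study above recorded even one such readout from the dosed animals, the mis-dosing scenarios Chris lists could have been ruled in or out.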

Grant Belgard: Yeah, that happens far too often, and oftentimes, you know, people like you aren’t brought in until after the experiment’s been run, right?

Chris Yohn: Exactly. I mean, that’s a huge point, right? I think that being involved early on as a computational biologist in experimental design is so important. And, you know, not to go off on a tangent here, but I think most computational biologists and bioinformaticians have experienced someone coming to them, giving them a pile of data and asking the question, what does it say, right? And that’s like the worst experience, I think. So, yeah, definitely getting involved early is critical.

Grant Belgard: Especially when it’s multimodal data, right?

Chris Yohn: Yes, even worse.

Grant Belgard: It says many things.

Chris Yohn: Yes, that’s right.

Grant Belgard: How do you pick evaluation metrics that matter to the biology?

Chris Yohn: You know, it has to fit the biology and the question and what the next testing step is. Like, you want to make sure that you’re getting an answer that’s going to help you make a decision. And also, make sure your level of information fits your question. So, for example, let’s say we’re picking some targets and you have a screening platform you want to put the targets into, and you can fit, you know, maybe 20 things into your screening platform. What you want is what are the top 20, right? You don’t really care about the relative order of numbers two, three and four. You just want to know, am I accurately getting the top 20? So, designing your experiment so that you get that answer, and not what is two versus three, is important.
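The top-20 framing Chris describes can be sketched as a top-k recall metric that deliberately ignores rank order within the top k. The scores and gene names below are hypothetical stand-ins for a validated ranking and a model's predicted ranking:

```python
def top_k_recall(true_scores, predicted_scores, k=20):
    """Fraction of the true top-k items that the predicted top-k recovers.

    For a screening platform with k slots, all that matters is whether
    the right k targets make it in; whether a target is ranked #2 or #4
    is irrelevant, so only set overlap is scored.
    """
    true_top = set(sorted(true_scores, key=true_scores.get, reverse=True)[:k])
    pred_top = set(sorted(predicted_scores, key=predicted_scores.get, reverse=True)[:k])
    return len(true_top & pred_top) / k

# Toy example: 4 candidate targets, a 2-slot screen.
truth = {"geneA": 0.9, "geneB": 0.8, "geneC": 0.3, "geneD": 0.1}
preds = {"geneA": 0.7, "geneC": 0.6, "geneB": 0.2, "geneD": 0.1}
print(top_k_recall(truth, preds, k=2))  # geneA recovered, geneB missed -> 0.5
```

A rank-correlation metric over the full list would penalize exactly the within-top-k reshuffling that this decision doesn't care about.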

Grant Belgard: What’s your process for closing the loop, turning predictions into testable decision-relevant hypotheses?

Chris Yohn: I think it’s kind of related to the last question, you know, about making sure that you fit the experiment to the biology. I think also really important here is making sure you have a really good collaboration between the wet and dry side. You need to have buy-in ahead of time that you’re going to be able to test the predictions. You know, as computational biologists, almost everything we do is just a prediction, right? And in order to really show that this is truth, you need to go into the lab most of the time to prove it out. So making sure that that’s in place ahead of time, I think, is important.

Grant Belgard: In translational settings, what’s the most underrated biomarker characteristic to pressure test early?

Chris Yohn: For that, I would say one thing that I’ve seen is donor or patient variability. Often, especially when you’re doing multi-omics experiments early on, it’s hard to get a large N for your study. And you may not have fully looked at the amount of variability that you might be seeing once you move forward into a clinical setting. So as much as you can, pay attention to donor and patient variability, and maybe do follow-on experiments with larger numbers where you home in on a particular set of biomarkers or assays, versus, you know, early discovery or kind of bigger experiments with smaller N. But that’s definitely something that I think you really have to pay attention to.

Grant Belgard: I totally agree. How do you keep analyses reproducible without slowing teams?

Chris Yohn: That’s a tough one. You know, usually, you know, I’ve always been at small companies and, you know, you’re always moving fast. And I think one of the things that, you know, we talked about at one of the companies I worked at previously was everybody has to eat their vegetables, meaning that, you know, everybody wants to like do sort of the quote unquote fun analysis where you get to the interesting biological result. But in order to get there, you need to have like, you know, the infrastructure and the process in place. And so we used to say everybody has to eat their vegetables. Everybody has to do some of that as well as sort of more fun analysis. So spreading it out, I think, helps.

Grant Belgard: So on that note, what are your thoughts on, you know, the recent rise of bioinformatics agents? Because I have to say one concern I have is that a lot of the vegetable eating is skipped to some extent, right? So there may be confounds in how the data was produced that, you know, if you’re going through it properly eating your vegetables, you know, looking for all those things, you catch that early. And otherwise, you might get some really nice volcano plot, but it might be nonsense.

Chris Yohn: Yeah, yeah. No, I think it’s a great point. And, you know, I think it’s important to understand the fundamentals. And unfortunately, you know, some AI approaches are going to enable people to skip that. I even think back to like when I was working in the lab and a new cool kit would come out for, you know, doing some process, even, you know, like simple things like mini preps or whatever. And when I was in grad school, my advisor forced us to kind of do it the old school way first so that we really understood the process. And then you could go to like the fancy kit that did it really quick and fast and with simple steps. So I think the same thing applies here. Like I would hope that as we’re training people that we continue to make sure people understand the fundamentals before they jump to sort of the quick and easy path. It’s great to have those. Like I’m not discounting them, right?

Chris Yohn: Like I use them. And but I think knowing the fundamentals and how it actually works under the hood is key.

Grant Belgard: How do you handle batch effects and confounders when experiments are multisite or longitudinal?

Chris Yohn: That’s a tough one. I mean, it’s the one thing that, you know, kind of hits anybody who does these kinds of analyses. I think this also gets to what we touched on earlier about being involved in experimental design, because if you were involved in the experimental design, then you can help to try to minimize those variables as much as possible. And the other thing is, I think you need to make sure, as you’re looking at the data, that you model both technical variance and biological variance, and keep them distinct so that you can, as much as possible, understand where the variance is coming from. And then if it’s biological, you can start to understand what your biological questions are. I mean, I don’t have a great solution, right? That’s a tough one. And I think everybody struggles with that.

Chris Yohn: So I don’t know if you have any like magic wand that you’ve used that you can help me and your listeners to deal with this.

Grant Belgard: Yeah, I mean, it’s a question we get a lot. And unfortunately, if it’s not baked into the design from the get go, it can be very difficult to do well. I mean, of course, there are approaches to try to mitigate it, but they introduce their own artifacts, right? Unless you have proper controls run everywhere. And ideally, you know, you’re not changing your array midstream or something, right? That causes huge problems. You can do things to try to get around it, but they’re going to be far from perfect.

Chris Yohn: Yeah, yeah. I mean, that’s a good point, too, right? It’s really making sure that you pick the right platform and approach at the beginning, so that you don’t realize halfway through that, oh, this is not really fitting my needs and you have to switch to something else. And obviously that throws in a whole other set of issues around batch. So, yeah.
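The technical-versus-biological variance split Chris mentions can be illustrated with a crude variance-decomposition check: group the samples once by batch label and once by biological label, and ask how much of the total variance each grouping explains. A toy sketch with hypothetical numbers, not a substitute for a proper mixed model:

```python
from statistics import mean

def variance_explained(values, labels):
    """Share of total variance captured by grouping samples on `labels`
    (between-group sum of squares / total sum of squares)."""
    grand = mean(values)
    total_ss = sum((v - grand) ** 2 for v in values)
    between_ss = 0.0
    for g in set(labels):
        member_vals = [v for v, lab in zip(values, labels) if lab == g]
        between_ss += len(member_vals) * (mean(member_vals) - grand) ** 2
    return between_ss / total_ss

# One gene's expression across 8 samples: 2 batches x 2 conditions (hypothetical).
expr      = [1.0, 1.1, 3.0, 3.1, 1.4, 1.5, 3.4, 3.5]
batch     = ["b1", "b1", "b1", "b1", "b2", "b2", "b2", "b2"]
condition = ["ctl", "ctl", "trt", "trt", "ctl", "ctl", "trt", "trt"]
print(variance_explained(expr, condition))  # most variance is biological here
print(variance_explained(expr, batch))      # small batch shift
```

Note this only works because condition and batch are not confounded in the design; if every treated sample sat in one batch, the two shares would be inseparable, which is exactly why the design conversation has to happen first.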

Grant Belgard: So when a single cell or spatial data set underwhelms, what’s your troubleshooting playbook?

Chris Yohn: I think first you probably need to define, again, whether it’s a technical or a biological reason that you’re underwhelmed. Then you go back to your QC. And this is like that experiment I was mentioning earlier, where it turns out that we didn’t really understand if there was a biological effect. So, you know, talk to the experimentalist who produced the data: was there anything unusual? Sometimes you can talk to them and they mention, oh, yeah, it so happens that these samples looked a little odd when I was processing them, but I just went ahead with it. And then that can maybe explain what you’re seeing in the data. So I think that’s an important thing to follow up on. Really, you know, try to gather as much information as you can to explain why you’re not seeing the effects that you had hoped or expected to see.

Grant Belgard: Where does simulation or in silico perturbation add the most value in your experience?

Chris Yohn: For that, I would say if you have like a really big space that you want to explore, that is just impossible or intractable to approach from in the wet lab, then those simulation or in silico perturbation type approaches could help you then limit or focus your wet lab experiments. And again, I’m probably showing my biological and lab-based bias in that answer a little bit, right? Because I’m always headed back to how do you validate it in the lab, right? So for me, you know, doing simulations or predictions from models just helps you to be more efficient in your lab work, I think.

Grant Belgard: Yeah, totally agree. What’s one technical belief you’ve changed your mind about in the last two years?

Chris Yohn: Hmm, that’s interesting. Well, maybe I’m in the process of changing my mind on this one. I haven’t quite settled yet, but if you had asked me a year or two ago, I would have said that you really need highly structured, clean data to build a good model. I think that’s still true. The thing that maybe I’m changing a little bit on, and this is all driven by, you know, large language models and everything we’ve seen with ChatGPT, et cetera, is that the fact that they can make sense of the messy data of language makes me reconsider that maybe we can get good value out of the corpus of messy data that we currently have in biology, right? So if I have a choice, I’m still going to go to well-structured, clean data as my go-to, but maybe there’s going to be more value in the messy stuff than I first thought.

Grant Belgard: Switching to talking about building teams and operating models, what responsibilities do you believe belong inside computational biology versus in a central data organization?

Chris Yohn: So I’ve always been at small companies, so usually that’s one organization, usually not a separate group. But I think if you do have it split, certainly biological interpretation, right, lies in the computational biology group, whereas maybe more like infrastructure and enablement of being able to answer those questions, you know, data platforms, you know, shared services are going to be in that central data organization. But that’s, like I said, that’s not from personal experience because for me, it’s always been one and the same in a small group.

Grant Belgard: What competencies do you expect from computational biologists versus data scientists or machine learning engineers?

Chris Yohn: Again, probably my small company bias is showing, but I think there’s overlap. Like, you need people who can do a little of a lot of things. But generally, I would say for computational biologists, it’s more about really understanding experimental design, getting to the biological results, sort of why things matter. Data science, for me, is more about modeling, really rigorous analysis, good statistical approaches to the work, model building, essentially. An ML engineer is more about scale, right, more system-based. Then you’re talking about bigger data sets and really bringing a lot of things to bear and getting to, like I said, more scaled approaches.

Grant Belgard: How do you operationalize scientific prioritization when everything looks interesting?

Chris Yohn: I think the key thing is you need to look at an experiment you’re doing and then decide what decision am I going to make based on the result. So if the result of this experiment is X, I’m going to do this. And if it’s Y, I’m going to do something else. Right. So that really helps, I think, to prioritize what you move forward with.

Grant Belgard: How do you approach hiring in a market with both mass layoffs and at the same time intense competition for certain niche skills?

Chris Yohn: Yeah, it’s really an interesting market for sure on the hiring front lately. You know, I go back to something that I think is pretty critical, especially, again, at small companies: oftentimes it’s about culture and sort of mission alignment. I mean, certainly, obviously, you need to make sure that the skills you need are there. And it’s right that there are a lot of people out there looking for jobs, so if you’re hiring, you kind of have your pick a little bit, but certain skills are still in high demand. To me, whether you’re in that environment or in a different kind of hiring environment, it’s so important that the folks you bring in are aligned with, you know, sort of the culture and what you’re doing in the company. You know, I’ve unfortunately had experiences where someone isn’t right and it just throws everything off.

Chris Yohn: So you’ve got to have the baseline of making sure the technical competencies are there. But then to me, getting that alignment is really a critical part of hiring.

Grant Belgard: Yeah, we actually just recorded a podcast with an expert in organizational culture and kind of the emergent properties of individuals. Right. And how, you know, taking the most skilled, best and smartest people in every function and sticking them together rarely creates the most effective team.

Chris Yohn: That’s right. We’ve probably all experienced examples of that, of dysfunctional teams. And then you kind of figure out from that maybe what the right approach is.

Grant Belgard: Yeah. So looking back, what were the pivotal decisions that led you into computational biology in your own career?

Chris Yohn: Oh, wow. You know, I was doing my postdoc in a fly lab doing developmental genetics. This was a while ago, like late 90s, early 2000s, when genomes were being sequenced and just a lot of great technology was coming out. And I think in my graduate and postdoc work, it was really still kind of a single gene focus. Like I literally worked on a couple of very specific genes in both my graduate work and postdoc. And seeing what was possible as the genomes were being sequenced really inspired me, so in my postdoc I took some programming classes and started doing some work there. And then when I left, my first biotech job was as a bioinformatics scientist.

Chris Yohn: So, you know, I think just that timing, that time was really pivotal for, yeah, just the advances that we were seeing.

Grant Belgard: Yeah. And can you talk about how that transition was for you from academia to biotech?

Chris Yohn: Yeah, I think the way I like to talk about it is in academia, you have time, but no money. And in biotech, you have money, but no time. So that’s really the…

Grant Belgard: Except right now where you neither have time nor money.

Chris Yohn: That’s a good point. And I think along with that, like the willingness to take risks is much greater, right? Because you don’t have time. You’ve got to just try things and move forward. So that was a real difference. And that’s why whenever I talk to people who are kind of thinking about the transition, like that’s one of the things I really try to help them understand, because I’ve seen people make that transition well. And I’ve seen people struggle with it.

Grant Belgard: Yeah. I would say that that’s, I think, the most common answer we get from people and certainly an observation I’ve had. So what experience has prepared you to manage both bioinformatics platform buildouts and translational aspects of that?

Chris Yohn: When I was at Unity Biotechnology, we were working on diseases related to aging. We did a lot of early discovery around new applications in different diseases. And at the same time, we had programs that were advancing into the clinic. For example, I helped design and execute a biomarker clinical trial for osteoarthritis while I was also exploring new indications that we could potentially get into. That really helped me to understand what was necessary to move things towards the clinic, but also the kind of exploration that you have to do on those platform buildouts. So being able to do both at one time was really great.

Grant Belgard: What’s a fork-in-the-road moment where you’re glad you chose the path you did? And what’s one where, if you had to do it over again, you would make a different choice?

Chris Yohn: So I’ve spent a lot of my career in San Diego, and then about a decade ago I moved up to the Bay Area, and I think that move was great. It really allowed me to expand my network and opened up a lot of opportunities. I mean, San Diego is awesome. I love San Diego. It’s got a great biotech community, but the Bay Area is just another level. And that’s been really a great opportunity, and I’ve really enjoyed the work that I’ve been able to do here. In terms of something I would do differently, I’m not sure there’s anything I would say. I mean, I don’t know, maybe I’d have bought Nvidia stock 10 years ago. In terms of my career, I’ve kind of followed opportunities. That’s kind of been my path. It’s not like I decided this is the thing I want to do and pursued it with passion. It’s more about seeing interesting opportunities and following up on them.

Chris Yohn: And so I don’t think there’s an opportunity that I chose that I would have preferred to have passed on at this point.

Grant Belgard: What habits or practices have been most durable across very different problem domains?

Chris Yohn: I think, and sorry if I’m being a little redundant, but I still go back to focusing on the key questions. That’s so important because I’ve worked in biofuels, in early stage, late stage clinical, across different therapeutic areas, different modalities. And no matter what, in order to really focus, you have to understand what is the question that’s going to help me move forward and do everything you can to get an answer to that question. So I would say, and there’s sort of two pieces in that answer where I say focus on the key questions. You know, certainly part of it is the key questions and the other part is that focus word, right? Because it’s so easy to get distracted. There’s so many things you can do. So making sure that you focus on what’s important has been so important to me.

Grant Belgard: So I’d like to get your thoughts on advice for people at different stages of their career, with a number of questions. Firstly, for grad students and postdocs, where do you think they should invest their time and focus in learning over the next year?

Chris Yohn: Well, at the risk of sounding like probably what many other people say, you know, I think the sort of obvious answer is to really understand how AI is going to impact what they’re studying, how it’s going to impact them. I think a really important aspect of that is what are the limits of what AI is going to be able to do for you and to you a little bit, but also like what are the opportunities that you can use, that you can follow up on in your studies or in your work. Like I said, it’s maybe an expected answer, but I think it’s super, super important today.

Grant Belgard: And for scientists moving from wet lab to dry lab, what’s your recommended on-ramp?

Chris Yohn: I would say, if you can, look at your own data. I mean, certainly there are a lot of tutorials and places where you can download data and learn on that. But if you can look at your own data, I think you’re going to be much better off. Like, you know the data, you know what the limitations of the data are, you know what makes sense in the data. So I think that’s going to help you a lot more than coursework or tutorials. And certainly, if you can find one, find a mentor who can kind of walk with you, just to keep you from making the silly mistakes that, you know, a lot of people make when they’re just getting started.

Grant Belgard: For first time computational biology managers, what advice would you have?

Chris Yohn: I would say you really want to kind of understand the landscape. Like, what do you have? Do you have a team? What are the pipelines that are in place? What kind of data do you have? I think for new managers, usually the advice is, you know, don’t come in and start changing everything. You need to learn first, right? And I think that applies here as well. So understand the landscape. And out of that, most important is probably really understanding the data, both what you have currently and what’s planned. And then if there’s data being planned, get involved in planning those experiments, right? That’s really critical. Plug in, get on program teams, get to the project managers, the people who are actually moving things forward, and get into the planning as soon as you can.

Grant Belgard: And for scientific founders or heads of R&D, how do they set problem statements that are tractable and can be decision driven?

Chris Yohn: I think you have to define the scale of the question or the problem statement so that you can get to a decision. I mean, maybe that’s kind of built into your question, but, you know, you don’t want your problem statement to be too big, right? Like, can we cure Alzheimer’s? Obviously, that’s way too broad. But if that’s your ultimate question, you need to break it down to the point where you get a question that has a clear go/no-go at the end of it, right? Define your problems by what they allow you to decide next, not just by the data you’re going to generate. You want to be clear: I’m doing this experiment to get this data, and that data is going to enable me to make this decision.

Grant Belgard: What types of structured communication, for example, memos, dashboards, formal reviews, and so on, do you find most effectively inform and drive decisions?

Chris Yohn: It varies a lot. I mean, to me, the best tool is the one that actually gets used, whatever that is. You know, I’m actually starting an effort right now with a company to create some dashboards, and we’re figuring out, you know, what those use cases are. And it’s going to be different. Like, we actually kind of define the two extremes. One is the person who is a little more data savvy and wants like a big, basically download dump of data that they can then play with, right? And then you have the other extreme, which is usually, you know, the senior management who wants like a PowerPoint slide with a summary of the data.

Grant Belgard: And some nice colors.

Chris Yohn: With some nice colors, right? Exactly, exactly. Some red and green checkboxes and stuff, right? And that’s exactly what we’re doing, right? So I think, and probably what, you know, I think what we’re going to do is, you know, we’re going to create some drafts, we’re going to circulate them, and we’re going to kind of see like, where do we get traction, and then you just double down on those. So I think you have to try a few things and then see, like I said, whatever gets used, that’s the one that you want to focus on.

Grant Belgard: When budgets are tight, as they have been for many companies in recent years, what do you defend first? And how do you go about deciding what can be paused, what can’t be?

Chris Yohn: Yeah, I think you need to define your one-way doors. Like, what are the things that if you stop, it’s really difficult to start again? And what are the things that you can easily restart again, if you do pause them? And so obviously, the ones that are easier to restart, then those are, you know, pretty easy to say, well, we’re going to pause that if it’s not going to be critical to our next step. I think if it’s a one-way door, then that’s when you really have to look at it very carefully. Like, what are the implications of pausing or stopping this, and then base your decisions on that. Like, if it’s a, maybe it’s a collaboration, and if you pause it, then they’re going to go find somebody else to collaborate with, right? And you can’t come back, right?

Chris Yohn: So that might be something you think twice about, versus, you know, something that’s completely controlled internally, you could maybe be a little more flexible with how you prioritize it.

Grant Belgard: And if you could give advice to your younger self, maybe at different stages of your career, what would be the most impactful advice you would impart?

Chris Yohn: Hmm. I think I would probably encourage my younger self to take more risks, and to just go for it. I think that, and this is probably a little bit of my own personality, but you know, I am somewhat conservative and a little risk averse, and you know, that’s probably, you know, held me back a little bit in some cases. So I think, you know, just, you know, failure is not a bad thing. Failure is how you learn and how you learn how to be better. So I think just going for it is important sometimes.

Grant Belgard: And if someone wants to work with you in a fractional leadership capacity, how should they prepare? And what sets an engagement like that up for success?

Chris Yohn: You know, there are probably two main ways that I work with people. One is where someone really knows what they want, right? Like, I need a mechanism of action study for my compound. Can you help me with experimental design and execution? I have one client I’m working with where that’s what I’m doing. The second is probably a little more open, where, you know, you might have overall goals and you really need to figure out what the strategy is to help you find a solution that meets those goals. Like the company I mentioned earlier, where we’re really trying to figure out, is there a company here. That’s very open and broad. There’s an overarching goal, but then together we’re figuring out what that strategy is.

Chris Yohn: So understanding like where, which of those two categories you’re in, and then helping to define that, I think is important. Yeah.

Grant Belgard: And where could our listeners follow your work or reach you?

Chris Yohn: So LinkedIn is probably a great place to reach me. My website is compbiobridge.com. And my, if you want to just reach me directly, my email is just chris@compbiobridge.com.

Grant Belgard: Great, Chris, thank you so much for your time.

Chris Yohn: Hey, this is great, Grant, I really appreciate the time.

The Bioinformatics CRO Podcast

Episode 74 with Phillip Meade

Dr. Phillip Meade, a leadership and culture advisor at Gallaher Edge, discusses his experience evaluating organizational culture and how to diagnose culture problems and build lasting habits for high-performance organizations.

Phillip Meade

Phillip Meade is a leadership and cultural advisor at Gallaher Edge, which provides executive coaching, leadership development, strategic guidance and culture management services for businesses and organizations.

Transcript of Episode 74: Phillip Meade

Grant Belgard: Welcome back to the Bioinformatics CRO podcast. Today I’m talking with Dr. Phillip Meade, a leadership and culture advisor at Gallaher Edge, whose career has included extensive work inside NASA, particularly around organizational culture and return-to-flight moments after major setbacks. He’s collaborated across public and private sectors and co-authored a book on building high-performing cultures. Today we’ll translate those lessons for labs, universities, biotechs, and pharma: how to evaluate the strength of a culture, diagnose problems, and build habits that last, plus common pitfalls to avoid. Dr. Meade, thanks for joining us.

Phillip Meade: Good morning. Thank you for having me. I’m happy to be here.

Grant Belgard: So we’ll cover three arcs today, your current work and lens, how you got there, including time with NASA, and practical advice for leaders and teams in the life sciences. So to kick us off, in your current work at Gallaher Edge, what kinds of culture or leadership challenges are you most often being asked to help with right now?

Phillip Meade: The thing that we see most often is companies asking us to come in and help them because either they are in the process of growing and scaling or they want to grow and scale and they’ve hit a ceiling and they’re having trouble doing that. And so culture typically is one of those things that either is an enabler for scaling or it ends up being a roadblock that keeps them from being able to do the scaling that they’re wanting to do.

Grant Belgard: When you first meet an executive team, what signals, good or bad, do you look for the first hour?

Phillip Meade: There are a few things that we typically see that demonstrate what we’re looking for in terms of a high-performing culture. Openness is one of them. Is every member of the executive team truly engaged and contributing, or are there one or two key members who are really the ones doing everything while everybody else is sort of sitting there waiting, seeing what they do and hanging back? Another one is self-awareness. Are they really aware that when we’re talking about culture, they’re a part of it, that culture starts with them, and so this work is really about them and they’re a piece of it and they’re involved? Or are they talking about how everybody else needs to change and this culture is about out there? And then another piece of it that’s very important is a willingness to be vulnerable.

Phillip Meade: Do they show that and demonstrate that willingness to actually let the guard down and take the armor off and be vulnerable as human beings? Or are they armored up and trying to present themselves that way?

Grant Belgard: How do you decide whether a client needs structural changes, leadership, behavioral changes, or both?

Phillip Meade: You know, it’s usually all of the above. It’s just a question of how much of each and how we set those dials. When we talk about organizational culture and how it’s created, people take cues for how they behave, and what they believe about how they should behave, from the leaders: what the leaders do, what the leaders pay attention to, what the leaders say, all of that, as well as from the structure. And so we really want to be intentional about all of it: how we design the behaviors that we want from the leaders and what the leaders are saying and doing, as well as how we’re creating the structures and the experiences within the organization that people are seeing and responding to. So it’s really a total design that we’re looking for from that perspective.

Grant Belgard: Many leaders feel they already talk about culture. What separates talk from traction?

Phillip Meade: I just touched on it a little bit in my previous answer, but first and foremost, it’s an intentional design. I think a lot of people think they’re doing culture just because they do things that are culture adjacent. Like they do things that are around, you know, employees being happy or feeling good in the workplace, but they haven’t done the work to intentionally design what is the culture that they want? How do they create that culture? What are the beliefs that they’re intentionally trying to create in their employees around that culture? And how are they creating those beliefs through the specific experiences that they’re creating? And what experiences are those? How are they doing those experiences? So if you haven’t intentionally designed that, then it is kind of just talk.

Phillip Meade: And so you want to have that level of intentionality to the design of what you’re doing. Let’s just take the silly ping pong table in the break room. If you want to have a ping pong table in the break room, that’s great. Do you know why you have that ping pong table in the break room? You should know exactly why you have that ping pong table there, what that experience is designed to do, and what beliefs you are trying to create in your employees with it. Then, what do those beliefs drive from a behavioral perspective from your employees? And how do those behaviors then help to create that culture and ultimately drive the strategy of your organization? That’s the whole flow that you want to have from a design perspective. And if you don’t have that level of understanding, then you haven’t really designed your culture.

Phillip Meade: You’ve just bought a ping pong table and put it into your break room. And there’s nothing wrong with the ping pong table. It’s neither good nor bad, but you haven’t designed a culture around it.

Grant Belgard: What’s your go-to way to align executive intent with middle management behaviors?

Phillip Meade: So first, you want the senior leaders to demonstrate those behaviors, because if the senior leaders aren’t truly living it, it’s going to be very difficult to just look at the middle managers and say, do what I say, not what I do. That never works. Secondly, you’re going to want to communicate those expectations clearly. It needs to be crystal clear so that they understand exactly what is expected of them. You’re going to want to align the systems and processes so that they have the ability to do what you’re asking them to do, it fits into how they do their jobs, and they’re rewarded for it. And then finally, if it’s skills-based, you’re going to provide them with training.

Phillip Meade: And if it really is behavioral, you’re going to provide them with some behavioral change workshops that will support the behavioral change that you want from them.

Grant Belgard: If a team has strong technical results, but shows strain, missed handoffs, creeping burnout, how do you frame the problem without pathologizing people?

Phillip Meade: This is one of the things that we typically focus on with all of the organizations that we work with, because blame is actually one of the greatest drivers of organizational dysfunction. I mean, you see it in a lot of organizations, and it’s a huge waste of time and energy. We like to focus on contributions. Any time there’s an issue, there are many things that contribute to it. If you think about blame, blame is typically a game that we play where we try to figure out who was mostly responsible, and then we assign blame to them so that we can say it was their fault. And from an organizational standpoint, if you’re trying to think about how we become most effective, that doesn’t make us most effective. We really want to figure out, how do we diagnose how this happened? How do we correct that?

Phillip Meade: And how do we move forward and prevent this from happening in the future? So the way that we do that is we try to identify all the contributors to the situation, and then we figure out how do we prevent those contributions or shift those contributions so that this doesn’t happen in the future. And so we want to approach it from that standpoint so that people aren’t afraid that if I admit that I contributed to this, either through my action or inaction in some way, I’m not going to be in danger of becoming the person who is blamed as a result. And so we come together and we look. Everybody contributed in multiple ways through action and inaction. The system contributed to it. There were environmental contributors. We really look at exactly all the things that contributed to it, and then we say, okay, how can we shift those contributions in the future and get a different result?

Phillip Meade: And so that’s the way we want to start approaching things differently from now on.

Grant Belgard: How do you design for sustainability, so the work outlives the initial consulting period?

Phillip Meade: You really want to embed it within the fabric of the organization. And that’s why, when we talk about it, true culture change is not a short-term project. It can take a little while to really go through the whole process of getting it truly embedded. But you want to build it into everything you’re doing.

Phillip Meade: Once you really understand the culture that you’re trying to create and what that looks like and have it well-defined, and you understand the behaviors that you’re looking for, and you understand the core values that you want, and what that really looks and feels like, and how to create this culture that you’re after, then you can build it into how you recruit, how you perform your interviews, how you onboard and introduce people into your organization so that they’re trained into your culture from the beginning. You can build it into your leadership development programs. You can build it into your executive development. You can build it into your performance management systems. You can build it into your succession management. You can build it into the language that you use in your organization and how you talk and speak and interact with each other.

Phillip Meade: And then, as I was talking earlier, you can build it into the experiences that you intentionally design into your organization that are part of the way that you do things as a company. And so, you know, as you’re doing that throughout the course of the year and the course of the life of the organization, you know these are the different experiences we have and why we’re doing it. And you can change those out and tweak those over time. But as you’re doing that, you know what you’re doing and why you’re doing it. And then, as you update it, you know how you’re updating it and why you’re doing that.

Grant Belgard: So, shifting gears to talk about your own career trajectory, what early experiences pointed you towards organizational performance and culture as your focus?

Phillip Meade: Well, you touched on it in the introduction. It was an abrupt change for me. It wasn’t a subtle shift. In 2003, the space shuttle Columbia disintegrated on re-entry, killing all seven astronauts on board. And in the wake of that accident, the Columbia Accident Investigation Board found that NASA’s culture had as much to do with the accident as the piece of foam that hit the wing. And I was asked to lead all of the cultural and organizational changes for return to flight because they grounded the entire space shuttle fleet until we could fix the culture. And so, that really set me off on sort of a life-altering path where I began looking into organizational culture and really how that impacts organizations and how important that is to how they perform.

Grant Belgard: When did you realize engineering, as of course you originally came up as an engineer, right?

Phillip Meade: Yeah.

Grant Belgard: Systems thinking could be applied to human systems.

Phillip Meade: Well, I mean, I will say it was a lifeline to some extent. I was trying to grasp for something to make sense of it: how do I figure this out? How do I solve this problem of organizational culture? And I realized that an organization is a system. But the thing that I realized is that it’s not just any kind of system. It’s a complex adaptive system. And so, that’s where systems thinking came in. Because if you try to treat an organization like, you know, a car engine, you’re not going to get the right results. You have to treat it like the complex adaptive system it is. And so, when you shift your thinking and begin analyzing it and diagnosing it and working with it in that way, you get different results. As for pivotal mentors, I worked with a couple of consultants very early on, Paul Gustafson and Shane Cragun.

Phillip Meade: They were very instrumental in helping me to learn a lot about organizational behavior. And, of course, I read a ton of books that helped me come up to speed on all of this. And I’ll say that one of the moments that helped shape my approach was really the fact that, you know, I thought that NASA had a great culture. And that’s really part of what freaked me out when I was asked to lead this culture change. Because I would have felt better if there were tons and tons of problems for me to solve, and I didn’t think that there were any. So, one of the moments that shaped my approach was that the results of a study were released right after I was asked to lead this. And it named NASA as the best place in the federal government to work. And it was like, okay, this just confirmed what I thought.

Phillip Meade: And so, it really shaped my approach because it confirmed that the way that we’re looking at culture might not be perfectly correct here. If culture caused this accident, and yet we’re the best place in the federal government to work, then what does culture really mean? And, you know, that’s where I came up with the fact that, you know, culture means more than just people are happy at work, right? It has to mean something more. And so, that really influenced my philosophy on organizational culture.

Grant Belgard: So, this might feed into the next question. What’s a belief you held earlier in your career that you’ve since updated?

Phillip Meade: So, beliefs that I held earlier in my career that I would have updated, I think I’ll go in a different direction on that one. I mean, I was very much an engineer in my early career. I was an electrical engineer. You know, they say you can’t spell geek without double E. And I had, I think one of the ones that is my favorite one to reminisce on is, I used to say, I can explain it to you, but I can’t understand it for you. And, you know, I had philosophies on communications that, you know, if I explained it, and I was technically accurate, and you didn’t get it, then that was your problem. And, you know, I grew a lot, you know, over my early career, realizing that being effective was more important than being right. And being effective meant learning how to work well with other people. And organizational culture, oddly enough, really is a lot about that.

Phillip Meade: Organizational culture is about how do you help human beings to work together effectively as a group. A lot of the psychology underpinnings that we use in the work that we do actually comes from work that was done with the Navy, because they were having challenges, trying to figure out how to put the most effective teams together in the control center of their ships. And their theory was, if we take the smartest, you know, best performer at each position and put them together on these teams, we should get the best performance. And they weren’t getting that. And they were confused. And you would think that that’s what you would get. But in reality, the best performance on a team comes from the teams that work best together, not from putting the best performers together. And so that’s what culture is all about.

Phillip Meade: Culture is about how do you get people and put them together so that they actually work well together. And in an organization, that’s what you need. You need people who feel good about themselves and who, when you put them together with other people in that environment, all feel good working together. They have the ability to adapt and interact with each other in ways that make the whole team perform better. It’s not each one of them trying to maximize how they work best individually while the team suffers as a result. That’s not what you want as an organization. And so, you know, it’s ironic, but I was a part of that personally when I think back to how I performed individually as a young engineer.

Grant Belgard: So, diving a bit more into your learnings from your time at NASA, when people hear culture, they often picture perks, right? The ping pong table in the break room, as you mentioned. In mission-critical contexts, what does culture actually do?

Phillip Meade: Yeah, so this takes me back to the previous question where I said that, you know, being named as the best place to work in the federal government showed me that it has to mean more than, culture has to mean more than that, right? And so, I define culture as, you know, being three things. I think it has to drive employee engagement because you get so many benefits from that. I mean, when a culture drives employee engagement, I mean, there was a 2020 Gallup poll that said that disengaged employees have 37% higher absenteeism, 15% lower profitability. I mean, that drops down to the bottom line and translates into a cost of 34% of their salary. I mean, you know, engagement is huge. You know, it’s a big deal. And so, having highly engaged employees is a big part of what culture does for you. And then, it also improves people’s lives.

Phillip Meade: And that’s a big part of what having an effective culture does. But the third thing that culture does is that it drives organizational performance and market success. And, you know, for a mission-critical organization like NASA, this meant that it had to support mission success, which meant taking astronauts up to space and returning them back to Earth safely. I mean, safety was a huge part of that. And so, it’s like three legs of a stool: if it doesn’t do all three, you don’t truly have an effective culture. I can think of examples of companies that have any two of those three, and I would argue they don’t have what I would call a truly effective culture; in some way, it’s not doing good things. It takes all three of those to truly have an effective culture, and that’s what you want to be shooting for.

Grant Belgard: What did you learn about surfacing dissent and bad news in environments where schedule pressure and hero narratives play a big role?

Phillip Meade: Yeah. You know, I learned that human psychology is complex. Even though NASA was an organization full of engineers, and, you know, we like to joke that they’re not really human beings, they are human beings. And when you talk about organizational culture and what happens there, it all starts inside of the human being, and it really is driven by that human psychology. We don’t think about this, and we don’t talk about it very often in our daily lives, but we’re all actively deceiving ourselves on a daily basis. It’s just part of what our human psychology does to protect us.

Phillip Meade: And so, you know, when we are afraid of something, when we’re afraid that something’s going to make us feel uncomfortable, when we’re afraid that we’re going to be unpopular, when we’re afraid that this isn’t going to align with the identity that I’ve created for myself, all kinds of funny things happen in our psyche, and we get behavior that you wouldn’t expect. And so, when you’ve got engineers that live in an environment where failure is not an option, and they don’t want to be the one that says that something’s impossible or something that can’t be done, and they’re tremendously committed to mission success, and they love their jobs, and they love doing what they do, and they’re working really, really hard and long hours to try and make something be successful.

Phillip Meade: They don’t want to be the one that holds their hand up and says, hey, I don’t think we can do this, or this isn’t possible, or we can’t get this done. There’s a lot of silent peer pressure to be successful, and to save the day, and to make things work, and not to do that. And it’s not overt, and nobody’s saying anything, and nobody would call them a bad name if they did that, but it’s all below the surface, and it’s all in the subconscious. And so, it makes it very, very hard to identify and see, which is why it’s so deadly.

Grant Belgard: Many organizations talk about psychological safety. In practice, what behaviors from senior leaders create or destroy it?

Phillip Meade: It’s really about truly encouraging and rewarding feedback and dissenting opinions, normalizing dissent and healthy conflict, and helping individuals to increase self-awareness.

Phillip Meade: You know, that self-deception that I was talking about that’s happening on a daily basis, educating people that that’s going on, helping people to know that that’s a piece of what’s happening, and helping us all to know and be aware of what we’re doing and what’s going on so that we can recognize it and combat it. Because noticing is the first step. Until we notice, there’s nothing we can do.

Grant Belgard: Could you share an example of aligning structure, for example, reporting lines or decision rights with the desired cultural behaviors?

Phillip Meade: Yeah. So, there’s two I’d like to talk about. One is sort of a large-scale one, and then there’s another one that I like to use as an example, which is a sneakier one. The larger one was with the Columbia accident. One of the challenges that was identified after the accident was that, the way we were structured, the engineering and technical side, as well as budget, schedule, and safety, all rolled up to the program manager. So a single point of accountability was managing all of that, and there was a feeling from the engineers that they didn’t have their own voice. You had one human being who was having to try to juggle responsibility for budget pressure and schedule pressure, as well as technical decisions and safety.

Phillip Meade: And so, afterwards, we split that out into separate technical authority and safety authority. Again, we called it the three legs of the stool: we had a program manager that was responsible for budget and schedule, a safety organization that was responsible for safety, and a technical organization that was responsible for the engineering. And so, if engineering had a technical concern, they felt like they had a route they could advocate all the way up, and didn’t feel like they were having to go up to a boss who was more concerned about budget impacts than the technical concerns. And then the sneaky one that I want to talk about is an organization where they had quality assurance technicians that were responsible for safety and speaking up about safety concerns.

Phillip Meade: And they had to punch a time clock on a daily basis coming in to work. The engineers that were working in this area didn’t have to punch a time clock. Nobody else had to punch a time clock. And for whatever reason, the story in the quality assurance technicians’ heads as a result of punching the time clock was that management didn’t trust them to keep their time, that management distrusted them, and that’s the reason they had to punch a time clock. And because they felt they weren’t trusted by management, they developed a similar distrust towards management, because trust is a reciprocal entity. If you don’t trust me, I’m naturally not going to trust you. That’s just the way that it works. And so, speaking up and raising safety concerns becomes harder. If I don’t trust management, it’s going to be harder for me to raise a safety concern.

Phillip Meade: And so, it was creating a challenge with raising safety concerns because there was a trust issue. And one of the root causes of this trust issue was this silly time clock that they were having to punch in and out of work. So, it’s just weird structural stuff. It’s all about the beliefs that are created in people through the environment that they live in and through the things that happen. And so, we create those unintentionally many times in ways that we never intended to do.

Grant Belgard: That’s interesting. Yeah. Because in the clinical trial arena, you do have this structural separation of the safety monitoring for the patients, but there’s typically not something like that in the earlier stages of drug development, before patients get involved. So, for leaders inheriting legacy systems and history, where do you begin?

Phillip Meade: I always like to begin by trying to learn as much as I can about why things are the way that they are. I don’t like to change things until I understand the reasoning behind why they are and how they got there. Usually, there’s people and there’s inertia around the existing systems and processes and everything. And so, honoring why it’s there, being able to respect that, taking the good for what it is, and then only changing the things that need to be changed or building upon what’s there, that usually helps at least minimize some of the resistance from the people who are involved in what’s there already. And you can save time and energy too, because there probably are reasons why things are the way they are. So you’re not breaking things that don’t need to be broken or doing something that won’t work.

Grant Belgard: If you had a week inside a life sciences organization, how would you diagnose the culture quickly?

Phillip Meade: I would try to be as much of a fly on the wall as I could. I would just try to hang out, visit meetings and listen, see how the meetings go, see how much actual discussion happens in meetings. Are people speaking up? Is there meaningful dialogue, and is there healthy conflict happening in those meetings? Follow people out into the hallway: is there more conversation after the meeting than there was in the meeting? Listen to the conversations that are happening in the executive meetings and what they’re asking to have happen. And then see what the managers at the middle level are telling their people. Are they telling their people the same things that the managers at the upper level are telling them? Or does the message get distorted by the time it reaches that level?

Phillip Meade: And do the employees understand the things that the leaders want them to know? Do they even know why they’re doing what they’re doing? That kind of a thing. What does the general vibe around the office feel like? Do employees seem like they’re happy and enjoy being there? Or does it feel like it’s a drag hanging out at the office? You can learn a lot just by hanging around.

Grant Belgard: What questions would you ask at the bench level versus the executive level?

Phillip Meade: I probably would ask a lot of the same questions, honestly. I’d want to know if they understood what their strategy was. It might come out in different language, but I’d want to know: do you understand how you’re going to be successful as a company? What are the values here? How would you describe the culture? Do you know what it means to be an employee here? I’d probably ask them questions about how they liked working here.

Grant Belgard: How do you tease apart performance issues that stem from process, structure or relationships?

Phillip Meade: You really just have to dive in and start asking questions and figure it out. A lot of it is trying to figure out, if there’s a challenge with the person that’s doing it, is it because they can’t do it, or is it because they won’t do it? Do they not have the ability to do it because they don’t know how, or because there’s something that’s missing? There are just so many different ways it can go. You just have to dig in, start asking questions, and figure things out.

Grant Belgard: For regulated environments, of course, drug development is fairly regulated. What cultural strengths and blind spots tend to show up?

Phillip Meade: Well, sometimes you’ll have a strength from a feeling of sameness. There can be a sense of community or camaraderie that comes with being a part of a particular community there. But similarly, a blind spot can come along with that: maybe there’s an over-reliance on standards or regulations to protect you from things. And that can be dangerous because, in all cases, those are only as effective as the people who are following them. So you really have to depend on people to do what those regulations say.

Grant Belgard: When publication pressure or go/no-go gates loom, how do you maintain integrity of decision-making?

Phillip Meade: So first and foremost, I want to be honest: I haven’t dealt with this too much personally. But if I’m reading into the question correctly, I would say that as an organization, you would want to make sure that you are structuring your incentives correctly. You don’t want to create a situation where you’re putting your employees into a no-win situation and putting them under undue pressure to do things in order to save their job, or whatever. So I think that’s what I would say there.

Grant Belgard: What are the telltale signs that a strong culture has drifted into groupthink?

Phillip Meade: I think, similar to what I said about being a fly on the wall in a meeting earlier, groupthink is obvious when everybody basically agrees to everything all the time. So I look for healthy conflict as a sign of a strong culture in many cases. And I would be looking for that type of healthy dissent, not arguing or fighting, but questioning and challenging, and people with different ideas or different positions on things. That’s where you get the best decisions and the best ideas and the best innovation. And so that’s what you want to see.

Grant Belgard: What’s your approach to decision rights clarity? Who decides who’s consulted, who’s informed?

Phillip Meade: I don’t think that there’s a single answer to this one, because there’s lots of different types of decisions. The idealistic answer is that you want the people who are affected to be involved in the decision. That’s not realistic in a lot of cases, but I would lean as far towards that as is practical, because the more you can involve the people that are impacted in the decision, the more buy-in you’re going to get. One of the things that people don’t think about oftentimes is that they misinterpret what it means to make a decision quickly. They think of the time to make a decision as the time it takes to actually decide. And I would argue that the time you want to look at is the total time from when you start to when you finish implementation.

Phillip Meade: And so you may get from the beginning to making the decision quickly, but then your implementation may take three times as long if you don’t involve the right people. And sometimes it may take a little longer to get to the actual decision point, but then your implementation is a third of the time, so the total time is actually shorter when you involve more people. And, you know, you’ve got to think through that. Obviously you can’t always involve all the people, and sometimes it does take too long and the way I just described doesn’t work out. That’s the reason I said it depends and it’s not really super clear. But I would lean towards involving more people, trying to get implementation to go more smoothly, and getting greater buy-in when you can, because it really does help.

Phillip Meade: And I think that right now, in many cases, people lean too far towards trying to decrease the number of individuals involved because it makes the deciding part go faster. But then I think they’re underweighting how much it increases the implementation portion of it.

Grant Belgard: That’s a good point. How do you cultivate leader self-awareness?

Phillip Meade: I mean, coaching is a great way to do that. We have some workshops that help to increase leader self-awareness. Reading helps. And once a leader decides that they want to start improving their self-awareness, then just starting to pay attention and notice things can begin to be part of that process. But as with all self-improvement, it has to start with the desire from the individual themselves to improve.

Grant Belgard: So how do you adapt culture work as a company scales from 20 to 200 to 2,000, even 20,000? Life science organizations come in all shapes and sizes.

Phillip Meade: Yeah. I mean, you’re doing the same basic things. It’s just a matter of how you roll it out in tiers. We always like to start at the top and then roll it down. So you want to start with the executive team, then move down to the layer below that, and then the layer below that. You just have more tiers, and it takes a little bit more time. When you start to get up to 2,000 and above, now you’ve got more mature, more well-developed HR departments. So you begin to work with more well-developed HR systems and processes: you’ve got LMSs that you’re now integrating with, really well-developed performance management systems and tools that you’re integrating into, and internal HR teams that you begin to work with.

Phillip Meade: And so the work that we do begins to integrate with the people that they have and the work that they’re already doing, and we begin to weave into that.

Grant Belgard: What’s the best small concrete habit a leader can start tomorrow?

Phillip Meade: You know, for me, I would say it’s: learn something new every day. One of the commitments that I made a long time ago was that I was going to read every day. And so I try to read something new every day, but I think more generically, I would say just to learn something new every day. I think that’s a great habit.

Grant Belgard: What are the top three mistakes leaders make that quietly erode culture over six to 18 months?

Phillip Meade: I think the top three are not communicating, not admitting mistakes and tolerating bad behavior.

Grant Belgard: Where have you seen well-intentioned values backfire?

Phillip Meade: I think there are two ways that well-intentioned values backfire. The first one is anytime the company or the leaders of the company don’t actually live the values, or do something counter to the values, that kills it right there. People see that it’s basically a lie, or that it’s not true, and then it becomes immediately ignored or worthless to them. The other one is when the values, as well intentioned as they may be, are over-general. Patrick Lencioni refers to these as permission-to-play values. And I’m not opposed to them existing as permission-to-play values, but I would call them that and differentiate them from your true core values. These are things that almost every organization could claim to have, like integrity and respect and safety.

Phillip Meade: It just feels so vanilla that a lot of times employees will look at those and think, yeah, okay, I don’t get it. It just feels like a platitude, or something that’s being hung on the wall just to do it, because there doesn’t seem to be anything particularly special about it. Like, of course we don’t want employees to steal from us, everybody should have some basic respect for each other, and you should expect not to die when you come to work. Those things make sense, so people just sort of blow it off and don’t pay attention to it. I think those values are very well intentioned and there’s nothing bad about them, but it’s also very difficult to get a lot of traction with them because they are, in most cases, so vanilla.

Phillip Meade: And you know what, what Patrick Lencioni says is that, and unless you can truly argue that you have more integrity than 99% of the other companies in your industry, like it’s not really your core value, like it’s not what defines you. And so it’s, it’s hard to like, say this sets us apart. This is something that we’re going to hang our hat on and your employees see that. And it’s like, okay, like, yeah, we have integrity, but you know, it doesn’t really, it doesn’t really mean, you know, mean something special. And so it sort of just becomes this thing that we hang on the wall.

Grant Belgard: When culture change fails, what is the root cause of that failure most of the time?

Phillip Meade: Most of the time it comes down to a failure of leadership. Usually the leaders, the most senior leaders haven’t really truly bought into it and committed to it.

Grant Belgard: How do you prevent hero culture from undermining redundancy and documentation?

Phillip Meade: This goes back to what we were talking about a little earlier. This is a self-awareness issue. Hero culture is about me not truly having the self-awareness to realize that I am trying to make myself feel better by becoming the hero. It’s a defense mechanism kicking in, where I’m just trying to prevent myself from feeling bad. It’s part of my identity that I’m trying to protect. And so we want to raise and increase that self-awareness so that it doesn’t happen.

Grant Belgard: What’s the smallest viable step an individual contributor can take to strengthen culture?

Phillip Meade: The smallest viable step, I would say, is to increase your courage by 1%. If you increase your courage by 1%, then you’re going to increase your openness by 1%, which means you’re going to increase the feedback that you give to others by 1%. And you’re going to increase the self-accountability that you have by 1%, the initiative that you take by 1%, the contributions that you make by 1%, and your performance by 1%. If everybody in the organization were to do that, I think you’d start to see visible changes in the culture.

Grant Belgard: What book, practice, or question has stayed useful across contexts?

Phillip Meade: I think the thing that has stayed useful across contexts, the practice, I’m going to go with a practice, is getting curious. It’s something that I’ve had to learn, and I’m not necessarily proud of it, but one of my tendencies, and probably a reason why I’m sitting here answering all these questions really quickly for you on a podcast, is that I like being an answer guy. People come to me and ask me a question, and I’m really quick to have an answer. A practice that I started developing as a leader was to not answer the question immediately, but to get curious, to ask more questions, and try to learn more and say, okay, what’s going on here?

Phillip Meade: Or when someone would say something and I thought that they were wrong or I didn’t, you know, I thought that I had the answer and they didn’t, they were, they didn’t understand, get curious and figure out, well, why do I think that they’re wrong and I’m right? That’s been very, very useful to me across a lot of contexts to just try to get more curious instead of assuming that I always know the answer, that I always had the, you know, the right answer and that everybody else is wrong is very, very useful.

Grant Belgard: So what options do our listeners have to get more engaged with you and your work at Gallaher Edge? I know you have a book, you offer courses, you have consulting and so on.

Phillip Meade: Yeah, absolutely. You pretty much summarized it. We have a book that they can get on Amazon. It’s called The Missing Links: Launching a High-Performing Company Culture. You can also go to our website, Gallaheredge.com, and check us out. We offer individual workshops as well as consulting engagements. We have an on-demand leadership development course in a micro-learning format, and it’s a great way to get introduced to us and see what we’re all about. So there are a lot of different ways. We also do speaking, so if you’re looking for a speaker for an event, that’s another way we can come and help you out.

Grant Belgard: Great. Dr. Meade, thank you so much for joining us.

Phillip Meade: Thank you, Grant. I really appreciate it.