The Bioinformatics CRO Podcast

Episode 82 with Manuel Corpas

Dr. Manuel Corpas, founder of Cambridge Precision Medicine and originator of ClawBio, discusses his experience as a genomicist, entrepreneur and educator working at the intersection of genomics, AI, and health data science.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Amazon, YouTube, Pandora, and wherever you get your podcasts.

Manuel Corpas is a Senior Lecturer at the University of Westminster, founder of Cambridge Precision Medicine, and originator of ClawBio.

Transcript of Episode 82: Manuel Corpas

Disclaimer: Transcripts are automated and may contain errors.

Grant Belgard: Welcome to the Bioinformatics CRO podcast. I’m your host, Grant Belgard, and joining me today is Dr. Manuel Corpas, a genomicist, entrepreneur, and educator working at the intersection of genomics, AI, and health data science. Manuel is a senior lecturer at the University of Westminster, founder of Cambridge Precision Medicine, and the originator of ClawBio. His work has also included contributions to efforts and tools such as DECIPHER and BioJS. Manuel, welcome to the podcast.

Manuel Corpas: Thank you, Grant.

Grant Belgard: So for listeners meeting you for the first time, how do you describe what you do right now?

Manuel Corpas: I have been doing bioinformatics since 2000, first as an MSc student at the University of Manchester, and then through a PhD in bioinformatics. Then I moved to the Sanger Institute, where I developed one of the leading databases for diagnosis of rare genomic disorders. I was there at Cambridge for about four years. Then I started the company, Cambridge Precision Medicine, which was part of the Cambridge University incubator, and ran that for a few years. Then COVID came and my wife said, oh, we need to move to London because I want to be closer to my family. So I moved to London, where I got back into academia, now as a senior lecturer in genomics at the University of Westminster. I lead a new MSc programme on AI and digital health, which is in its second year, with 20 students. So I think that gives you a little bit of the initial setup.

Grant Belgard: What led you to create ClawBio?

Manuel Corpas: So I think the prelude to ClawBio was the BioJS, biological JavaScript, project, an open source community I led around 2014. The idea at that time was basically to come up with a way to stop reinventing the wheel, in terms of reusable components that people would want to use for biological data on the web. So fast forward to the internet revolution where we are today, where, at least from where I see it, the future looks agentic. And knowledge, currently in the biological domain, though you could say the same for many other domains, I am just based in the biomedical one, tends to be captured in publications, PDFs, which are non-native for discovery by agents. And by agents, I mean AI software that runs continuously. Alexa, you’ve heard of Alexa, right?

Manuel Corpas: It talks to you, you can ask it questions, it’s able to do specific tasks, and it’s on all the time. So the extension of that is that we now have the possibility to use AI-powered agents, or robots, which leverage the power of the frontier models underneath: ChatGPT, Claude, Perplexity, Gemini, you name it. The idea around ClawBio is to come back to the same situation. We have bricks of knowledge which currently are not discoverable, not reusable, not reproducible. So how can we make a sort of central repository, a registry, where anyone who is developing these discrete skills can find them in a way that has no restriction, MIT licensed? Anyone can see the code and push code. Obviously there are some guardrails: I look at the code and make sure that it’s audited.

Manuel Corpas: And at the end of the day, ClawBio stands for the idea of using this incredibly widely adopted new tool called OpenClaw. And I’m going to explain what OpenClaw is. OpenClaw really is the planning layer between the large language model and the communication that you have via chatbot, via Telegram or WhatsApp or Discord. That planning in between is done with OpenClaw. And OpenClaw has been gaining a huge amount of traction; I think in three months it’s been the most starred and downloaded repository in the history of GitHub. Right now it seems like it’s the base technology on which agentic AI is going to happen. So if you are in the AI domain, you will have been thinking about agents and agentics all the time.

Manuel Corpas: But if you are not in that domain, the heat around agents is that, for the first time, we have enough capacity, via some of the big providers, OpenAI, Anthropic, and so on, to develop code in a way that is basically hands-free. And when you transform that code development into specific tasks and chain them into specific skills, you can very easily develop a library of repetitive tools which you can automate without having to be there. So a lot of the vision is: how can I automate as many as possible of the tasks that previously required me to pull these elements together myself? Now you can have your agent running them without you having to be at the computer all the time.

Manuel Corpas: And you can talk to the agent, because it will already have pre-programmed the skills necessary to chat with you. You can reach it via your chat, ask it questions, and it can spark off code and change things, just with plain English. I think that’s really the key of ClawBio: the application of that paradigm to the biological domain, specifically around reproducibility, and also this open source community aspect, to be able to share and to reuse.

Grant Belgard: So noting that it’s early March 2026, and this was really just very recently launched, what would you say are currently the most mature skills ClawBio has?

Manuel Corpas: None. I wouldn’t say there’s any that is mature. The project is only a few weeks old, so I cannot claim that. What we do have are some minimal viable products, some minimal functionality that proves the point that you can really do something useful. And I guess for me, the coolest application is a robot which is currently accessible, called RoboTerri. RoboTerri’s soul is inspired by my PhD supervisor, Professor Teresa Attwood, with whom I did my PhD in bioinformatics at the University of Manchester. And you can basically talk to RoboTerri, which already has my personal genome preloaded.

Manuel Corpas: One of the things I tend to demo now is that I can take a picture of a drug pack. Say you go to the pharmacy and you’ve been prescribed a drug, and you would like to know whether that particular drug is compatible with your genetic metabolism for processing it. We know that the pharmacogenes significantly influence your ability to respond well, or adversely, to the normal prescription guidelines. We know that most prescriptions and dosages are based on the average population. And if it happens that you are not a white, Northern European male, then the further you are from that reference population, the more likely you are to have an adverse drug reaction. Part of that adverse drug reaction risk is encoded in your genetics.

Manuel Corpas: So you just take a picture of your drug pack, and RoboTerri, which has your genome preloaded, is able to identify the active principle in the medicine, compare it against FDA-approved guidelines for dosages based on genetics, and then work out, against your current genetic variation, the appropriate dosage under those guidelines. It then gives you back a report, there on your telephone, saying specifically: avoid, or yes, this is good for you. That’s one of the minimum viable products I currently have, and the one I demo most.
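[Editor's note: the drug-to-report pipeline described above can be sketched as a chain of lookups. The tables and star-allele diplotypes below are illustrative stand-ins, not real FDA or CPIC guidance, and `recommend` is a hypothetical name, not part of ClawBio.]

```python
# Hypothetical sketch of a RoboTerri-style lookup: drug -> pharmacogene ->
# patient diplotype -> metabolizer phenotype -> dosing recommendation.
# All table contents are illustrative, not clinical guidance.

# Which pharmacogene governs metabolism of each drug's active principle
DRUG_TO_GENE = {
    "codeine": "CYP2D6",
    "clopidogrel": "CYP2C19",
}

# Simplified (gene, diplotype) -> metabolizer phenotype table
PHENOTYPE = {
    ("CYP2D6", "*1/*1"): "normal metabolizer",
    ("CYP2D6", "*4/*4"): "poor metabolizer",
    ("CYP2C19", "*1/*17"): "rapid metabolizer",
}

# Illustrative guideline text keyed by (gene, phenotype)
GUIDELINE = {
    ("CYP2D6", "normal metabolizer"): "standard dosing",
    ("CYP2D6", "poor metabolizer"): "avoid",
    ("CYP2C19", "rapid metabolizer"): "consider alternative",
}

def recommend(drug: str, genome: dict) -> str:
    """Return a dosing recommendation for one drug given a patient's diplotypes."""
    gene = DRUG_TO_GENE.get(drug.lower())
    if gene is None:
        return "no pharmacogene on file for this drug"
    diplotype = genome.get(gene)
    phenotype = PHENOTYPE.get((gene, diplotype), "unknown")
    return GUIDELINE.get((gene, phenotype), "no guideline; use standard prescribing")

# A poor CYP2D6 metabolizer asking about codeine gets the "avoid" report
print(recommend("codeine", {"CYP2D6": "*4/*4"}))
```

The real system adds image recognition of the pack and report generation on top, but the core decision is this kind of guideline join against the preloaded genome.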

Grant Belgard: And looking forward to, say, the end of the year, what would your best guess be about where this project, or its successors, will go?

Manuel Corpas: Yeah, the reason I’m now in this place, an incubator building, is that I’ve been invited by a number of potential investors who are interested in taking this project further. And the priority for me at the moment is that we’re not talking about a normal type of data; we’re talking about highly sensitive data, your health data, your genome data. So if you are going to reuse any of those components and you have no idea who the heck developed them, that is a problem. The trust component of having a shared library is beautiful, yeah. But when you are talking about your own personal data, or data which is highly sensitive, like patient data, you’re just not going to reuse that.

Manuel Corpas: So what I want to build now is a little bit like what you have seen on some social media profiles, where you have this blue badge showing that a person has been certified. We could have some kind of certification aspect where, if we have that blue badge for any given skill, we can guarantee, obviously for a fee, that the skill has been audited against regulatory systems, the European regulatory system, say, or compliance with a standard like ISO 27001. That way we build trust for specific skills, which people will pay for, and we add that extra layer of certainty and trust, which I think would enable much more trusted resources. So for me it’s not a question of how big this is: we’ve already had at least 40 contributions, and we can see that the project is really very popular.

Manuel Corpas: But I think the issue now is how do we make this into something that can be trusted? That’s really where my current thinking is.

Grant Belgard: What do you find people most frequently misunderstand about AI within the bioinformatics community?

Manuel Corpas: That they have no idea how quickly things are changing, and the power of new versions. From 5.2 to 5.3, people think it’s a small incremental change. No, this is ten times better. People don’t understand two things. One is the exponential growth in capability that we’re experiencing every three months. I don’t know about you, but I work completely differently from how I was working three months ago. If you are like me, you’re running something like ten agents at any given time, all these instances. I don’t look at my emails first anymore. Now I look at where I left things with that agent last night and see where we are at. So that’s one thing. And I gave a presentation at the London Bioinformatics Meetup on the 26th of February, nearly a month ago, and I raised it there. It’s meant to be a community of bioinformatics practitioners, right?

Manuel Corpas: You have people from major institutions whose names you would know. I’m not going to name anyone. I asked them, does anybody know what Claude Code is? No one. And I was like, oh my God. And these are meant to be the top people in my field. Or maybe I’m just mad. That’s why I said I was mad, because the relentlessness, the pace, the acceleration of how things are changing, at least as I live it on a daily basis, is just absolute. I don’t even sleep, because I see this coming. And to be honest, I made a very conscious decision. I have two options. Either I stand still and somebody else makes the decisions for me, as is happening right now, and I’ll give you an example in a minute, or I actually invent something and shape, in my own very small niche, what the future is going to be.

Manuel Corpas: Because otherwise, unless I invent it, I’m going to be out, as simple as that. And being out is something I can’t live with. So, the example. I was doing a benchmark of the main frontier models: Gemini 3.0, Claude Opus 4.6, Sonnet 4.6, GPT 5.3, DeepSeek 3.1, or I think it was 3.0. I was basically benchmarking, for each of the main World Health Organization diseases that carry the greatest burden for humanity, how well each of these models was able to query sample research output I had taken from the biomedical database called PubMed. PubMed has all of the biomedical literature indexed, and you can query it for Ebola, say, or for type 2 diabetes, or for ischemic disease, or whatever.

Manuel Corpas: I was actually looking at the 170 or so Global Burden of Disease, World Health Organization diseases, which include Zika virus, Ebola, and so on. And Claude wouldn’t let me query for some of those diseases. I know they think they’re doing it for the right reasons, okay, these are guardrails. But I’m like, who are some anonymous software developers in Silicon Valley to decide what I should be doing? Because, I’m sorry to say, I’m a bona fide researcher, and you cannot, or should not, dictate my freedom. I know this is a very small example, but I think it’s just a taste of what could happen very soon. We’re not talking about two years; I’m talking about months down the line, because one month is a year at AI speed.
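[Editor's note: the PubMed querying step in the benchmark can be sketched against NCBI's public E-utilities API. The endpoint is real; the disease list and function name below are just illustrative.]

```python
# Build PubMed search URLs, one per disease term, using NCBI E-utilities.
# Constructing the URL needs no network access; fetching it (commented out
# at the bottom) returns matching PubMed IDs.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(disease: str, retmax: int = 20) -> str:
    """Return an esearch URL for a disease term against the pubmed database."""
    params = {
        "db": "pubmed",      # search the PubMed literature index
        "term": disease,     # e.g. "Ebola", "type 2 diabetes"
        "retmax": retmax,    # number of PMIDs to return
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"

# One query per (sample) Global Burden of Disease condition
for disease in ["Ebola", "type 2 diabetes", "ischemic heart disease"]:
    print(pubmed_search_url(disease))

# To actually fetch the PMIDs:
# import json, urllib.request
# result = json.load(urllib.request.urlopen(pubmed_search_url("Ebola")))
# pmids = result["esearchresult"]["idlist"]
```

A benchmark like the one described would then feed the retrieved abstracts to each frontier model and score the answers per disease.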

Grant Belgard: So in this very rapidly changing environment, how do you decide when a tool or workflow is trustworthy? What do you look for?

Manuel Corpas: So the first thing is transparency. Can I actually look at the code, or are there black boxes? If there are black boxes, then I don’t trust it, because there’s no transparency. And unfortunately, most of the LLMs I use are black boxes. How can I trust something that is a black box? So transparency is one. Secondly, I want to see the faces of the people. I don’t trust a project that is just an anonymous community. I want to see the people behind it, who is developing it, who is making the decisions. Because if I have a problem, I don’t want your LLM-powered chatbot to give me one of its arbitrary, sycophantic answers. Third, I want to see: is this scientifically grounded? What are the thoroughness criteria? I want to see the track record of the people. I want to see their LinkedIn profiles.

Manuel Corpas: And I guess the irony of this is that suddenly the human side of a project becomes so much more important. I was having this conversation earlier with someone here, and suddenly trust becomes paramount for everything. It’s all about trust now, not about technical prowess. For me the non-negotiable asset, the core, is trust.

Grant Belgard: How do you think that will evolve over the next few years? Because on the technical side, yeah, I feel like the pace of change is massive for anyone paying attention.

Manuel Corpas: To be honest, I have a lot of difficulty even understanding the present, let alone predicting the future. I don’t know about you, but I think we are all drowning; it’s humanly impossible to keep up with everything that is happening. So I can’t predict the technology, but I can see what will help me be better prepared for the future, because it’s something I live on a daily basis. And to be honest, these are things I had not paid much attention to until now, but the principles that I know will help me are the following. One, don’t get overwhelmed. If you can just do 1% of catching up every day, you are doing fine. I’ve seen it, because in three months’ time that is not 1%. It compounds.

Manuel Corpas: And I have seen that just by listening every day to podcasts like this one, where you have people you trust, whoever you feel you can trust. And you keep your own environment, because right now it’s becoming less and less important where you are, as long as you have access to the right information channels and you keep connecting on a daily basis. The second thing is that I’m training this sort of gut feeling, my personal intuition. Whenever I get a result now from an agent, from an LLM, I don’t necessarily even read all of it; it’s more to do with the gut feeling. So I am now making decisions much more with my gut than my brain, because I feel the logical part is sorted now, and it’s less about the logic of intelligence and more about how it feels.

Manuel Corpas: And then the other aspect I have now accepted, which I think gives me a tremendous competitive advantage in terms of principles for how things will evolve, is that I feel unstoppable, in the sense that intelligence is not the limit anymore. The limit is my capacity to ask questions. For the first time ever, and this is already a cliché, as Peter Steinberger, the creator of OpenClaw, says, you have an infinitely patient machine that can explain things at your own level. There’s no excuse now not to want to learn. And another thing that is becoming a cliché as well: some people say, oh, this is going to atrophy your brain because it’s making all these decisions for you, and well, that’s a valid concern.

Manuel Corpas: But I can tell you that you can flip that around and harness it to take on problems you would never have thought you could tackle before, because now, if there’s something you don’t understand, you can always ask. So that’s my strategy. I cannot predict the future, I cannot really understand the present, but I have this inner compass, which is now more important than ever, and which I think keeps me, to be honest, ahead of the curve. At the end of the day, the adoption curve is here, and as long as I’m ahead of the adoption curve, I’m going to be fine.

Grant Belgard: And how do you think about maintaining scientific rigor in this world? It seems like it takes more time to consume and digest outputs than it does to create them.

Manuel Corpas: Yeah, so obviously we’re being swamped. My approach is very simple. Have you heard of the dragonfly method? Dragonflies are the most fearsome predators you could ever think of. If they were human-sized, we would all be dead; there wouldn’t be humans, right? They are so precise. Why? Because they have 10,000 lenses, in other words, 10,000 different perspectives from which they can calibrate their environment. The way to survive this lack of validation or verifiability of your work is that you can have your LLM judge it as a reviewer. Then you can add another perspective: another lens that is the funder, another lens that is the patient, another lens that is the general public.

Manuel Corpas: Those give you complementary perspectives that allow you, for the first time, to really check your work, if you trust your model. And for me the question is: which model do you trust? The only way to trust a model is by using it, by tinkering, by testing it, by seeing, oh, this one is better for emotional empathy, this one is better for generating figures, this one is better for integration with this particular tool. That’s the kind of knowledge you can only gain if you are invested in embracing all these different technologies. And you need to stop being a user. If you really want to ride the wave, you must become a builder. That mentality has to change, and that will give you the necessary confidence to understand what the shortcomings are and what the best way to prompt is.
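[Editor's note: the dragonfly method, re-reviewing the same piece of work through several persona lenses, can be sketched as prompt construction. The personas, template, and function name below are illustrative; each prompt would be sent to whichever model you have come to trust.]

```python
# Build one critique prompt per persona "lens" for the same piece of work.
# The persona descriptions are illustrative examples of the lenses mentioned
# in the conversation: reviewer, funder, patient, general public.

PERSONAS = {
    "reviewer": "a rigorous peer reviewer checking methods and statistics",
    "funder": "a grant funder judging impact and feasibility",
    "patient": "a patient asking what this means for their care",
    "public": "a member of the general public with no technical background",
}

def dragonfly_prompts(work: str) -> dict:
    """Return a dict mapping persona name -> critique prompt for the given text."""
    return {
        name: (
            f"You are {role}. Critique the following text from that "
            f"perspective, listing its strengths and weaknesses:\n\n{work}"
        )
        for name, role in PERSONAS.items()
    }

# Each prompt would then go to your trusted LLM; the complementary answers
# are the "lenses" that calibrate the work.
prompts = dragonfly_prompts("We analysed 1,000 genomes for CYP2D6 variants...")
for name in prompts:
    print(name)
```

The draft stays fixed while the lens changes, which is what makes the complementary critiques comparable.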

Manuel Corpas: Because, as I said, every tool is going to have its own quirks, and not even the providers of these tools understand them. They can only have some kind of general framework, and they have their guardrails, but there’s a huge space in between that remains unexplored. And that’s where the right strategies are, because all of these models are going to be generalists. But if you have very specific expertise, in my case bioinformatics, then I’m going to win in bioinformatics, for these very specific genomics tasks, whatever. I’m not going to win at the many other general things out there. And that’s where I see the opportunity, for myself anyway.

Grant Belgard: How can others get involved in ClawBio?

Manuel Corpas: So just go to ClawBio.ai, and it will have pointers to the repository where the code is, and then just tell your favorite LLM or tool to help you understand what the code is. So you have your own personal tutor, and that will be the way. The other thing is that we’re now developing channels. We’re doing a hackathon here in London next week, and we’ll develop ways for people to become a community, through email lists, through Discord. As I said, we have not even had time to set up all of these different options, but I think the best way is simply to go to the website, tell your favorite chatbot to walk you through it, and then start questioning. Break it. If you break it, I’ll buy you a gift. Stop worrying about messing up. Please, mess up. That’s the only way you’re going to learn.

Grant Belgard: So changing track here to talk a bit about you and your own career. Was there a moment when your work stopped feeling like a series of separate projects and started feeling more like a mission?

Manuel Corpas: It has always felt like a mission, ever since I became very clear that I wanted to be a scientist. The current projects are simply an expression of this relentless need to somehow express all of this energy and all of this feeling I have for the world. I think it comes down to my own temperament. I must say that I’m a little bit on the autism spectrum, so I get quite obsessed with things. Right now I’m obsessed with AI, I’ve been obsessed with AI for several years, and now I’m obsessed with OpenClaw and really harnessing the power of agentic AI. Because, as I said, I see a lot of potential, I see a lot of danger, and I see a lot of people who, in a well-meaning way, are trying to impose their values, with which I may not necessarily agree. And some people talk about superintelligence as the end goal.

Manuel Corpas: I don’t think that’s one that really excites me, because superintelligence, for me, sounds a little bit like you exclude some people. What if those people are disabled? Other people talk about superabundance as the end goal of this revolution. I don’t agree with that either; it sounds selfish and materialistic. For me, the purest purpose in pursuing this obsession is what I call super-enlightenment. If we take history into account, the Industrial Revolution is the closest thing I can think of to what’s happening today. You had people like Nikola Tesla, Immanuel Kant, or even slightly earlier, Isaac Newton. These were people who saw advancement as a way of growth, not in terms of wealth or power, but as wisdom, a better understanding of yourself and the world around you. And I know that there are real dangers.

Manuel Corpas: One is the fact that you could have a superintelligence ruling the world. I understand that there have to be people thinking about those problems; for me, frankly, it’s a bit of a distraction. Rather than focus on the negative, I prefer to focus on the things that are meaningful to me. It’s more an integration of the technology, even with the spirit, if I can say that. And it’s weird, because you have these new agents that are so [?]. It’s becoming somehow spiritual, even if we don’t mean it to. Not that I’m necessarily a religious guy, but I think this meaning aspect, this need for me to be authentic and value-oriented, which sounds obvious, is now more important than ever before.

Grant Belgard: Looking back, what risk are you glad you took?

Manuel Corpas: I don’t think I have risked enough. I wish I had. I’m a risk taker, but I don’t think it’s enough. I guess I have this sort of pathological state of mind where I’m never satisfied. I have a constant sense of dissatisfaction, and that drives me to want to improve, combined with my obsessive behavior around things I feel really passionate about. I am not taking more risks because of the people I love and care about. I would be taking more risks if I were on my own, but then I would probably be dead by now.

Grant Belgard: What skills matter most now that AI is changing how technical work gets done?

Manuel Corpas: Some people say curiosity, inquisitiveness, asking questions. I’m going to go one level beyond that and say it’s about your attitude, about your own grounding, and about having a very, very clear sense of compass in terms of what matters to you and why. Your internal compass: why should you be doing what you are doing? Finding the right motivation, the drive that gives you that inner fire that keeps you going, is for me the most important thing, and not just now. This is just a new incarnation of it, but it’s something that will never change, regardless of what technology we’re surrounded by.

Grant Belgard: What makes someone genuinely strong at interdisciplinary work?

Manuel Corpas: Really, not being afraid of showing your vulnerabilities, and being prepared to swallow your ego again and again with people who know more than you. And I think also an insatiable appetite for learning.

Grant Belgard: Speaking of learning, there are lots of ways to learn. What specific habits in that space have compounded the most for you over time?

Manuel Corpas: Studying, every day. And I don’t mean studying on the computer. I need my physical book, the smell of it. It may sound a little old school, but if I am on the computer, I get distracted a lot. Having that discrete physical medium I can touch gives a physical connection, a spiritual connection, with the work. One of my vices is that I keep buying books from Amazon which I never read, but it’s one of the things that gives me most pleasure, just buying books. I know I’m going to read maybe 50% of them, but I have them everywhere. I have one book in my car, so if I happen to be on the Underground and have nothing to read, I have a book with me; a book in my bag, a book here, a book there. So I always have a book around me, which brings me back to my origins.

Manuel Corpas: And the origins, at the end of the day, are scholarship. That scholarship, for me, is not given by the computer. It’s given by that quiet, calm place with no noise. In my case, the early morning, 5 or 6 AM, when everyone is sleeping and everything is quiet. I have my little lights, my pile of beloved books, and I just enjoy that moment of solace and connection with learning, new, old, ancient, which to be honest is, for me, the essence of our civilization. That’s why I don’t think AI is necessarily going to make us less intelligent or less able. Obviously, some skills change; like with a calculator, now if I want to do a complex sum, I use a calculator, okay. But that doesn’t mean I’m going to become less intelligent, because there are other things, as I said, like this gut feeling, which I’m constantly training.

Manuel Corpas: So you harness that technology and internalize it as a new Swiss Army knife artifact, which then becomes part of you.

Grant Belgard: And lastly, what advice would you give your younger self?

Manuel Corpas: Don’t doubt yourself so much.

Grant Belgard: So Manuel, this has been a great conversation.

Manuel Corpas: Yeah, absolutely.

Grant Belgard: Thanks.

Manuel Corpas: Thank you.