The Bioinformatics CRO Podcast

Episode 3 with Ben Logsdon

In this episode, Grant sits down with Ben Logsdon, director of computational biology at Cajal Neuroscience, to discuss new perspectives in Alzheimer’s Disease research, incentives in academia, teamwork, and societal resiliency.

(Recorded on Oct 1, 2020)

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Google Podcasts, and Pandora.

Transcript of Episode 3: Ben Logsdon

Grant: Welcome to The Bioinformatics CRO Podcast. I’m Grant Belgard, and I’m here with Ben Logsdon. Ben, would you like to introduce yourself?

Ben: Hi Grant. Thanks for having me on the show today. I’m a computational biologist. I’ve been in the field of computational biology professionally for about six-plus years now. And before that I did the whole, you know, couple of postdocs and graduate school in upstate New York.

Grant: Great. Thanks.  So tell us more about your path. What made Ben Ben? Start from the beginning.

Ben: Yeah, definitely. Absolutely. So in undergrad I was really into genetics and biochemistry, and I ended up getting a BS in biochemistry but also minored in mathematics.

Ben: So I’ve always had multidisciplinary interests, and I’ve definitely pursued both of those throughout my trajectory, both professionally and personally. I then went on to Cornell and got a PhD in computational biology, really focused on building new machine learning and high-dimensional statistical methodologies to analyze genome-wide association studies and high-dimensional gene expression data sets. And really the driving purpose behind all of it was just wanting to understand these complicated systems. You know, in physics there are these simple rules, and humans have spent hundreds of years building better instruments to figure out what those rules are and try to understand them.

Ben: But biology is just, like, it’s the frontier, man. We still don’t know the rules. I mean, we have ideas about pieces and parts of it, but I’ve always been fascinated by that. And it’s one of those places where a lot of the biology that’s being done right now, or has been done, has not really focused on the more quantitative side of things until, I would say, relatively recently. And there’s a lot of really good work you can do just at the bench doing Westerns and gels and all that good stuff. That’s been really powerful to help us understand and disentangle some of these systems, but I’ve always been of the opinion that to understand these things you need a bigger tool set.

Ben: So that was kind of the motivation to do more quantitative stuff at Cornell and get better chops on the stats and machine learning side. After that I went to the Fred Hutchinson Cancer Research Center and did a postdoc there with Charles Kooperberg, looking more at genetic epidemiology, doing method development for whole-exome sequencing and rare variant analysis. Then I decided I wanted to do something with potentially more translational impact and did a second postdoc at the University of Washington, focused on applying sparse model-building methods to gene expression data sets in cancer, to try to come up with alternative ways of identifying driver genes that weren’t just based on mutations, but instead tried to detect signatures of drivers within expression data.

Ben: But then I guess the real thing that I’m passionate about now, and that I’m really grateful for: I just left a job at Sage Bionetworks, where I spent six and a half years working in the neurodegenerative research space. That’s just been an amazing experience. And, you know, as a computational biologist, oftentimes you kind of are like a hired gun, right? Some principal investigator, or in CRO-land especially, some client brings you in and is like, “Hey, I’ve got some data, help me make sense of it.”

Ben: But I’m not just interested in doing data analysis. In the context of Alzheimer’s disease in particular, I really want to understand the biology as well, and really want to help marry all of these different quantitative techniques with the right data sets to inspire the right questions, which the folks doing the bench work can then go track down, develop new assays for, and do the right experiments on, so we can actually start figuring out these diseases.

Ben: It’s also been fascinating in the Alzheimer’s disease space how married the field has been to a very small set of hypotheses about what is driving this disease. And, you know, just looking at some of the analysis of the new omic data coming out of Alzheimer’s, it’s not that simple. Amyloid and tau, the signatures are there, and there are really interesting results and insights to be gleaned from that, but there’s so much other biology that’s going on, and it’s very complicated.

Grant: You’re looking under the streetlights, right?

Ben: Yeah. I mean, the streetlamp effect is real. And, you know, you can talk a lot about misalignment of incentives in academia and industry, and why that leads to a lack of diverse portfolios in terms of risk, as well as the technology needed to generate data to even articulate some of the new hypotheses. Right? For a long period of time it was just people looking at tissue slides under a microscope and saying, “Okay, well, we see these amyloid plaques and these neurofibrillary tangles, what’s going on there?” And then omics opens up a whole new frontier of possibilities in terms of the biology and the molecular causes of the disease. And you can’t necessarily see that under a microscope, right, unless you know what gene you want to look at.

Ben:  And really, I think a lot of it is like knowing what the players on the chess board really are and what the rules of engagement for those players are and how it relates to what we already know. 

Ben: The thing with Alzheimer’s disease that makes it very different from cancer, for example, is that with Alzheimer’s you can’t profile the tissue during the course of the disease, right? You can’t get antemortem tissue samples, so all you see is what’s happened at the post-mortem. And so it really is like a Sherlock Holmes mystery, in some sense: you know what happened after the fact, but then you’re trying to put the pieces together as to what sequence of events led to it.

Ben: I think that makes it a very different type of problem than cancer. And in some sense it’s a lot harder, because you’re having to do a lot more inference, and we don’t have good model systems. There are plenty of mouse models where you can just crank the amyloid to 11 and, yeah, things change, but that doesn’t mean that whatever drug cures Alzheimer’s in a 5XFAD mouse is going to work in phase three human trials.

Ben: So, yeah. I guess to wrap up the answer to your question about my arc, it’s really been one of being generally curious and expansive in my interests, wanting to understand biology and the quantitative mathematics/statistics side, but really gaining a passion for their application to neurodegenerative disease in the last six years.

Ben: And at Sage I’ve been working in these amazing National Institute on Aging-funded consortia: the Accelerating Medicines Partnership in Alzheimer’s Disease (AMP-AD), the MODEL-AD consortium, and most recently the TREAT-AD consortium. These are multimillion-dollar, multi-institutional, open-science consortia that are trying to pull back the curtain on other causes of the disease through new data generation and analysis of that data. So AMP-AD was focused on generating data to do systems biology analyses, like WGCNA or causal network analysis, on gene expression from post-mortem brain to prioritize new targets in disease; MODEL-AD is building 50 new mouse models of late-onset Alzheimer’s disease; and TREAT-AD is sort of an open drug discovery idea, where we actually had medicinal chemists, structural biologists, people with experience developing high-throughput screens and assays, and then married that to everything upstream.

Ben: Right. So it’s just been an amazing experience working with so many different types of people. I think that’s not something you would generally get to experience as much in academia. As a bioinformatics expert, you generally have the PI who has some biological question, and you’re asked to analyze some data. In this case, there were a lot of different perspectives and languages, different ways people talk about things.

Ben: And so it’s been great, a really amazing experience, and it definitely opened my eyes to how complicated all these processes are. From a philosophy-of-science side of things, all of this is open science, so everything was being put out in the open through the AD Knowledge Portal that’s hosted at Sage. And I think that’s also something the young guard is recognizing: how important that is as we go forward. The actual value of any individual data set–unless you’re talking about clinical trial data, obviously, but for preclinical/basic research–is actually pretty minimal on its own. It’s only when you can start combining them and layering things up that you can really realize their potential.

Ben: But a lot of people, in terms of incentives, are like, “I’m going to generate this data and then, you know, sit in my lab and have some postdoc crank on it for two years until they can hopefully find some gold and get a Nature, Science, or Cell paper,” right?

Grant: So following on the misalignments of incentives, what do you think are the strongest misalignments and what do you think might be some reasonable reforms that should be considered to mitigate them? 

Ben: I mean, I think a lot of it has to do with academic promotion, right? Basically, people who are looking to get tenure are being judged on two highly related criteria (really one criterion, actually), which is how much indirect funding they bring into their institution, which is a function of how many R-level grants they’re successfully applying for and getting. And that’s all predicated on how many publications they’re putting out, because publications are kind of the raw material to demonstrate leadership in a particular field or domain.

Ben: I do think, in terms of the misalignment of incentives, the problem with that is that it leads to a model where everyone is trying to be an expert in one narrow thing, and for some of these problems, the scale of the problem is not something you can tackle if you just have one hat.

Ben: And so it makes it much more difficult within the traditional R-type awards, where you have the academic with a lab that’s cranking away, cranking out postdocs and graduate students who are all working on that one tiny little bubble on the edge of human knowledge that they’re trying to expand. I’m less familiar with physics, in terms of actual experience with how it works in the world of particle physics, but in that case there are papers with 10,000 authors on them, and the instruments are just so big and expensive that in some sense they have to work together, lots of people with different expertise in a much more coordinated fashion, just because the scale of the problem is so big and complicated.

Ben: But in biomedical research, it’s still a little bit of the wild West for academic research labs. It’s kind of like having your own little company, where you’re trying to put in competitive bids to the federal government on research proposals and you’re trying to demonstrate that you can be out in front and push the boundary of human knowledge in a very specific way. But I think those incentives lead more towards putting out lots of papers and being able to secure a lot of indirect dollars to your parent institution and that doesn’t necessarily mean you’re going to be taking risks, right. You’re going to want to continue to keep your lab funded. 

Ben: I think one of the challenges is that for some of these areas of biology, where we don’t really understand what’s going on and we have a lot of the streetlamp effect, as a community we need to take more risks, and we need to spread that risk around a much broader pool of people working on these problems. We need a leaderboard of hypotheses, have people cranking away on all of them, and then, as a society, invest proportionally across them.

Ben: You can’t ask an early-stage academic investigator to be like, “Oh, you should go after this target that nobody knows anything about; there are 10 papers in PubMed on it.” They’re going to be like, “No, I’m going to go after the one where we have a lot of prior evidence and we can write a sweet R01,” right? Yeah.

Ben: So I think that’s one big misalignment of incentives: for people who want to get tenure, both in terms of the review process for grants and in terms of how they’re being assessed, there’s a general sort of necessary conservatism. Maybe that’s fine in academia, it can just be how it works, but then there need to be some other outfits that can contribute to our collective knowledge and take some of those risks and push the boundaries a little bit more.

Ben: And a lot of that has to do with how academic organizations organize themselves. They’ve decided that they have this concept of tenure and that’s the big carrot they have for all these early stage investigators. 

Ben: It’s interesting, cause I think once you get to someone who’s a later-stage investigator, who has already made their name and has less to lose, they’re actually more likely to take some of these risks and go after projects and ideas that are a little bit more on the frontier, a little bit more on the boundary, but...

Grant: Well, they certainly can afford to do so. They typically have larger labs, they may have HHMI funding or something like this, and the failures don’t really count against you. And the productivity per dollar, I don’t think, counts against you that much if you’re still publishing high-profile, interesting papers. What I’ve seen from a lot of labs is they’ll put postdocs and graduate students on fairly risky, high-risk/high-reward projects, which are great when they work out. That kind of stuff is pretty important to move science forward, but it doesn’t necessarily always serve the postdocs well, who may have been put on an unsuccessful project, given the rest of the system that’s currently in place.

Grant: So if Francis Collins is–who knows why–listening to this podcast, driving into Bethesda, what would be your message for him?

Ben: Oh, man. Putting me on the hot seat. Yeah. I mean, I think the way the labor market works in academia should be completely rethought. I think that postdocs are, on average, incredibly undercompensated given their level of training, and if you look at fields where there are good industry opportunities–I’d say more in this sort of machine learning area, or EE or CS–you see this brain drain from academia. And I think that’s a problem. For me personally, it’s super frustrating that on the biggest problems of our time, like curing Alzheimer’s disease or cancer or all these huge biomedical research problems, you have a huge brain drain of folks with quantitative skills. They’re all going out to Amazon or Facebook or Google or whatever, because the financial compensation is just not comparable, right? Why would you do a postdoc when you could get a six-figure salary?

Ben: I think that’s one thing I’d say. And then in general, with postdocs, you can end up having folks be taken advantage of, because the academic job market is so absurdly competitive, and people just get stuck in a permanent postdoc, where they’re just in a lab. It’s comfortable, but there aren’t a lot of good opportunities to progress professionally, and so people will stay in postdocs for seven to eight years.

Ben: So I think, you know, if I was talking to Francis, I’d say, “Hey, there needs to be a complete rethinking of the training model to address the problems that we have. The old model doesn’t work anymore. You can’t just have people come in as grad students, get their PhD, go do a postdoc with the one expert in the field, and then have their own ideas and get that first R01 or do a K award or whatever, and then go off and start their own lab.”

Ben: I just don’t think it’s going to work like that going forward. If we’re actually going to make progress on some of these problems, you need to be able to assemble teams of people with complementary expertise who can work together well as a team. And that’s just not something you’re trained for in the academic model, necessarily. There, you’re expected to be the one who has the insight, you know, the lone genius in the tower who’s going to figure it all out.

Ben: Really, thinking about how to restructure the training model comes down, at the end of the day, to the funders, because the PIs are the ones applying for grants, and those grants are being used to pay the postdocs’ and grad students’ salaries. Yeah, maybe that’s a little too radical of a take, but I do think it’s true.

Grant: Yeah. There definitely are some bad habits we sometimes have to train people out of when they come from academia and you’re assembling teams with complementary expertise, because I think there’s a lot more general teamwork in biotech. The incentives are set up in a very different way. Charlie Munger said, “You show me the incentives and I’ll tell you the outcome.”

Grant: So channeling Peter Thiel here, what’s something you believe is true, but where most people would disagree with you? 

Ben: I think we don’t talk or think enough about the long view in biomedical research. I’m not sure people would disagree with me on this necessarily; I think they just haven’t really thought about it. Have you ever read the Foundation novels by Isaac Asimov?

Grant: Yeah. 

Ben: So, just for people listening: in those novels you have this galactic empire that’s hitting the end of its tenure, basically, and about to descend into some 10,000-year dark age or something. And this guy Hari Seldon is like, “Well, that sounds terrible. Let’s do something about it.” He creates this organization called the Foundation. The long and short of it is that the Foundation’s purpose is to marry changes in policy and technology and all of the things that make a society work, come up with probabilistic models associated with those, and make subtle changes, pulling and pushing on all the levers so that humanity doesn’t go through another 10,000-year dark age.

Ben: Basically, from my perspective, we think a lot about the short game–going back to incentives in the world of private or public corporations. In the private sector there’s a lot of focus on shareholder value and maximizing profits, and those are fine. I think that having good incentives, having people be productive and produce goods and services that are valuable to the community, is great. For a lot of areas of human society, there are problems that are very amenable to that solution. In my mind, those market forces are really good at finding local maxima. But I think for the longer-view problems, you need a little bit more than that.

Ben: The only thing we have now–for biomedical research, to be specific–is the academic model, where you’re funding people to satisfy their academic curiosity about little pieces of this bigger puzzle of, say, neurodegeneration or evolution or biology or development, whatever. And I don’t think it’s as intentional as it could be. I think that there could be grand projects or grand plans. Not so much like the war on cancer–that always kind of felt like it was more of a PR stunt to raise lots of money and awareness–but bigger projects where you’re saying, here are the things we need to understand to actually be able to move the needle on this, and here’s how we’re going to fund it in a very intentional way over not three years but 20-30-plus years.

Ben: So you’re expecting failure and you’re building all of those things in, and as a society we just don’t talk and think like that. Half of society struggles to accept that climate change is real. So it’s definitely an uphill battle, but...

Grant: Well, the NIH funding is a roller coaster.

Ben: Yeah

Grant: It’s hard to make a 20-year plan when you have no idea what will be happening with the overall budget. I do think that is a pretty controversial take, right? Certainly projects like ENCODE and the Human Brain Project and things like that have gotten a lot of criticism from scientists saying the money would be better spent on R01s or, internationally, R01-like grants. But it’s interesting: taking that kind of long view and squaring it with our system of funding is a challenge.

Ben: Yeah, definitely. I think the biggest challenge really is the human side of things and figuring out how to design these systems, or articulate these plans, in a way that works given the vagaries of personal human interest. I’ve worked in multiple consortia and with lots of different scientists in my time, and it’s pretty amazing the variety of ways in which things can go wrong when you’re talking about collaborative exercises. I can’t remember who it was I was reading on Twitter or somewhere, but there was a scientist talking about how “I can’t trust anyone else’s data but my own, cause at least with my own data I know exactly how it was collected. I know it was done right.” But I think at some point we have to, cause we just can’t get far enough with individual investigators alone.

Ben: The amount of people who are suffering so badly because of some of these diseases, and the fact that we just won’t work together–that just seems like it shouldn’t be the reason why we don’t move the needle. So I think there are some aspects of the science of science that probably need to be brought in here. There was an interesting paper that came out–I think it was in Nature last year–talking about how small teams can be more disruptive, that they can coalesce around a new idea and move it forward very quickly. So they’re like the explorers who are going out and discovering some completely new, you know, asteroid or something.

Ben: But then it takes the whole community to vet that thing and move everyone forward. So in terms of how we work together as scientists, I think you need some hybrid model where you’ve got small teams that are taking big risks and then maybe finding some crazy new biology or whatever, but then you have to bring the whole community along.

Ben: The danger of some of the high-profile publications is there’s such an incentive for people to be the one who discovers that asteroid. There was a paper on somatic recombination in APP [amyloid precursor protein], where they thought that some of the recombined variants were more pathological and were getting reintroduced into the genome, and all this crazy stuff. That was a paper that came out in Nature a couple of years ago. And there was a paper that came out recently basically saying that it was probably just an artifact. That’s an example of where the community is doing its work, but it’s on such a slow, long timeframe.

Ben: I don’t think it’s a problem that we make mistakes as a scientific community. That’s kind of the point, right? You’re on the boundary of human knowledge. It’s an inherently risky enterprise. Your ideas are probably going to be wrong more often than they are right. But that doesn’t mean we shouldn’t have good mechanisms for vetting that, and also for encouraging that exploration in a productive way.

Grant: Absolutely. I mean, in my experience, it can be more difficult to get a rebuttal published. You can be in review for much longer and the standards in some cases can be even higher than for the original paper. And I think part of the reason for that is there’s not enough tolerance for people being wrong.

Grant: And I don’t mean things like fraud; that’s a totally different matter. But when people get a paper retracted or something, it can be seen as the kiss of death for the first author and a stain on the senior author and so on, even when it’s an honest mistake.

Grant: The consequences can be so severe that people will defend work that’s wrong long after they should, rather than just engaging their critics, recognizing, “Oh yeah, this is wrong,” retracting the paper with a relevant statement, and moving on.

Grant: And to a lesser extent, I think that happens very frequently. There are a lot of papers out there where the core conclusions are essentially wrong, and everyone in that subfield knows it. But if you aren’t in that field–you’re entering from an adjacent field or something–then unless you really talk with people, or have a postdoc spend a year or two trying to replicate the results, you don’t know. And we don’t currently have a good mechanism for communicating that, because again, in many cases people fight the retraction, so it doesn’t happen.

Ben: That’s partially due to the incentives, right? It’s like your stock options or something, man. Like once you have a couple of those Nature papers, you could just keep exercising your scientific credit options for a long, long period of time. 

Ben: I think it’s a human behavioral thing. Like there’s a network effect: the rich get richer, that sort of thing. You’ve established yourself as a leader in the field, so it’s going to be so much easier for you to get that R01 or whatever other federal funding opportunity. When people are wrong, they’re going to fight tooth and nail because it has a very direct effect on their ability to continue to professionally be a scientist in the current model.

Grant: And the other thing I’ve observed–I’d love to hear your thoughts on this–is sometimes the criticism is wrong and the results are solid, the methods are solid, but in many cases other bystanders rush to conclusions. They see a criticism or a rebuttal of a paper and, without really reading it and judging it for themselves and assessing it on the merits, they take a shortcut to “this is crap.” Sometimes that’s right. I think sometimes it’s not. Sometimes these rebuttals are–let’s see, podcast-appropriate language–incorrect.

Grant: And I think right now everything is very stilted. There is good conversation at conferences in person, but that’s not recorded and doesn’t get disseminated. There’s sometimes very polarized conversation on Twitter that doesn’t really get us towards the truth. How do you think we could set things up, taking advantage of the internet and everything, to get us closer to the truth in a way that is better recorded and more easily disseminated, both across that subfield and across the broader community?

Ben: Yeah, that’s a great question. I know that journals will often let the authors post their own rebuttal to the rebuttal. I was trying to think of a really good example of that. I think it was–oh, what’s his face?–David Reich at Harvard. If you read his rebuttal to the criticism, it was a masterclass in how you defend yourself. But at some level it almost feels like science is becoming some sort of legal enterprise, where you’re trying to make a case, and it becomes less and less about a holistic synthesis of all of the evidence and more about debating your opponent and winning points on them in some way.

Ben: I think, to answer your question: if there were ways to make it easier to share primary data, share all of the methods that are used, and have almost an audit-type process–where someone who doesn’t have any skin in the game, who’s as objective an outside observer as possible, can go in and do an assessment–that would be one way.

Ben: The technological side of that is you have to be able to share data and methods. But I think until we get to that point, you’re always going to have this back and forth, these grudges that come up between various research groups. I think that’s all a lot of noise. 

Ben: Like you said, Twitter. I really like science Twitter for seeing new science hot off the presses. That’s Twitter at its best. But for actual meaningful dialogue about these things, it’s just too easy for it to devolve into what you see everywhere else on the internet. And at that point you’re just like, “Okay, this is a waste of my time. I’m not getting a lot out of this.”

Grant: I mean, I’ve seen a lot of people essentially go quiet in the last few years, or just leave their accounts altogether. I don’t know what your impression is of that, but my sense is that maybe four or five years ago there was a bit more of the back and forth, and now it’s gotten so polarized. You do see certainly some combative figures that are always jumping in and fighting with each other, but a lot of people just kind of lurk. And that’s mostly what I do. I just look for interesting papers.

Ben: It’s just too easy to say the wrong thing. I was just reading this article in the New Yorker–I’m going to be totally typecast now to your listeners: this guy loves the New Yorker–but it was in the one recently where they’re talking about the COVID-19 crisis and people getting shamed on social media, and how we still don’t quite understand the effect of social media. Public shaming has been how society enforces certain behaviors, but we’ve now created a technology that puts it on steroids. What’s the effect of that? It’s just sort of fascinating.

Ben: And I think it can stifle open and frank conversation because people don’t want to login and get all this hurtful feedback from hundreds of thousands of people. That’s just a bummer man. 

Grant: I mean, it seems like the challenge is the monkey mind, and maybe tech can’t save us; maybe it kind of amplifies it. And the thing is, some of the same people who are just total jerks on Twitter are perfectly nice, seemingly reasonable people in person. I think there is a psychological element to being face to face with someone versus typing on your phone.

Ben: Yeah. The anonymization piece of it. I think you could talk about that in the context of peer review too, if we’re just hitting all the related topics. I think the anonymization, there are good reasons for it in peer review. There’s also probably some pretty good reasons against it.

Grant: Do you sign your reviews?

Ben: I haven’t been. I might start now, especially since I’m going to a startup. I might start signing them because I’ll be in industry. Because your incentives are less linked to the whole academic system, there’s less chance of things being held against you later.

Grant: Right? It’s crazy. Some of these grudges you see date from 25 years back. But it’s such a small world that it does have a substantial negative impact.

Ben: It’s a very small world. Like the number of people in Study Section is not that many. And it’s basically like the last person standing, who gets to the point where they get invited to the study section. 

Grant: Especially where a single person can essentially sink an application. I think that’s kind of a problem, maybe, in how the aggregate scores are computed.

Ben: Yeah. You know, I think most people are acting in good faith in Study Section, and in most reviews I’ve received as an author. There are obviously exceptions where people are just kind of nasty, and that’s just unnecessary. We should all, as a community, take a strong stand: don’t be nasty in your reviews. I don’t know why that’s a cultural thing in science, where people can be just straight-up mean. Just give your thoughts and give it to them straight, but there’s no reason to tear people down.

Grant: Well, some people are just mean. For some people the anonymity plays a role, but there are some scientists out there writing under their own name who are very openly mean, well beyond just making their scientific point. I mean, it’s kind of funny, because I’m pretty sure that, like most of us, they were probably bullied as kids. Somehow some people become the bully.

Ben: Yeah, they probably internalized it, and they probably aren’t even consciously aware of what they’re doing, which is the sad part. It’s just how they’re reacting to that situation, given their personal history. Right.

Grant: Yeah. Do you have anything else you’d like to add?

Ben: Yeah. I mean, a question I have for you, maybe I turn the Peter Thiel question back on you. I’m just curious what your take is on that. Like, what’s an opinion that you hold that other people would find controversial? 

Grant: That’s a good question, because it’s not actually something I’ve thought about, even though I asked you, right?

Grant: I think the chances of an existential calamity to modern society are higher than most people think. I mean, there’s a lot of fragility. We are extraordinarily dependent on the internet for so many things. And in many ways, if a lot of the backbone infrastructure of our civilization were suddenly severely disrupted–you know, if you’ve got a very strong solar storm or something like this–I think it would be difficult for us to reorganize quickly.

Grant: I mean, even this COVID stuff. This is like an IFR 0.5% respiratory virus. Throughout the 19th century we had infectious disease epidemics that were far more deadly on a regular basis; every several years we’d have something like this.

Grant: And of course we’ve tamed that through modern medicine, vaccination, good clean water, and things like this. But something that no one would have really batted an eye at in the 19th century has done a lot of damage around the world. Not enough to end civilization as we know it or anything like that, but I do think it reflects a greater level of fragility, because a lot of the ways we used to do things, we don’t have anymore. Even a lot of workplaces now are increasingly getting rid of landlines. So many things that were backup systems we’ve gotten rid of for the sake of efficiency, and we can no longer fall back on them.

Grant: And I don’t know specifically what that shock could be. It could be any number of relatively low-probability things, but if you take a lot of low-ish probability things and integrate over time, the chances of something happening are more than negligible.

Ben: I was just gonna say, I totally agree with that. So I don’t fall into the camp that doesn’t, but...

Grant: So maybe it’s not as controversial as I thought. 

Ben: I don’t know if I’m a typical person. But I think that a lot of that has to do with incentives. Like you said: efficiency. Markets are always looking for unrealized short-term efficiencies, but then there are these big-scale risks, these black swan events. You have this local risk model where your tails are very thin, and you’re like, “Oh yeah, no, that’s a 15-sigma event. That’s not going to happen until the heat death of the universe.” Well, no, the distributions for those sorts of events don’t actually look like that.

Ben: I think a lot of the incentives are linked to short-term thinking. Coming back to what I was saying earlier, if you think more long term, then you start to think, “Oh yeah, we’ve got to design our systems to be less fragile. We have to build in redundancy.” And there’s that concept of antifragility, where you actually have things that become stronger in the presence of perturbations. Those sorts of conversations, it’s rare to hear them. It’s not what we’re taught, and in this crazy political season, it’s not what you’re hearing in the debates.
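(A quick aside on the thin-tails point above, as a rough illustration rather than anything computed in the conversation: under a Gaussian model a 15-sigma event is effectively impossible, while under a heavy-tailed model it is merely rare. The sketch below assumes SciPy is available and uses a Student-t distribution with 3 degrees of freedom purely as an arbitrary stand-in for a fat-tailed process.)

```python
# Illustrative sketch: probability of exceeding "15 sigma" under a thin-tailed
# Gaussian model versus a heavy-tailed Student-t model (df=3 chosen arbitrarily).
from scipy import stats

sigma_level = 15

p_gaussian = stats.norm.sf(sigma_level)   # thin tails: ~4e-51, effectively "never"
p_heavy = stats.t.sf(sigma_level, df=3)   # heavy tails: ~3e-4, rare but real

print(f"P(>15 sigma), Gaussian:  {p_gaussian:.1e}")
print(f"P(>15 sigma), Student-t: {p_heavy:.1e}")
```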

Grant: Right. Well, and that’s another thing. Maybe my other answer to that–although I think this has become a lot less controversial in the last few years–is just that the modern democratic-neoliberal order is much more fragile than most people recognize, and we take it for granted in a lot of Western countries, English-speaking countries, and so on. We assume it will be like this indefinitely. But there are already cracks, right?

Ben: Yeah. Not just in the US either. It’s like everywhere.

Grant: Right. And the relative freedom and prosperity and things that we’ve enjoyed for a number of generations here, in the long view of history, is very short. Hopefully we can keep that going for as long as possible. But I think it’s far from guaranteed. You know, we could see things break apart in our lifetimes. I don’t know. Hopefully not.

Ben: Gosh, I hope not. And that has become a lot less controversial in the last few years, but yeah. I think climate change is the real X factor. I mean, even the Defense Department was putting out a report on how climate change is going to cause all this geopolitical instability.

Grant: I mean, I think climate change is a part of it, but I think it’s a lot bigger than climate change. Climate change certainly contributes to and accelerates a lot of the habitat loss and things that were already occurring, and have been for a very long time. Actually, in our last episode, when Chris was here, we talked a bit about ecological disaster.

Grant: I think something like that is more than just a possibility, depending on how you define it. If you talk about mass extinction events, that’s a certainty. It’s already happening: a lot of the insects and things like this, on which the charismatic megafauna ultimately depend, are already on their way out.

Grant: You know, it’s kind of a nervous-laughter kind of situation. But yeah, people are pretty adaptable. I don’t think it’s going to be the end of life on Earth, certainly, and I don’t think the end of humanity on Earth or anything like that, but it certainly will make things different. And there will probably be a lot of people wishing that their ancestors had made different decisions.

Ben: Yeah, I totally agree with that. It’s all kind of unnerving. I’d really like times to be a little less interesting for a bit. They just seem to be getting more interesting.

Grant: Yeah. Boring isn’t bad. Yeah. 

Grant: So what are you doing in between Sage and the startup? I know you’re not hiking the Continental Divide or something, but obviously your options are limited at this time.

Ben: I know. I’ve had a week off, and I’m in Bend, Oregon right now taking a little bit of a break. Though it wasn’t much of a break, cause I was working the last two and a half days on finalizing the editorial changes on my last paper from when I was at Sage. So I feel like I was kind of trolling myself. Like, “I’m going to have this week off to relax.” And then I’m like, “Oh no, I need to get these edits in, cause it’s going to be a pain to do that once the job starts.”

Grant: Oh yeah you’re going to be busy.

Ben: But that’s done now. Thankfully I got those in yesterday. So, I don’t know. I’m an aspiring ultrarunner, so I do a lot of running. I’ve got a big race coming up in February next year. Hopefully it’ll happen; obviously, who knows with COVID. It’s the Black Canyon 100K down in Arizona. So I’m just trying to put all the work in so that hopefully that’ll go well.

Grant: Well if the official race doesn’t happen, you can always go to Arizona and run by yourself. 

Ben: Go run for like 11 hours.

Grant: Make yourself a shirt, right?

Ben: That’s right: 11 hours just in the desert.

Grant: Thanks for joining us today, Ben. I appreciate it. 

Ben: Thank you, Grant. I really appreciate being here today. It was a lot of fun chatting with you.

Grant: Awesome.