The Bioinformatics CRO Podcast

Episode 16 with Lucas Steuber

We talk with Lucas Steuber, clinical product and marketing director at Cognixion, about using a brain-computer interface and augmented reality to help people with speech disabilities communicate.

On The Bioinformatics CRO Podcast, we sit down with scientists to discuss interesting topics across biomedical research and to explore what made them who they are today.

You can listen on Spotify, Apple Podcasts, Google Podcasts, Amazon, and Pandora.

Lucas is clinical product and marketing director at Cognixion. A renowned linguist and speech pathologist, he has helped develop a brain-computer interface with augmented reality to help people with speech disabilities communicate.

Transcript of Episode 16: Lucas Steuber

Grant: Welcome to The Bioinformatics CRO podcast. I’m Grant Belgard and joining me today is Lucas Steuber. Lucas, can you introduce yourself, please? 

Lucas: Sure. First of all, thanks so much for having me, Grant. I appreciate it. 

Grant: Thanks for coming on. 

Lucas: This is a cool topic for a podcast, right? Bioinformatics is one of those niches within a niche. It’s really cool, I think, to have opportunities to learn more about it and to share what we’re working on.

So I think like a lot of folks, I’ve had a bit of a twisty turny journey to where I’ve ended up professionally. I was originally a computer science and business major at the University of Oregon years ago. And had a lot of expectations from my family in terms of like, you’re going to be a businessman and this is what you’re going to do.

And I already sort of had snuck in the computer science element. To my great chagrin, the university forced me to take some classes outside of my major one year. So sort of like, okay, you need to go experience some other things. And I was kind of like, why am I doing this?

And so I signed up for some courses in linguistics, and I literally think that it was because I could sleep in that I chose those. It was like, oh, good, on Tuesdays and Thursdays I can get to campus at 10.

And just really immediately fell in love. I had always been really into computer science and math from a structural basis. And I really liked the sort of elegance of taking something really complex and watching it sort itself out into a simple solution or making noise from the chaos so to speak. 

Language I found has a lot of that same element. It really is math in a fundamental sense. There’s structure and syntax and all these different things.

So I ended up getting a bachelor’s and a master’s in applied linguistics, studying language structure when it’s disordered, right? So looking at schizophrenia specifically, and some other situations where language starts to break down. And then I decided I didn’t have enough student loan debt, so I went back and became a speech language pathologist.

So it’s funny when I’m on airplanes and people ask me what I do. The two job titles are, basically, speech language pathologist and brain-computer interface product manager. And I don’t want to explain either of those two things.

So I just end up wanting to make something up, like, I’m a real estate agent, and, I don’t know, go to sleep. So in any event, I worked clinically as a speech language pathologist with a specialty in low-incidence populations. These are what we would call orphan disorders or rare disorders: folks that really require the use of assistive technology, and specifically in our case, augmentative and alternative communication.

Most people will tend to think of Stephen Hawking as the cardinal example of somebody who used assistive technology to speak. So I’ve been in the industry, and now I’ve been designing and building those systems for about the last nine years. And I sort of head up clinical and marketing for a company called Cognixion, which is a brain-computer interface startup out of the Santa Barbara area, although we also have offices in Toronto.

There’s a lot of deep learning and analysis that goes into this not from a genetics standpoint, but certainly harnessing the signal out of the noise from a lot of biological data. And so that’s what drew me to this podcast and I’m excited to be here and talk more about it. 

Grant: Fantastic. Thanks for coming on. Yeah, we’re really happy to have you here. And just for our listeners, that’s Cognixion spelled with an X. We’ll have a transcript on the website where you can find a link out to their site. So tell us about your product.

Lucas: Yeah. So this has been sort of a stealth thing for the past couple of years. What it is is a brain-computer interface coupled with augmented reality displays. So we call it mixed reality, which is sort of becoming the new term. It’s not as complicated as it sounds from a use perspective.

So let me put that out ahead of time, just to frame the conversation a little bit. There are a ton of people in the world, and specifically in the United States, that can benefit from augmentative communication. But it’s largely an invisible population, right? I call this the silent minority. According to the best of my math, only something like 7% of the people who could benefit from augmentative communication are even aware that it exists.

Right? So there’s a lot of people with say cerebral palsy or that have had a stroke, or maybe who’ve been paralyzed in an accident. These are all people that we don’t really see out in our daily lives. A lot of them are in care facilities or in their homes. And very often without the ability to speak. 

The industry of augmentative communication has largely been focused on tablet-based touch solutions in the last couple of years, certainly for children with autism, where there isn’t any sort of motor disability and they’re still able to touch and interact with an iPad. The cardinal company in that space is Tobii, for which I was the director of products for some time. Tobii is well known in the sort of eye-gaze world, which is really also sort of a bioinformatics platform, just in a more structural reflectometry way, measuring gaze using IR light.

But that still doesn’t meet the needs of this other population: folks that maybe have too much movement to adequately track gaze. So the example would be like spastic CP, if you have a lot of chorea, which is the involuntary movement, and you’re all over the place. Or then the opposite, if you have late-stage ALS, or folks that are totally locked in after a stroke, things like that, who currently really don’t have a means to communicate.

So, what our product does is present a language and sort of home-control interface in the augmented reality environment, and then it measures, using electrodes over the occipital lobe at the rear of the head, evoked potentials, and specifically steady-state visually evoked potentials, or SSVEPs. To define evoked potential, it’s like being pinched, right? If somebody pinched your arm, that would cause a spike in the specific part of your brain that’s measuring pain or tactile experience in that part of your body.

What we’re doing is the same thing from a visual standpoint. So the different interface elements in the AR environment basically fluctuate or have different frequencies to them, say 5 Hertz, 8 Hertz, 12 Hertz. And depending on what somebody is fixated on, we can measure the evoked potential of the spikes in their occipital lobe to associate that then with intent and make a selection.

So it’s been really cool. There’s a lot of signal to noise questions around it, which I can go into. But that’s the rough path. 
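To make the frequency-tagging idea concrete, here is a minimal Python sketch of SSVEP-style target detection. It assumes a single occipital EEG channel sampled at 250 Hz and the 5, 8, and 12 Hz flicker rates mentioned above; the Welch power-spectrum scoring is a common textbook approach, not necessarily Cognixion’s actual pipeline.

```python
# Illustrative only: score which frequency-tagged element a user is attending to.
# Sampling rate, window length, and method are assumptions, not Cognixion's design.
import numpy as np
from scipy.signal import welch

FS = 250.0                        # assumed EEG sampling rate (Hz)
TARGET_FREQS = [5.0, 8.0, 12.0]   # flicker rates of the on-screen elements

def attended_target(eeg_window: np.ndarray) -> float:
    """Return the flicker frequency with the most power (fundamental + 2nd
    harmonic) in this window of occipital EEG."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=min(len(eeg_window), int(2 * FS)))

    def band_power(f0: float, half_width: float = 0.5) -> float:
        mask = np.abs(freqs - f0) <= half_width
        return float(psd[mask].sum())

    scores = {f: band_power(f) + band_power(2 * f) for f in TARGET_FREQS}
    return max(scores, key=scores.get)

# Synthetic check: a noisy 8 Hz response should usually be picked out.
t = np.arange(0, 4, 1 / FS)
fake_eeg = 0.5 * np.sin(2 * np.pi * 8.0 * t) + np.random.randn(t.size)
print(attended_target(fake_eeg))  # -> 8.0, most of the time given the noise
```

In practice, multi-channel methods such as canonical correlation analysis tend to be more robust than single-channel spectra, but the underlying idea of scoring the response at each tagged frequency is the same.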

Grant: It’s really interesting. So are the people essentially being shown different letters, for example, at different frequencies and you’re effectively measuring attention?

Lucas: Yep, absolutely. I liked that you said attention, like there is a distinction between gaze and fixation in this sense. For example, if somebody is unable to move their eyes, we can still detect what they’re attending to within their peripheral vision. So it is very much attention instead of explicitly looking at something in an eye gaze modality.

And yes, it has to do with the frequency of the elements. There is sort of a threshold at which the noise becomes too much, right? So one of the things we’re working on is trying to have as many interactive elements as possible within that, with the ultimate goal of getting to a full QWERTY keyboard where somebody can type anything out.

We’re working towards that. We’re not quite there yet right now. We have, for example, something called a Speakprose keyboard, which is based on the frequency of letters within a sort of subset of the alphabet and then comes back up. And we do a lot of things in terms of natural language processing and prediction and context awareness to try to predict the right stuff.
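As a rough illustration of the letter-frequency idea, the toy sketch below splits the alphabet into pages of eight frequency-tagged targets, most common letters first, so that frequent letters cost fewer selections to reach. The page size and grouping are assumptions for illustration only; the actual keyboard layout isn’t described in the episode.

```python
# Toy frequency-ordered keyboard paging; the grouping is illustrative only.
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"  # rough frequency ranking
N_ELEMENTS = 8  # e.g., eight frequency-tagged targets on screen at once

def build_pages(letters: str, n_elements: int) -> list[str]:
    """Split the alphabet into pages so common letters need fewer selections."""
    return [letters[i:i + n_elements] for i in range(0, len(letters), n_elements)]

pages = build_pages(ENGLISH_FREQ_ORDER, N_ELEMENTS)
for i, page in enumerate(pages):
    print(f"page {i}: {page}")
# page 0: etaoinsh   <- the most common letters are one selection away
# page 1: rdlcumwf   <- the next tier requires paging once, and so on
# page 2: gypbvkjx
# page 3: qz
```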

And then we also have a pre-built phrase inventory that’s relevant to certain contexts—medical needs, social needs, stuff like that—as kind of a starting point, but then really we anticipate everyone’s going to customize that and add their own phrases and stuff. Right. You can’t predict what everybody wants to say.

I think one of the reasons why I love this field is that it sort of combines a lot of different disciplines. I mean, we obviously have this sort of clinical healthcare component, but then we also have a really deep language and linguistic research component and the BCI and all these other interesting engineering questions. So there’s never a dull moment. 

Grant: So out of all these challenges, what do you think is the most difficult? 

Lucas: It’s tough to answer. I think that I’m going to go with two of them. Well, three of them, can I give you three? I’ll give you three equal challenges. So one of them is the design of the language system, right?

So one of the issues with these historically has been that what we do is provide people with an inventory of what we want them to say, rather than giving them the flexibility to say what they really want to say, which in Chomsky’s terms would be like the infinite generative ability of language.

The fact that you can say anything in English, you can make a sentence of infinite length with all these modifiers and different things. That’s hard to capture when you’re working with a UX where you maybe are constrained to something like 8 interactive elements, right? So we don’t want to make people have to dive 30 pages deep or paginate for an hour in order to find these pre-programmed things.

What we want to do is increase their rate of production as much as possible using artificial intelligence based on their context or prediction and this other stuff. And so it’s been a challenge because language prediction and language generally is not very well understood even among the neuro-typical populations.
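For a sense of what prediction from context can mean at its simplest, here is a toy sketch using a plain bigram model trained on a user’s own message history. It is only an illustration of the idea; a production system would use far richer context, personalization, and modern language models.

```python
# Toy next-word prediction from a user's own phrase history (illustrative only).
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count which words tend to follow which in the user's history."""
    following = defaultdict(Counter)
    for s in sentences:
        words = s.lower().split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def predict_next(following, prev_word, k=3):
    """Offer the k most likely next words as selectable targets."""
    return [w for w, _ in following[prev_word.lower()].most_common(k)]

history = ["i need some water", "i need to rest", "i need my medication"]
model = train_bigrams(history)
print(predict_next(model, "need"))  # e.g. ['some', 'to', 'my']
```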

But then you think about the specific needs of someone with ALS or CP or Rett syndrome. And it’s been a really interesting journey, working with the community and having actual users of augmentative communication vet these out for us and give us feedback.

We have about a hundred users that work with us on the development of the language model. And that’s one of our core principles: nothing for us without us, right? So keeping all those folks involved.

The second one I would say is the sort of signal to noise question. And the analogy I guess I would have is the hearing aid industry, which is really big by the way. There’s like billion dollar companies that are making hearing aids.

And from a consumer standpoint, they’re under a lot of pressure to increase battery life and decrease the size of the devices, but they also are under a lot of pressure to improve their far-field sound detection and differentiation between people sitting close to you. Like my grandfather had hearing aids 20 years ago.

And it just amplified everything. So if you’re in a crowded room, you just basically can’t hear anything because you hear everything all at once. And so there’s just a ton of algorithm work that has gone into trying to be like, who are we actually listening to here? Let’s tune out all the rest while also dealing with battery constraint and firmware and everything else.

And that’s really similar, right? Because we get all this electric data from the brain and we’re really looking for this one little needle in that haystack. And everything else can go away. And so meanwhile, we need to make something that’s wearable and portable and durable and has adequate battery life to work all day. That’s, I think, been the balance and the trade-off, and why we’ve been working on this for several years. If this were an invasive solution, like Elon Musk’s Neuralink, for example, a lot of those questions would become a lot easier. But we really didn’t want something invasive. We wanted something that you could just sort of put on and take off.
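The needle-in-a-haystack step has a simple signal-processing core: keep only narrow bands around the tagged frequencies and throw the rest of the EEG away. The sketch below shows one way to do that with a zero-phase band-pass filter; the sampling rate, bandwidth, and filter order are illustrative assumptions rather than details from Cognixion’s hardware.

```python
# Illustrative band-pass front end for isolating SSVEP responses.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed EEG sampling rate (Hz)

def narrowband(eeg: np.ndarray, f0: float, half_width: float = 1.0, order: int = 4) -> np.ndarray:
    """Zero-phase band-pass filter the signal to f0 +/- half_width Hz."""
    nyquist = FS / 2
    b, a = butter(order, [(f0 - half_width) / nyquist, (f0 + half_width) / nyquist], btype="band")
    return filtfilt(b, a, eeg)

def band_power(eeg: np.ndarray, f0: float) -> float:
    """Mean power in the narrow band around f0: a crude per-target relevance score."""
    return float(np.mean(narrowband(eeg, f0) ** 2))
```

Everything outside those bands, including muscle and movement artifacts, is what the system has to discard while still fitting the processing into a wearable power budget.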

And then the third challenge, which is yet to be sort of fully vetted out, is just explaining it. When you think of these folks that maybe have ALS, this might be your 70-year-old grandfather who isn’t particularly computer proficient, right?

And suddenly we’re asking him to be a cyborg with this brain-computer interface. Whereas if you’re a 20-year-old with CP or a 40-year-old with ALS, I think those folks are a lot more caught up with modern technology and are willing to experiment with the environment.

One of my jobs over the next few months is to be preparing all these materials and webinars and everything else to try to show everyone why this is valuable for them at any age. 

Grant: That’s really interesting. So how far do you think non-invasive methods can be pushed? 

Lucas: There is sort of a theoretical threshold that’s been established in research. Like, if we’re looking specifically at the keyboard-access use case, there was an article published recently arguing, basically, that 30 words per minute is about what we’re going to get in terms of being able to sort out the intentionality, specifically for this kind of noninvasive use case.

Grant: 30 words per minute is not bad. 

Lucas: Yeah, it’s alright. And that’s about the rate that most people text. It’s a little bit slower than typical conversational rapport, which is really what we sort of want to get to. And so we scaffold that a bit with things like the pre-built phrases, but our goal is to get to that point.
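As a back-of-the-envelope check on what that ceiling implies, assume roughly five characters per word and one selection per character, ignoring word prediction, which raises the effective rate. Then 30 words per minute works out to about 2.5 selections per second, or roughly 400 ms of EEG per decision:

```python
# Back-of-the-envelope arithmetic; the five-characters-per-word figure is an assumption.
WORDS_PER_MINUTE = 30
CHARS_PER_WORD = 5

selections_per_minute = WORDS_PER_MINUTE * CHARS_PER_WORD   # 150
selections_per_second = selections_per_minute / 60          # 2.5
ms_per_selection = 1000 / selections_per_second             # 400.0

print(f"{selections_per_second:.1f} selections/s, ~{ms_per_selection:.0f} ms of EEG per decision")
```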

I think that it’s probably premature to say that we have a hard and fast threshold, right? Like that’s just famous last words because somebody is going to come up with a better electrode or somebody is going to come up with a better algorithm. And if we didn’t have the constraints on battery and firmware, like if we could hook somebody up to a mainframe with these, we could easily exceed that. 

But we’re trying to build something that’s compact and portable. I am confident that things will evolve and I’m confident they’ll evolve in such a way that at some point the distinction between invasive and noninvasive for this sort of measurement is probably going to be moot. We’re probably still 10 years away from that. 

Grant: And what does the learning curve look like for people as they start to use this? When it’s their first day versus a month versus a year, how much difference do you see in speed?

Lucas: Yeah. So that’s a big subject, obviously the clinical trials and user testing and human factors that we’ve been doing now for quite some time and will continue to do. And we still have three more iterations of that testing before the first version of the product goes to launch, which will be in early summer. This is kind of a cop out answer, but I would have to say it depends. 

I think that folks that are used to, for example, even just the QWERTY keyboard interface, folks that maybe have had a cell phone, have been texting, or are computer literate to begin with, they’re going to move pretty quickly through being able to do this. Like, I can almost hit that 30 words per minute threshold right now. And we found that to be consistently true for folks with ALS that are familiar with computers, and folks with CP, MS, supranuclear palsy, all these different things.

Basically, if you are a literate adult who then goes through a transition into needing to use this, then the pick up period is pretty quick. There are two exceptions to that. One of them is the sort of older adult who maybe doesn’t have a lot of experience with typing in an electronic environment, so there’s a little bit extra there in terms of what we would call operational competence, knowing how the thing might work or how you want it to work and that it needs to be charged and all these basics. 

And then at the other end would be a population like Rett syndrome, which is a really good example. So Rett syndrome is a rare disease that almost exclusively affects girls. It tends to cause paralysis from the waist up, and it has an onset of between three and five years old. And so there’s a lot of girls out there with Rett syndrome that I’ve worked with personally that maybe haven’t had a communication system their entire life, or have been doing something like, somebody holds up a piece of plexiglass with words written on it and tries to guess what the person’s looking at, that sort of thing.

And when we catch them at 16 or 18 years old, there can be a little bit of a learning curve there too, because they haven’t even necessarily been exposed to written language in a sort of authorship capacity.

So, that’s one of the things we think about too, is how to scaffold people up to literacy. Maybe they never were exposed because of their disability. 

Grant: It’s interesting. Long-term do you see any nonmedical applications for this kind of technology? 

Lucas: Yeah, absolutely. And in fact, I think that’s where most of the market is focused. Especially when it comes to augmented reality. I mean, a lot of people are looking at industrial applications, but also entertainment stuff. I know one of the companies we work with just licensed their lens technology also to the Super Mario Land sort of thing in Disneyworld.

And so there’s all kinds of cool stuff going on with it. So within medicine there’s stuff that I haven’t even addressed. So there’s the concept of therapeutics for Alzheimer’s using this. There’s the concept of diagnostics. All of that is outside of our use case, but is really, really interesting.

And then beyond that, one of the things that we’ve done, and Amazon has been really, really gracious with us with this, is use a lot of their sort of backend for our computing, for privacy reasons and things like that, and they’ve enabled us to embed Alexa as a virtual assistant, which is cool because you don’t actually need to own an Alexa. The device actually is an Alexa hub itself.

And there’ve been times when, for example, the assistant in our kitchen couldn’t hear me because my wife was cooking a few weeks ago and there was too much other noise. And I was like, man, I wish I just had the wearable and I could just tell it with my brain to turn off the lights. So, I mean, I absolutely think that stuff is coming.

Grant: Where do you think the first applications will be? 

Lucas: It’s interesting to look at who’s most interested versus what gets actualized first.  For example, we have a ton of interest and a lot of inquiries from gaming companies, who are interested, not just as a control modality. But also, and this is something that I worked with at Tobii, a fair amount of training and measurement for like professional video game players like, what were you looking at when you did this play? And what were you thinking about? Those sorts of things.

However, that’s really not where we see the first investment happening. I think the first investment is much more industrial and medical in another sense. So remote surgical tools, things like that. If you think of it as almost a separate access modality: there might be somebody who is interacting with, let’s say, a remote surgical tool, and they have a mouse with a right click and a left click, and they’ve got a keyboard in front of them. But now they also have this totally other modality where they can interact in sort of a third way.

You can just zoom in on a piece of what they’re looking at or whatever, using the brain-computer interface component. I see that stuff coming and I also see it for a neuro-typical audience, not even being that expensive. Just the pure brain computer interface stuff is probably going to become pretty popular.

I mean, you look at stuff like a Muse now, these companies that are offering meditation awareness. That’s all fundamentally the same sort of technology just applied to a different scale. So it’s going to be really cool to watch. I think that we’re going to see a real revolution in terms of what these things can offer us over the next 5 to 10 years.

Grant: Going back to the medical applications, you talked about therapeutics, how could something like this have a therapeutic use? Can you discuss that? 

Lucas: Yeah, sure. So there’s been some evidence, for example, that in Alzheimer’s specifically, as well as in certain visual impairments, specific frequencies of the steady-state visual elements might actually, in one case, break up plaques, and in another case serve as a training and attention mechanism for people that maybe have cognitive impairments or attention impairments, in terms of attending to their entire field of view. We have a lot of folks that are interested in using our hardware for that.

And that’s one thing I’ll say: I’m a speech language pathologist, and my role has been to build this for our initial use case, which is augmentative communication. But we also do see this as a platform for other folks, if people want to develop on top of it. The reason why, frankly, we’re not jumping into diagnostics ourselves right off the bat is just the FDA and the requirements that would be involved with that.

We are a startup. We’re homing in on something that we know we can do really, really well.

Grant: Start with the tractable use case. 

Lucas: Yeah for sure.

Grant: And can you maybe discuss diagnostics a bit? 

Lucas: Sure. Yeah. I mean, there’s all kinds of different things that have been shown to be diagnostic from specifically an EEG use case.

So one of them that I’ll just throw out there, as a metric that we measure, is fatigue. If we have somebody who’s been using the device for a long period of time, we are able to tell that it’s been wearing them down. And one thing that we could do, for example, is simplify the user interface or bring the interactive elements closer together.
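One common proxy for fatigue in the EEG literature is a band-power ratio such as (theta + alpha) / beta rising over a session. The sketch below illustrates that kind of metric only; whether Cognixion uses this particular ratio isn’t stated in the episode.

```python
# Hypothetical fatigue proxy based on EEG band-power ratios (illustrative only).
import numpy as np
from scipy.signal import welch

FS = 250.0  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg: np.ndarray, lo: float, hi: float) -> float:
    freqs, psd = welch(eeg, fs=FS, nperseg=min(len(eeg), int(2 * FS)))
    return float(psd[(freqs >= lo) & (freqs < hi)].sum())

def fatigue_index(eeg_window: np.ndarray) -> float:
    """Higher values suggest a more fatigued user in this simple model."""
    p = {name: band_power(eeg_window, lo, hi) for name, (lo, hi) in BANDS.items()}
    return (p["theta"] + p["alpha"]) / p["beta"]

# A rising fatigue_index across a session could trigger UI simplification,
# e.g., fewer or larger interactive elements, as described above.
```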

So there’s not as much range of motion or attention involved. We’re looking at all those things. There’s ethics that come into play there too. It’s like, I don’t want it to be like, you’re tired now. You can’t use language. 

We need to sort of balance what we’re doing. Fatigue is sort of an in-the-moment thing that’s relatively easy. More complex would be actually tracking the rate of neurodegenerative disease progression, and prognosis in that regard; there is compelling evidence for ALS, Parkinson’s, and Alzheimer’s that all of that could be possible, but it’s still pretty exploratory right now.

There isn’t a clear blueprint of X plus Y equals Z at this point, but we’re certainly willing to throw this tool into the mix in terms of something else that people can experiment with. 

Grant: So, if you were to speculate way out, 50 years, what do you think will be the long-term implications of this kind of technology? Say you’re writing a sci-fi story involving a noninvasive brain-computer interface technology with AR.

Lucas: It’s funny you asked that question. So I’ve been listening to Ready Player Two, the audio book by Ernest Cline. He wrote Ready Player One and they made a movie out of it, directed by Steven Spielberg.

But I was listening to Ready Player Two, and literally the premise of the book is about the first noninvasive BCI with AR, to the point where I was walking to work, looking around, wondering if somebody was listening to me. Like, I thought I signed an NDA about all this. How do they know?

So other people are thinking about it, absolutely, in the science fiction context. And his take is a little bit dystopian: people sort of begin to enjoy this BCI mixed reality environment more than life itself, right? So they sort of dive into it wholesale. For better or for worse, I can assure you we’re not there yet.

There’s all kinds of other evoked potentials that are explored in the novel: taste, sound, haptics. All of that is there, but we’re really still looking at visuals. But I would say that there’s probably two futures. I see one of them in terms of the assistive technology use cases. I really feel like we’re moving towards a future where accessibility is going to become synonymous with personalization, which is very much something that I want.

There’s a quote that I love that is: for some people, technology makes things easier, but for others, technology makes things possible. I really want to just sort of raise the bar with assistive tech and establish this as the new standard moving forward. We should be looking at all of these things, not just BCI, but also just the context of life and use in whatever form we can get it.

And then for society generally, that is going to be really interesting to see. I see all these sorts of hypotheticals come up. Obviously there is an immense military application, right, which is a whole other conversation in and of itself, but there’s also application as we look towards autonomous vehicles.

If you would have asked me five years ago, I never would have guessed that this meditation use case would be as popular as it is, but people really like the self monitoring of their emotional state and getting that feedback. And so it would not surprise me at all if a BCI wearable in some form became a pretty common piece of technology for people to have in 10 years.

Grant: Interesting. And what do you think are the military use cases? 

Lucas: Yeah. Coming from the eye gaze tracking field, there is a lot of military interest in that realm as well. So I can speak to it pretty comprehensively from that perspective. We have not worked for the military directly, although we have had conversations with various space agencies, but I think it sort of comes back to the idea of having that other access modality.

Right. So if your hands are tied up flying that jet, with 600 other things that you have to do, and the jet is able to infer what you are attending to, whether that’s a threat or simply something within the cockpit interface, that adds another layer of really interesting data. Not only in terms of what it can do in the moment for predicting what needs to be done by the jet, but also in post-flight analysis.

If something went really right or something went really wrong, how can we harness the intentionality of the pilot to either repeat or prevent that from happening again? And I think a lot of that translates into the space use case as well. So I’ll be really interested to see what people come up with, frankly.

Grant: It seems a bit like the possibilities are limitless. 

Lucas: Yeah, right. Well, it’s this whole other sort of measurement that we haven’t had access to at a consumer scale before. One of the sayings in linguistics that I was sort of raised with in all my college training was that the only measurement of cognition is language and behavior, right?

We can’t look into someone’s head. We can only see what they say and what they do in order to measure what’s happening up there. And that’s kind of changing, right? I mean, I wouldn’t say that we’re reading words directly, but we can definitely get a really clear sense of at least what people are looking at and paying attention to at any given moment.

And I think that’s pretty telling in terms of their behavior. I would love to hear from anybody listening. I mean, it is so cool to have podcasts like this that focus on something really specific like bioinformatics because I’m sure there’s a lot that I have even missed in terms of talking about this and I’d love to hear ideas.

So again, it’s cognixion.com. And there’s a whole long story I won’t go into in terms of why that’s the spelling, but it means things, and my name’s Lucas Steuber. I’d love to hear from you at Lucas@cognixion.com with any sort of thoughts or interests people might have. I guess my final bit is that I think we’re the first that has blended the BCI and AR modality or at least the first to market.

It wouldn’t surprise me at all if other people were working on it, but we won’t be the last. I think that the disability use case is one that’s really compelling for me, and it’s one that’s going to really benefit a lot of people around the world, but it’s also sort of a harbinger of what’s to come. I think that everyone is going to be looking at and talking about devices like this a lot more in the next 20 years.

Grant: Thanks so much for joining us, Lucas.

Lucas: It’s been a lot of fun.