Turning Discoveries Into Companies with Sam Arbesman of Lux Capital

 
 

Sam Arbesman is joining us to talk about the intersection between science fiction and reality. He is a scientist in residence at Lux Capital, a hard tech VC firm focused on turning science into reality.

Sam is an author. He is a computer scientist. He has a PhD in computational biology. He is a wonderful, enthusiastic nerd. 

In this episode, we chat about strategies for deeply technical startups, how to accelerate the pace of innovation, how sci-fi is a leading indicator of where technology is going, and how facts change over time. He brings an amazing perspective on technologies that will become major companies in the coming decades.

Here’s what I learned from the episode:

  • Knowledge changes over time. Facts get overturned. There is constant flux and change in knowledge. (The topic of Sam's TED Talk.)

  • There is ongoing work to track the likelihood of a given paper or conclusion to “decay” or become outdated.

  • Science is a method, not a set of facts – It’s a rigorous means of querying the world.

  • There is often tension between IP value capture and the funding of science as a public good. There needs to be a balance.

  • University Tech Transfer offices are challenged by misalignment of incentives and short-term focus. Universities can play a longer game.

  • Boston Dynamics and DeepMind straddle the worlds of business and research. Both orgs “cornered the market” on high-tech skillsets, bundled and sold them to larger strategic buyers (without products!)

  • Instead of banning/blocking kids from using AI to make homework easier, we maybe ought to encourage kids to use AI to do more, to produce better work.

  • AI underperformed expectations for a long time, and has now way overperformed them in a short period of time.

  • Sam often uses an evolutionary mental model to analyze systems from a fitness-function perspective.

  • Ben Reinhardt is building Speculative Technologies, a nonprofit version of DARPA that focuses on roadmapping certain fields and creating programs around molecular manufacturing and other domains. He finds top-notch program managers to survey the landscape of a specific scientific domain and move it forward.

  • Sam recommends sci-fi books such as the "Three-Body Problem" trilogy, works by Neal Stephenson, and the "Culture" series by Iain M. Banks.

  • Sam also recommends the novel "Babel," which explores the art of translation and its implications.



This episode is sponsored by Bread.

Bread is not your typical dev shop. They're like a technical cofounder team that can add a whole product team to your company as a pod. So, if you are a non-technical founder, or you want to spin up a new project or a new "swim lane" in your company quickly with very talented people, talk to Bread.

If you reach out to them, please tell them Eric sent you. Or just reply to this email and I'll introduce you.

It's madebybread.com. Check that page out. It's very cool. It's very well designed and will give you a sense of what they can do.


Learn more about Sam Arbesman:

Additional episodes if you enjoyed:

Episode Transcript:

Eric Jorgenson: Hello again, and welcome. I'm Eric Jorgenson. I don't know much, but I have some very smart friends. And if you listen to this podcast, then no matter who, where, or when you are, you do too. Together, we explore how technology, investing, and entrepreneurship will create a brighter, more abundant future. This podcast is one of a few projects I work on. To read my book, blog, newsletter, or invest alongside us in early stage tech companies, please visit ejorgenson.com. Today, my guest is Sam Arbesman. He is the scientist in residence at Lux Capital, a hard tech VC firm focused on turning science fiction into reality. Sam is also an author. He's a computer scientist. He has a PhD in computational biology and, I say this with the deepest love in my heart, is a wonderful, fantastic, enthusiastic nerd. In this episode, we touch on a range of topics, a wide range. We talk about startup strategies for deeply technical startups like Boston Dynamics and DeepMind which was acquired by Google, how to accelerate the pace of innovation across a wide range of industries and technologies, how science fiction is a leading indicator of where technology is going, and how scientific facts actually change over time. He has got an amazing perspective on technologies that will become massive companies of the coming decades. And I always get so much energy from talking to Sam and so many new ideas. And I hope you get a fraction of that today too. Our conversation starts shortly. Until then, here is this episode's sponsor. And if you're pulling out your phone to skip the sponsor message, it's a great opportunity to leave a review in your podcast player, which is another amazing way to help the show. Thank you so much. 

This show, this episode is sponsored by Bread. They are a newish sponsor. They are founded by very good friends of mine. You can think of them as your technical co-founder to launch a company. They will create for you a pod of engineers and designers that is an entirely self-sufficient mini product team. And they will help you design roadmaps, tech stacks, build the first products. They will then help you recruit and onboard technical team members and go from zero to a product with a roadmap and a team. This is not your typical dev shop. They are extremely founder oriented. They've been building companies together for a long time. They've been founders themselves. And they really love working closely with users and embedding with a team, moving really fast, and building a solid foundation for a successful long-term company. A lot of agencies out there, I hate to say it, tend to bleed founders dry and have misaligned incentives and a short-term perspective. That is not these folks. Bread is a great team. They're wonderful people. And I know them personally. They do incredible, incredible work. I have a good friend, a repeat founder with a very successful exit, who just signed a deal with them to build the first version of their product. And so if you have a startup or a company that needs a very talented technical team, please check out madebybread.com. Just take a look at their site, and it'll give you a sense of what they do and the combination of engineering and design that really makes them special. If you reach out to them, please tell them Eric sent you. If you have any questions about them, feel free to email me or DM me, and I'll happily answer them or personally introduce you to the leaders at Bread. Now, thank you so much for listening. Please enjoy this conversation with both ears and everything in between, arriving in three, two, one. 

It seems like every single time we get together, I leave with 100 new ideas. And I always learn something from you. And I'm very excited to be able to share that with thousands of new people today. So I’m excited to have you on the podcast after all the time we spend in person.

Sam Arbesman: Likewise. This is going to be great.

Eric Jorgenson: A question I love to ask but I don't think I've ever asked you before, who are your heroes, fictional, real, whoever?

Sam Arbesman: Who are my heroes? Oh, that's a good question. Let's see. I'm not sure I have any heroes per se. I can talk about maybe my influences. But I would say, so let's see, I guess certainly one person that had a very big influence on me when I was younger was my grandfather. So he was a dentist, though for most of my childhood he was actually retired. He also was an artist, but one of the other major things is he was very, very interested in science, like science and technology and technological progress, as well as science fiction. And he read huge amounts of science fiction, would give me giant bags of old Analog science fiction magazines for me to take to camp with me. He'd basically read science fiction since the modern dawn of the genre. He read Dune when it was being serialized in a magazine before it even became a book. And so he was always introducing me to all these different things. I actually found out years later that he had been reading Popular Science, I think, for like 70 years, like something insane. And then actually I happen to know one of the editors there, and so I was able to get him in as one of their longest readers, which was awesome. But certainly, him exposing me to science fiction and the true broad scope of what is possible scientifically, technologically, how we think about how all these different changes actually affect society as a whole. Because the truth is, science fiction has a lot of cool things. But it's not just, here's a whole bunch of cool gadgets and technology. It's more, okay, if these things are real, how does it affect society? What are the legal, social, ethical, regulatory implications of these things? It's really understanding, in a much more holistic way, how all these things relate and ripple outwards. And yeah, he was, I would say, one of my major influences that got me interested in thinking about all these different things of science and science fiction. Certainly in kind of the fictional realm, I would say one interesting- I would not characterize this as a hero, but certainly one interesting early influence going into science fiction is the psychohistorian Hari Seldon. So there's the Foundation trilogy by Isaac Asimov. And the premise of it is that the Galactic Empire is going to be collapsing, and there's this whole thing of, like, how do we stop it. But underneath it, there is this field, this fictional field of psychohistory, which is: can we look at the statistical properties of societies as a whole and say, okay, we understand each individual makes decisions that we can't predict. But when you get enough people together, then suddenly, maybe there are these regularities, these kind of mathematical, quantitative regularities. And I remember when I first read that in the book, that was really, really cool. And I thought, okay, how can I do this for real? It turns out that I'm not alone in this. If you talk to a lot of people in the quantitative social sciences, a lot of them, their early influence was reading the Foundation trilogy and thinking, okay, how can I make this reality? I think Paul Krugman, the economist, even wrote an introduction to a new edition of the Foundation trilogy, because he said this was one of his major influences. 
And so for me, thinking about, okay, what are the rules and regularities behind massive complex systems, especially in the social realm, was something that I thought a lot about, and I ended up doing a lot of research and kind of academic work related to that kind of thing. And so this idea of, okay, there are obviously differences between biological systems, technological systems, and social systems. But underneath it, they're all just big, complex systems of huge numbers of interacting parts, and how do we connect that? And this whole field of network science is the kind of way of quantifying how these things work. And so certainly, that was kind of a North Star for me, thinking, okay, Hari Seldon did this in this kind of fake science fictional world; can this be done for real? And the answer is sort of, and that's actually great. And it's kind of this continuously growing field. I would say those are at least two influences, one very real and one fictional.

Eric Jorgenson: Those are awesome. I see both of those in your work, in your energy. I think you are playing the role that your grandfather played for you for hundreds, if not thousands, of other people with your excitement about sci-fi and the recommendations that you bring and everything. And I heard something in your Hari Seldon piece that rhymes with the TED Talk that you gave a while ago, probably 10 years ago now, on the half-life of facts, which I thought was super interesting. You brought up a very similar idea, which is that it's very difficult to predict individual things. But when you look at a large mass as a system, patterns start to emerge that are actually really predictive of the outcome. Do you want to sort of take us through that?

Sam Arbesman: Yeah. So that had an effect. I mean, I wrote a whole book about this idea of how knowledge changes over time, with kind of this analogy of, okay, it's very hard to predict which specific discovery is going to occur or which fact is going to be overturned. Understanding the landscape of knowledge is a very, very difficult thing; every fact and detail of scientific discovery, whatever it is, is idiosyncratic in its own way. At the same time, though, if you abstract away the details, in the same way you do this with other kinds of social systems and things like that, where you can kind of understand, I don't know, the rise and fall of cities or whatever it is, you can do the same kind of thing with knowledge, like how what we know grows and changes over time. And so the analogy is to radioactive materials. I can't predict when a specific atom of uranium is going to decay. It could decay in the next fraction of a second; it might take thousands of years. But when I get a whole bunch of atoms of uranium together, suddenly I have a whole chunk of uranium, and it goes from being entirely unknowable to actually very, very predictable. And you can actually trace out this very nice clear curve, this half-life of decay over time. And it turns out you can do the same kind of thing with trying to understand how knowledge changes. You can look at how long it takes for the number of papers in a field to double, and you can actually do some of this more mathematically precise work in terms of the half-life of certain fields. And so there was a paper out probably several decades ago, I don't actually remember when it was published, but it was looking at certain fields in medicine, and it was looking at how long it takes for papers to become obsolete or overturned by newer, more technical advances. And basically the way they did this, they gave, I think, the abstracts or the findings to experts and asked, okay, which of these are still true and which ones have been overturned. And from that, they were able to trace out this very nice curve. And so the mathematical shape was probably a little bit different, but there's kind of this very nice metaphorical sense that there is this half-life to knowledge and how it gets overturned. It turns out there's a lot of these different regularities. And we often don't think about knowledge change, or if we do, we kind of think about it as, oh yeah, there's stuff in my textbooks from when I was a kid that is no longer true. And then we discover it's false when our kids come home and say, guess what, dinosaurs look completely different. They're like fearsome chickens instead of weird reptilian monsters. But the truth is, beyond that, there are all these regularities to how knowledge grows, how it gets overturned, and there's this relationship between the number of scientists involved and the scale of the population overall, how jargon barriers are overcome. There's all these different mathematical, or at least quantitative and structured, ways of thinking about how knowledge grows and changes. Yeah, and that is certainly one area of quantitative social science, quantitative social understanding; in this case it's actually called scientometrics, the science of science, or what people call meta science or meta research. 
But yeah, it's a very big burgeoning field, which is fantastic to think about.
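
To make the half-life analogy concrete, here is a minimal illustrative sketch, in Python, of how a field's "half-life of knowledge" could be estimated from the kind of expert survey Sam describes (which older findings still hold up). The survey numbers and the roughly 30-year scale below are invented purely for illustration; they are not from Sam's book or any real study.

```python
# Sketch: fit an exponential "knowledge decay" curve to hypothetical survey data
# and recover the field's half-life. All data below are made up for illustration.

import numpy as np
from scipy.optimize import curve_fit

def surviving_fraction(t, half_life):
    """Fraction of findings still considered valid t years after publication."""
    return 0.5 ** (t / half_life)

# Hypothetical expert-survey results: years since publication vs. share of
# conclusions the experts still judge to be correct.
years = np.array([0, 5, 10, 20, 30, 40, 50])
still_valid = np.array([1.00, 0.93, 0.80, 0.63, 0.52, 0.41, 0.33])

(half_life,), _ = curve_fit(surviving_fraction, years, still_valid, p0=[25.0])
print(f"Estimated half-life of this hypothetical field: {half_life:.1f} years")
```

The same fit would apply to other decay-like measures mentioned in the conversation, such as the rate at which older papers stop being cited approvingly; only the input data changes.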

Eric Jorgenson: It makes me wish that we had like ongoing tracking of sort of the verifiability or the likelihood of how close a given paper is to maybe decaying or a given conclusion is to decaying or even like what are all the laws that were passed that were based on this fact that now may have changed or may have been overturned or may have been refined? Like, there's a lot that's based on science that maybe doesn't reflect the changes that science is going through.

Sam Arbesman: Yeah, and I think, first, one thing to recognize is science is a very human endeavor. So all the idiosyncrasies and biases and imperfections of any other endeavor, science has these things too, because it's being done by scientists, being done by humans. Which is great in many ways, because we can get very excited and want to work at the frontier. But as a downside, it means we're human: people might work on research that is out of date, or they'll cite papers that have maybe been refuted without necessarily realizing it. And the nice thing is, as scientific research has come online, people have begun to take these vast corpora of scientific literature and figure out if they can do these kinds of things. And so there's work that has been done to actually say, okay, let's use certain AI techniques to see, when someone cites another paper, is it just citing it because this is interesting, is exciting, because we're saying we actually agree with what you're doing, like we maybe have some sort of confirmatory results, or is it actually refuting and saying, I'm citing this paper because you're full of crap, this is no longer true. And so there have been ways of actually measuring this, and people are trying to do these kinds of things. In the same way, though, people have also been thinking about, okay, how can we use some of these techniques to stitch together bits of knowledge that should have been connected a long time ago, but, because of the vast scope of knowledge, have never been connected, and no one actually knows. So there's this great paper from the mid 1980s where this information scientist Don Swanson created this fun thought experiment. He said, imagine somewhere in the vast scientific literature there's a paper that says A implies B, and somewhere else there's another paper that says B implies C. But because the literature is so big, no one's actually read both these papers. And so it might be true, if you combine them, that A would imply C, but because no one has actually read these two papers together, no one knows it. He called this undiscovered public knowledge. It's knowledge that is known, in the sense that it's out there, but no one's actually discovered it because no one's actually connected it. And the nice thing is that, even though it was the mid 80s, he said, I'm not going to keep this as a thought experiment. I'm actually going to use the cutting edge technology of the day, which was, I think, keyword searches in medical databases, and he actually ended up making some really interesting discoveries. I think he found some sort of relationship between some circulatory disorder and consuming fish oil to actually help alleviate that disorder, and then ended up publishing it in a medical journal, even though he was not a physician, which is pretty cool. And of course, since then, people have begun trying to do this with a much larger, much more automated, high-scale kind of approach.
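
As a toy illustration of the Swanson-style "undiscovered public knowledge" idea described above: if one paper asserts that A affects B and a separate paper asserts that B affects C, the implied A-to-C link may exist in the literature without ever having been stated. The paper IDs and claims below are hypothetical placeholders (loosely echoing the fish oil example), not real citations.

```python
# Sketch: surface "hidden" A -> C links implied by two papers that no one has
# read together. All claims below are invented placeholders.

from collections import defaultdict

# Each tuple is (paper_id, cause, effect).
claims = [
    ("paper_1", "fish oil", "reduced blood viscosity"),
    ("paper_2", "reduced blood viscosity", "fewer circulatory symptoms"),
    ("paper_3", "dietary magnesium", "lower migraine frequency"),
]

implies = defaultdict(set)   # cause -> effects asserted somewhere in the literature
sources = defaultdict(set)   # (cause, effect) -> papers asserting it
for paper, a, b in claims:
    implies[a].add(b)
    sources[(a, b)].add(paper)

# An A -> C link is a candidate discovery if it is implied transitively
# but never asserted directly by any single paper.
for a in list(implies):
    for b in list(implies[a]):
        for c in implies.get(b, set()):
            if c not in implies[a]:
                via = sorted(sources[(a, b)] | sources[(b, c)])
                print(f"Candidate hidden link: {a} -> {c} (via '{b}', papers {via})")
```

Modern versions of this, as Sam notes, swap the hand-built claim list for relations extracted automatically from millions of papers.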

Eric Jorgenson: I imagine AI would be another huge unlock.

Sam Arbesman: Oh, totally. It's like, oh, we now have all this; let's begin to stitch together all these different domains. And the interesting thing is, and this is a world that I play in, kind of this weird interdisciplinary world, I'm often connecting an idea from one domain to another and almost doing import export of ideas and people and things like that. But the cool thing related to this is that there still is a lot of this to be done. I mean, obviously, I think it's going to help a lot with this kind of thing. And you can see this: just because everything is on the internet doesn't necessarily mean that now everyone has it at their fingertips. It's like there's still these jargon barriers that are incredibly difficult to overcome. And there was- I'm struggling to remember the exact details, but I remember there was some paper that came out several decades ago where the author looked at a number of different mathematical models, and he looked to see how many times they had been reinvented in different domains because they were just called different things. And so there was one that I think had been reinvented like 8 or 10 times because people just didn't realize these were the same thing. And actually, in the world of complexity science and network science, I remember seeing some examples of this, because it's kind of this inherently interdisciplinary field. You could be on a mailing list, and someone would say, oh, how do I measure this kind of metric of connection between different people, or some social network metric, and someone would say, oh, this has actually been known in sociology for like 30 years or whatever. And it was just this constant thing of people not knowing even what terms to search for and someone kind of helping out. And yeah, so there's a lot of really interesting possibilities, like now that all this stuff is out there, how can we make that much more use of all the raw material that we already have at our fingertips?

Eric Jorgenson: This might not be a fair question given what you said about the difficulty of predicting individual sort of discoveries, but does that apply on the other side, too? Like, do you have a hunch on which sort of commonly accepted scientific facts are about to expire or get overturned or get refined, let's say?

Sam Arbesman: I mean, so there's the Lindy Effect, where things that have been around for longer are kind of more likely to stay. So I think that's not a bad rule of thumb. With that being said, there are many counterexamples. I don't know if they conflict, but it's recognizing, again, there is a certain amount of permanence, but there also is a certain amount of flux. Like my grandfather, who I mentioned before: when he was in dental school, he actually learned the wrong number of human chromosomes. There was, I think, a 20 year period where we had been able to count the number of chromosomes within a cell, but we didn't have a really good visualization technique. It wasn't until, I think, the mid 50s that they had a better method. And so there were two decades where people thought it was 48 chromosomes instead of 46, which is kind of wild, because that seems like a pretty basic thing. In truth, though, it's less about, okay, this is something we thought was true and now it's totally not true. It's more this process of refinement. It's this kind of thing where we are constantly making this asymptotic approach to truth. And actually, in that talk on the half-life of facts, I give this quote from Isaac Asimov, going back to Isaac Asimov, I guess he's one of a number of touchstones here. I think someone had written to him and said, we used to think the Earth was flat, and now we don't, and we used to think it was a perfect sphere, and now we know it's this oblate spheroid. How can we know anything is true? And Asimov's response was: if you think that thinking the Earth is a perfect sphere is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together. We're kind of constantly approaching it- he showed the amount of error in a flat Earth versus a perfect sphere versus an oblate spheroid, and he kind of shows that we are constantly approaching the truth. In the same way, Newtonian mechanics was succeeded by the Einsteinian understanding of relativity, but it's not that the ideas of Newton are no longer accurate. They're just kind of a special case of a more general thing. And when you're building a bridge, you probably don't need to worry about things moving close to the speed of light, so the Einsteinian corrections and modifications are not necessarily that vital to building a bridge or whatever it is. And so I feel like a lot of things with science are this kind of process of refinement. The reason why people don't perceive it that way is because oftentimes, when we look at not the scientific literature but scientific journalism, they are reporting on things that have just been discovered or just recently become known, which means these are scientific ideas that are being worked on at the frontier of knowledge. And the frontier of knowledge is where we know the least. And so that's where there's going to be all this flux and change. 
And so I think people- there is this kind of divide between the perception that there is, for the most part, this kind of body of knowledge that is being constantly refined, and then at the border of it, there's this constant mass and flux, and like that's where the scientists want to learn and operate because it's awesome, and it's where we don't know as much and where we can make advances. But we also have to be a lot more humble about recognizing, okay, that's where we don't know as much, and so therefore, there's going to be things that get overturned and that's fine. That's totally fine. Like that's amazing. Like, that means there's still a lot left to learn.

Eric Jorgenson: Yeah, it's hard to remember that the whole point is exploring the uncertainty not necessarily, like reveling in it, living at the edge of it, being willing to step into it and step back from it.

Sam Arbesman: Totally. Yeah. People think of science as this body of knowledge, and to a certain degree it is. But it's really less about the body of knowledge and more about science as a method. It is a rigorous means of querying the world and constantly learning. And this was brought home to me by a professor of mine from grad school. I remember I went back, I was giving a talk at Cornell where I did my PhD, and I was catching up with him. And he told me this great story where he had gone in to teach a course on a Tuesday, taught some topic. And then the next day, he actually read a paper that invalidated everything he had taught. And so he came back on Thursday to the next class, and he's like, remember what I taught you? It is wrong. And if that bothers you, you need to get out of science. And it was like this rejoicing in the fact that these things can be overturned. And, of course, it's a lot easier to say when it's not your own ideas being overturned; oftentimes, going back to the idea of the human nature of science, people fight tooth and nail for their own ideas. At the same time, that's important, it keeps everyone honest, versus being like, oh, I'm going to overturn everything. But at the same time, there's something wonderful about a field, about a body of endeavor, that delights in overturning things. And this is like the motto of the Royal Society, kind of like one of the early scientific societies- I'm not doing it justice. It's the Latin for basically, don't take anyone's word for it. I'm pretty sure that's what it is.

Eric Jorgenson: It's like the academic world, with its scientific rigor, brings the same sort of rigor that the market does, in that you're sort of constantly checking the truth of your beliefs against either the physical reality or the economic, sort of market reality, which is where we both spend a lot of time: that intersection between new discoveries, new technologies, new knowledge, and getting that actually deployed to the most people, getting new discoveries distributed in some form of product. And I think, I mean, you've had a front row seat to this at Lux for a long time, which we'll get into. But I want to start by digging into something that you sort of taught me, but we didn't get a chance to dig into, which is the business models and the variety of where this happens and how it happens. So I didn't realize that. But I think the two examples you gave me, Boston Dynamics and DeepMind, which were acquired by Google, were not so much businesses with products as a collection of academics pushing on a thing that we knew could have commercial potential but never really even approached being a business with a product. Can you like- are there other examples? Can you take us through that? Like, where else has that happened?

Sam Arbesman: Yeah, it's interesting. And I mean, I wouldn't quite characterize it as, oh, just a bunch of people without a product. I think they had a very clear vision, or at least they had a vision, they were building lots of things. But I think it was one of these things where there's a recognition, and I don't know as much about Boston Dynamics, but I've seen this maybe in some other companies as well, that you kind of have this sense that there is a certain technology or bundle of scientific ideas that are going to be valuable down the line. Now, it might be a while before those kinds of things are actually commercializable. But there's a sense that, okay, if we can almost corner the market on the talent in this space, and maybe it's reinforcement learning in the case of DeepMind or whatever it is, then that is going to be just such a valuable thing in and of itself that eventually maybe it would be good for some sort of acquisition target. Now, of course, in the venture world, making something an appealing acquisition target is not always the goal; you want to have some massive company that will become this category of its own that will go public and become this massive thing. But at the same time, though, it is a very interesting idea of, this is a way of building something that can almost straddle the worlds of business and research and science, because it's kind of understood that what these people are doing is very valuable, and it's very hard to find the talent in those spaces outside of this specific company. But this was, I mean, we can talk about this more, but I think there has been, over the past few years, this broader sense of people trying to expand the types of structures that people are thinking about in terms of, like, what is a research organization? What is a company? What is a startup? And from my perspective, it's actually kind of exciting that these boundaries are getting a little bit more porous and amorphous, which is pretty cool. And I'm happy to dig into that more and talk about that as well.

Eric Jorgenson: Yeah, I think that's something that I know you've put a ton of thought into. I think what's compelling to me about both of those, sort of the examples that we've mentioned and these new research orgs, is that it changes the incentives just enough, which maybe even changes the timeline. Like, what's exciting to me about Boston Dynamics or DeepMind isn't necessarily that they got acquired or whatever; it's how much sooner some of these things are going to hit the market, because they started moving into the world of commercial incentives and paybacks rather than the world of academic incentives and papers. It's not always obvious when a technology or a person needs to sort of move between those two ecosystems. But the feedback loops in each are very different and encourage very different things. And if you believe that the impact isn't there until the technology is distributed, then timing that transition appropriately becomes a point of huge leverage.

Sam Arbesman: Yeah, it's not always trivial, though, to figure this kind of thing out. Actually, a friend of mine, Ben Reinhardt, has this great article he wrote, which is basically: when does an idea that smells like research make sense as a startup? So, for something that kind of feels like research, when should it actually be a company? And he has this almost like map or flowchart kind of thing, but the idea is, okay, most of the time there's only a very specific set of circumstances when an idea that is kind of more research-y should actually be a kind of traditional startup. On the one hand, I definitely would love it if more research-y kinds of companies could be created, because I think that will allow there to be this kind of nice import export business from the research world into the technology world, into the startup world, and actually allow people to have these kinds of things. At the same time, though, it is very, very difficult, because there are so many instances where people either think something is ready to be commercialized, and it's not quite there yet, or people are enamored of the idea of having a company, sometimes whether or not it actually makes sense to have a company, like, oh, I have this cool idea, let's turn it into a company. And the truth is, you see a lot of companies like this, where there is some cool research advance, and it's kind of a research advance in search of a use case. And with that being said, I am certainly all in favor of people trying to find uses for research. At the same time, though, there are many situations where it is not clear that the use case you have hit upon is actually one that people will actually want to pay for. And so sometimes people use this kind of pejorative, oh, is this a startup, or is this kind of just a research project? And I will say, for one, I love research projects. But the truth is that not all research projects should actually be companies. But it's a very interesting kind of thought exercise to figure out, okay, what are the situations when these kinds of things are possible? And I do think it is becoming a little bit more possible. It used to be, going back to when what smells like research can be a startup, there was a very narrow set of those kinds of things. And I think it is expanding. But I still think it's narrower than many people realize. So again, it's a tough balance.

Eric Jorgenson: Yeah, but why don't you set up sort of the context in the world of alternative research orgs? And then we can hop back to Ben Reinhardt because I know he's got a really exciting example there.

Sam Arbesman: Yeah, and so these alternative research organizations. Basically, the way I view it is, when we think about doing research, we often think about it being done in the context of, like, a deep tech startup that we talked about, or a university laboratory, or maybe a corporate industry lab, or maybe one or two other kinds of things. But oftentimes, when research is being done, it's done in this small set of different institutional forms. And the truth is, those institutional forms are really just a few points in some weird high dimensional landscape of institutions and organizations. And over the past few years, people have been saying maybe we should actually be exploring this high dimensional space and trying to find new institutional forms, whether it's things that combine the best of startups and research organizations that are kind of more engineering oriented, like, say, for example, a focused research organization. Maybe there's things that can be for profit, like a for profit research lab, but entirely bootstrapped, so it doesn't need to be venture backed. Is there a way of doing nonprofit work in a totally distributed way? And can we think about funding people, individual researchers, rather than specific projects and topics? There's all these different things. And the truth is, when I think about it, going back to this high dimensional space analogy, there's all these different dimensions, and we've kind of gotten locked into saying, okay, everything should be this specific thing. But no, we should actually be twiddling all the knobs for all these different dimensions and trying things. And the nice thing is, over the past few years, people have begun to try, and there's been this massive Cambrian explosion of all these different forms. And as someone who kind of comes from the biology world, oftentimes when there is this massive explosion, sometimes there might be some sort of extinction event afterwards because there are these evolutionary pressures. Now, that is unfortunate for a lot of these organizations, but I do think this kind of pressure will hopefully yield new types of institutional forms that will be sort of like templates that other people can use. So rather than saying, I have a cool idea, or I want to do certain research, should I do it in the form of a startup or should I stay in academia? Now there will be a whole suite of possibilities. So in terms of which ones do I think are going to win, I'm pretty agnostic, but I just love the fact that so many people are trying so many more things, which really just wasn't happening until a few years ago. So that's super exciting to see.

Eric Jorgenson: You've compiled a big list of these and sort of categorized and organized them, which is on your site, which we'll link to in the show notes. One that we have talked about in particular that I think is exciting- I'd love for you to give me the context on DARPA and then how that feeds into what Ben is doing at Speculative Technologies.

Sam Arbesman: Yeah, sure. So DARPA is the Defense Advanced Research Projects Agency, which I guess used to be just ARPA before the Defense part was eventually added to make it clear. But basically, the idea is, DARPA has been enormously valuable in funding and kind of springing forward a number of different research advances. The Internet is a big one, and mRNA vaccines, that's another big one. And so they've done a whole bunch of different things, and there's a massive list. And so the way DARPA operates is they're not doing research in house. What they do is they have a number of program managers, and each program manager is responsible for a specific research program, where they come in for a small number of years. So it's a time limited position, so there's a certain sense of urgency, and they have this vision of, okay, here is the kind of thing I want to see happening in the world of science and technology. And then what they do is, using a combination of purse strings, like the ability to give out grant money, as well as maybe the idea that the Defense Department might be a consumer of these products on the other end, along with a sort of coordination function, they're basically able to catalyze certain fields forward. And so someone will come in and say, okay, I want to move, I don't know, some field in biotechnology forward in some specific way. And they create this sort of roadmap, they work with a number of different labs, I think companies as well, to fund and coordinate them, kind of give them money in order to do the things that they think are important. And then hopefully, on the other end, the actual advances will pop out. And so it's one of these things where, I mean, it is a government agency and has a lot of money, but relative to a lot of other government agencies, it's actually pretty small. Yet it's been able to use this sort of catalytic structure in order to move various different fields forward and build things like the internet, which is pretty exciting. And so Ben's idea was, DARPA is great, but there's a limited number of things it can be working on. So is it possible to actually build a private version of DARPA? And so in this case, he's actually building an organization called Speculative Technologies, which is a nonprofit devoted to roadmapping certain fields and creating programs in a number of different domains, around molecular manufacturing and some other ones. I think there's a handful already now, and he's going to be spinning up more. And the idea is: find a specific top notch program manager, who can then help survey the landscape of some field and then move the specific domains forward. And the interesting thing there, I think, is the role of program manager, and I've talked to Ben about this kind of thing. The program manager is a very, very specific type of person and institutional role, because it's different than running a specific lab, it's different than running a company. It's different than just being someone who gives out grant money. 
It's like someone who is kind of doing all these kinds of things in some sort of distributed way but basically saying, okay, here's my grand vision, and I'm going to figure out the best way to kind of give people money in order to bend the world towards my vision of the future, which is really, really exciting. And yeah, so Ben is working to build this kind of thing. Yeah, it's amazing.

Eric Jorgenson: It's so cool. As you were describing that, I was picturing a similar project from the Foresight Institute, which is the tech tree mapping. So they'll take an area like space or nanotechnology or something and map out sort of all the different capabilities, the different bottlenecks, the different companies, the different research papers. And it's interesting to think about where like overlapping these two ideas to think about how can we progress through a tech tree, what's the most efficient sort of place to drop money or capabilities or transfer knowledge or connect people or overlay efforts between academia and some of these research organizations or maybe startups that are trying to commercialize it or even customers who know that they need it and like unlock some of those things. There's so much that- 

Sam Arbesman: I mean, the truth is- yeah, Speculative Technologies, he's doing a lot of this kind of roadmapping and bottleneck analysis kind of stuff as well, because that is a precondition for doing all the other stuff. And the truth is there is this growing community of people playing in this space and trying to understand, okay, in the near term, what are the things that need to be done, and then working backwards or whatever and trying to figure these kinds of things out. Which is interesting because, on the one hand, when people think about science or scientific and technological advancement and innovation, they think of it as kind of serendipitous and open-ended: someone makes an advance, and then someone else figures out how to make it useful, and they kind of recombine things. And this is very much saying, hey, no, we have a very specific direction we want to move in. And the way I think about this is it's not an either-or kind of thing. There are many, many times when we need a very clear coordinating function to actually do this kind of stuff and move things forward. On the other hand, though, there are many instances where it is actually very good to have this very undirected kind of research. And so there's this book by the research scientists Ken Stanley and Joel Lehman, it's called Why Greatness Cannot Be Planned. And the idea behind it is, anytime you have some weird high dimensional search space and you're trying to find something in there, having a very clear objective function, a very specific goal, actually makes it really, really hard to get to. And so their argument is basically that it's much better to just have this undirected, sort of novelty-based, curiosity-based, interestingness-based kind of search, and then combine all the different things that you stumble upon or you discover; you create these stepping stones that can be recombined in novel ways and then eventually get us towards the eventual goals. And I think when it comes to most innovation, it can be a little bit of both. And for me, personally, I love the undirected thing. And maybe this is my way of justifying the undirected way my curiosity operates, but at the same time, though, when there are certain things where it's maybe one or two engineering advances away, or there is a specific direction we should be going in, and we don't want to waste as much time, effort, or resources, then actually having a very clear roadmap and someone who provides this coordination function can be really, really valuable. 
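
For a rough feel of the contrast Sam draws from Stanley and Lehman, here is a compact, much-simplified caricature of objective-driven search versus novelty search on a toy one-dimensional problem. The deceptive scoring function, step sizes, and novelty threshold are arbitrary choices made up for this sketch, not anything from the book itself.

```python
# Sketch: greedy objective search gets stuck in a deceptive landscape, while a
# crude novelty search keeps collecting "stepping stones" and wanders past it.

import random

def objective_search(score, start, steps=500, step_size=1.0):
    """Hill climbing: only accept moves that improve the objective."""
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-step_size, step_size)
        if score(candidate) > score(best):
            best = candidate
    return best

def novelty_search(start, steps=500, step_size=1.0, threshold=0.4):
    """Reward being different: archive candidates far from everything seen so far."""
    archive = [start]
    for _ in range(steps):
        parent = random.choice(archive)
        candidate = parent + random.uniform(-step_size, step_size)
        novelty = min(abs(candidate - x) for x in archive)
        if novelty > threshold:
            archive.append(candidate)  # a new stepping stone
    return archive

def deceptive(x):
    """Local trap at x = 0; the real peak sits out at x = 4, behind a valley."""
    return (3 - abs(x - 4)) if x > 2 else -abs(x)

random.seed(0)
print("objective search ends at x =", round(objective_search(deceptive, 0.0), 2))
stones = novelty_search(0.0)
best_stone = max(stones, key=deceptive)
print("novelty search's best stepping stone: x =", round(best_stone, 2),
      "with score", round(deceptive(best_stone), 2))
```

In this toy setup the hill climber can never leave the trap at zero, because every nearby move scores worse, while the novelty archive keeps spreading outward and stumbles into the better region; that gap is the book's point about deceptive objectives.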

Eric Jorgenson: Yeah, it's interesting that the solutions, the ways to push progress in both of those scenarios, are totally different. For the sort of undirected thing, you just need sample size and time, the law of large numbers. And the other is kind of like a sniper, someone who's really precise about understanding the big map, where just plugging these two things in could theoretically move technology up a decade or something. And when you see these fields, when you see the outcome of these processes over and over again, you gain the sense that we have some agency over that, either individually or collectively. It's just a very fundamentally optimistic thing. Like, oh, this is controllable. This is optimizable. There's a lot of low hanging fruit in all these fields to move on.

Sam Arbesman: Yeah, I think there's something there. I think the truth is both of these kinds of approaches I actually view as fundamentally optimistic, where it's like, you can just be searching around and finding interesting things, and that is contributing towards this greater endeavor. And on the other hand, saying I can make this massive advance through noticing, yeah, these points of leverage. It's maybe kind of like the forces-of-history view combined with the great man theory of history. It's like we kind of need them both, at least when it comes to this specific thing. I don't want to make too many judgments about the forces of history as a whole.

Eric Jorgenson: Are there any other of these alternative research orgs you want to kind of highlight or that you think are interesting examples?

Sam Arbesman: There's a lot that I like. And they're interesting in different ways. But I'd say one that I think is kind of fun in a sort of distinctive way is Ink & Switch, which is involved in research, kind of computer science research at the intersection of human computer interaction and user interfaces, like tools for thought. And the way they operate, and this is my understanding from the outside, is they describe it as the studio model, by analogy with movies, where it's like when you're making a movie, it's not necessarily just a company; you bring a ton of people together for a certain amount of time, they all work on this thing, this project, and then everyone goes on their merry way. And Ink & Switch has a number of people who are kind of more full time, but oftentimes specific people are involved for specific projects. And so the more permanent members of Ink & Switch kind of help set the roadmap and say, okay, here are the different things we want to work on. But what they'll do is then say, okay, we have a project for the next three to six months; let's bring together a team of the best people, have them work on this thing, and then figure out, okay, what is the next step? And the nice thing about that is, often in the world of tech, it's hard to get the best people for a very long amount of time, because people are just switching between companies or whatever, or they're just very, very expensive. But oftentimes there's a lot of top notch people who are kind of in between things for a certain time; maybe they started a company, and then it got sold, and now they're trying to figure out what their next thing is for a few months, or they are moving from one company to another but kind of wanted to take some time off. And you can get them for a relatively short amount of time. And Ink & Switch is able to capitalize on that and say, hey, we're going to work with these amazing, amazing people and get this top notch talent to work on these research projects, kind of stringing them together towards this massive kind of research program, which is pretty exciting.

Eric Jorgenson: Yeah, that's interesting. It is just taking advantage of whatever components you have lying around and being a very modular approach. Yeah, that's interesting. Where do tech transfer offices sort of fit into the alternative research world? They're traditionally supposed to be the interface to commercialize the discoveries coming out of academic institutions, universities, but that's hit or miss, and they all have very different sorts of capabilities. And each professor, I'm sure, is sort of unique. And each bit of research is unique. Is that something that you have had much experience with, or how does that fit into the alternative research world?

Sam Arbesman: So I mean, obviously, as you mentioned, there's a lot of issues with tech transfer as it's currently done. And like a lot of research, there's a lot of IP that sits on the shelf and is never examined. And it's very hard to figure out how to actually transfer those kinds of things and make them useful. When it comes to the alternative research organizations, I think some of them are trying to sidestep some of this kind of stuff, but in different ways. Many of them, especially the for profit ones, are trying to create a certain amount of IP that can be privatized or licensed to actually create some sort of value, and I think we're still in the early days of figuring out how that's all going to work. I think partly because, sometimes, there can be a tension between IP capture and just the endeavor of science, where science is this public good of knowledge where, okay, you're supposed to share everything with everyone. And then IP is like, okay, we're going to actually own some of this. And IP is very good; there's a lot of good reasons why we have IP and the patent system and these kinds of things. Even in the United States Constitution, it was recognized that these kinds of things were important. At the same time, though, these new types of organizations sometimes strive to capture very large amounts of the value that they're creating in terms of IP, and I'm not sure that always is going to work the best, to be honest. I'm pretty agnostic as to where we're going to end up; I think you kind of have to find the right balance. And I think universities, they want to capture more than they do, but they kind of go about it in, sometimes, a counterproductive kind of way. And other organizations, like the for profit ones, are trying to capture more, and maybe successfully, but then maybe as a result of that, they're not able to do as much solid capital-S Science. So I'm not sure what the right balance is. Do you have a sense of what the right balance should be?

Eric Jorgenson: I know it's different. I've looked, not super deeply, but at enough different tech transfer opportunities to see the massive range of differences in the deals that different companies get, even within the same tech transfer institution. And it seems like universities that are producing novel discoveries and novel technology should be producing hit after hit after hit commercially. But when you look at the incentives, it largely seems like short-term incentives put in place by the tech transfer office, which is just the interface to get the technology out the door and into the market. They put all these onerous terms on, like cash payments or royalties, trying to recoup their cash expenses, even though that's an artificial need on the university side, when the university could, from a capital perspective, have the longest view in the room and not mind having decades to make this company successful. They don't need their cash back for 20 years. And it helps the company succeed and helps the technology get out there and helps the professor get rich and have an impact and live a good life. I'm not an expert in it, but just from the outside, with a first principles kind of mindset, it seems frustratingly misaligned with the sense that we're looking at these industries and saying, okay, what are all the components in this industry? How can we move technologies forward? How can we move the corpus of knowledge forward? And how can we get technologies distributed to people in a way that makes their lives better? And I don't think that's how each individual node in the university-academic-tech-transfer-CEO sort of stack is looking at it.

Sam Arbesman: Yeah, I think, with universities in general, I feel like oftentimes, whichever organization or individual can afford to play the longest game, they're going to be the ones that are best positioned to win. And universities, I think this is kind of one of the things where- universities have been around for a long time. Some have been around for hundreds of years, if not- I think the oldest university is probably around a thousand years old. They can afford to play a much longer game than a lot of other institutions. I think the downside is like how do we- what are the incentives, the timeline of just-

Eric Jorgenson: The career risk of the people inside the university?

Sam Arbesman: I don't know. At the academic level, I mean, people always say, oh, academics play a very long game in terms of research. But the truth is, they're often operating on the timescale of research grants and things like that, like grant proposals or grant cycles. And I wonder if there might be similar kinds of incentives within the tech transfer world. Because the truth is, they could take a very long time horizon and say, okay, do whatever you want, and because we're going to be so enlightened, eventually, 15, 20 years down the line, you're going to give us a massive donation out of the goodness of your heart. Maybe it won't happen all the time, but on average, people are going to give these massive donations because they recognized it was a very enlightened perspective. That would be kind of one extreme. But at the same time, they don't necessarily- I agree, there needs to be this recognition of, yeah, there are these long time horizons, they can afford to play this game. And so maybe this is the kind of thing where- because one of the other areas I also play with is new types of educational institutions, and I feel like I haven't looked as much at the intersection of educational institutions and new types of research institutions, which is basically reinventing the university. But yeah, maybe there is the opportunity to say, okay, well, let's really examine things from first principles, or at least tweak the model, or recognize that what a university is is a point in some larger space, and let's do a little bit of a random walk and see, maybe, as we modify some of the terms and conditions of the tech transfer, it'll actually become much more amenable. But I feel like we've kind of converged almost too rapidly on a set of conditions and ways that these things operate that's clearly not the global optimum.

Eric Jorgenson: Yeah, seems like the best practices are not actually the best practices.

Sam Arbesman: They're just the most common practices.

Eric Jorgenson: Education is an interesting one. I didn't anticipate going into this with you, but it strikes me that the technologies coming out today and getting commercialized and distributed, AI maybe foremost among them, could be deeply disruptive for education in a very good way. It could really scale the personalized, self-paced educational approach. I'm curious to maybe talk through that with you. There's a sense in which universities are for research, but there's also a sense in which universities are for education, and this is going to be a really interesting chapter. We're watching schools try to figure out how to track and ban kids from using AI to do their homework, instead of saying, alright, let's 10x the expectations and give them all the tools that we can. Have you sort of-

Sam Arbesman: Yeah, rather than preventing them from using these kinds of things, say, well, maybe, I don't know, I was reading this today, maybe the five-paragraph essay structure is not the best way to write; it just works and makes things easy to grade, or whatever. And at the university level, universities are interesting. They're really a bundle of different functions that have been hallowed by time. There's the research function, there's the educational function, and, at least at elite universities, there's a certain socialization function. This is the first opportunity young people have to just exist as adults on their own; they're learning how to be people. It's a weird collection of activities that we have a university do. Which is not to say that we should therefore wish for the unbundling of all of these, but we have to look at, okay, what are the different roles, what are the different functions, and then figure out which ones actually make sense together and which ones can be modified. And sorry, yeah, education more broadly. In terms of what the university looks like in 20 years, I don't know, but my guess is that the universities most willing to change rapidly or experiment are going to be the lower-tier ones, because the top ones, the Harvards of the world, have the luxury of saying, okay, we're a brand name, we don't have to change; people will still come to us. Whereas, I mean, you've seen over the past decade plus, maybe close to two decades, Michael Crow, the president of Arizona State University, has been able to try a whole bunch of different things and do some really interesting experimentation at the university level there. For example, they have this center called the Center for Science and the Imagination, which connects scientists and engineers with science fiction writers to get both groups to be as creative and imaginative as possible, which is bonkers and wild. Now, going back to education overall, I do think this is going to totally change how we educate. Right now, it's great that we have videos people can watch, but this kind of personalized as well as interactive educational content, I think, is going to be widely adopted. There are already a lot of people building companies in this space with a lot of the AI stuff, but I think it's going to be wild. And related to what you were saying about trying to get students not to use these technologies for certain things, I think we should ideally be thinking about, okay, how can we use them? Maybe students first need to learn the basics on their own, but then, okay, how can we use these things in partnership with humans, the same way we use calculators? It's a very simple example.
You have to learn your multiplication tables, you have to learn how to do long division, these basic things, because you need a way of estimating whether the numbers you get from computers and calculators are wildly wrong. It's good to have those basic skills. But then, at a certain point, okay, just use the calculators and computers the entire time. I feel like it's the same kind of thing with a lot of this as well. It is important to know how to structure an argument, think through things, connect different ideas, and articulate the ideas in your head in a way that is compelling to another human being. But once you learn those basics of rhetoric and things like that, then maybe it is also good to have a machine partner that helps you with brainstorming or creative endeavors, whatever it is. And I'm not saying anything totally novel here; a lot of people are thinking about this. So that's a long way of saying I think it will be used. How it will be used, we're barely scratching the surface.

Eric Jorgenson: How have you been using like GPT or any of the other AI tools in your day to day?

Sam Arbesman: So, a few different ways. A lot of how I've been using it, at first, was just for fun; these are fun new tools, especially the image generation ones. Those are just wild and exciting. I have been using it to help with certain ideas, where I'm trying to think of something and can't google it in quite the right way, so I'll ask it to do that, and it has actually worked quite well. For me, though, the thing I find most interesting is using these tools to probe the underlying cognitive map that exists in these systems. For example, with DALL-E, one of the things I played with early on was having it make a whole bunch of maps or posters from world's fairs, fictional world's fairs. One of the cool things I discovered quickly is that the city you mentioned didn't matter that much, as much as the year did. Clearly, embedded in the high-dimensional space within these models, there was a mapping between years and some sort of aesthetic. So you could ask for, I don't know, 1920 versus 1950 versus 1970, and it would articulate that structure. I find that very interesting. More recently, with the language models, I've been thinking about them less as agents that can do things and more, and this is not my own idea, as simulators. Embedded in them are models that have been extracted from the world, but through language, and these language worlds, these story worlds, are actually very different from, or at least not entirely overlapping with, the world of physics and reality. Some people call this the semiotic physics of language models. For example, when you ask for random numbers, those numbers have certain biases in them, because the way people write about randomness is not as random as real randomness, which is kind of interesting. In the same way, the way you expect these agents or models to operate, they often end up, because they've imbibed so much language, falling back on certain narrative structures or tropes. The most well-known one people have been talking about over the past couple of weeks is the Waluigi effect: you think you've made your language model into this very good, well-behaved thing, but you've actually made it almost more brittle, and it can easily switch, going from Luigi from Super Mario to Waluigi, this weird agent of chaos, or whatever it is. People have tried to explain how you get this, and how it almost embodies a lot of the narratives and stories we've been telling and writing, because these language models have consumed huge amounts of fiction, so they know story worlds. They live in story world as opposed to just the physical world. And so, for me, seeing how these things can trip into these weird little story worlds, I think, is really interesting. Anyway, that's some of the stuff I've been thinking about with these models.
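One small, concrete way to poke at the "biased randomness" Sam describes is to repeatedly ask a model for a random number and tally the answers. This is only a sketch: it assumes the official `openai` Python package with an API key in the environment, and the model name is a placeholder; any text-generation API that returns a string would work the same way.

```python
# A sketch of probing a model's "sense of randomness": ask for a random number
# many times and tally the answers. Assumes the `openai` package and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()

def sample_random_number(low: int = 1, high: int = 100) -> int | None:
    """Ask the model to pick a random number and parse the first integer it returns."""
    prompt = f"Pick a random number between {low} and {high}. Reply with just the number."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    match = re.search(r"\d+", response.choices[0].message.content or "")
    return int(match.group()) if match else None

counts = Counter(sample_random_number() for _ in range(200))
counts.pop(None, None)  # drop any replies we couldn't parse

# A truly uniform sampler would spread these out fairly evenly; models trained on
# human text often seem to over-represent "random-sounding" numbers instead.
for value, n in counts.most_common(10):
    print(value, n)
```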

Eric Jorgenson: Interesting. So that is not a segue at all to what your role is and your job as a scientist in residence at Lux Capital. And I'd be doing everyone a disservice if I didn't ask what that job is and what you do.

Sam Arbesman: Thank you very much. That's a good question. Yes, I have this title, which you can read all sorts of things into. So basically, Lux is a venture capital firm that invests in emerging tech, deep tech, frontier tech; probably the more fun, and maybe even more accurate, way to describe it is weird science-fictiony things. Because we range across a lot of different areas, my role is really to survey the landscape of science and technology and find areas, communities, and individuals that we should be more closely involved with. Some of it is finding interesting companies in those spaces and bringing them to our team, or finding individuals we might want to build companies around and helping with that. Some of it is after we've invested, connecting those ideas or individuals to our portfolio companies, talking to one of our CEOs and saying, hey, area X is actually really relevant to what you're doing, and you might not even be aware of it. But a lot of it is also very upstream of investment, just helping explore areas that will eventually be relevant to our portfolio or our investment theses. So it's engaging with the public through writing and speaking, or connecting to various communities that are playing with interesting ideas, even if right now they're not quite right for investment. That gives me a great deal of flexibility to think about weird research organizations, or things around AI like the Waluigi effect, or the tools-for-thought space, or how scientific change operates. So yeah, it's a lot of fun.

Eric Jorgenson: It's such a cool job. You're like the advance guard of venture capital. Lux prides itself on being at the frontier, and you're the scout for the firm, out in front in a lot of these industries and the surprising technologies that come out of them, which is why it's so fun to chat with you, watch what Lux does, and see what you're into. And I'm curious, from your seat, you've been there since not quite the beginning but very early on, I mean, coming up on 10 years now, I think.

Sam Arbesman: Well, Lux has been around for, I think, over 20 years. I've only been there for about seven and a half, so I've been there for a while, but there was still a long history before me.

Eric Jorgenson: Okay, well, this is still within your tenure, I suppose. What are the technology categories that have either underperformed or overperformed relative to the priors?

Sam Arbesman: That's a really good question. I'm trying to think of which areas. Certainly, and I guess this was actually before my time, there was all the excitement around clean tech and then this trough of despair when it didn't quite measure up. That being said, there has been a resurgence in the past year or two around this kind of stuff. So I think a lot of these things come in waves.

Eric Jorgenson: Clean tech is real; all the renewables, the cost curves, keep coming down. It just hasn't had the returns that venture capital wanted when they invested in it.

Sam Arbesman: And I think that's the thing. The better way to think about this is not whether or not these ideas are real, it's whether or not they've paid off in the venture-style way. So it's an interesting question, and there are a number of distinctions there: which are the things that are real and can become outsized hits, which are the things that are real but maybe not quite ready for venture, and which are the things we've found to be not real? That's an interesting taxonomy. Oh, man, I don't actually have a good sense of which things fall where. And to be honest, I guess I've been lucky. I've been there for a while, but because we're operating in things that are pretty frontier-y, they have these long time horizons. I would say certain things around augmented reality or virtual reality; I've personally just never been super into that kind of stuff. Separate from that, I think we are moving forward there, and certainly, with Oculus, that was a big advance. But it feels like one of those spaces, and there are many domains like this, where people keep saying, oh, this thing is X years away, and it's been that way for 50 years. Maybe it's VR, maybe it's nuclear or fusion, whatever. There are a lot of these. That being said, AI, I think, was always one of those things people thought was far away. And actually, I was just rereading, before we hopped on, the original paper that Vernor Vinge wrote about the singularity; I was talking about it with someone this morning. I read it a while ago, and back then it felt very science-fictiony, because it was written around '93, and he basically says, within 30 years, plus or minus, there's going to be superhuman intelligence. You read it a while back and it's a fun, weird kind of thing. You read it now and you're like, oh, he kind of laid out a pretty clear roadmap. We can quibble about the details, and there are many unstated things within it, like how we actually get to the singularity, if that even makes sense or will ever happen. But the fact that we can read that document from 30 years ago and no longer view it as this wide-eyed science fiction thing, that's kind of wild. So I would say AI has been one of these things that underperformed for a long time and then drastically overperformed in a short amount of time.

Eric Jorgenson: Yeah, slowly then all at once. Like, that's amazing. Very interesting. Okay. What are some facts or statistics that like everyone in your industry knows that would blow other people's minds?

Sam Arbesman: Oh, man, I'm blanking. I don't know if this makes sense for this, but the only one I can think of is from a blog post I just read about long time horizons. The writer was talking about how it hit him when his daughter was born that she has a very good chance of living well into the 22nd century, and that just blew his mind. So maybe that's one, but it's not quite a weird industry fact. I'm trying to think of one of those. Yeah, sorry, I might not come up with anything. I apologize. Actually, as an aside, the small-scale version of this is when you have a word on the tip of your tongue and you can't think of it. I have a friend who's a cognitive scientist, and there's research with questions that are actually designed to elicit that feeling, because scientists know how and roughly why it happens. There are ways of making it happen: you know that you know the answer, but you can't actually retrieve it. So it's a really weird thing that there is a way of reliably generating that tip-of-the-tongue phenomenon, which is wild. So maybe that's a cool fact, but that's not quite where you were going.

Eric Jorgenson: No, that's a good one. Don't tell the job interviewers at Google, don't tell the hypnotists those tricks. Okay, here's another one. What is the mental model or heuristic that you find yourself using most frequently?

Sam Arbesman: I guess the most common model I use, and this probably comes from my roots in biology, evolutionary biology, is evolutionary thinking. It's looking at systems from an evolutionary perspective and asking: what are the fitness functions? What does it mean for something to become more fit or less fit? What is the genotype versus the phenotype in a system? What is the level at which selection is operating? What is the high-dimensional space in which things are evolving? So that's certainly not a single heuristic; it's a bundle of ideas. I take a field and say, okay, here's this field and here's this other one, let's try to do a whole bunch of mapping, see what maps from one to the other, and then see where it breaks down. Because both where it works and where it breaks down, there's some really interesting stuff, and that's been worth digging into more deeply.
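To make that vocabulary concrete, here is a minimal, purely illustrative genetic algorithm in Python: bit strings as genotypes, an explicit genotype-to-phenotype mapping, an arbitrary fitness function, tournament selection, and mutation. The target value and parameters are made up for the example.

```python
# A toy genetic algorithm that makes the vocabulary concrete: genotypes (bit
# strings), a phenotype mapping, a fitness function, and selection pressure.
# Everything here is illustrative; the fitness target is arbitrary.
import random

GENOME_LENGTH = 16

def random_genotype() -> list[int]:
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def phenotype(genotype: list[int]) -> int:
    """Map the genotype to a trait; here, just the integer the bits encode."""
    return int("".join(map(str, genotype)), 2)

def fitness(genotype: list[int]) -> float:
    """What 'more fit' means in this toy world: traits closer to a target value."""
    target = 40_000
    return -abs(phenotype(genotype) - target)

def select(population: list[list[int]], k: int = 2) -> list[int]:
    """Tournament selection: here, the level at which selection acts is the individual."""
    return max(random.sample(population, k), key=fitness)

def mutate(genotype: list[int], rate: float = 0.05) -> list[int]:
    """Flip each bit with a small probability; this is the random variation."""
    return [1 - g if random.random() < rate else g for g in genotype]

random.seed(1)
population = [random_genotype() for _ in range(50)]
for generation in range(100):
    population = [mutate(select(population)) for _ in range(len(population))]

best = max(population, key=fitness)
print("best phenotype:", phenotype(best), "fitness:", fitness(best))
```

Swapping in a different fitness function or a different genotype encoding is exactly the kind of "mapping from one field to another" Sam describes.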

Eric Jorgenson: Yeah, that's a hugely powerful model. I have a few questions I want to circle back to from earlier in our conversation. We talked about Boston Dynamics and DeepMind as having sort of cornered the market on a certain type of academic excellence. And I found myself wondering: how many people does it take to corner the market on something like robotics or predictive machine learning, or whatever category of academic might we brainstorm around?

Sam Arbesman: Yeah, that's a good question. I guess it depends on the size of the field as well as how much you want to corner the market. It's certainly easier and more feasible, and still very powerful, to not fully corner the market so that all the talent is within one organization, but instead to make sure it has enough people that it becomes this massive attractor of talent, where anyone who's anyone wants to go to one of these kinds of organizations. So I feel like it's less about cornering all of it and more about a critical mass kind of thing. And it depends on the specific domain. It could be the kind of thing where, if it's a very cutting-edge area and people are moving very quickly, then maybe it's quite easy, because you just get the people who are working on that frontier.

Eric Jorgenson: There's 5 or 10 people who are the foremost experts in the world.

Sam Arbesman: Yeah. Or if it's a really niche kind of thing or if it's something that kind of is at the intersection of a lot of different domains, that's also, I think, where it's kind of easier because there's not that many people who are willing to kind of play at that intersection because you're kind of having to move a little bit outside of your area of expertise. So it really depends on the area. But I would say, once you kind of make it that like critical mass attractor, then maybe it will have some sort of feedback effect.

Eric Jorgenson: Yeah, that's interesting. I like the critical mass framing; the threshold for that is so much lower than for cornering the market, so that's a good distinction. Is there anything else? What does the landscape look like from your crow's nest aboard the good pirate ship Lux? Are there whole swaths of industries that you feel like you have an eye on, or that you're interested in, that most people in the broader tech landscape are just not paying attention to?

Sam Arbesman: It's very rare that there's an area no one else is looking at. But one area I like thinking about, and I'm not sure it's a really important area in the tech landscape, but I think it's fun and interesting and not as many people are thinking about it, is what I call emergent microcosms: all the different ways you can use relatively small amounts of computer code to unfurl some sort of complex virtual world. An example would be cellular automata, like Conway's Game of Life; that's the canonical example, and there are more recent versions like Lenia. But then you also have all the artificial life examples people are playing with, things like agent-based modeling, certain areas within simulation, and certain things in the gaming world where people are doing generative art and generative design, using relatively compact descriptions to make a whole bunch of different things. So it's this weird thing that sits at the intersection of physics, complexity science, artificial life, and gaming. At least in my mind, it's all deeply connected. Maybe most people don't think it's deeply connected, and that's why they don't think of it as a single field. But I definitely think there's something really interesting there, and people are continuing to play with it. So that's one area I like thinking about a lot, and it's kind of me saying, hey, guess what, you and you and you, you're all doing things that are spiritually connected, and you should be interested in each other. And they are, for the most part, which is nice.
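The canonical example Sam mentions really is tiny. Below is a minimal sketch of Conway's Game of Life in Python with numpy; the grid size, step count, and glider pattern are just conventional choices, and the grid wraps around at the edges.

```python
# "Small amounts of code unfurling a complex virtual world": Conway's Game of Life.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One update: count the 8 neighbors of every cell, then apply the rules."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((20, 20), dtype=int)
glider = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]  # a standard glider pattern
for y, x in glider:
    grid[y, x] = 1

for _ in range(8):  # watch the glider crawl across the grid
    print("\n".join("".join("#" if c else "." for c in row) for row in grid), "\n")
    grid = step(grid)
```

Two short rules and a handful of lines are enough to produce moving, self-propagating structures, which is the whole appeal of these emergent microcosms.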

Eric Jorgenson: Yeah, and you get to kind of bring them together and hopefully make some sparks fly. So, I will sort of tee you up to do the closing parenthesis in this conversation and ask you for some sci fi book recommendations. We started with the impact your grandfather had on you and how influential sci fi has been. I know you are a deep and wide reader in sci fi. So what are some books that people should pick up, myself included?

Sam Arbesman: Okay, yeah. There are different levels of how well known these are, but I'll just name a whole bunch of different things. I certainly love the Three-Body Problem trilogy, which is obviously very well known, but I think it's amazing. To my mind, at least, each successive book just gets grander and grander in scope until it just-

Eric Jorgenson: That's what I love about that series. It starts basically where we are now and takes you all the way to somewhere unimaginable. It's incredible.

Sam Arbesman: Yeah, so that's fantastic. I'm also a big fan of the writing of Neal Stephenson. I love his stuff. Some people are into his cyberpunk kind of stuff, and I enjoy basically everything he's written, but I'm actually partial to the more historically minded work, like the Baroque Cycle, which is this massive trilogy about the scientific revolution and the invention of the modern monetary system, with a ton of other ideas thrown in. Those books are wild and strange, and I find them fascinating.

Eric Jorgenson: What was the one you were just describing to me? We were talking about a post-scarcity world, sci fi that takes place after so much-

Sam Arbesman: So this post-scarcity society. I mean, obviously the Star Trek world is post-scarcity, which is great; I grew up on Star Trek: The Next Generation. But in Banks' Culture novels, the Culture series, it takes place in this kind of pan-human society called the Culture, and the way it operates, there are these superintelligent Minds that basically run everything, and humans can just do whatever they want. They can secrete any chemical or hormone straight from their brains just by thinking it, and they can live as long as they want and do whatever they want. I think the way Banks describes it, you're effectively immortal, but it's considered in poor taste to live beyond several hundred years. It's this weird world where you can do anything you want and indulge in all of your hobbies. The interesting thing about the novels, though, is that you don't really see that much of the core of the Culture, the actual society, because it doesn't necessarily make for good storytelling. So the stories often take place at the boundaries, where the Culture is interacting with other societies, whether they're going to war or trying to change some other society to make it more like the Culture. There's the Contact division, which interacts with other societies, and then there's this very euphemistically named organization called Special Circumstances, which is the group that goes in and changes governments and societies to make them more like the Culture, which is this weird anarchist, socialist, post-scarcity utopia. Those books are bonkers. And then, what else? I also just read this novel called Babel. I guess it's more fantasy, but it's a weird kind of fantasy. The premise is that it takes place in the mid-1800s, and there are these translators who study the art of translation, because even when one word in one language seems to map exactly to a word in another language, the connotations are slightly different. The idea is, if you write a word on one side of a piece of silver and its translation in another language on the other side, the gap, the sort of mental gap in connotations, gets manifested as some sort of effect. So you can use this as a technology, and they end up using this weird translation silver to power an industrial revolution. It also ties into thinking about empire and colonialism and how we think about translation and language. So it's not quite science fiction, but it's super thought-provoking in terms of how we think about words and language and meaning. That was a book I just happened to finish recently.

Eric Jorgenson: Thank you, Sam, so much for taking the time and being such a teacher and such a leader in the space and sharing everything. I'm so excited every time we get to talk because you just have such a fascinating job and outlook. And it's a pleasure to get to peek into it. 

Sam Arbesman: Thank you so much. This has been a lot of fun. I really appreciate it.

Eric Jorgenson: I appreciate you hanging out with us today. Thank you for listening. If you liked this episode, here are three other quick episodes that you will also adore, with very similar energy: number 51 with Max Olson and number 34 with Josh Storrs Hall both cover a very wide range of exciting technologies, just like we did here with Sam, and number 58 with Bret Kugelmass is a deep dive into nuclear and the progress being made there. Bret just signed a huge deal for something like 30 nuclear reactors to get built all over Europe, so it's a really exciting time to go check out that episode. Accredited investors are welcome to put their money into the early stage tech startups we invest in, alongside me and my partners in Rolling Fun; there's a link to that in the show notes. I also encourage you to check out madebybread.com if you need some beautiful software built. And for a free way to support the show, please leave a quick review of this episode in your podcast player, or text this episode to a friend or coworker you think would enjoy it. Keep those sandwiches toasted.