The Magic of Code, History of Computing, and the Future of AI Interfaces [Sam Arbesman #2]
Sam Arbesman returns to discuss his new book The Magic of Code. Sam wrote a deeply personal, wonder-filled tribute to the history, philosophy, and magic of computing.
Sam is a 3x author and Scientist in Residence at Lux Capital. He was also the guest for episode 63 of this podcast.
Sam lives at the edge of the internet. This conversation is a love letter to curiosity, computing, science fiction, and the joy of building.
We discuss:
Why software is ACTUAL MAGIC
Combining history and science fiction for more accurate prediction
Phases of computing technology, and what might come after silicon
The creative opportunities for the future of software
This conversation is for anyone interested in software or the future of computing.
Quotes from Sam:
“Software is absolutely magic. It's wizardry. It's sorcery. Nobody gets it.”
“We actually can use text and code to affect the world around us.”
“Computing should be a humanistic liberal art—it should connect to language, philosophy, and art.”
“Code is not a substance, but it operates in the world.”
“Computers are weird everything machines.”
“We’re shielded from the vast complexity of computing—until something goes wrong.”
“People used to build computers in their garages. Now we can’t even open them.”
“AI is powerful, but it’s part of a much longer conversation around tools for thought.”
“Biology is a wildly different computer than anything we have ever used to compute with so far.”
“Science fiction doesn’t always predict well, but it gives us worlds we can aim toward.”
“Computing history is so young that many of the pioneers are still around. You can email them!”
“Ultimately, all these tools were developed for people. They’re meant to be in service of humans.”
Want to invest your money into early-stage utopian startups like Ideem?
When new technologies meet the market, the world changes for the better. That's why we invest our money into obsessive geniuses building utopian technologies.
We write small checks to 15-20 very different startups each year. Previous investments include Aalo Atomics, Atom Limbs, Ouros Energy, Stell Engineering, Airship, Terraform Industries, Longshot, Dirac, Occam, Atomic Industries and more.
Our Website has background on the fund, our past deals, and more.
*Pre-rich is cheeky fun and not a guarantee of being future rich. Startup investing is risky, illiquid, and not for the faint-of-heart. Be warned.
Accredited Investors: reply to this post, I'll send you our deck, and we'll get you into our deals starting this quarter.
Additional episodes if you enjoyed:
Turning Discoveries Into Companies with Sam Arbesman of Lux Capital
Building a Venture Firm From Zero to One, and AI-driven VC Thesis Research
Episode Transcript:
Eric Jorgenson: Sam, it's good to see you. It's been a minute.
Sam Arbesman: Yes. It's great. Good to be talking to you again. This is a lot of fun.
Eric Jorgenson: How is life? Have you been wrestling? Was this book a beast or did it come easy for you?
Sam Arbesman: In some ways... it took a lot of work. Every book takes a lot of work.
Eric Jorgenson: They all do.
Sam Arbesman: Every stage, you think you're close to the end and there's always so much more to do. But this one really felt very personal in terms of a lot of the topics and ideas that I had been thinking about for many, many years. Because in truth, this is the kind of book that I wish I had had when I was young and first getting into computers. I kind of had to cobble together all the different ideas on my own, over time. And so it's very personal. In many ways, this is the kind of thing I've been thinking about much of my life. But it's also a way of me paying it forward to future little Sams: okay, when you're thinking about computers, hopefully this will now be one thing that will get you excited about it.
Eric Jorgenson: Yeah. I think those are some of the very best books. Like when you are writing for an audience of one and it is like the inner child. It's sometimes hard to find the boundaries of those books when you have 20 years of thought about it.
Sam Arbesman: There were a lot of things that remained on the cutting room floor, which is just how it is. But yeah, it was great.
Eric Jorgenson: Yeah. I remember talking to you about the germ of this idea, the seed, in a coffee shop in Kansas City in 2021 or something, it was very early. And I think you were still- I'm always curious how the ideas evolve. But I think the first conversation I recall about it was like, you were like, software is magic. Software is absolutely magic. It's wizardry. It's sorcery. Nobody gets it. It's so much cooler than everybody thinks. Like, I got to tell the story. I gotta tell the world. Like, people need to appreciate this more. And I'm glad that that motif stayed all the way through to the final version. It's the magic of code. That's the book.
Sam Arbesman: Right, there's definitely that through line. That remained. I really wanted to kind of convey that sense of excitement. And there is an entire chapter about magic and sorcery and computing and coding and everything like that. And of course, there's a lot of other things as well. But yeah, I really tried to kind of keep hold of that wonder and delight when- yeah, in terms of actually trying to share it with my audience, which was great.
Eric Jorgenson: Yeah. So, tell me that story. What is the magic that you see in software and computing?
Sam Arbesman: Yeah, and so the way I think about it is, right now, when people talk about computing or just tech in general, I feel like there's this almost broken conversation around computing and technology, where everyone's either adversarial towards it or worried about it or just ignorant about it. And the truth is, all of those responses, they're not bad, but I feel like something's missing. Going back to my own childhood and my own experience first coming to computers, it didn't feel that way at all. It was much more full of wonder and delight. And it also didn't even feel like computing or computation was this branch of engineering. It really was, oh, when you understand this stuff around computers and technology, it should almost be this humanistic liberal art. When you think about it, it should connect to language and philosophy and biology and art and all these different topics. And so those were the kinds of things that I wanted to share in the book: the wonder and the delight, and of course, the concern is in there too, and I talk about those kinds of things. But I think back to my own experiences with our family's first computer, the Commodore VIC-20, or early programming experiences or SimCity and fractals and all these things; that's the kind of wonder. And specifically, I'll also relate it to the magic component. When it comes to magic, we've had this desire as a society for millennia to use words and text to coerce the world around us. And the truth is, until, I don't know, 75-odd years ago, whenever digital computers first became a thing, it really was just a fantasy. But now it's real. We actually can use text and code to affect the world around us. And so that's one of the deep ideas that I want to share, along with all the ways in which this magical stuff of code can touch upon all the different topics and ideas that I mentioned. And so actually, going back to the Commodore VIC-20... this first computer that we got, it was the early 1980s. I was really little. I didn't know how to program at the time. I would probably just sit in my father's lap while he would manually enter code from a magazine or whatever that he was copying out, and it would make a game or something like that. More often than not, it would also create a bug and you could get weird gibberish on the screen. But even though I couldn't program, I could see that there was a relationship between text and what was happening on the screen. And I feel like that sense of magic and wonder, text affecting something, really had an impact on me. And I eventually wanted to do it myself and slowly but surely got into the world of coding, which was a lot of fun.
Eric Jorgenson: Yeah, it is. It's fun to think through the whole history of it. I can't remember who made this observation first, but magicians and sorcerers, and even fraudsters in that sense, are like the fathers of science. From a historical perspective, people created illusions and had to engineer things that appeared like illusions, and then people tried to figure out how that illusion got created, and it just cascaded into discovering how the real world works.
Sam Arbesman: I feel like there was definitely a period several hundred years ago, probably for a long period of time, where there were these mechanical automata, where people would make these things with gears and various components or whatever. And some of them were... they were fakes. So, the classic one is the Mechanical Turk, where it was this lifelike creature that was playing chess, and it would play at a really high level and people were very impressed. And it turns out it was sophisticated, but also inside it was an actual human being who was playing chess. And then you have, I don't know, the digesting duck that would do all these things, and it was mechanical and very, very sophisticated, but inside, it was not actually digesting the food. It was kind of taking the food in and then there was another compartment that was kind of pooping things out or whatever it was. But at the same time, though, there were also really sophisticated real things happening. And I feel like if you look at other scientific domains, like the transition from alchemy to chemistry, there were also these kinds of weird, fuzzy areas. And so I'm not sure I would say it was illusion and magic as the thing that led to science. But I definitely feel like there was kind of this interesting liminal period where things were kind of in between. There's this great story, which is kind of bonkers, but I don't know all the details. My sense is... I think it was around the early 1700s maybe, where there was this woman in England somewhere who ostensibly was giving birth to rabbits. They weren't live rabbits; it was kind of like dead rabbit parts or whatever. But it was in that weird in-between period where people were like, okay, science is a thing, but we also kind of believe all this other stuff, that there was a time period where people were not sure, where she was able to hoodwink everyone. Of course, it was a hoax. It was not real. She was not actually giving birth to rabbits. But the fact that there was this in-between period between magic and nonsense and science and technology, whether you're looking at automata or alchemy to chemistry or weird things in biology, yeah, I feel like we kind of think, oh, the scientific revolution, we put this marker down, and everything from there is very easily understood. But no, there was this weird period where people were just kind of working stuff out, and it's a fascinating time period. A lot of the computing stuff came a lot later. But it's just fun to see how people were thinking about these things during this interesting time period.
Eric Jorgenson: Yeah, so that's a good segue actually. I was hoping you could give kind of a brief summary of like the history of computing. I think it's one of these things that like most of us have lived within just the silicon chip era, or at least are really only aware of that. And it's been long enough that people forget that there's like substrates of computing behind this that like unlock huge things. And we're likely to see another one in the coming decade. But could you kind of trace that trajectory for us?
Sam Arbesman: Yeah, I mean, certainly there were a lot of ideas. There's this very deep prehistory to computing as well. I would say the first modern digital computers were probably sometime in the 1940s, around the end of World War Two. That's kind of the point at which people say, okay, this is the modern digital computer. But before that, there were all these precursors. I mean, you can go back really, really far, and people were doing interesting things around logic and mathematics. But in terms of mechanical things, there's the prehistory in the 1800s with Charles Babbage, who was building these mechanical computation devices, and he actually designed one that was really basically just a general purpose computer. I think he only ended up succeeding in building a very small component of it, so the thing was never actually built. Ada Lovelace is the one people often describe as the first computer programmer, because she took the design that Babbage had and actually developed an algorithm for it. Intriguingly enough, in the same way that bugs go along with software, her very first program had a bug, which is not surprising because she never actually had a chance to run it. And so, she would never actually have known. But there's that prehistory. There were things that were developed around Boolean algebra as well. And then, of course, there are the actual technologies, wires and vacuum tubes and things that were used and repurposed for early computers. But then also, before digital computers, there was this very long period of analog computers. And the truth is, analog computers coexisted with digital computers for quite some time, because in many ways, they were actually faster for certain special kinds of things. And the way to think about an analog computer is that it is basically what it describes: the goal was to find a physical analog, a physical analogy in the real world, for whatever you are trying to calculate. So, for example, rather than numbers being instantiated symbolically, the way a digital computer describes them in binary or whatever it is, an amount, a number, would actually be described by some physical quantity. It could be the height of water in a tube or the resistance in a wire or something with a gear or whatever it is...
Eric Jorgenson: Is the way to think about that like an enormous automatic abacus essentially, like a whole bunch of different abaci?
Sam Arbesman: Well, and the truth is, one of the most sophisticated mechanical analog computers was the differential analyzer, which was developed by Vannevar Bush, who was also one of the people responsible for the development of the National Science Foundation, kind of the big government-supported science. But he was also, I guess, a physicist or mathematician involved in these things. And that one went through multiple different iterations, but it was this interesting combination of mechanics and electronics, where it was all these different wheels and gears spinning in certain ways. And that was related to a whole set of differential equations that you could program by figuring out the specific relationships there. And then you could run it, and it could do its thing, and you could simulate things very, very accurately. There are still analog computers around now, but the big advance in the 1940s or so, although it kind of depends who you're talking to about which one was the true first one, was this idea of using electronics to represent numbers in an indirect way, as opposed to the more direct way of the analog computer. Although actually the ENIAC, which was kind of the first well-known digital computer, did not use binary. It used, I think, a decimal representation, if I remember correctly, for numbers. So binary is not required. Binary is, I think, just probably the most logical thing, because it's very easy to say on or off, zero or one, and then from there, work your way up. But then after that, there was this advent of very large machines based on vacuum tubes, because this is before the advent of the transistor, which is basically another kind of switch, but a lot smaller. And so, one of the reasons why vacuum tubes eventually gave way to transistors is because transistors were smaller and more reliable. A lot of times these mainframe computers required you to replace huge numbers of vacuum tubes, there were lots of things going wrong, and it required an entire building to cool these gigantic machines. These machines were huge. They were humongous. They were room sized, if not larger. But the one interesting thing, you mentioned the early history of computing: even though our computers look completely different than those room-sized monstrosities, a lot of the things that we do with computers nowadays, all those ideas and applications, were almost there in the very, very early days of computing. So, things around simulation or modeling, like actually trying to embody ideas of living things in biology within computers, these were done on some of the earliest machines. Or even things around artificial intelligence: the inaugural conference around this was in 1956, which is not that far after the advent of the computer. And of course, it took a long time to percolate and to bake in, but now we have a lot more sophisticated things. But yeah, so there was this long period of mainframes, and from there, they went to what were called minicomputers. And so they were no longer the size of the room.
They could just be, you had computers that could be the size of a refrigerator. So that was a much smaller thing. Obviously, they're not a laptop. You're not carrying them around, they're not portable, but they were smaller and easier to use. And also, around this time, people began to think about different methods of input, and so there were screens, and actually there's a whole variety of interesting ways that people thought about input. There were early days around people thinking about the computer mouse, but then also people came up with other ideas, and obviously keyboards made an impact. And actually, during the minicomputer age, I think, came the first computer game. So, Spacewar! was the first computer game, and even though it was the first game and you would think it's really, really basic, and in some ways it is, I played an emulated version years ago at some science museum. And it actually has a number of hallmarks of the kind of games that we have: it was multiplayer. It was real time. There were graphics. It was visual. So, it wasn't a text-based game or anything like that. And my son and I had a great time playing with it. We haven't done it in a while, but we found an emulated version on the Internet Archive, and we would play it periodically because it's just a fun little weird game. You're these two little spaceships flying around and trying to shoot each other, and you can go into hyperspace and jump around and do all these weird things. And just the fact that that game was the first game but also had a lot of the features of games that we still have is kind of wild. And so yes, there was that period. And then of course, beginning in the mid to late 1970s, that was the advent of the personal computer. Up until that point, computers were not things that individuals could really buy easily or have easily. So, the first one I believe was the Altair, which I think didn't even come with really many ways of inputting or even seeing output. It was basically just a machine with a whole bunch of switches that were the ways of inputting machine code, just raw binary commands. Then I think you could add on a keyboard and a screen or whatever, but the basic thing was really, really simple. But then a couple of years later, 1977 was sort of the big year for personal computers, when the Apple II came out, and I think the Commodore PET, and the TRS-80 from Radio Shack. There were three personal computers that came out around the same time. And then, people were like, oh, this is a real thing. The interesting thing though is there was this slight period between when personal computers first became a thing and when they became widely adopted, because they really needed a killer app. They needed something where people were like, oh, it's not just fun for hobbyists. They needed something where people would say, oh, this is something I can use at home, or I can use in the office. And the first killer app was VisiCalc, which was the first computer-based spreadsheet. So spreadsheets actually used to be real physical sheets of paper.
And then Dan Bricklin and his co-creator, whose name is escaping me, they created VisiCalc, and it was a spreadsheet basically, similar to the spreadsheets we have nowadays. And there were cells, and it doesn't look all that dissimilar. And then people realized, oh, this is real. There's something here that people can really use in a very powerful way. And of course, then it kind of took off.
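For readers who want to see what the differential analyzer Sam described a moment ago was actually computing, here is a minimal Python sketch; this is our illustration, not anything from the episode. The machine's wheel-and-disc integrators accumulated quantities over time, and simple Euler steps play that role here.

```python
# A minimal sketch (illustrative, not from the episode) of the kind of
# problem the differential analyzer solved mechanically: integrating a
# differential equation by accumulating small changes over time.

def integrate(dydt, y0, t_end, dt=0.0001):
    """Accumulate y over time, like a spinning integrator wheel."""
    y, t = y0, 0.0
    while t < t_end:
        y += dydt(y) * dt  # each step adds a sliver of area under the curve
        t += dt
    return y

# dy/dt = -y with y(0) = 1 has the exact solution e^(-t),
# so y(1) should come out near 1/e, about 0.3679.
print(integrate(lambda y: -y, y0=1.0, t_end=1.0))
```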
Eric Jorgenson: It's crazy, 1977, we're not even to the 50 year anniversary of the first personal computer.
Sam Arbesman: Yeah, so this is the interesting thing about computing history, which, well, I guess, so 1975. So, then we are at that point. I guess, yeah, we've just hit the 50th anniversary of the first personal computer. The interesting thing about computing history is that a decent fraction of the people who were involved in the very early days are still around. Or we have at least their oral histories, where they knew they were getting older and so they recorded all these things. So, we don't necessarily have to go through half-remembered letters or whatever; we have their stories, which is kind of wild. And actually, there was one point when I was working on my book, there was a story that I had heard about people who were involved in early computing and early AI in the 1970s. There were three kind of luminaries involved. And I wanted to check the story. And I realized one of them is still around. So I just emailed him. And we had a nice little correspondence. And he was able to say, okay, this part of the story is accurate; this part, maybe this is not quite right. But it was great, because the history is so young. That being said, the interesting thing with computing history is that on the one hand, many, many things have changed. I've been just talking about the early personal computers, but of course, since then, we've had graphical user interfaces, and now we have the internet and the web and all these different things that have changed. On the other hand, though, we still have a lot of pretty fundamental things. For example, Unix and its descendants are still around, and actually one of the points I make in the book is about the extreme age of Unix and its brethren. So, there's this great book by Neal Stephenson, the sci-fi novelist; it's a small book called In the Beginning... Was the Command Line, where he's talking about things around open source and graphical user interfaces and Unix and all these different things. And one of the things he talks about is Unix as sort of the Epic of Gilgamesh of computing. And there's a specific reason he's talking about this. But one of the interesting things, and I mention this in my book, is that, given that Unix came around within roughly the first third of computing history and it's still around, it's not just the Epic of Gilgamesh. Proportionally, if you map it onto the long span of human history, it's older than Gilgamesh or cave paintings or whatever it is. This is something that's been around for a huge fraction of computing history, and we still use it. I think there are refrigerators that have Unix built into them, and all these other devices. And the fact that this was developed not instantly within computing history, but fairly early on, and it's still so foundational, is kind of wild. And so, with computing history, it's both very short, so you can barely talk about multiple generations of computing history, but there's also been a huge amount of advances. Just look at Moore's law and processing power doubling: the kinds of things that people were doing back in the day are just unbelievably slow... I think our Commodore VIC-20 maybe had 5K of RAM, which is a number so small you can barely comprehend it now.
And I can remember our family's first Macintosh, because we went from the VIC-20 to some Macintoshes in the 80s. And I remember my father wanted to upgrade it to have, I think, a 40 megabyte hard drive or something that felt wildly big at the time. And I remember we had to watch this video to explain how to open the machine. Like, don't touch this thing, because it'll erase the memory. And don't touch this, because it'll make the cathode ray tube explode. It was kind of crazy. And it was all for 40 megabytes of hard drive storage, which is a trivial number nowadays. And so, yeah, just to think about how all these things have changed so rapidly is pretty humbling.
Eric Jorgenson: Yeah, I mean, going back to the sorcery and the magic of it all, the fact that you and I are live streaming a 4k video and perfect audio that is like as good as you sitting literally on my desk and behind it all is ones and zeros, just like infinitesimally fast ones and zeros and layers upon layers upon layers of computing and software engineering that makes this feel effortless and reliable to the point where like it's basically a given.
Sam Arbesman: Yeah. Well, and I think that layering, I think is some of the most powerful- a powerful aspect of computing. It's like one of the fundamental ideas of computer science is the idea of abstraction, that you can build very- like more powerful things and more powerful abstractions and tools, and sort of almost like computational concepts out of more fundamental things. And then once you do that, you don't necessarily have to think about the more fundamental things. So, for example, when we are doing this or when I'm programming in Python or whatever, I don't have to think about binary. And the truth is, if I do think about those things periodically, it's almost overwhelming. And so, it's like, oh, we've had these, so many layers on layers upon layers. And because of that, we can now build these unbelievable edifices. And it's only because of this accretion of multiple layers of abstraction. And so, yeah, it's unbelievably powerful. And it's also... it's also interesting where, going back to like the magic and sorcery, I imagine very early on in computing history, it did feel magical, but it also probably felt really frustrating because of just like, you're just dealing with zeros and ones and machine code. And I feel like for me though, in terms of like when I came of age in computing history, because of those layers of abstraction and because of the certain types of programming languages that I learned. So, I actually neglected to mention like programming language, there was a long period of like machine code. And then at a certain point, we actually had high level programming languages. So Fortran was the first one. And now we're kind of saying, oh, we don't necessarily even need those. We can just kind of vibe code or whatever it is. It happens to be kind of the time period when I came of age and learning about programming, programming was incredibly powerful. It was incredibly magical in terms of like, here's the specific texts and like I write this incantation or whatever, and it does something on the screen, it's amazing. But it also required a lot of work, which is not so dissimilar from the way people talk about magic and sorcery. I mean, like you have to go to Hogwarts school for witchcraft and wizardry for seven years to really be good at magic. And there's lots of stories where you have to train for a long time or apprentice or whatever it is. And I feel like I probably identify with that kind of thinking, especially so because of the time period when I learned about certain aspects of computing. And I imagine some people before that might not have identified with it as much. And certainly, some people after that might also not because it could just be, oh yeah, it just feels- in some ways, it does feel even more magical. I'm just having these conversations, and software is being built for me or whatever. But it's a different kind of magic too.
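A quick concrete aside on the abstraction point, ours rather than Sam's: here is the same piece of text viewed at three layers of the stack, in a few lines of Python. At the top you think in strings; underneath, it is character codes; underneath that, just bits.

```python
# A minimal sketch of layered abstraction (illustrative only):
# one message, three views down the stack.

message = "magic"                         # high level: a Python string
codes = [ord(c) for c in message]         # lower: integer character codes
bits = [format(c, "08b") for c in codes]  # lowest shown: raw binary

print(message)  # magic
print(codes)    # [109, 97, 103, 105, 99]
print(bits)     # ['01101101', '01100001', '01100111', '01101001', '01100011']
```

The point of the sketch is that each layer is built out of the one below it, and once the layer exists, you rarely have to look down.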
Eric Jorgenson: Yeah, when the engineering is so good, you can take it for granted. Like I'm a little behind you, but not by a ton. But yeah, it was enough to really understand how difficult it was coming before and that like we had to build computers. If you wanted a cool computer, you had to build it. But now nobody opens devices anymore. Like you can't open an Apple device. You can't really spend much time on the OS of an Apple device. And so, you're being pushed almost up the abstraction layer.
Sam Arbesman: And I think, also related to that, you're being shielded a little bit from the vast complexity. I remember, this is years ago, when the Apple Watch first came out, there was this article, I think in the Style section of the Wall Street Journal, around whether mechanical watches were still going to be a thing, or whether everyone was going to go the way of the smartwatch. And they interviewed this one guy, and the answer is, of course, people still buy mechanical watches. But this one guy was like, yeah, of course I want a mechanical watch; think of the vast complexity of this mechanical watch as opposed to a smartwatch, which is just a chip. And I'm thinking, just a chip? This thing is orders of magnitude more complex, but we've been shielded from it, or we don't have access to it. And so, going back to what I was saying about the type-in programs with the Commodore VIC-20, I saw the relationship between the code and what was happening. And you can still do that kind of thing, obviously, with computers nowadays, but by and large, especially if you are not as technologically savvy, you are shielded from vast levels of complexity until something goes wrong, and then you're like, oh, bugs are showing me a little bit of the seams between things, for better or for worse. But you're right, most people are shielded from that or just don't realize the levels of complexity, or, because of the design choices that a lot of these companies have made, we are pushed up these levels, and we don't necessarily have as much of an opportunity to dig into the innards of these machines. And people talk about this with cars. I was never really into cars, but my sense is people who are a generation older than me would fix their cars, and it was very mechanical; it was very easy to figure out, well, not easy, but you could actually figure things out in a way that now you're using computer diagnostics and... cars are basically just enormously powerful computers on wheels, with tens of millions of lines of computer code in them, which is really, really impressive, but doesn't allow for the same sort of mechanical, tactile ability to manipulate these kinds of things.
Eric Jorgenson: Yeah, those cars were built to be repaired and maintained and restored by the user in a way that computers used to be too, and they're not anymore. And you're hitting on something that I think is very interesting, because I often say, thank God for Moore's law. In an era of decades of stasis, what you are talking about, the compute and the software improvement, is basically the main area of technological progress for the last couple decades. But at the same time, as you point out, we are shielded from it. And so, people know that they get a better iPhone every couple years. They know that they can stream video now in a way they couldn't 15 years ago, and AI is maybe where that frontier is now, of like, oh shit, I didn't used to be able to do this, this is cool. But we're not seeing those improvements as much in other areas. Imagine living through the transition from humans not being able to fly to commercial airlines.
Sam Arbesman: And not only that, we've in some ways almost taken for granted some of these changes. I'll tell a little story. My grandfather lived to the age of 99. He was a retired dentist, but he was also a lifelong fan of science fiction. He read science fiction since basically the modern dawn of the genre; I think he read Dune when it was serialized in a magazine. He knew all the stories. So, when the iPhone first came out, I went with him and my father to the local Apple store. And we went there and we're playing around with the phone. And at one point, my grandfather says, this is it. This is the object I've been reading about for all these years. And I feel like we've gone from that moment of, oh, this is an object of the future, and it really felt like the capital-F Future, however people describe it, to, I don't know, complaining about camera resolution or random little things.
Eric Jorgenson: Battery life.
Sam Arbesman: Yeah, battery life. And you're like, come on, this thing is insane. It's unbelievable. And on the one hand, that's good. It shows that humans are really good at adapting to all the changes around us. On the other hand, it shows that we take all of these advances for granted. And so that even in the case of Moore's Law and these things that actually have had unbelievable advances, most people are either shielded from them or we just kind of take them for granted or we just immediately start complaining about something that it can't quite do without realizing the fact that, oh my God, this is wild.
Eric Jorgenson: Yeah. The Wi-Fi in the airplane is too slow. So, what about the future of computing? Do you see us growing out of silicon in the near future? Like, what's your rundown on future computing substrates? Do you see another Moore's law continuing?
Sam Arbesman: Yeah, I mean, so I'll say one thing. So first of all, I think I'm not that great at predicting the future. I feel like...
Eric Jorgenson: That’s a great start.
Sam Arbesman: Yeah. So, take everything I say and just kind of ignore it. No, but I mean, so actually I remember when my first book came out, so my first book was in 2012 and I remember being really...
Eric Jorgenson: That was Super complexity?
Sam Arbesman: It was The Half-Life of Facts... And so, when that one came out, I was really glad that it came out in print, because I was certain that soon after that, every book was going to be electronic, and there were going to be no print books after that. Of course, I was deeply wrong, and I feel like there are other instances where I totally missed it. And so I would say, for me, I can't say, okay, this specific thing is the future. I can tell you there are some interesting hints that I've noticed. And so, in terms of other substrates and things like that, actually one of the chapters I have in my book is about the similarities and differences between computing and biology. Because you look and you're like, oh, computing has binary, it has zeros and ones. And biology has DNA, which is four bases. It's like four instead of two. It's very, very similar. It turns out you can actually draw some nice analogies. That being said, biology is also deeply different. It's much more stochastic and random. It deals in much more messy things, as opposed to computers, which are made in... chips are made in clean rooms. It's a very different kind of substrate. And some of the interesting areas that people are looking at right now are around, instead of thinking about biology as something wildly different, viewing it as: no, no, traditional computing, the kind of computers we have on our desks and things like that, that is a subset of this larger space of information processing. And biology actually can show us all these other ways that we can compute. Because we are always drawn to metaphors and analogies, like, oh yeah, the cell is a machine or a computer, and it is, but it's also a wildly different computer than anything you would ever use to compute with. And because of that, it shows us how much broader computing can be. So, for example, like I mentioned, harnessing randomness, that is a very, very different way of operating. Or people have actually used, I think, DNA to factor very large numbers or whatever into their prime factors. There's also another idea, I think the term is polycomputing, where, by and large, in computers, we have, okay, here's a function, it does this specific thing. But in biology, you can have an enzyme that can do, I don't know, three different things depending on the level of the biological hierarchy that you're looking at. And so, it's almost like it's overloaded; this specific thing is doing lots of different things. And that is a wildly different thing than the kind of thing we're used to engineering, but it shows how much broader computing could potentially be. And so for me, I love looking at that kind of thing to point to, oh, maybe we should expand the kinds of ways we should be building chips or hardware or software, whatever it is. But also, I mentioned analog computers earlier, and many people view them as, oh, they had their heyday back in the day, and digital computers usurped them, and that's where we're going to go. But it turns out that analog computers actually can still be potentially useful for certain kinds of things.
And there are chips that are more analog-like; I think they're essentially analog chips that are designed for doing the kinds of calculations that are necessary for large neural networks very, very rapidly. So it turns out that you could potentially be using analog computing for training AI systems and things like that. And so, for me, I love seeing those little hints of, oh yeah, maybe some of these ideas that we kind of ignored for a while are now worth revisiting. So there's that. There's also the space of neuromorphic computing, of using ideas from biology, neurons and things like that, to inform how we compute. Obviously, there's quantum computing, where I think a lot of things are moving forward; it still feels pretty early, but I'm excited that things are happening there. And going back to the history stuff, I actually think that in terms of figuring out the next steps of where we can build, where the future lies, we need to spend more time thinking about the history of computing, because oftentimes there have been a lot of paths not taken, or things that were tried or ignored because the technology wasn't right yet. But we should actually be revisiting that. And unfortunately, at least from my perspective, Silicon Valley especially is particularly ignorant of technological history, and sometimes proudly ignorant, which is really not good at all. And there is something to be gained from learning about, even just looking at, computer magazines from the 80s and 90s or even earlier, just seeing what are the things that have been tried, to try to revisit that, as opposed to saying, oh no, the newest thing is always better, I don't need to look behind us. Because sometimes, actually, the things that people made should be revisited and should be looked at. And so, whether it's analog computing or certain ideas that have fallen by the wayside or whatever it is, I think those are really valuable to revisit, maybe not for Moore's law, but for new paradigms for computing, or interacting with our machines, or thinking about computational tools for thought, whatever it is. There's all these things that I think people have tried and are worth taking a look at.
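To give one concrete flavor of "harnessing randomness" as a way of computing, here is a classic Monte Carlo sketch in Python; this is our example rather than anything from the episode. Random samples stand in for exact arithmetic, and the answer sharpens as you add more of them.

```python
# A minimal sketch of computing with randomness (illustrative): estimate pi
# by throwing random points at a unit square and counting how many land
# inside the quarter circle of radius 1.
import random

def estimate_pi(samples=100_000):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    # quarter-circle area / square area = pi/4, so scale the ratio by 4
    return 4 * inside / samples

print(estimate_pi())  # ~3.14; the estimate sharpens as samples grow
```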
Eric Jorgenson: Yeah, the world of computing is, as you point out many times in the book, is like very broad, and we're seeing like very deep, deeper and deeper specialization in types of compute and things that get built for specific outcomes in compute. Like the chip specialization, I think, is way broader than it ever has been before, which is cool. Yeah, I think on the historian point, something I've noticed is some of the very best investors are actually incredible historians.
Sam Arbesman: Yeah, I think knowing history is really valuable, and maybe I'm just partial to it because I find it really, really interesting. I certainly included a lot of history in the book, partly because technology is moving so quickly that I knew at least the history stuff was not going to be changing that much, and so I included more of that as opposed to the most cutting edge thing. But I also really think that understanding it deeply is really powerful, and also recognizing that a lot of the things we're thinking about now, these questions, are not new. I mentioned before that a lot of the applications and the ways we think about computers have been around essentially since the modern era of the digital computer. We were trying all these things, and it took a long time for some of these ideas to reach fruition, but they're not new. And so, when you think about, especially with AI, people talking about issues around the alignment problem or unanticipated consequences, or how we think about how it will change the nature of work and meaning, these things are not new. Norbert Wiener, the developer of cybernetics, has this collection of essays, I think based on a lecture series, called God & Golem, Inc. It's a very thin book. And he talks about all these topics. And the truth is, it wasn't only in this obscure founder-of-cybernetics corner; it was also in popular culture. So, around that same time, in the mid to late 1960s, there was an episode of the original Star Trek called, I think, "The Ultimate Computer." And it was about this super powerful computer being tested to control the Enterprise... and they talked about every topic we're talking about now. And I remember watching that episode and just being blown away; obviously, it's dated in many, many ways, but the things they're talking about are also perennially relevant. And so, I really do think that there's something to be gained by looking at how people talked about these things, how they thought about what they would do and how they would change society. All that history, I think, is so powerful and valuable.
Eric Jorgenson: Through that lens, are you uniquely worried or not worried about like the alignment problem?
Sam Arbesman: I'm not- I mean, I feel like there's enough people on either side that I’m certainly not unique. But for me, the way I kind of think about how to think- I mean, less about the alignment problem, but also just kind of more broadly like how AI is going to change society, I feel like... one easy way of exploring it is to say, oh yeah, like AI can do X, Y, and Z, but it can never do A, it can never do this other thing. And that other thing is kind of the... that is like the unique domain of humans. And of course, then like, I don't know, 10 minutes later, AI starts doing that thing, and you're like, oh, I gotta move- I have to move the goalposts. And this reminds me of, so within theology, there's this idea of like the God of the gaps, where it's like, oh, how do we define God? We'll define God as anything that science hasn't understood yet. But of course, science progresses, and through a combination of like cosmology and astronomy and evolutionary biology, we understand a lot more of the world. And suddenly, if you adhere to that idea, like your conception of God gets just narrower and narrower and narrower and threatens to kind of just evaporate entirely. And I feel like we do the same kind of thing with AI, where it's like, oh, and the truth is we did this even before AI, where we'd be like, oh, humans are unique because we're the only ones that can use tools or language or whatever. And then we find things within the animal kingdom that show us that we're maybe not quite as unique as we thought. And I feel like we're doing the same thing with artificial intelligence. And so, for me, it's less so around what is unique about humans, because I imagine, like whatever we think might be unique might not be unique, and much more about what do we think is quintessentially human? Like in this case, it's more like, what is the thing that makes me feel most human? And that answer is going to be different for everyone. So sometimes it could just be thinking really deeply about some idea. It could be gardening. It could be reading. It could be spending time with friends and family. Whatever it is, as long as you figure out what is kind of quintessentially human to you and then make sure that AI and technology more broadly is not reducing your ability to do those things, but allowing you to do them in a better and richer way, then I think we're going to be okay. For me, it's much more about like, ultimately, all these tools, all these technologies and computers, like they were all developed for people. And they are meant to be for us. They're meant to be like in service of humans as opposed to kind of the other way around. And as long as we think about that... and think about how we choose to adopt individual technologies and make sure they're kind of in service of making us quintessentially human as opposed to just kind of cogs in some larger machine or whatever it is that we feel very unfulfilled with, then I think we'll be in good shape. In terms of like the concerns, yeah, I have- there are many concerns I have. I definitely think that when it comes to how we think about like work and purpose, like there are a lot of these issues. But for me, going back to like the quintessential humanity kind of thing, I think we need to have this kind of conversation as a society. Because right now, things are moving so fast and we are not really grappling with these questions. 
People grapple with them maybe individually or kind of at the margins or occasionally, but unless we really sit down as a society and say, okay, what do we want the future to look like? And so going back to like me being really bad at predicting the future, for me, it's like less so about predicting the future and kind of viewing the future as like this thing that just washes over us. We have agency. We are the ones building the future. And so it's really, we need to say, okay, what is the future we want to build? And then try to actually be as deliberate as possible to build that and construct that kind of future, which is one of the reasons why I think not only reading history is really valuable, because it can be useful for like figuring out analogies, like where do the analogies work, where do they break down, but also science fiction is really valuable. Because science fiction can give us a very large and deep and rich set of potential futures that we might want to aim toward. And which is to say not every science fiction future is exact or realistic or right, but allowing us to see not just like cool new gadgets and technologies, but say like how do these things, when embedded in society, affect the legal, ethical, regulatory, societal implications? Like what are all the implications of those things? Science fiction is really good at that. And so, for me, it's much more, okay, like let's read science fiction. Let's think really deeply. Let's figure out, okay, what are the kinds of worlds you want to live in and then work backwards and try to make those things as likely as possible.
Eric Jorgenson: It is tough to separate correlation from causation here, but like historically, science fiction has been a pretty good predictor even over like long time horizons.
Sam Arbesman: So, I think I might disagree with that only in the sense of I feel like there- So I would say because there's probably a certain amount of selection bias where it's like we are very good at noticing the science fiction stories that like nail it and be like, oh yeah, this thing...
Eric Jorgenson: But if you indexed every book from 1920...
Sam Arbesman: ...versus like you look at every... you are like, oh man, these are way off. For example, I'm currently reading I, Robot, the collection of short stories by Isaac Asimov. I'm reading it with my son. And the funny thing is, the framing device is that it's stories being told by Susan Calvin, who's the robot psychologist; she was one of the early people at U.S. Robots, the company that made the positronic brains in all the robots. And they mention her age and when she got her PhD or whatever. I think she and I both got our PhD the same year. And I'm like, I'm not living in the future. There are no robots walking among us. That being said, maybe it'll happen in a couple years... But that was wildly off in terms of the dates.
Eric Jorgenson: When was it written though?
Sam Arbesman: So, it was written in like the 1940s. I think eventually it was compiled in 1950 or so, but some of the stories were earlier.
Eric Jorgenson: It was written in the 40s, and we're 70-plus years forward, and he was off by 10 years, maybe 20. That's not off by 100%, off by 30%, that's a pretty...
Sam Arbesman: That is true. So maybe it was just the fact that it was so close to my own age, and I was like, this is freaky, and it also felt very... I was like, this is a science fiction novel. The other interesting thing is, you read a lot of these sci-fi novels, and they might be set in the future and there are spaceships or whatever, but the society is clearly the society of whatever time period it was written in. So, it's like, oh, they are hundreds of years in the future, but the way men and women interact is clearly from the 1950s... or it's far off in the future, but they're clearly obsessed with the concerns from the 1970s around overpopulation, some of those kinds of things. And of course, there are also many things that are wildly wrong. So I mentioned Star Trek anticipating all the questions we have about AI, but that story was still set several hundred years in the future. There's warp drive, there's transporters, there's a few things that are a little bit different. And so the interesting thing is, on the one hand, we are nowhere near the kind of space opera stuff. But when it comes to information technology, we are far in advance of that kind of thing. For example, at another science museum that I went to, they happened to have a special exhibit of all the props and things from Star Trek throughout the years. And I remember looking at the communicator, or the pad that was used in Star Trek: The Next Generation. And they looked like really, really crappy versions of things that we have now. So, it's like, oh, that looks like the iPad, except it's clearly just a piece of poorly painted wood. And that's supposed to be hundreds of years in the future. And so, on the one hand, we anticipated those things, but we were hundreds of years off. And then other things, we are nowhere near developing, warp drive or whatever it is. So oftentimes people talk about science fiction as really just a serious set of thought experiments, or a great way of discussing current issues in this far-future or counterfactual sort of way. So, I definitely think there are lots of successes in science fiction, but I'm not sure I would view science fiction as a predictive success. I view it much more as: there is a whole large set of science fiction stories and ideas, and we have the opportunity to pick and choose from them the kinds of things we want the world to look like. And so... on the one hand, yes, the flip phone was inspired by the original Star Trek. I think there's a clear story of someone at Motorola who created the StarTAC, or whatever the phone was, inspired by having watched Star Trek. And so, there are things like that where not only did it become reality, but it was inspired by these visions of the future. And then there are other ones where it kind of got it right, like talking about the internet, or people talk about the metaverse or cyberspace, that kind of world of William Gibson. On the other hand, you read those things, and you're like, this feels nothing like the web. And so, it depends how much you're squinting when you look at some of these visions of the future.
That being said, I will not- I don't want to say this to like take away anything from science fiction. I think it is an enormously powerful medium, an amazing genre that I love. But I wouldn’t necessarily view it as like the thing that is best for prediction. And the truth is when people try to kind of combine prediction and science fiction more clearly where they're kind of like writing some sort of like futuristic stories or whatever, but like around, working with like a company or whatever, I feel like those stories kind of end up being the least interesting because they're not as like unfettered imagination. And of course, I mean, science fiction, especially hard science fiction, it's not completely unfettered imagination. It's not full on, I don’t know, swords and sorcery or whatever. But I view it much more as it can be inspirational, it can be guiding as opposed to entirely predictive.
Eric Jorgenson: Interesting. As you look forward, based on the research you've just done and this unique moment we're in where maybe computers themselves are like, at least personal computing feels like it's maturing or reaching some sort of plateau, I don't know if you'd agree with that, and at the same time, AI is like we might have just had our kind of 1977, like oh my God, this is applicable to everybody immediately. Is it changing like how you individually are living and working on a daily basis?
Sam Arbesman: You mean AI, or just these...? I have to say it's not changing my life as much as I initially anticipated.
Eric Jorgenson: Is it on the same through line as compute and software? Do you see compute, software, and AI all as part of the same story threading together?
Sam Arbesman: I do. AI certainly is a unique advance and very powerful, but in many ways it is part of this continuum. So, you look back to the early days of computing: one of the major ways people thought about computers was as tools for education, as ways for allowing children to more easily manipulate information, or as computational tools for thought. In many ways, I feel like AI is the real version of a lot of those ideas. For example, people have for many, many years dreamt of this idea of basically describing a piece of software that you want and the computer building it for you. And for many years, that really was not possible. There was this whole idea of democratizing coding. There was the Macintosh software HyperCard, which was fantastic, and I can talk a lot about that; that was the late '80s through the '90s, and it was this amazing medium for allowing non-programmers to become comfortable with building software and building things. And then you have spreadsheets, which, if you're doing anything more than just entering data, if you're using formulas, means you are programming to a certain small degree, and that actually democratized things. But in terms of powerful no-code or low-code tools, there really wasn't that much. There were lots of attempts, but nothing quite broke through until a lot of these generative AI sorts of things. That being said, I think we're still not quite there in terms of that end goal of easily building software completely, because right now we're still very much stuck in the chat interface. The chat interface is powerful, but it's not the only way to interact with our computers, in the same way that we went from the terminal command line to the graphical user interface and the desktop metaphor. So I think we need newer ways, or just other ways, of thinking about malleable software or interacting with AI. But again, I view that all as part of this continuous conversation we've been having around tools that allow us to do our work better. Whether it's the Xerox PARC vision of the office of the future, which AI is clearly an extension of, or tools for thought, or manipulating images and graphics, these are all things we have been doing for decades, and AI is certainly allowing us to do them much more easily and much more powerfully. It is a step change, and I think there's power in saying, okay, this is radically different and we do have to grapple with it, going back to what we were talking about, quintessential humanity and how we grapple with this. But there is also value in recognizing that it's still part of this tradition of ideas, this whole school of thought around what our machines are for. Because then, going back to the combination of the history and the future and everything like that...
looking at that history of computing and seeing, okay, what have people tried to use computers for and how have they tried to interact with them, that can ground a lot of the new use cases in this tradition and make us feel a little less unmoored when we are grappling with things that are objectively radically new.
Eric Jorgenson: Yeah. Is the summary of all of those threads, compute, software, AI, essentially cheaper intelligence? What is the one-sentence summary of what this tool does for us?
Sam Arbesman: Yeah, maybe it's cheaper intelligence. Maybe it's processing information or understanding the world. To be honest, I'm not sure. When I think about computers, they are this general-purpose kind of machine, and AI is sort of this general-purpose technology. So maybe it's, I don't know, this continuum of a Swiss Army knife for symbolic manipulation. I have no idea. That sounds terrible. But that's kind of what it is. Because there's your work, there's thinking about intelligence, but there's also, going back to the early parts of our discussion around wonder and delight, this whole playful aspect of computing. Whether it's games, actual play, or creative coding, building computer programs that generate things that are almost works of art; or the very act of programming itself, which is this very human, idiosyncratic thing that can have style and beauty to it as well. So there's all of that too, and AI is changing those kinds of things. So maybe fundamentally it's around symbolic manipulation, or manipulation of information, for fun and for profit. I don't know. That's also terrible, but I don't know.
Eric Jorgenson: It's not a book title.
Sam Arbesman: No, no, no, definitely not. But I think that goes back to the fact that computers are these weird everything machines. Not only are they general purpose, in that you can compute lots of different algorithms, they can be used for so many different use cases. That makes it hard to distill down what the computer is. But ultimately, going back to what I was saying earlier, we need to make our technologies and our computers work for humans and not limit our humanity. And whatever new era of technology we come across, whether it's AI or whatever it is, we need to make sure it's in service of that. So yeah, I don't know, but I guess that's one thing I always think about.
Eric Jorgenson: It's really difficult to describe the universality. David Deutsch talks about that a lot: there are two universal computers, the computer and the human brain, that are truly universal things. But when you use magic or sorcery or wizardry as your metaphor, I think, and I'd be very curious if you'd add to this, there are two components that stand out to me that are very difficult to summarize without appealing to the supernatural. One is the universality we're talking about: you can do anything with this. The other is the fact that it has, as near as makes no difference, zero marginal cost. That's the other magical thing. Before computers and before software, almost nothing in the physical world truly had zero marginal cost, and we're still having a hard time, I think, wrapping our heads around some of that. What does it cost somebody who's listening to this podcast? It doesn't cost us, it doesn't cost them, maybe fractions of a cent of data transfer and energy. And that's a thing that did not exist before, and it is marvelous and magical and wondrous.
Sam Arbesman: I think that's a really good point. One of the things I talk about with the nature of code, and software more broadly, this creative component of it, is that when you build something with code, it's text, not a substance, but it operates in the world. It's this very weird new kind of thing that we really don't know how to grapple with. It allows us to create things with well-nigh zero marginal cost. It allows us to coerce the world, which is this magical feature of it. It has all these other properties, and it feels radically different. That being said, in terms of physicality, I feel like one of the interesting things about code and computing is that it is deeply physical, but we often forget that. We forget it until something goes wrong, until you're confronted with some crazy bug, like the Wi-Fi that only works when it's raining. I wrote a story about that kind of thing: the rain made some tree branches wet, which lowered something, and that created a line of sight, and suddenly the Wi-Fi was working; when it was dry, it wasn't. So it's this weird thing where, on the one hand, we have built this massive physical infrastructure that we sometimes ignore at our peril. The internet is real. It is a thing...
Eric Jorgenson: A series of tubes.
Sam Arbesman: Exactly. There's actually a great book called Tubes, which is about the physical internet, saying, yeah, that Senator, as much as we made fun of him, was onto something. There is a physicality to it. On the other hand, because we spent a lot of effort and a lot of capital building out this infrastructure, we now have the ability at the margin to do things that are costless and free, and, going back to layers of abstraction, to not have to think about any of that. We don't have to think about the physicality; it's just this weird thought-stuff doing its thing, winging its way around the world. And that's wild. That is something that, in the grand scheme of human history, has never really happened: we can now tell stories to each other, and those stories can be valued at billions of dollars. When you think about software and companies, that's kind of wild. And because it's so new in the grand scheme of history, we're still trying to figure out what that means and how we think about it. But even though it's still new, it's not static. We're constantly upgrading it and building new versions of these things, and now we add AI to the mix. So we're in this weird place where humanity is grappling with all of this while we're running to stay still, the whole Red Queen thing: we're running as fast as we can just to not get passed by all these new technological advances. On the one hand, going back to what I was saying before, humans are really good at adapting, so I think we will adapt to all this. The question becomes: at what cost? And that, I think, is the thing we need to be a lot more thoughtful about.
Eric Jorgenson: I feel like software is in an interesting place right now. You and I both sort of dabble as VCs, if not more, and certainly have many conversations about how to place capital and where things are going in the near-term and long-term future. On the one hand, software investing has been a lot of what's happened for the last couple of decades, and it's almost emerging into its own weird little subset of venture capital. But also, the AI explosion is happening. Did going through this book and exploring this hypothesis change how you think about where you emphasize, where you invest, where you focus for the next five, ten years? Did it reignite a love of software in you?
Sam Arbesman: That's a fun question. I'll caveat this: even though I work for Lux Capital, I am not on the investment team, so I'm not the one calling the shots in terms of figuring out where we want to invest. That being said, in terms of the areas where I might want to spend more time, or where I might direct other folks, like, oh, you might want to pay attention to this, I definitely feel that in the same way that early in computing history there were all these things people were playing with that took many decades to reach maturity, I like looking at the areas that are in that same tradition but are still just weird, playful kinds of things. Oftentimes those are things that are nowhere near investment yet, but it's the people who are trying to rethink what software is, or what kinds of interfaces we want, or what people call malleable software or the democratization of code. These are the kinds of things I just want more of. And I would certainly say that working on this book, I spent a lot more time thinking about the playful aspects of computing. The major caveat is that those are probably not the biggest outcomes for venture. But I do think that's where you find unbelievable passion around what makes computing computing. For example, there's this whole realm of the poetic web. There's the web of SaaS and enterprise software and big websites, but there are also people building these weird, quirky little websites; they call it the poetic web, and it's much more playful and delightful. None of those things are huge businesses or venture scale or anything like that, but it shows there's this other way of playing with these technologies. It also shows, and I mentioned this, that there's the enterprise world, the corporate world, which I identify in the book as the utilitarian stance toward technology, and then there's the wondering stance, which is the much more playful stuff. The thing is, it's not either-or; you need both. The playful stuff excites people to get involved and eventually build the kinds of things that are more corporate, and the corporate stuff provides the higher-level foundation that makes it that much easier to build the playful stuff. So for me, I love finding that intersection where people are building tools that are useful for a wide variety of use cases, including business use cases, but still incorporate that sense of playfulness and open-endedness. One example, and this is a Lux company, is tldraw, where they're building an infinite canvas as a drawing and prototyping tool; you can draw and map things out. And they're building it as a primitive you can incorporate into whatever tool you want. So if you want to build your version of Figma or whatever it is, you can use tldraw.
But they are also building all these wild and crazy AI experiments for drawing and art. There was one that came out maybe a year or two ago that I got my kids hooked on, and they found it amazing to play with. So I love that intersection, and maybe that's the area that has some really interesting potential I want to explore more. It's a little hard to always point to or describe fully, but where the utilitarian stance and the wondering stance meet is, I think, where some of the most interesting things are happening.
Eric Jorgenson: Yeah, the wondrous is difficult to engineer your way into. It comes from tons of diverse experiments that result in, oh, that's actually really fun; oh, somebody figured out how to make a fun thing useful, and a toy turns into a tool. You mentioned earlier feeling like there are evolutions still to come in the UIs for AI. Do you have an inkling of what those will be?
Sam Arbesman: So, I don't know. I certainly think something more visual, where you can manipulate a thing, especially if you're building software. Just having conversation after conversation with an AI tool feels necessary but not sufficient. Being able to manipulate the result, or change things visually or in some tactile way, I think could be very valuable. There's also, and these don't really use AI in the LLM sense, some of the work that Bret Victor does with Dynamicland, actually thinking of a physical space as a computer. And there's Folk Computer, which is a similar kind of thing, where you have pieces of paper with code on them, and those are programs that are being run. You move the programs together, or you move other objects, and they all interact in this very, very physical way. That feels wildly different. I don't know if that's the solution for better ways to play with AI, but I just want more people trying really weird and different kinds of things. Because I feel like we got too narrowly shunted into the chat interface, which is not bad; it was really powerful. But I think now we're bumping up against its limitations.
Eric Jorgenson: Well, on the frontiers, we're seeing really interesting things. One of the breakthroughs that I feel is still underappreciated is cheap translation between different modes. It's really cheap to turn words into an image and an image into words, back and forth, or into code. So you can think about omni-input in some interesting ways. Sometimes I want to click, sometimes I want to tap, sometimes I want to delete or cross something out with a pen on my screen. Sometimes I want to just talk to the screen and say, move this here, move this there. Maybe sometimes I want to type or copy-paste. It should be trivial to interpret whatever my intent is and then translate it. The space of inputs should be going way up over the coming 20 years.
Sam Arbesman: There should be this sense that the software is agnostic as to the best way for you to interact with it. If you want a terminal command line, by all means, go for it. If you want a graphical user interface, great. If you want to scribble something on a piece of paper with a pen and hold it up to a camera, do whatever works for you, versus the software saying, no, no matter what works for you, these are the two or three ways you can interact.
Eric Jorgenson: You have to speak my language, rather than the computer learning to speak whatever you're feeding it. Yeah, I'm hopeful. As you were going through that, thinking back to the sci-fi and the history of computing, I was picturing... I kind of want a weird new Clippy, somewhere between Clippy and Cortana, living in my computer, that I can have a conversation with, but that has total control of everything in the digital world. That is a really super powerful thing. I mean, you think about Jarvis, Cortana, whatever, all of the voice...
Sam Arbesman: These voice assistants.
Eric Jorgenson: That are essentially characters, but can also manifest stuff anywhere...
Sam Arbesman: Yeah, here it is. This is what we need.
Eric Jorgenson: I mean, if Microsoft wants to go viral, it's right there. There's nothing we would want to talk trash about more than Clippy. That'd be a high-stakes maneuver for Microsoft. If it was great, it would totally work.
Sam Arbesman: I mean, the truth is, as long as it weren't terrible, and people recognized that it was just an experiment, I think it could work. Or they could just release it as a fun little joke and let people enjoy it. But you're right, if they're actually trying to incorporate it into a real product, the bar is a little higher.
Eric Jorgenson: Release it on April Fools' Day and then over-deliver. That's the strategy.
Sam Arbesman: I like that.
Eric Jorgenson: All right. So, what's the transformation? This is a question I talk about with authors all the time as they're shaping and writing their books, but I think it's interesting as a way to entice readers too. What's the transformation you hope the reader goes through from starting this book to finishing it?
Sam Arbesman: Okay, yeah. It depends on the specific reader. I think this book will be of interest even if you are comfortable with code, because I talk about enough random, weird topics, or at least come at them from a different lens, that it should still be interesting. If you're an expert, it will give you a broader sense of where you are situated in the world of computing: recognizing, oh, computing is actually this much broader thing than I might have realized. Or it might even rekindle the sense of wonder that got you excited about this in the first place. More broadly, though, I also think this is the kind of book that software engineers, computer programmers, people involved in the computing world, can give their loved ones and say, this is why I find this world so exciting. Those people are not going to come out knowing how to program; it's not a programming tutorial. But I'm trying to convey that sense of wonder and delight, and the sense in which, going back to what I was saying at the beginning of the conversation, computing and computation can be this humanistic liberal art. When you think deeply about it, you should think philosophically. It should touch on ancient mythology, or biblical texts, or ideas about the nature of life, or how we think about art. And you realize, oh, it can be this all-encompassing thing that can be really exciting and fun to think about.
Eric Jorgenson: I love that. I love you. I love this book. I'm really glad you wrote it. You're the perfect person to write it, and this book definitely needed to exist. I've really enjoyed it, and there are definitely people I will gift it to. I just really love anything that gives people a greater sense of wonder and appreciation for the magic, all these hard-earned miracles that engineers have been building for us for hundreds of years. It deepens my appreciation for life; this book has done that for me, and I hope it can do it for many others. Thank you for writing it. Thank you for coming. It's always a pleasure to catch up and learn from you and hear what's lighting you up these days.
Sam Arbesman: Well, I really appreciate it. Yeah. Likewise, this has been so much fun. I really appreciate it.
Eric Jorgenson: Where should people find you, find more about you, find more about the book? What do you recommend?
Sam Arbesman: So, if you go to my website, Arbesman.net, just my last name dot net, you can find recent articles and books I've written. The most recent book is the one we've just been discussing, The Magic of Code. There's also a website, themagicofcode.com, a little website that I made. You can find the book wherever books are sold. I also have a newsletter, which you can find on my website, as well as a podcast that I do through Lux called The Orthogonal Bet, where I interview lots of really interesting, fun people about crazy ideas, from SimCity to science fiction.
Eric Jorgenson: That's awesome. Also, I love that you have a .net domain.
Sam Arbesman: Yeah, it's a pretty weird one, but I love it.
Eric Jorgenson: That's what we're talking about. That's a sedimentary layer in the history of the Internet. All right, good luck spelling orthogonal, everybody. Go subscribe to Sam's podcast. Appreciate you, talk soon.
Sam Arbesman: Thank you.