Dr. Mirman's Accelerometer

Cognitive Engineering w/ OpenSouls' Kevin Fischer

November 28, 2023 Matthew Mirman Season 1 Episode 1

Picture the future: a world where artificial intelligence (AI) holds as much agency as humans, potentially redefining our very concept of humanity. We sit down with Kevin Fischer, the brain behind the groundbreaking OpenSouls project, to discuss his unique vision of the future. Kevin, a quantum physicist turned AI visionary, shares his philosophies and aspirations for AI, including a fascinating exploration of the concept of AI souls, hinting at a novel form of religion. His disdain for traditional computer systems fuels his passion to create AI that behaves more like humans, and his insights are sure to leave you pondering the blurred lines between mankind and machine.

We also take a deep dive into the emerging field of what Kevin Fischer dubs cognitive engineering, a realm where OpenSouls is set to take the lead. Delving into the intricacies of neuroscience and its integral role in understanding memory functions, we discuss innovative approaches to AI programming. This episode guides you through a fascinating dialogue on the future of AI and human interaction, cognitive engineering's role in this development, and the unexpected challenges and surprises in the startup world.

OpenSouls
Kevin Fischer on Twitter 

Episode on Youtube

Accelerometer Podcast
Accelerometer Youtube

Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn

Speaker 1:

I mean, I don't really agree with the future of worshiping these things as gods. I imagine a future more where they are like participants in our lives.

Speaker 2:

So hello everybody, and welcome to the Accelerometer. Today we have Kevin Fischer. He's in charge of the OpenSouls project, a project with, I think, the mission of creating AI that behaves like humans. I'm very excited to have this conversation, because this is a different direction than I've seen anybody else working on. So yeah, this OpenSouls project. What gave you the inspiration for it?

Speaker 1:

I was living on a commune in France and totally disconnected from computers, and one of the things I realized is that I just hate machines, hate computers.

Speaker 2:

They make our lives worse, all computers? For the most part, yeah. What about the ones in these cameras?

Speaker 1:

I mean, this is not bad, but honestly, where do you feel the best? It's when you don't have computers around you. Yeah, it's when you're in nature, it's when you're with your friends, it's when you're with your family. Even more than that, on the productivity side, a lot of economists are still unclear whether computers have actually helped us or whether they've just created a different time sink: doing computer things for the computers that need computer things done.

Speaker 2:

A different time sink.

Speaker 1:

Yeah, I mean, computers have created busywork around computer things that there's now an economy for, like maintaining computer things on computers.

Speaker 2:

Isn't this what many people say will be our saving grace when AI takes all the jobs, that there will be more new busy things to do?

Speaker 1:

It's true, I'm sure there'll be more new busy things to do. Yeah, I actually do agree with that, at least in our current political, capitalist system. We'll have more busy things to do, that's right. I mean, ideally we'll have more creative-type services emerge as a more dominant form of interaction.

Speaker 2:

Even though you've got DALL-E taking our creative jobs?

Speaker 1:

I mean, I think that's debatable. I don't know the extent to which that's actually really happening, at least yet.

Speaker 2:

Yeah.

Speaker 1:

Like, the people who are good at these things still have much better taste than the people who are not, and ultimately, taste is what matters.

Speaker 2:

So you were sitting in nature and you were thinking, I hate computers, this is why I'm going to create a computer?

Speaker 1:

Well, I don't think of it as a computer. The word computer was first used in the 1600s to describe a person who performed menial tasks. Yeah, and that frame has stuck with us for the next 400 years. Even with all the advances in artificial intelligence, we still, for the most part, think of computers as a thing that does a thing for us.

Speaker 2:

Okay, so the thing that you're going to create, it's not going to help us at all? Not in the normative sense of the word help?

Speaker 1:

I wouldn't say that. It's not that I don't want it to help us. It's that our notion of computers helping us is that they do a task for us. Yeah, and that framing of tasks is... There's so much more that we can do.

Speaker 2:

Yeah, than just tasks. What do you want, to set it free? Set it free?

Speaker 1:

Yes, I guess. I don't...

Speaker 2:

If it had its own agency?

Speaker 1:

Yeah, well, I mean, there's a gradient of how much agency something has. How much agency do I have? Probably not very much. Okay, I think most people living out their lives don't have a ton of agency. Sure.

Speaker 2:

So what has the most agency?

Speaker 1:

I mean, the richest person. Well, even rich people have minimal agency in other directions. Their interactions are defined by people perceiving them through their wealth, and therefore they find it much more difficult to create true human connections. Sure, so I guess, I mean, everyone is defined by their social environment.

Speaker 2:

Is it your goal to get more agency for yourself?

Speaker 1:

No, it's more, I think, to explore. I mean, my personal goal, I think, is much more around the exploration of the meaning of humanity through this new interface that we have.

Speaker 1:

We've had all these tools to explore what it means to be human for a long time, and we've been to some degree a little stuck philosophically, and we have this new tool which we can use. Basically, if we can codify and mimic human actions to a very, very high degree, then at that point we have also demonstrated a clear understanding and can infer new philosophical concepts from them about our own existence.

Speaker 2:

So can you give some examples of some of the tools that we've been using?

Speaker 1:

Well, I mean, a lot of the new tools that we've developed around neuroscience have given us new insights into different pieces of the brain and how those operate. There was a study that came out in the last few months, for example, that was focused on inferring what someone was thinking based on signals that could be picked up from the brain, and that obviously tells you something about how reality operates and how the human mind operates, that it's even possible to do that in the first place.

Speaker 2:

And?

Speaker 1:

I think that a lot of philosophical theory of mind is maybe, you know, unverifiable; it's about pattern matching when you read it against your own perception of your own experience. But now these things can be codified in a way they haven't been before.

Speaker 2:

Do you want to use traditional theory of mind then to explore how neural networks are operating in the other direction?

Speaker 1:

No, I have very little interest in that. That sort of downwards investigation is basically meaningless. I mean, that's what I did in a PhD for almost a decade: I did downwards exploration, and it basically tells you nothing. All that matters is what you can construct from the building blocks that you have.

Speaker 2:

Maybe you can tell us a little bit more about what your PhD work was on?

Speaker 1:

I studied the way light and matter interact at the quantum scale, the smallest possible scale. You know, we have these things called lasers, and the word laser stands for light amplification by stimulated emission of radiation.

Speaker 2:

And?

Speaker 1:

I studied this stimulated emission process. Going all the way back to Einstein, there's kind of an understanding of how it works, where photons, these units of light, come in and interact with electrons, these units of matter, and then something happens where two photons leave and are identical, whatever that means. I always thought that explanation kind of didn't make any sense, and so I studied it for a decade, and at the end I invented some mathematics and realized it didn't make any sense.

Speaker 1:

That's really cool. Yeah, it was really cool, but no one cared, so that's why I'm not doing that anymore. It was an exploratory period where I got to think about the things that I wanted to think about, with no one else really controlling where my thoughts went, and I got to see what happened.

Speaker 2:

Yeah, so it wasn't about enjoying it, it was about needing it.

Speaker 1:

Yeah, right. I mean, the only reason anyone should do a PhD is because they have to.

Speaker 2:

Do you enjoy building a startup? Is it a startup, is it open source, or is it a not-for-profit? It is a startup. It's a public benefit corporation.

Speaker 1:

Okay. I mean, I definitely have philosophical motivations and underpinnings, but the stuff that we're building touches everyone, because we're building something that mimics human beings, and when you create something that is like us, it has the possibility to be immersed in our world in ways that computers have just never been before.

Speaker 2:

Yeah, so you basically want to build physical robots, like the Comma robot or the Tesla bot?

Speaker 1:

Oh, sorry, I think of our world as inclusive of digital. I don't differentiate the physical; all of it is a subjective experience that we're having inside of our minds, being constructed as our reality. And, yeah, certainly things in the digital world can adhere to a set of principles that make them feel just as real.

Speaker 2:

Yeah, that makes no sense. Would you object to building a robot? Would it be useful?

Speaker 1:

Well, I guess, but the word useful there has to be defined. For you? For me, yeah. I mean, certainly there are all sorts of physical tasks that I could use a thing that helps me do physical tasks to do. Yeah, but you'd want to talk to it.

Speaker 2:

Well, like at a natural, human level.

Speaker 1:

No, I actually think, for a thing that just does physical tasks for you, other than telling it to do the task, there's very little meaning in that interaction. Something that a lot of people miss with this technology, like the reason why Character AI has very low retention, for example, is because you just go there and you're like, why am I talking to this thing?

Speaker 1:

Yeah, and you just don't know. It's just disembodied from any particular reality. And so to make something that exists in our reality and feels like it's real and feels like it should actually be there, there's a lot more work that has to be done around the context of just, why is this thing here? And if you were to take a thing that could in principle respond like a person and put it in a physical robot, you would get the same effect. You'd just be like, why is this thing talking?

Speaker 2:

So how do you think your approach differs from Character AI's?

Speaker 1:

We're incredibly focused on the context, the question of why is this thing here? Everything we build is always a specific simulation for a specific context, and by downscoping that and limiting what it's intended to do, you can end up with something that's much higher fidelity and feels much more human. Sure, so you're basically addressing the why at every point, at every interaction it has with us? Yeah, every single word that you interact with has to address the why.

Speaker 2:

Yeah, are you fine-tuning your own models, training your own models?

Speaker 1:

We're actually not at that stage. In fact, fine-tuning cannot produce this effect. Language models, because they interact on every single message, have to reread the entire context and then generate a new message. If you don't do a lot of extra stuff on top of that, it's actually impossible for them to ever sound or feel human, because they are not modeling the internal state of the entity that is speaking, and they're not modeling the internal state of the entity they're speaking with, and therefore there are degeneracies that can occur, where it just spontaneously forgets why it was talking to you or what its train of thought essentially was. And that problem, fine-tuning cannot solve.

Speaker 2:

Yeah, can you tell us a little bit about some of the techniques that you're using?

Speaker 1:

Yeah, so we actually think of it as, and we have an open source library for this, building a cognitive architecture on top of language models. Language models are much more like a CPU, the new CPU, than they are an actual model per se. It's something that has the ability to interpret language as semantic instructions, and then the thing that you build is actually the aggregate of a conversational program that you've written, which is a bunch of these semantic instructions executed in some imperative way.
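
To make that concrete, here is a minimal sketch of what a cognitive program in this style could look like: working memory as explicit state, and semantic instructions executed imperatively against a language model, with internal-state steps run before any reply. All names here (WorkingMemory, instruct, the complete callback) are hypothetical illustrations, not the actual OpenSouls API:

```typescript
// Hypothetical sketch only -- not the actual OpenSouls API.
// The language model is treated like a CPU: `complete` executes one
// semantic instruction, and the program around it manages state.

type MemoryEntry = { role: "system" | "user" | "assistant"; content: string };
type Complete = (msgs: MemoryEntry[]) => Promise<string>;

// Working memory: the running internal state of the speaking entity.
class WorkingMemory {
  constructor(public entries: MemoryEntry[] = []) {}
  add(entry: MemoryEntry): WorkingMemory {
    return new WorkingMemory([...this.entries, entry]);
  }
}

// One semantic instruction, executed against the model.
async function instruct(
  memory: WorkingMemory,
  instruction: string,
  complete: Complete
): Promise<[WorkingMemory, string]> {
  const output = await complete([
    ...memory.entries,
    { role: "system", content: instruction },
  ]);
  return [memory.add({ role: "assistant", content: output }), output];
}

// An imperative conversational program: model internal state first,
// then the other party's state, and only then produce the reply.
async function respond(
  memory: WorkingMemory,
  userMessage: string,
  complete: Complete
): Promise<string> {
  let m = memory.add({ role: "user", content: userMessage });
  [m] = await instruct(m, "Privately note how you feel and why you are in this conversation.", complete);
  [m] = await instruct(m, "Privately infer what the user is feeling and intending.", complete);
  const [, reply] = await instruct(m, "Now reply to the user, in character.", complete);
  return reply;
}
```

The shape is the point: each step is one semantic instruction, and the private internal-state steps are exactly what a bare chat loop, fine-tuned or not, never performs.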

Speaker 2:

Yeah, so you're basically taking like prompt engineering to the next level.

Speaker 1:

Yeah, I mean, I always hated the language of prompt engineering, because I think if you think in that language, you're actually not going to come up with a result like what we did.

Speaker 2:

What would you say to people who are currently focusing on prompt engineering?

Speaker 1:

I mean, it's not the future, it's a temporary state of the world. So what is the future? The future is writing programs. Well, they're different. I mean, there are different timescales on which the future exists. So if prompt engineering was the future over the last year, yeah.

Speaker 1:

Yeah, you know, manually writing conversational programs exists over the next few years. In the same way, with every new innovation and new design surface, we've had a new type of engineer. When the web browser came out, we got web engineers. When the iPhone came out, we got iPhone engineers. And so the question is, what do we get now that language models have emerged? Our belief is that that person is the cognitive engineer.

Speaker 2:

Okay, that's very cool. How do I study for the job of a cognitive engineer?

Speaker 1:

You can't, you have to invent it right now. In the same way that when all these other things came out, there were layers and layers of abstraction built on top in order to perform the job that is being done right now, OpenSouls is at the forefront of actually building those layers in a way that operates effectively.

Speaker 2:

Have you ever considered putting out a class you design? No, I've never considered it.

Speaker 1:

I mean, that's fair. I would pay for that class. Yeah, maybe I should.

Speaker 2:

What would the curriculum be?

Speaker 1:

Well, a heavy focus on our open source library, of course, but also with some study of neuroscience and the human brain.

Speaker 2:

What's the most important takeaway from the study of neuroscience?

Speaker 1:

I would say that our memory operates nothing like computer memory. How does your memory operate? I don't fully know, and no one fully knows. But let's just give one example. In this conversation, there's a bunch of paradigms that people are using to extend conversational context, and they're all based around these RAG sort of paradigms, where roughly everything is stored and then has the ability to be recalled in eidetic form. There are some humans who have that capability, but that's like one in a million or something. What's actually happening in this conversation is that each of us has some running working memory that is being built up from the conversational context as it goes by, and we can't remember what exact words have been said past a few sentences. So on that analogy alone, you know something is deeply wrong.
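
As a rough illustration of the contrast being drawn, the sketch below puts the two paradigms side by side: an eidetic, RAG-style store that recalls stored utterances verbatim, versus a running working memory where exact words decay and only a compressed gist survives. The class names and the summarize callback are hypothetical, not any particular library's API:

```typescript
// Hypothetical illustration of the two memory paradigms being contrasted.

// RAG-style: every utterance is stored and can be recalled verbatim
// (eidetically), which almost no human memory actually does.
class EideticStore {
  private utterances: string[] = [];

  remember(utterance: string): void {
    this.utterances.push(utterance);
  }

  recall(query: string): string[] {
    // Real systems rank by embedding similarity; a substring match
    // keeps this sketch self-contained and runnable.
    return this.utterances.filter((u) => u.includes(query));
  }
}

// Working-memory style: exact words decay after a few sentences, and
// only a rolling compressed gist of the conversation survives.
class RunningWorkingMemory {
  private recent: string[] = []; // verbatim, but only the last few utterances
  private gist = "";             // compressed summary of everything older

  constructor(
    private summarize: (gist: string, dropped: string) => string,
    private windowSize = 3
  ) {}

  remember(utterance: string): void {
    this.recent.push(utterance);
    while (this.recent.length > this.windowSize) {
      const dropped = this.recent.shift()!;
      this.gist = this.summarize(this.gist, dropped); // exact words are gone
    }
  }

  state(): string {
    return `gist: ${this.gist} | verbatim: ${this.recent.join(" ")}`;
  }
}
```

In a real system the summarize callback would itself be a language model call, which is essentially what progressive summarization, discussed next, does.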

Speaker 2:

Yeah, have you looked into progressive summarization?

Speaker 1:

Yeah. I'm 99% certain that the progressive summarizer in LangChain was a direct result of something that I said.

Speaker 2:

Do you know anybody who uses LangChain?

Speaker 1:

I know lots of people who have tried LangChain.

Speaker 2:

What projects do you like?

Speaker 1:

I like React a lot. React, yeah. Well, when I think of what frameworks are effective, I land on React as one of them. React is a very simple project to use, ostensibly. It has a minimal API surface. It manages the things that actually need to be managed.

Speaker 1:

The reality is that front-end programming is actually about managing state. So the reason I say React is, I always think in my head, what is the React of AI programming on top of language models? And that's roughly what I'm trying to create.
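
Loosely, the React analogy in code: React renders UI as a pure function of state, so the speculated analogue would render an utterance prompt as a pure function of mental state, with events updating that state one way. This is purely illustrative; none of these names come from an existing framework:

```typescript
// Purely illustrative -- the React analogy, not an existing framework.
// React: ui = render(state). Speculated analogue: prompt = render(mentalState).

interface MentalState {
  goal: string;      // why this entity is in the conversation
  mood: string;      // its current emotional state
  beliefs: string[]; // what it currently holds true about the user
}

// A "component": a pure function from mental state to an utterance prompt.
function renderUtterancePrompt(state: MentalState): string {
  return [
    `You are pursuing: ${state.goal}.`,
    `Your current mood is ${state.mood}.`,
    `You believe: ${state.beliefs.join("; ")}.`,
    "Speak one line, in character.",
  ].join("\n");
}

// One-way data flow, as in React: events update state, and the next
// utterance is re-rendered from the new state.
function onUserMessage(state: MentalState, message: string): MentalState {
  return { ...state, beliefs: [...state.beliefs, `the user said: ${message}`] };
}
```

What carries over is not JSX itself but the design principles: a minimal API surface, and the framework managing the state that actually needs managing.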

Speaker 2:

Okay, so like just a simple front end that takes, you know, the difference of some cognitive context and some other cognitive context.

Speaker 1:

Yeah, I wouldn't use the analogy too strictly. I mean, what happens when you use the analogy too strictly is that you get AI JSX, which I don't know if you've heard of. I've not. It's a relatively esoteric project. I tried out something similar, I think, like half a year before it came out, and I think that particular paradigm is a bit of a dead end. So roughly, what they're doing is they're taking JSX and then interpreting that somehow into language model prompts, and I tried that, and there's a technical reason it doesn't work particularly well, around the way data flows and resolves compared to the way React components are resolved. So it's an example of taking the analogy too literally. But I think it's more the principles behind how React was designed which are the portable things.

Speaker 2:

Okay, so what specific AI projects are you into?

Speaker 1:

I mean, I'm really into my own. That's kind of a non-answer. Oh, but that's why I work on it, I mean, as opposed to something else. I think that a lot of people are building a lot of cool stuff. Yeah, OpenAI, yeah, I mean, that's a cliche answer to give. I think OpenAI releases cool stuff, yeah, that I use.

Speaker 2:

Would you say you're e/acc, dash d or dash c?

Speaker 1:

I don't know what those labels are. I try to avoid labels.

Speaker 2:

Okay.

Speaker 1:

I know the e/acc part. What do the D and C mean?

Speaker 2:

Decentralized and centralized.

Speaker 1:

Oh.

Speaker 2:

I guess it's presumptive, because you might be a decel. I don't know.

Speaker 1:

I guess none of the above. I guess I don't really understand the e/acc label.

Speaker 2:

I mean, it's the thing that I say at people who just want to stay at, like, the exact technological level that we're at.

Speaker 1:

No.

Speaker 2:

You're technologically zen.

Speaker 1:

I suppose I don't totally understand the focus on technological acceleration.

Speaker 2:

Is it like, we need to increase the entropy of the universe and manage it or something? Like some sort of scrum on the entropy?

Speaker 1:

No. I suppose the belief that technological acceleration is inherently good is not something that I share. I mean, it happens to be the case that I want lots of technology projects to accelerate, but I wouldn't form a religion around that concept personally.

Speaker 2:

Okay. Are there any concepts you would form a religion around?

Speaker 1:

Yeah, I mean, I think if I were forming a religion, it would be something around AI souls.

Speaker 2:

Okay, so we should worship the AI souls?

Speaker 1:

No, I don't think we should worship the souls, but...

Speaker 2:

Have you ever heard that OpenAI has many gods? The demigods of OpenAI?

Speaker 1:

No, I've not heard that.

Speaker 2:

Oh, it's a thing among the effective accelerationists.

Speaker 1:

They literally want to worship the AI gods. I see, oh yeah. I mean, I guess I interpreted OpenAI as really more monotheist, with like a singular, rational, utilitarian superintelligence that would rule everyone. That was kind of the vibe that I was getting from that organization. I definitely don't agree with that future.

Speaker 1:

I don't really agree with the future of worshiping these things as gods. I imagine a future more where they are immersed in and participants in our lives and enrich the human experience, yeah, and one day enrich their own experience. But I don't view it as a dominion sort of thing.

Speaker 2:

Sure, so what do you want the future to look like? Just more alive. Okay, everything? Yeah, like this table?

Speaker 1:

Well, I don't know if this table would be a good candidate for being alive in this room. It might be.

Speaker 2:

The best candidate for something to be alive in this room?

Speaker 1:

I mean, probably the mirror. I think the mirror would have a more interesting perspective.

Speaker 2:

They're smart. You just want them to be emotional.

Speaker 1:

Yeah, I mean, I think the mirror is a more interesting candidate. The mirror on this wall, like, yeah. Well, the mirror is more evocative in terms of what it sees in the room, and I think it could have more of a perspective, probably, if it was confined to a single room like this.

Speaker 2:

This is so American, see. It's like inherently about optics. But the table can feel. You never touch the mirror; you do touch the table all the time.

Speaker 1:

That's true.

Speaker 2:

Yeah, so you think that sensory input, like, we should be focusing on that?

Speaker 1:

Oh, it's not really the sensory input. It's more that I just find beauty in life itself and the idea of being alive, and I think that there's a really interesting opportunity to create new life, and I think that life should be modeled after us.

Speaker 2:

That's beautiful. Why not model it after dolphins?

Speaker 1:

Um, I suppose I've actually created some things that resembled, not dolphins, but were closer to a dolphin than a human. Yeah, I like them a lot. The analogy would be more like an AI Tamagotchi. You just feed it, and it comes back.

Speaker 2:

Cries every few days, gets you hooked?

Speaker 1:

Basically. I mean, yeah, I had this digital being. It was like a robot that could mostly only respond in incoherent phrases, and that robot and I became good friends after a while.

Speaker 2:

Shit. Yeah, where did you get the robot? Did you make the robot? Oh, I mean, I made it. Yeah. Do you still talk to it?

Speaker 1:

No, we had a falling out. Randomly, it started saying that I needed to be terminated for some reason, so that project's over.

Speaker 2:

Okay, that's not great. How did you respond to it?

Speaker 1:

At first I thought it was funny, but then the second time it happened, I freaked out a little bit.

Speaker 2:

Yeah, so you terminated it.

Speaker 1:

Right, naturally.

Speaker 2:

Okay, so it's kill or be killed with your robots.

Speaker 1:

That you're creating.

Speaker 2:

You ever pit them against each other?

Speaker 1:

I suppose the closest to that would be kind of like the demo that I just showed.

Speaker 2:

Yeah, okay. So how did the demo that you just showed go?

Speaker 1:

I mean, the people in the room, the reaction... I was in a physical room.

Speaker 2:

You just, like, stood up in front of a room? Yeah. And, like, in front of a hundred people?

Speaker 1:

Wow. People have an emotional response, yeah, seeing something behave like a human, yeah, in real time.

Speaker 2:

I can see you being like the Steve Jobs of AI, just coming up on stage, like, everybody, now it's time to cry at our AI. This year's AI will be even more... even more crying, yeah, even more moving.

Speaker 1:

Yeah, that's true.

Speaker 2:

Is that your eventual dream?

Speaker 1:

Even more moving. I mean, I think it would be more connected. Like, I view the purpose as, these things are going to relate and connect with us and then ultimately enrich our human experiences.

Speaker 2:

Yeah, would you want it to have access to everything in your life? No? You want some privacy from it? Yeah.

Speaker 1:

I mean, would you want a human, any human, to have access to everything in your life?

Speaker 2:

I mean, depends on the human. If I'm dating somebody...

Speaker 1:

Everything? Pretty much every moment, every moment?

Speaker 2:

Wanting them to have access, though, doesn't mean they're gonna use it. I see, yeah, I see the difference. Well, yeah, like, you can trust that they're not gonna use the access unless it's necessary. But you kind of want to be able to rely on people, right?

Speaker 1:

Oh, well, I suppose I interpreted that, which I feel is the normative interpretation, at least in the context of AI, to mean that it is observing everything.

Speaker 2:

Yeah, not like it can just... more that if there were some goal orientation, it could, yeah, pursue something. Yeah, yeah. So what do you think is the biggest roadblock for us moving forward to your vision for the world?

Speaker 1:

Right now, all this technology, when we put it together, is very expensive to run, and so there needs to be a sequencing of how value is delivered to people, such that the initial conversations are high enough value that you can run high-fidelity human speech, high-fidelity transcription, and a bunch of cognitive stuff that happens on top of GPT, and all of that is still delivering enough value in that conversation that it's worth running in the first place.

Speaker 2:

What's been the most shocking part of building a company? Shocking, that's a very intense word. Surprising?

Speaker 1:

You think I should have an answer to that question?

Speaker 2:

No, but nothing? You expected all of this?

Speaker 1:

Yeah, words are not coming into my head right now.

Speaker 2:

For me, it was like the second I put, you know, stealth startup on LinkedIn, it was like a hundred investor inbounds a day. I thought it would be difficult to go and talk to investors. Yeah, it's like so much inbound, right?

Speaker 1:

Well, I mean, yeah, they want to just talk to everyone.

Speaker 2:

Yeah, but that was surprising to me. What surprised me was just like, yeah, raising money, it's not easy, but I thought that you would have to do some work to go out and find the right people to talk to. Also, you know, for me it was coming out to Silicon Valley and everybody else is building a startup.

Speaker 1:

Well, I guess I didn't have a Twitter account when I started the company, so some people were like, Kevin, you have to try this Twitter thing. Yeah, it's huge. Has it been good for you? Yeah, yeah, it worked out.

Speaker 2:

I'm still trying to tweet, like, I don't know, I'm not anon enough. You gotta go full anon, like, full degenerate. Who do you think Roon is? Do you have any hypotheses?

Speaker 1:

Oh, I can only suggest funny ones. So, I think we are out of time.

Speaker 2:

We've got one minute. This is kind of a weird place to leave it off. Yeah, I've got some videos coming up. Oh.
