Dr. Mirman's Accelerometer


Matthew Mirman
Speaker 1:

You're always anxious because everyone's always stepping on your territory. What does it mean to be superhuman and to go beyond what an average human can do? The state of the art changes so rapidly.

Speaker 2:

Hello and welcome to another episode of the Accelerometer. I'm Dr. Matthew Mirman, CEO and founder of Anarchy. Today we're going to be discussing AI efficiency with Matt Rastovic, founder and CEO of Respell, a no-code AI automation platform. Matt, can you tell us a little bit more about your work at Respell?

Speaker 1:

Yeah, sounds good. So, Respell: at core, we are AI automation, and no code comes with that. As you know, up until, let's say, a year ago, you had to distinguish between what was no code and what required code. We're really focused on how you apply AI to make large organizations, mid-size companies, or even solopreneurs more efficient. That could be anything from automating the things you do day to day, to things that go into your product or your embedded tools: if you're using Slack or a CRM or an ATS for hiring, making those faster on their own. Or making your day-to-day tasks more efficient as well: I need to do research, I need to go tell someone something, I need to do something else. So it's pretty broad in its application.

Speaker 2:

That's really cool. How did you get into it?

Speaker 1:

Well, I mean, I've always been an optimizer by nature. So, my journey: I grew up in Chicago, I learned how to code when I was probably 10 or so, and pretty much right away I jumped into AI/ML as an industry. I was very lucky to do that, because I found out that I loved it really quickly, and I spent the next eight years doing just about everything I could in the space. I ran the whole gauntlet: I was a data scientist, data engineer, machine learning researcher and engineer, and a complex systems researcher for a bit.

Speaker 2:

What's a complex systems researcher?

Speaker 1:

Well, I love answering this question because there's no good definition of it. Too complex to answer, exactly. Anything to do with graphs is generally complex systems: things that change. For example, mine was complex adaptive social systems, looking at social networks, how they change over time, how information flows through them. So it's kind of like a more science-y branch of graph theory.

Speaker 2:

Or a mathematical branch of sociology. Yeah, exactly. That's really interesting. Do you find applications of that in your day-to-day now?

Speaker 1:

Not as much as I wish. I mean, everyone that knows me knows that I'm a graph enthusiast; I love talking about it and learning about it. It's kind of interesting: I think there's going to be some intersection of knowledge graphs especially with AI in its current form. But I do think knowledge graphs are where most people just stop thinking about graphs in general, and there are a lot more interesting things you can do with them.

Speaker 2:

But yeah, well, you know, fingers crossed. So when you say knowledge graph, do you mean the 90s-style axiomatic, Prolog-style knowledge graphs, or is this something else?

Speaker 1:

Well, they take different forms. At its core, you have nodes representing some entity and edges representing some relationship, and it could be at any abstraction level that you want. So if you're talking about people, then you have a knowledge graph of how people interact with each other. Or you might have something about a specific topic like math: calculus is related to linear algebra in this way, and that's the edge. Or it could be...
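As a rough illustration of the structure Matt is describing (nodes as entities, edges as labeled relationships, at any abstraction level), here is a minimal TypeScript sketch; all of the types and names are invented for illustration, not anything from Respell:

```typescript
// A minimal typed knowledge graph: nodes are entities, edges are labeled
// relationships. Everything here is invented for illustration.
type NodeId = string;

interface Edge {
  from: NodeId;
  to: NodeId;
  relation: string; // e.g. "shares-foundations-with", "collaborates-with"
}

class KnowledgeGraph {
  private nodes = new Set<NodeId>();
  private edges: Edge[] = [];

  addNode(id: NodeId): void {
    this.nodes.add(id);
  }

  addEdge(from: NodeId, to: NodeId, relation: string): void {
    this.addNode(from);
    this.addNode(to);
    this.edges.push({ from, to, relation });
  }

  // Find everything related to a node, regardless of abstraction level.
  neighbors(id: NodeId): Edge[] {
    return this.edges.filter((e) => e.from === id || e.to === id);
  }
}

// The same structure works whether nodes are math topics or people:
const g = new KnowledgeGraph();
g.addEdge("calculus", "linear algebra", "shares-foundations-with");
g.addEdge("alice", "bob", "collaborates-with");
console.log(g.neighbors("calculus"));
```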

Speaker 2:

It's almost category theory. Yeah, that's really cool.

Speaker 1:

Yeah, I mean, they're intimately related, but I was very much on the more simulation side, working practically with graphs, trying to figure out: what can you learn from them that might be hidden just under the surface? So a lot of information flow, power dynamics, propaganda. I guess that's under information flow, but in that same realm, just trying to understand a little bit more about reality through this lens.

Speaker 2:

What do you think is the most interesting thing you learned about reality?

Speaker 1:

So, most of my time was spent on propaganda: what forms it takes, what it looks like, how you detect it, and just generally how it flows. When you start thinking in systems, you start to see everything as a system. I suppose that works for whatever major you study, you start to see things in that lens. But I was really focused on how you detect it in a way that is scalable. Because right now, social media networks take the content of something and try to fact-check it, and that's great, I mean, it can work, but it requires massive amounts of labor and time and energy. So my goal was to find a different method that is a bit more scalable, that you can just automate.
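To make the "structural rather than content-based" idea concrete, here is a toy TypeScript sketch of one such signal: flagging stories whose shares arrive in a tight burst from a small set of accounts. The data shape and thresholds are invented, and this is only a crude stand-in for the kind of graph analysis he describes:

```typescript
// A toy structural signal: instead of fact-checking content, look at how it
// spreads. Flag stories whose shares land in a tight burst from few accounts,
// one crude proxy for coordinated amplification. The shapes and threshold
// values below are arbitrary, illustrative choices.
interface Share {
  account: string;
  story: string;
  timestamp: number; // seconds since epoch
}

function suspiciousStories(shares: Share[]): string[] {
  // Group shares by story.
  const byStory = new Map<string, Share[]>();
  for (const s of shares) {
    const list = byStory.get(s.story) ?? [];
    list.push(s);
    byStory.set(s.story, list);
  }

  const flagged: string[] = [];
  for (const [story, list] of byStory) {
    const accounts = new Set(list.map((s) => s.account));
    const times = list.map((s) => s.timestamp);
    const spreadSeconds = Math.max(...times) - Math.min(...times);
    // Many shares, few distinct accounts, all landing within ten minutes.
    if (list.length >= 20 && accounts.size < list.length / 4 && spreadSeconds < 600) {
      flagged.push(story);
    }
  }
  return flagged;
}
```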

Speaker 2:

So what is the hardest thing that you've had to deal with over the course of this startup? Is this your first startup? Your second, okay. Which one was harder?

Speaker 1:

I would say the last one, so far. You know, fingers crossed, knock on whatever this is. And the reason why is because founding something for the first time is just always really difficult. You learn so much and you grow as a person. But also, we weren't really set up in the best way. We had four co-founders. We were doing an idea that none of us had industry experience for: real estate, custom home building. Our first day, we literally looked up how to build a custom home online, so that was where we started. So, a lot of mistakes in that way. That said, Respell has also been incredibly challenging in its own right. We're a very horizontal platform, so a lot of the advice that goes to startups just doesn't particularly apply to us, and we're kind of learning as we go.

Speaker 2:

Isn't the advice typically: don't build a horizontal platform? Yeah. What led you to ignore that?

Speaker 1:

I think you have to look at the context, right? Every piece of advice comes from a certain lens, and you have to ask: does that lens or that bias apply in my situation? I think it probably did in some ways, but not enough to really push me away from it. What I noticed was that it was a fresh landscape: AI, LLMs, just came out of nowhere, and there was no infrastructure. That's the kind of moment, much like when the App Store or the internet came out, when you can build a horizontal company and make it work. In markets that are already fragmented, it's probably not going to work. And it was actually helpful in that it helped us figure out what the landscape looks like and how it changes over time, because we're at the center of it, so we get to see the pulse of the market as we go along.

Speaker 2:

So how long has this startup been around? Around 10 months. Okay, how's it going?

Speaker 1:

It's going great. I'm having a lot of fun. So, today is the day after OpenAI Dev Day, and we've had plenty to think about. I would say overall it was net positive for us, but that's also just one thing we have to contend with. In general: we are about seven people, and we've raised both rounds of funding, pre-seed and seed, together right when we started. So thankfully we put that away really early on and could just focus on the business. And customers, enterprise customers, are growing. Things feel great. But it's also a constant battle to stay on the bleeding edge and figure out where things are going to go, so you don't end up three months behind.

Speaker 2:

So, when you were raising your pre-seed and seed, did you do this before revenue, or did you already have something in production?

Speaker 1:

Yeah, we did it before launch, even, which is very uncommon. I think there were a few factors that helped the fundraise. Being a second-time founder definitely helped. Being in a very hyped space helped.

Speaker 1:

But also, I had spent a lot of the last few years just building up social capital, getting to know people and understanding their mental frameworks. So I had a good initial batch of investors I could lean on and say: look, I'm starting my company, I think it's going to be really big, I know you probably think the same thing, let's make something work. I knew that we needed to move fast, and that's why we really laid the groundwork at the beginning and made sure those investors were going to be there when I wanted them.

So, going back to what's hard about this company, and horizontal companies in general: you're always anxious, because everyone's always stepping on your territory.

Speaker 1:

Right? It's like: do we have security features? We provide a no-code workflow editor. We have agents that we make on our own. We do a lot of things, and so whenever someone launches their own agent, or they start saying, oh, we're the compliant LLM platform...

Speaker 1:

...it feels like they're eating away at your value prop. But most of the time, people don't care. Customers don't particularly care. We haven't really run into any competitive analyses when we go to customers, which is great. So that's one thing. Another thing is that the state of the art changes so rapidly in this space. That's more of an AI-in-general kind of problem, and everyone knows it goes super fast, but we've spent a lot of time thinking about: what are the things that are not going to change? What are the things that are going to change that we need to skate towards? And what are customers actually buying us for? That's probably been the hardest thing: being a horizontal company in a new space. Nobody knows about AI or what it can do for them. The biggest question we get is: I want to use Respell, what do I use it for? And that's a very hard question to answer, because you're supposed to do it the other way around. We've gotten really good at answering it, but that took six months to figure out.

Speaker 2:

So when you first designed this product, did you have something in mind already, or did it evolve through customer interaction?

Speaker 1:

Oh, it definitely evolved. Although it's actually stayed a lot more similar to our initial launch than I thought it would. I mean, I told my investors when they first signed on that the company you're investing in is going to be very different, it's not even going to look like the same company, so you're investing in the team and the founder, et cetera, and they were comfortable with that. But that also means: now we have the money, they're expecting change, and we need to change. Whereas nowadays, I would say we've done a couple of micro-pivots, but mainly it's been around messaging, around...

Speaker 1:

...what is our value? It's the little features, the little things we add. I think our customers see us as a good, safe bet: if I want anything AI, I can come to Respell for it. That's because we are, again, very broad, and that's actually a very powerful value prop, especially when there's so much confusion around the space.

Speaker 2:

What's the furthest-out AI that a customer has come to you for?

Speaker 1:

Some people are expecting AGI out of this, right? Like, I want something that's going to drive my car, or...

Speaker 1:

Probably the one that's most salient in my mind: a customer asked me for an AI agent that could take what her children say to her and write storybooks from it, just passively listening throughout the day. I'm like, I don't know, how do I politely let this person know that that is not currently what we do? But it was interesting to think about, because every time they tell us something that we know is either impossible or very hard for us to build, it teaches us something about what people actually want. And of course, what we're trying to do eventually is get to the things people thought were impossible. But yeah, it's fun to hear the really pie-in-the-sky stuff: I just wish it could do this magical thing.

Speaker 2:

I feel like AI agents writing storybooks is something that's definitely within the realm of possibility right now, but it definitely brings up some safety questions.

Speaker 1:

It's in the realm of possibility, for sure. You can definitely do it, even with OpenAI's custom GPTs now. And safety, I think, is also getting a lot better. I guess, what do you think is the vector there that would make it unsafe?

Speaker 2:

Oh, I just worry about, you know, if AIs do reach this singularity doomsday thing, which I don't personally believe in, but you've got to think about it, it's kind of a fun thought experiment: that is exactly how I would imagine them taking over humanity, teaching our kids things with subliminal messages while we're at work.

Speaker 1:

Yeah, exactly. I guess that is true.

Speaker 1:

I think, up until now, we haven't seen any intentionally malicious behavior from AI.

Speaker 1:

But I do wonder. From around 2016 to 2018 or so, I was fairly involved in the rationalist community, which had this debate as one of its main core topics, and I've thought a lot about x-risk. I feel like an AI would not have to work very hard if it wanted to remove humans. But as time has gone along and we've seen how the technology has progressed, where it's failing and where it's working, I'm getting more confident over time. I do think it's going to be a slower takeoff, meaning AI is not going to be a hundred thousand times smarter than us overnight. And I also think it's going to be easier to align than we think. Not to say that humans are all aligned with each other, so if we're comfortable with a little bit of variation, that's okay.

Speaker 2:

You used to be a rationalist. Would you still consider yourself a rationalist?

Speaker 1:

No. I would say the rationalist and effective altruism communities kind of fragmented, fell off, and turned into something else. So I wouldn't identify with either of those labels anymore, even though I did back then. And I know there's the term post-rationalist now.

Speaker 2:

Post-rationalist, exactly. I like that term. It sounds strikingly similar to some of the labels around Marxism. I don't think they're called post-rationalists, but it's something like that: post-reason, or... probably, yeah, reactionary to rationalists.

Speaker 1:

Right, exactly. I don't really know much about their mental systems nowadays, but I do know that some of the original folks in the rationalist community started to go down rabbit holes that weren't healthy for the psyche, and they developed some odd or, you know, just irrational beliefs. So I kind of divested myself from it and asked: okay, what do I want to focus on? Because I could keep going to these intellectual crowds and salons in Berkeley every weekend, or I could try to build something. And there was roughly a 50-50 split between the people who stayed and the people who became entrepreneurs and tried to do startups.

Speaker 2:

You can either be a rationalist or you can build a startup, but you can't do both. And if you do a startup, then you're an ex-rationalist, not a post-rationalist, though. Okay, why is that?

Speaker 1:

Well, mostly that's a joke. But a lot of startup people from that era, who were students at the time or doing something else, like working for a larger company, were involved in those circles. And then, similar to me, they decided: I want to focus on something else, put my mental energy into my startup, make something work instead of just talking about making things work.

Speaker 2:

So it just took a lot of energy, being a rationalist?

Speaker 1:

Yeah. Well, you're thinking about big-ticket issues: what about x-risk? Where is the world going? How do different cultures merge together peacefully and efficiently? How should war be conducted? If you take any one of those topics and isolate it, you can create entire identities out of it. Being a person in the rationalist group who is concerned with many different things does take a lot of mental energy, and I think part of my reasoning for going to startups was that I wanted to focus on getting closer to those solutions.

Speaker 2:

So what about the current state of AI is making you more optimistic that x-risk isn't something we need to worry about?

Speaker 1:

Yeah, slow takeoff and better alignment. I would say that, especially back in 2017-18, we saw this stuff coming, because that was when reinforcement learning was at its peak, and OpenAI was doing it back then too, and we didn't have things like RLHF, or even the ability to make a model speak and give its thought patterns back to us. And now we're seeing that we can actually interpret these models and observe what they're thinking much more than we thought we ever would, and I think that's a really good sign that we can understand these systems. Step two is controlling, or taming, which we've actually done a fairly good job at, maybe too good a job, considering we tend to lobotomize LLMs sometimes. My hypothesis there is just that they're being aligned with the wrong customer in mind.

Speaker 2:

They're being aligned with the company as the customer, not the end user as the customer.

Speaker 1:

Exactly, it's like they're being aligned with the HR department. But generally I think that's better than letting it run loose. And then there's slow takeoff, which I think is hard to give a cohesive argument for. But I think a lot about what it means to be superhuman and go beyond what an average human can do. In the digital space it can get super smart; maybe it can make novel breakthroughs, like in one night it can solve cancer and also figure out a plan to distribute food to everyone in the world, and I think that's exciting. But then think about: okay, it has that plan, how does it get into the real world? And that's really tricky, because the real world is very different from what an AI might expect.

Speaker 1:

Things break. There are things we've never encapsulated into words that an AI might be able to reference. For example, say it's driving a truck, but what if it's driving across a bridge and a strong wind gust takes it over? That's a force that's in textbooks, but there's not a lot of information about it, and so the AI might struggle to adapt to the real world. That's probably a contrived example, but you get the point. I do think it's going to take a lot of trial and error in the real world. It takes real resources, it takes a little bit longer to enact its will, and that will slow it down in terms of overall intelligence.

Speaker 1:

That's my current working theory. I don't know how confident I am in it, but I would say it at least helps me sleep better at night.

Speaker 2:

So what's something that you're excited about for the future of AI?

Speaker 1:

I would say, right now, we have agents and models that are really good at responding to you, but only when you ask. And if we want an AI that works like a human, talks like a human, acts like a human and thinks like one, then it's going to need to figure out how to inject itself into our world.

Speaker 1:

It's going to need to tell us things, give us ideas, be present even when we don't ask it to be. So proactiveness, I think, is going to be a key concept for AIs going forward. And then also, less of being just a reflection, a single tool that we each use personally, and more working in groups of people.

Speaker 1:

Right now, you have your own ChatGPT and you have your chats with it. You don't have group chats with it; it doesn't know how to work in a multi-person context. So getting it to the point where it is socially malleable and socially flexible is going to be very interesting to see. I think everyone's very happy with ChatGPT where it is right now, and that it can do very smart things, but it's not even scratching the surface of what it could be.
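As a sketch of what "proactiveness" could look like mechanically (an agent that watches a stream of events and decides when to interject, rather than waiting for a prompt), here is a minimal, invented example; the event shapes and the decision rule are assumptions, and a real system would presumably put a model where the heuristic sits:

```typescript
// A proactive agent loop: scan incoming events and speak up unprompted when
// something warrants it. The types and the rule below are illustrative only.
interface AgentEvent {
  kind: "calendar" | "email" | "message";
  summary: string;
}

function shouldInterject(e: AgentEvent): string | null {
  // A real system would use a model here; this is a stand-in heuristic.
  if (e.kind === "calendar" && e.summary.includes("conflict")) {
    return "Heads up: two meetings overlap tomorrow. Want me to reschedule one?";
  }
  return null;
}

function runAgent(events: AgentEvent[]): void {
  for (const e of events) {
    const msg = shouldInterject(e);
    if (msg) console.log("agent:", msg); // speaks up without being asked
  }
}

runAgent([
  { kind: "email", summary: "newsletter" },
  { kind: "calendar", summary: "conflict: standup vs. dentist" },
]);
```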

Speaker 2:

So can you tell us a little bit more about the Respell product? What is a task that it could make more efficient at the moment?

Speaker 1:

Yeah, so we work with every function in every company; that's the problem of a horizontal company. But we tend to specialize and sell into some specific ones, and those would be the X-ops teams: sales ops, product ops, data ops, marketing ops, et cetera. Those are teams at the intersection of lots of people they have to support and enable to do their jobs better. They have a lot of tools and a lot of tasks, so they're usually underwater, or just barely floating, and our pitch is: hey, we're going to help you tie together your tools, make your processes more efficient, get more off your plate, make your team more efficient. It's a pretty easy sell. So, for example, in sales ops, lead management has always been something you'd think would be automated by now, but most companies don't really have much automation there. It goes like this: a new contact comes in, maybe they filled out a form on your website or you emailed them and they said yes, and you know nothing about them. You want to go research them. Sales reps or SDRs will go to their LinkedIn and the company page to figure out as much as they can about them. And that's about as much as CRMs might automate nowadays.

Speaker 1:

But then, beyond that, you might want to do things that are specific to your company.

Speaker 1:

Let's say you're selling flowers. I don't know why you would have SDRs for that, but say you're an SDR selling flowers. You may want to know: how likely is this person to buy a large bouquet versus just a couple of flowers at a time?

Speaker 1:

Or: are they buying for their spouse, their mother, their friends, or for themselves? These are things that, if you give LLMs enough context, you might be able to start making progress on automating. So you can say: if they're going to buy a large bouquet, I want to talk to them and give them a white-glove experience, so write a personalized email for them. But if they're only going to come in and buy a few flowers, I don't really have time, so send them a "thanks for coming in" text message or something. Things like that may sound low value, but if you have high volume, or an employee focused solely on improving this metric, that can be a lot of value for you.
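A minimal sketch of the lead-triage flow described above, with a placeholder standing in for the actual model call; the prompt, labels, and routing rules are invented for illustration, not Respell's implementation:

```typescript
// LLM-assisted lead routing: classify the lead, then pick a cheap or
// white-glove follow-up. askLLM is a placeholder for a real model call.
interface Lead {
  name: string;
  notes: string; // whatever research was gathered: LinkedIn, company page, etc.
}

async function askLLM(prompt: string): Promise<string> {
  // Placeholder: call your model provider here.
  return "large-bouquet";
}

async function routeLead(lead: Lead): Promise<void> {
  const intent = await askLLM(
    `Given these notes about ${lead.name}: "${lead.notes}". ` +
      `Answer with exactly one of: large-bouquet, small-purchase.`
  );
  if (intent.trim() === "large-bouquet") {
    // High-value lead: white-glove treatment, personalized email.
    const email = await askLLM(`Write a short personalized email to ${lead.name}.`);
    console.log("send email:", email);
  } else {
    // Low-value lead: cheap automated touch.
    console.log(`send text: Thanks for coming in, ${lead.name}!`);
  }
}

routeLead({ name: "Ada", notes: "asked about wedding arrangements" });
```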

Speaker 2:

So in this example, I imagine most companies have very different sales channels. Even companies that have similar sales channels must have wildly different ecosystems around their sales. Does that mean that you have to go in and build basically data loaders into your LLM for every company that you're working with?

Speaker 1:

Yeah, if you had asked me a couple of days ago, I would have said we have to build out a lot of integrations, and we do.

Speaker 1:

But I do think that's actually getting quite a bit easier. During Dev Day, Zapier, which is an investor of ours, released a functionality called Zapier AI Actions. Basically, you describe in natural language what you want to do, and it will look through the tools you've integrated in your Zapier account and take the action. So if you ask it to send a Slack message, or to look at your calendar, it'll just do that; you write it out in text. I think that's the future of where these things are going. It makes a lot of sense for Zapier: they're an integration marketplace, they have huge coverage, and we don't want to build out 6,000 integrations. So I think there's going to be a synergy between us. We'll probably still have to build out a few more integrations as requested, but I would much rather delegate that to the people who are much better equipped.
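The general pattern behind natural-language action dispatch might look like the sketch below. To be clear, this is not Zapier's actual API; the tool registry is invented, and the keyword matcher stands in for the LLM a real system would use to pick the tool:

```typescript
// Natural-language action dispatch: describe what you want, the system picks
// a registered tool and runs it. Everything here is illustrative.
type Tool = {
  name: string;
  description: string;
  run: (input: string) => Promise<void>;
};

const tools: Tool[] = [
  {
    name: "slack.send",
    description: "send a Slack message",
    run: async (msg) => console.log("Slack:", msg),
  },
  {
    name: "calendar.lookup",
    description: "look at your calendar",
    run: async (q) => console.log("Calendar query:", q),
  },
];

// In a real system an LLM would do this matching; here, a trivial keyword match.
async function dispatch(request: string): Promise<void> {
  const tool =
    tools.find((t) => request.toLowerCase().includes(t.name.split(".")[0])) ??
    tools[0];
  await tool.run(request);
}

dispatch("send a slack message to the team that the demo moved to 3pm");
```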

Speaker 2:

So you see all of these new things coming out at OpenAI Dev Day as more helpful to your business than competitive with it.

Speaker 1:

Yeah. In general, I think we are a bundling, compound startup: every time OpenAI releases something, as long as there's an API, great. We build on top of it, make it more convenient and simpler, build it into a bundle of products that makes sense to business users, and if there's something missing, we add that in. Same thing for Zapier, or for other AI startups, or if you built an agent. All that stuff gives us more value. And sometimes OpenAI or Zapier or someone is going to build something that obsoletes tech we wrote, and it sucks that we spent time on it, but you've got to move forward. It just means we're getting closer to a world where you don't have to do the individual work, you can just tell the AI to do it, and that makes me happy.

Speaker 2:

Can you tell us a little bit about your tech stack?

Speaker 1:

It's interesting when people ask us this, because they expect that we use LangChain or something underneath. It's almost all custom. We use Vue, which is maybe a contentious choice, with Nuxt on top. We have an Express TypeScript backend, and we use PostgreSQL for our database. Outside of that, we have a couple of things around, but nothing super critical; everything else is pretty much custom. We built all the connectors to the models. We have our own analytics suite for pricing analytics, events, and also for safety. Part of building in a new landscape is that the tools can be too restrictive, or when we were looking for them, they were too early. So it's a pretty fresh stack.
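For readers unfamiliar with that stack, a minimal Express plus TypeScript endpoint might look like the sketch below; the route and handler are hypothetical, not Respell's code, and it assumes the express package (and its types) are installed:

```typescript
// A minimal Express + TypeScript backend in the shape described above.
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// Hypothetical endpoint: run a saved workflow ("spell") by id.
app.post("/spells/:id/run", (req: Request, res: Response) => {
  const { id } = req.params;
  // A real backend would load the workflow from PostgreSQL and execute it.
  res.json({ spellId: id, status: "queued", inputs: req.body });
});

app.listen(3000, () => console.log("listening on :3000"));
```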

Speaker 2:

Are you using your own models, or fine-tuning models at all?

Speaker 1:

We don't do fine-tuning. If a customer wants us to use their fine-tuned model, we have no problem plugging that in for them, but we don't offer it as a service; I think there are companies better equipped to do that. It would be interesting to have fine-tuned models listed on Respell, but we don't get a lot of requests for it.

Speaker 2:

Have customers asked you for custom fine-tuned models?

Speaker 1:

Not really. Maybe in the beginning they asked us, but that's because nothing else existed. I think people were hoping for some solution to their problems, and that just wasn't us.

Speaker 2:

What sort of problems were people trying to solve with custom fine-tuned models in those cases?

Speaker 1:

So, I'm thinking back to around February through May or so. Fine-tuning was seen as a way to make an LLM smarter, versus what it actually does, which is make it speak or think in a certain direction or domain. If you fine-tune it on science papers, it's not going to make it smarter; it's not going to give it more knowledge or have it tell you the secrets of the universe. But it is going to make it sound more like a scientist. And sometimes, if you say "don't ever sound like an idiot," it generates smarter-sounding things, just because it has less to sample from. So I think people looked at fine-tuning as a way to get around "my LLM is not smart enough," and they were a bit disappointed, which is why I don't see a lot of interest in fine-tuning nowadays.
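One way to see the "style, not knowledge" point: chat fine-tuning data is just prompt and response pairs, so what it mostly teaches is the register of the responses. The pairs below are invented, and the exact file format varies by provider:

```typescript
// Fine-tuning examples like these teach the model to *sound* like a
// scientist; they don't add knowledge it lacks. Illustrative data only.
const styleExamples = [
  {
    messages: [
      { role: "user", content: "Why is the sky blue?" },
      {
        role: "assistant",
        content:
          "Rayleigh scattering: shorter wavelengths scatter more strongly, " +
          "so diffuse skylight is dominated by blue.",
      },
    ],
  },
  // ...hundreds more pairs in the same register
];

// Many providers ingest one JSON object per line (JSONL).
const jsonl = styleExamples.map((e) => JSON.stringify(e)).join("\n");
console.log(jsonl);
```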

Speaker 2:

So what sort of things do you think people are going to be interested in in the next year?

Speaker 1:

I'm interested to see how hardware pans out, like the Tab or the Rewind pendant. It always helps when there's a hardware analog to the software, just to see how people viscerally feel about it. Because on the internet, you see an AI and you're like, found another one, whatever; but you see one in real life and it's a very different reaction. So I'm definitely interested in that, and I hope we find some useful applications for it.

Speaker 1:

In terms of interest in the software space: agents, and more customized or personalized AI, are going to be the name of the game for the next six to twelve months. I have mixed feelings about agents, depending on which definition you choose. It seems everyone was calling an AI an agent once you gave it knowledge and actions, so APIs to hit or certain tools to use. But then OpenAI said, oh, now you can build your custom GPT, and it has all of those, but they don't call them agents. So I'm wondering whether we'll converge on a definition. But I do think in general people want agents, and what they mean by that is: they want something that's smarter, that's better at high-level reasoning, and that they can just trust to do something. They can ask it to do something and it'll figure it out. We'll see when we get there and what it looks like, but I think that's probably going to be the main one.

Speaker 2:

So I was having this conversation earlier today, actually. It feels like every week somebody is having this conversation with me about consciousness and AI and AGI; that's just the name of the game right now, you can't avoid it, as much as I would want to avoid that conversation. But I think right now it's actually bringing up interesting questions, unusually, because we're at this cusp where maybe it could just be data that makes the models slightly smarter. So what do you see as a potential path towards getting to ASI?

Speaker 1:

So, you know, legal disclaimer: I'm not a researcher, and I have no idea what OpenAI is doing either. But I think I can give some context on how people have thought about it over the past decade or so. There have always been two camps on what superintelligence will eventually be. The first is the holistic intelligence camp, which is one model to rule them all: I built GPT-8 and it can fold my laundry and do everything else better than I can. I think OpenAI roughly believes in that. It's kind of hard to tell, though, as I'll get to in a moment.

Speaker 1:

Then the other camp is collective intelligence: a collection of AIs that work together and coordinate to create a kind of superorganism that's much smarter. If you compare our definitions to society, it's a bit odd, because we tend to treat human intelligence as the top of the range, but if you think of corporations as intelligent organisms, they're on average much smarter than individual humans. So do we want something that's smarter than humans, or something that's smarter than corporations? It's the same with these camps, holistic versus collective intelligence: a collective of very smart holistic intelligences may actually be smarter overall.

Speaker 1:

All that to say, I have no idea which one is going to win. It seems like OpenAI is going in the holistic intelligence direction: they think they can build a single model, just by feeding it data and making architectural changes, that gets to AGI or ASI levels. But at the same time, GPT-4 is supposedly a mixture of 12 different models with a really smart router, so that's technically collective intelligence, I suppose. I've been in the latter camp, collective intelligence, pretty much since the beginning. I think it's going to be the easier, faster, and unfortunately more chaotic way to get to intelligence, since it's more experimental. I don't really have any strong reasons why; it just seems intuitive that if we create agents that orchestrate between each other, like our society does, we can get there faster than by trying to recreate our brain.
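A toy version of the "several models behind a smart router" idea, purely illustrative and unrelated to GPT-4's real internals; each expert is a stub, and the hand-written scoring stands in for a learned gating network:

```typescript
// Mixture-of-experts routing sketch: score each expert on the query and
// forward to the best match. All names and heuristics are invented.
type Expert = {
  name: string;
  score: (query: string) => number; // how confident this expert is
  answer: (query: string) => string;
};

const experts: Expert[] = [
  {
    name: "math",
    score: (q) => (/\d|calculus|algebra/.test(q) ? 0.9 : 0.1),
    answer: (q) => `math expert answering: ${q}`,
  },
  {
    name: "general",
    score: () => 0.5,
    answer: (q) => `general expert answering: ${q}`,
  },
];

function route(query: string): string {
  // Pick the highest-scoring expert; a trained gating network would replace
  // these hand-written heuristics in a real mixture-of-experts model.
  const best = experts.reduce((a, b) => (b.score(query) > a.score(query) ? b : a));
  return best.answer(query);
}

console.log(route("what is calculus related to?"));
```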

Speaker 2:

Is there anything that still worries you, even though you're optimistic about x-risk? Anything that worries you about the future of AI?

Speaker 1:

I could be worried about another winter coming, where we just have no idea how to scale beyond current implementations. Maybe GPT-5 is just a marginal improvement on GPT-4. It's entirely possible.

Speaker 1:

On the more existential side, I would say I'm a bit worried about the geopolitical risk involved with AI, on two fronts. One, there's of course a race to get to AGI across different countries, some of which we are not aligned with and don't have, what's the word, meshing cultural values with. I'm not terribly concerned about that, though. More so about the supply chain of GPUs and the chips involved, and what incentives the race to AI could create for heightening those geopolitical issues. So I think that's probably the more concerning one, I would say.

Speaker 1:

The one concern I don't have is that people are going to be automated out of jobs. I do think, even for the artists, even for the copywriters, AI is going to help us do a lot more, as tech always has. The way I categorize it: right now you're doing "doing" work, which is tasks, workflows, individual-contributor work. But everyone wants to be doing the decision-making work, whether that's deciding what you want to do during the day, deciding on creative aspects, or deciding on the decisions that come up to you: we need to do X or Y. I think if we can get to a point where AI is doing everything but the decision making, humans will be happier overall and we can be far more productive. And that's our hope with Respell.

Speaker 2:

Yeah, I hope for the same thing. Well, thank you very much. This was very insightful.
