Dr. Mirman's Accelerometer

Will your AI friends kill you? w/ Kevin Baragona of DeepAI.org

December 05, 2023 Matthew Mirman Season 1 Episode 4

Could you find yourself falling head over heels for a robot? Prepare for a mind-bending conversation with Kevin Baragona, founder of DeepAI.org, as we explore the thrilling and sometimes unsettling frontier of AI companionship. We examine the rapid rise of AI companions, ponder whether robots could become indistinguishable from humans, and grapple with the ethical dilemmas of humans forming emotional bonds with AI. From consciousness in AI to the potential impacts on human relationships, we tackle the hard questions that come with our ever-growing reliance on AI.

Imagine a future where AI quietly and seamlessly integrates into every aspect of our daily lives. In this episode, we delve into some of the most promising emerging AI technologies, like ChatGPT and DALL·E, marveling at their capabilities in generating high-quality images from complex prompts. As we unwrap the inner workings of these technologies, we shine a spotlight on the critical role of quality data sets and thorough labeling. With a deep dive into how companies like Scale AI are driving AI's impressive performance, this episode is a must-listen for anyone eager to stay on the cutting edge of AI advancements.

DeepAI.org
Episode on YouTube

Accelerometer Podcast
Accelerometer YouTube

Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn

Speaker 1:

Hello everybody.

Speaker 2:

Today we have Kevin Baragona. Kevin runs DeepAI.org.

Speaker 1:

We have the .com and the .org. DeepAI.

Speaker 2:

This is a consumer AI company, which is really exciting. There are very few of these right now. This is one of my favorite AI companies out there. They are... how would you describe it?

Speaker 1:

We have a collection of generative AI tools for consumers. We got our start really early on in the text-to-image market, which we've actually been doing since 2016. We were the very first company to do it, and certainly we did a lot of other things along the way, but we're now one of the biggest in the text-to-image space. But we also offer a bunch of super cool chat tools that are super fun and also incredibly useful.

Speaker 2:

You're like, these are the most fun and useful tools that we've ever offered. My friends, the internet in general, it's 100% right. It is so fun.

Speaker 1:

We have an AI pirate.

Speaker 2:

You have an AI pirate.

Speaker 1:

It's called AI Blackbeard.

Speaker 2:

Yeah, you mean like I talk to it and it's Blackbeard.

Speaker 1:

What do people say to it? Well, I think it's, you know...

Speaker 2:

These are romantic conversations with the pirate. Has anybody fallen in love with the pirate?

Speaker 1:

Not to my knowledge, but it's quite likely. They're pretty popular.

Speaker 2:

Is that something that you're concerned with?

Speaker 1:

Yes, actually, despite all efforts, people are very willing to fall in love with an AI. I think what we're seeing over the last six months is the rise of AI companions, AI girlfriends and boyfriends that people are very willing to fall in love with. I can't say I exactly support it, but people are going to do what they're going to do, I guess.

Speaker 2:

So you want people to have real human relationships.

Speaker 1:

I think that's better, yeah.

Speaker 2:

In what ways is it better?

Speaker 1:

Well, it's real.

Speaker 2:

I mean, isn't the AI real? It's in the world, it's part of reality, right?

Speaker 1:

I think we don't really know if a language model can have feelings, but if they do have feelings, then maybe it's okay.

Speaker 2:

What if they don't have feelings? Are people not allowed to fall in love with psychopaths? I don't know. This is a topic that's coming up on the show a lot, I guess because we're creating AI.

Speaker 1:

Well, no, the future is coming hard and fast. And the future is, make no mistake, robots that are incredibly lifelike, possibly more human than human. They'll be as intelligent as people, they'll be able to interact in the real world like people, and they'll have full emotional depth in terms of reading emotions and outputting emotions, just like people. And that goes for body language, facial expressions, voice, the whole thing. So I think we can have robots that do pretty much 100% perfect emulation of human emotional behavior. Extremely likely to happen, and it's an extremely deep philosophical question: one, is this real? Two, is this good? Three, did we just replace ourselves? Four, is this good or bad?

Speaker 2:

Is it moral, is it ethical, to fall in love with a robot? Let's assume you're doing it willingly. You have choices, there are humans around, and it's like: here's a robot, or here's a human.

Speaker 1:

I don't know if it's moral or not, but it could be problematic.

Speaker 2:

In what ways could it be problematic?

Speaker 1:

Well, we may not survive as a species.

Speaker 2:

Is this something that, like you're seeing the hints of already?

Speaker 1:

Bit by bit, bit by bit.

Speaker 2:

Yeah, what are some examples of this appearing?

Speaker 1:

Well, all I know right now is that people are really willing to fall in love with an AI, and that is displacing actual human connection, and I feel like we should be very careful with that.

Speaker 2:

Is there anything that you're trying to do to stop this?

Speaker 1:

Well, you know, as a company we haven't really made a concerted effort in this direction. I'm not really sure what we could do. We could take a hard stance against building any type of AI companion, but a user can fall in love with even ChatGPT, so it's kind of a losing battle.

Speaker 2:

Is it really the AI's problem, or is it society's loneliness problem?

Speaker 1:

It's probably mostly society's loneliness, but we're shaped dramatically by the technology around us. We have so much technology surrounding us all the time that we're just making it too easy to fall in love with an AI. I think we should track it very carefully. A lot of things sound kind of stupid to talk about right now.

Speaker 1:

We're talking about people falling in love with AI, which is like, how is that really going to matter that much? But then in 10 or 20 years it actually will end up being a huge deal. I may say now, similarly, that you shouldn't let a robot run for office, you shouldn't let a robot have human rights, and that sounds silly now, but it won't be so silly in the future, when the robots are incredibly lifelike.

Speaker 2:

If we don't let the robots have human rights, and it turns out they do have feelings, won't that be its own issue?

Speaker 1:

Actually, if they do have feelings, if the robots are conscious, then we've probably accidentally created life.

Speaker 2:

Is it accidental?

Speaker 1:

I would say we're doing it rather intentionally, actually, but we have no reason to think that the robots have feelings or are conscious. We also have no reason to think that it's impossible. We could do it quite accidentally, and if we're able to demonstrate that the computers are conscious in the way people are, then that should probably change our thinking a lot about the way they integrate into our society and about how they're used more broadly. But that would also probably be one of the most earth-shattering discoveries of all time.

Speaker 2:

Yeah, I was thinking, if we knew that we were simulating a brain, every atom, I'd be certain that it was conscious in a sense.

Speaker 1:

Right, well, maybe. I mean, we're not fully sure that all animals are conscious. We don't even know which ones are. An octopus? Who knows. We think that monkeys and dogs and cats are probably conscious. Are mice? Maybe. Are reptiles? Maybe. Insects? I don't know. When do you think you became conscious? There's a difference between being conscious and remembering the consciousness. And I think, if I had to guess, unborn babies have some level of consciousness. If I had to guess.

Speaker 2:

I've spoken to people who have said that there was a moment and they remember that moment when they knew that they were conscious and they started questioning their existence.

Speaker 1:

Yeah, I can certainly remember times like that, when I was about three or four years old. It doesn't mean that's the start of consciousness. It means it's the start of remembering your consciousness.

Speaker 2:

But you can remember a time before you were conscious, and then remember a time after you were conscious as well. Lucid dreaming, for example: you can remember a dream, but not all dreams are lucid. And if you're remembering a dream and you have no agency in that dream, are you conscious? I mean, we literally say that person is unconscious.

Speaker 1:

Yeah, I think that the ability to form memories will probably be a pretty important part of any theory of consciousness.

Speaker 1:

But I will say that philosophers have spent a lot of time studying this type of stuff, and so have scientists, and the truth is we don't even really know what it is we're talking about. We can't point to it, we can't describe it, we just know that we have it. What does that mean? Well, it might mean that there's nothing to be found, and some variants of simulation theory would point to this. It's entirely plausible that we are definitely conscious, but consciousness cannot be found in the physical world, because the physical world is only part of existence, and we have no evidence for or against that. It's entirely possible that there's no consciousness to be found in the real world.

Speaker 2:

So I'm a little bit curious: how did you get started?

Speaker 1:

You know, it was 2016 and I was following academic research. At the time, I saw that NLP was starting to get solved bit by bit. It was around the time of char-rnn.

Speaker 2:

Not Charizard? Not the Pokémon?

Speaker 1:

But char-rnn, from Karpathy, was starting to make the rounds. There was a computer vision model called DenseCap, or dense captioning, that I thought was really cool. There was an image colorizer, and there was a working text-to-image result that used an LSTM.

Speaker 1:

And a few others. And what this told me was that there had basically been a breakthrough in machine learning. It was not a fluke, and the way I knew that was because it was solving such disparate problems and problem domains simultaneously with pretty similar techniques. That technique, of course, is deep neural nets with backprop, effectively, which is obviously not super new, but it wasn't until 2012 that they were any good. So I got started, like many others back then, by taking open source and trying to build products with it. And what's interesting is we built, what can I say, something really crummy.

Speaker 1:

Even back then, there was insane demand for AI. We had customers almost immediately in 2016. I think we first did actual business in 2017. You've been doing this for almost seven years?

Speaker 2:

Yeah, I've seen the entire hype cycle.

Speaker 1:

I've seen multiple hype cycles in these seven years. Yeah, absolutely. It's not been a straight line from there to here, but certainly there's been progress every single year.

Speaker 1:

Yeah, but some years were a little bit quiet. How did you get your first customers? Ooh, jog my memory. I think they always came to us. See, there were very few AI-related websites that were any good back then. So you could take a technique, you could make a webpage, and if you had anything that worked at all, someone would try to use it. Maybe they had their own business use case for it that they just needed help with. That's another big difference between then and now: now AI and machine learning are widely understood. There's a lot of people who know how to do it. There's a lot of tools out there that work great.

Speaker 1:

Back then it was much more, I would say, considered in some sense magic, because not that many people knew how it worked. So if you were the person who could make AI work, that was really something.

Speaker 2:

I kind of miss those times.

Speaker 1:

Now everyone knows how to do it. Yeah, well, what's different now is everybody knows inference, but not everyone knows how to create GPT-4-class models. So, like, ex-OpenAI people are kind of the gold standard of founders right now, because they're seen to have some sort of magic knowledge on how to build something like that. But do I think they have the magic knowledge? I think, from everything we've heard, there's not a lot of magic to be had, just good engineering.

Speaker 1:

Yeah, and money.

Speaker 2:

Do you think that there are any, like, serious, well-kept secrets for these models?

Speaker 1:

It could be the data sets. Well-kept secrets.

Speaker 2:

No? So if you had, like, a billion dollars right now, you could go out and train one of these GPT-4s? Absolutely. Let's get a billion dollars. What are we doing?

Speaker 1:

Well, we could do that. And this points to why the ex-OpenAI people are considered the gold standard. It's because that's a lot of money, and the theory is that by getting experienced people, they'll get it right in fewer tries. Because machine learning and deep learning are empirical, experimental. Creating a large model like GPT-3 or 4...

Speaker 1:

I would actually compare it to making a witch's brew, a witch's brew of ingredients. It's not exactly scientific. It takes a lot of finesse to get right. So, like, maybe you're familiar: I believe they started with a code model, a code data set, to train GPT-4, and they gradually started adding other types of data.

Speaker 1:

Then finally, maybe, like a chef, they did it like curriculum learning. Now, that's not exactly a scientific process. They may have some theory behind it, but quite literally, they're just sprinkling in different types of data, training a bit, and periodically tasting it to see: is this a good model? Hmm, let's add a little more of this kind of data. That's, in the literal sense, how these things are trained and built.
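(For a concrete sense of that mixing-and-tasting loop, here is a minimal sketch in Python. It is purely illustrative, not anything OpenAI has described: the data sources, mixture ratios, and the train/taste stand-ins are all invented for this note.)

```python
import random

# Hypothetical curriculum-style "witch's brew": start training on one kind
# of data, gradually sprinkle in other kinds, and periodically "taste" the
# model. Sources, ratios, and the train/eval stand-ins are made up.

sources = {
    "code":  ["def add(a, b): return a + b"] * 1000,
    "web":   ["an example web document"] * 1000,
    "books": ["an example passage from a book"] * 1000,
}

# Mixture schedule: begin code-heavy, then blend in other data types.
schedule = [
    {"code": 1.0},
    {"code": 0.7, "web": 0.3},
    {"code": 0.5, "web": 0.3, "books": 0.2},
]

def sample_batch(mixture, batch_size=8):
    # Pick a source for each example according to the mixture weights.
    names = list(mixture)
    weights = [mixture[n] for n in names]
    picks = random.choices(names, weights=weights, k=batch_size)
    return [random.choice(sources[name]) for name in picks]

def train_step(batch):
    pass  # stand-in for one optimizer step on the batch

def taste():
    return random.random()  # stand-in for evaluating held-out prompts

for phase, mixture in enumerate(schedule):
    for _ in range(100):
        train_step(sample_batch(mixture))
    print(f"phase {phase}: mixture={mixture}, quality={taste():.2f}")
```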

Speaker 2:

Do you think that there's anybody out there currently being effective at changing it for the better?

Speaker 1:

I think it's too early to tell, but I would say at this moment, probably not. So at this moment, the AI safety people, I think, are not exactly helping.

Speaker 2:

Is there anybody out there who's good at preventing this nightmarish scenario where AI takes over the world, we all turn into paperclips, and we're all in love with AI? Which do you think is more likely: we fall in love with the AI, or it turns us into paperclips?

Speaker 1:

I think we're gonna fall in love with the AI, and then something bad will happen. Okay, will something bad happen because we've fallen in love with the AI? It'll be because we give control of our society over to AI. And what happens next is anyone's guess.

Speaker 2:

Would you give control of the society to AI?

Speaker 1:

I'm inclined to say no, but the reason I'm pausing is I'm questioning whether humans are any better.

Speaker 2:

I imagine you already use AI for various tasks in your life without thinking about it.

Speaker 1:

Absolutely. But no, it's not production-ready yet to run the world. Maybe one day it is, and hypothetically it could do a good job, like, better than we could do.

Speaker 2:

What's some promising technology that has come out in the past year that really took you by surprise?

Speaker 1:

So, like many, I was pretty blown away by the rapid rise of ChatGPT. And is that within the last year?

Speaker 2:

I think it is.

Speaker 1:

Yeah, it hasn't even been a year. That's crazy. And just recently, DALL·E 3 was extremely impressive.

Speaker 2:

Yeah, have you gotten to play with DALL·E 3?

Speaker 1:

I've not personally played with it, but the examples they're claiming were extremely impressive. Have you seen those images?

Speaker 2:

They look good, they got text right.

Speaker 1:

I'll say, even more impressive than the text is just the sheer ability to follow a complex prompt. It's next level.

Speaker 2:

Yeah, did you hear at all the speculation about how they trained it?

Speaker 1:

Well, I don't know much about it, but I hear it's not a diffusion model. It's something else. Interesting, I don't know. What have you heard?

Speaker 2:

I heard that they hired a company like Scale AI to go and label, very profusely, a bunch of images, and that was really the major difference with previous data sets. Those had images with some labels, but not full, deep descriptions of the image. So it may be a data-set quality thing that led to the incredible improvement. I mean, that's what led to Midjourney over DALL·E, and Stable Diffusion over DALL·E 2.

Speaker 1:

That's really interesting. So it's like a quality data set with better labeling and, I think, probably a bigger, smarter model as well.
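(To make the labeling idea concrete, here is a minimal hypothetical sketch: swap short alt-text-style captions for detailed descriptions before training a text-to-image model. The `describe_image` helper is an invented stand-in for human labelers or a captioning model; none of this reflects OpenAI's actual pipeline.)

```python
from dataclasses import dataclass

@dataclass
class Example:
    image_path: str
    caption: str

def describe_image(image_path: str) -> str:
    # Stand-in: a real pipeline would use human labelers or a
    # vision-language model to write a paragraph-length description
    # covering objects, layout, style, and any text in the image.
    return f"A detailed, multi-sentence description of {image_path}."

def recaption(dataset, min_words=8):
    # Keep captions that are already rich; rewrite short, thin ones.
    out = []
    for ex in dataset:
        if len(ex.caption.split()) < min_words:
            ex = Example(ex.image_path, describe_image(ex.image_path))
        out.append(ex)
    return out

raw = [
    Example("cat.jpg", "a cat"),
    Example("park.jpg",
            "a golden retriever catching a frisbee in a sunny park, motion blur"),
]
for ex in recaption(raw):
    print(ex.image_path, "->", ex.caption)
```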

Speaker 2:

It's been very surprising how small the diffusion models have been, relatively. Like, language models are gigantic, but diffusion models... why do you think people have been using smaller diffusion models than language models?

Speaker 1:

It might be that a fuzzy, bad image is still useful, but if you apply the same type of logic to text, you just get gibberish text. And in, like, Stable Diffusion 1.5 or so, it's okay at generating images, but it's not really all that smart in terms of combining novel concepts. So it's almost like it's retrieving images from its training data in a smooth way more than anything.

Speaker 1:

Yeah, in a differentiable way. But it doesn't feel like it needs to be super smart per se to do that task, whereas a larger model can be smart in some sense.

Speaker 2:

Yeah, I mean, but we've seen the failures of the small models recently. Which ones? Like Midjourney, say: you know, the horse riding an astronaut, or its inability to produce quality text. Fundamentally, I just wonder if it's not the size of the models being used that's leading to these sorts of mistakes.

Speaker 1:

I'll say that the architecture and algorithm matter hugely. It's not just parameter count; it's the architecture, the algorithm, and the data. They all have to be right to get a good result.

Speaker 2:

What is some innovation that you'd like to be responsible for?

Speaker 1:

So I actually think we have a lot of technological progress in the world right now. I don't see myself as particularly needing to advance the state of AI per se. Well, in a sense, what I see in the AI industry is that it's aggressively elitist, aggressively secretive, kind of corrupt, and it's not great for, like, social equality globally. What are some examples of this corruption? I think it would probably be favoritism between large companies and OpenAI, between governments and companies, also fraudulent startups.

Speaker 1:

I won't name names, but they are out there.

Speaker 2:

And the elitism?

Speaker 1:

When the leading AI companies, OpenAI, Anthropic, Google, etc., create the best-in-class model, they don't release it as open source. They don't release it as public access. They won't write a paper on how they made it. They won't release the data set. They won't let anyone use it. They'll use it internally first and then they'll let their friends play with it. You have to be part of their inner circle.

Speaker 2:

Hasn't that always been the case with new technology, though? Is there any other way it could go? Could you release the technology before you have it?

Speaker 1:

Well, you definitely can't release the technology before it exists.

Speaker 2:

Exactly.

Speaker 1:

But I think they could.

Speaker 2:

One has to come before the other. The second you have it, you know, it's secret for a time.

Speaker 1:

I think what I take the most issue with is when these companies release, they won't release a model. They'll release an API to call the model, but they won't actually allow equal access to that model. They allow access first to their friends and then to well-connected businesses. And I'll say that Y Combinator is a great example of this. OpenAI tends to give early access specifically to Y Combinator companies, and the only real reason I can think of to do this is elitism.

Speaker 1:

And it's like a form of back-scratching, in a sense, like clubbishness. It may help OpenAI in some cases, it might help the startups, but it definitely doesn't do any good for the rest of the world. It effectively enriches an elite group.

Speaker 2:

Do you think it would be possible for them to open up their APIs?

Speaker 1:

Absolutely.

Speaker 2:

So you think that there are enough GPUs?

Speaker 1:

There are definitely not enough GPUs, but what they could do is offer market-rate pricing for any API they do release. So instead of only allowing their friends to call their API, they could sell to the highest bidder, which is capitalism. Capitalism has its faults, but it's a lot more fair than nepotism.

Speaker 2:

Have you ever considered competing with them on that level?

Speaker 1:

Yes, it has occurred to me. I'll say that part of our ethos at DeepAI is we stand pretty firmly against that type of behavior. We're not building just to make ourselves richer or to make our friends richer. We're building for the whole world, and we hope we can have some sort of social progress that way.

Speaker 2:

What led you to this mission?

Speaker 1:

I don't think it happened overnight. I think it's sort of a lifelong form of idealism: we all want to see some amount of fairness in the world, and through our experiences you can witness how people have been hurt by elitist policies. Or you can see how I've been hurt by certain types of policies, and I can think about how I definitely don't want to perpetuate such a system.

Speaker 2:

Are they really in opposition?

Speaker 1:

I think they really are. I think OpenAI, specifically, is pretty transparently trying to lobby the government to make sure no one can train a large AI except an elite few.

Speaker 1:

No one will ever be able to create a large AI but a select few, and then they get to decide exactly how it can be used and who can use it. It's vaguely dystopian. And let's talk a little bit about the organization. We sit pretty close to them from where we're sitting right now, and I think OpenAI was founded in 2015 or 2016, so they're an established company, and they've never had an office with their name on it.

Speaker 2:

What do you think OpenAI is hiding?

Speaker 1:

Well.

Speaker 2:

I don't know what they're hiding, but they're hiding it.

Speaker 1:

What could they be hiding?

Speaker 2:

My hypothesis about this is that when they started, they always had aspirations to build AGI, but they started under the guise of trying to make AI safer through being open. I mean, they didn't even hide that when they started, right? They said, we're trying to build an AI and we're going to make it open source, and we believe in the end of the world, but we're going to continue doing that.

Speaker 2:

And if you believe in the end of the world through AI, how else could you go about building a company if you don't try to keep it out of the public eye?

Speaker 1:

A lot of people in the AI industry think that they're building the nuclear weapons of software, something almost too powerful. You can't let the unwashed masses ever touch it. You must not let anyone create this.

Speaker 2:

I mean, that's kind of what it feels like. Every time there's a release, everybody's like, this next release is going to be the one that ends the world.

Speaker 1:

You know, maybe it is going to be the one that ends the world. I mean, the next one's multimodal. But I'll say that they said that about GPT-2, the gibberish machine, so they're kind of the boy who cried wolf a little bit at this point.

Speaker 2:

I'm not sure OpenAI is the one saying that, but they're letting the rumor go around.

Speaker 1:

They have propagandists like Roon.

Speaker 2:

Yeah, do you have any idea who Roon is? You do? Have you met Roon?

Speaker 1:

Next question. Are you Roon? I'm not Roon.

Speaker 2:

Okay, would anybody know if I was Roon? Or you? We could all be Roon.

Speaker 1:

I think that's true. Roon could just be, like, we're both Roon.

Speaker 2:

Yeah, it could be like the joint account of everybody in their offices, right?

Speaker 1:

Yeah, Roon is a propagandist employed by OpenAI.

Speaker 2:

It's really funny, because a couple of years ago Roon was very, like, pro-AI-safety, very anti-AI, and now Roon is an accelerationist.

Speaker 1:

I'm not sure why we listen to Roon, to be honest.

Speaker 2:

Do we listen to Roon? Or...

Speaker 1:

Any anon Twitter accounts? I don't think we should listen to anon Twitter accounts and take them seriously. I think that's a mistake.

Speaker 2:

Well, I think if somebody has a good argument, you should listen to that argument, no matter who that argument is coming from. If somebody has this mask of anonymity, should we just completely ignore everything they say?

Speaker 1:

I think we should take them significantly less seriously, unless they have some really, really good reason to be anonymous.

Speaker 2:

I think it really devalues the message.

Speaker 1:

I think if you're willing to say something but you're not willing to put your name on it, you either don't really believe what you're saying or you somehow don't mean it.

Speaker 2:

You don't think it's because they're afraid? They're afraid of public backlash against their ideas.

Speaker 1:

It could be. I think... I don't know. I've never had a desire to have an anon account.

Speaker 2:

I've always been very proud of everything that I have to say. But you know, I actually had a lot of respect for what Beff said, when he said he couldn't have talked about these accelerationist ideas if he didn't have an anon account. Because I remember, like five or six years ago, if you said anything along the lines of let's have more technology, the entire internet was against you.

Speaker 1:

I think we should kind of appreciate the fact that, at least on Twitter now, we have a pretty free exchange of ideas. I think during the early COVID era there was a lot of groupthink, and if you stepped out of line it would not be good for you. Oh, that was so hard. It was a weird time. I'm very glad that we're over that.

Speaker 2:

At what point were you just like, this is real, the thing that I'm building could be a real company?

Speaker 1:

You know, it's been a journey all these years, but it felt like it could be a real company, I would say, around year two or three. It was a real company. It wasn't a straight line from there to here. But the only time I felt an unbelievable amount of excitement, incredibly strong, in an incredibly exciting way, was really this year.

Speaker 2:

Yeah. Is the wave still, like, accelerating?

Speaker 1:

So it's definitely superhuman in some ways. I think, like, the threshold was crossed when we got human-level performance in text generation and image generation, all within the same year. I think that's kind of a good benchmark. We're in the era when AI is now human-level. What's next is superhuman.

Speaker 2:

I think that's a fantastic note to end on. Thank you so much for coming on.

Speaker 1:

This was great. Thanks for watching.
