Dr. Mirman's Accelerometer

How we got 12,000 users building AI: Max Rumpf, CEO SID (YC)

Matthew Mirman Season 1 Episode 18

This episode of the Accelerometer, we talk to the eclectic Max Rumpf of SID.ai as he discusses his latest startup successes, insane stories from San Francisco, and the perks of being a Y Combinator-backed startup.

YouTube episode

Accelerometer Podcast
Accelerometer YouTube

Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn

Speaker 1:

You almost profit most, from what I see, if... How long can we still get these kinds of gains? ... We're now in a very weird time... Probably this is normal, right, and probably I should just adjust.

Speaker 2:

Welcome to another episode of the Accelerometer. Today we've got Max from SID. It is SID, right? Yeah, SID.ai. Max recently did Y Combinator, the first person I'm talking to from Summer '23, which is very exciting. That's the batch after I graduated. He's also located in Switzerland, which is really cool. Yeah, actually moving back to SF.

Speaker 1:

Oh really. So, here waiting for the O-1 to clear and then returning to San Francisco?

Speaker 2:

That's really exciting. Yeah, how do you enjoy San Francisco?

Speaker 1:

I think it has its charms. It has its upsides and downsides. But no, I think if you're building something in AI, or at least we feel that way, especially building something that's developer-facing, it's good to be close to the people, to be able to meet them in person and actually go to the community events. The good ones, at least.

Speaker 2:

Were you in Cerebral Valley?

Speaker 1:

The ones, the late ones? I think I'll leave off commenting on that. No, I think it's good. I think there was very little cohesion. It was basically just showing up at a very nice co-working space.

Speaker 2:

There is an actual organization called Cerebral Valley.

Speaker 1:

Yeah, and they organized that, and that was a bit... I think it was interesting talking to a few people, but there was very little actual program.

Speaker 2:

So what is it exactly that you were building?

Speaker 1:

Yeah, we make it super easy to connect data to LLM apps. I think the big point was: hey, you have this large language model, and my mental model is that it's a bit like a first-day intern. It's smart, it's eager, it wants to help, it really, really wants to, but it doesn't yet know anything about the person or company it's working for. It doesn't have any of the context it needs to actually do that well. And currently people are building these incredibly complex systems from scratch on vector databases with embeddings, et cetera, et cetera, and it's a lot of trial and error to actually figure all of this out. We believe it should probably be as simple as an LLM API call. Just as OpenAI made serving large language models super simple, we think the data part for AI should be as simple.
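As a sketch of what "as simple as an LLM API call" could look like from the app developer's side: everything here, the URL, parameters, and response shape, is invented for illustration and is not SID's actual API.

```python
import requests

# Hypothetical one-call context API: send a query, get back the
# user's relevant context, no vector database wiring on your side.
def fetch_context(query: str, user_token: str) -> list[str]:
    resp = requests.post(
        "https://api.example.com/v1/context",  # placeholder URL
        headers={"Authorization": f"Bearer {user_token}"},
        json={"query": query, "limit": 5},
    )
    resp.raise_for_status()
    return [item["text"] for item in resp.json()["results"]]
```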

Speaker 2:

I completely agree. That sounds awesome. Is that what you applied to Y Combinator with?

Speaker 1:

No, we applied with something quite different. In between submitting the application and getting the interview four weeks later, we'd already almost written it off, and then we got an email Sunday night: we want to see you two days later. And we had completely changed what we were working on. What we're doing now was a subcomponent of the product that we were building before, and we just talked to more and more devs and were like, hey, everyone's kind of struggling with this. Why don't we take this component that we've spent, you know, 11 months building and actually productize it? So we showed up at the interview and my first words were: what we wrote in the application is totally not what we're doing anymore, we're doing something completely different. Which might actually be a good strategy.

Speaker 2:

What were you doing in the application?

Speaker 1:

In the application, we had also built the front end that went on top of it, the interface with which people could actually communicate and chat, et cetera, et cetera. And we just feel like there's so much creativity out there, and we can probably not in-source all that creativity. We're AI researchers and devs, right, and our core competency lies in building the infrastructure and that part of it, not also the interface that people end up seeing.

Speaker 2:

But that was a very quick pivot. Yeah. How long were you working on it before YC?

Speaker 1:

Quite a bit, yeah, quite a while actually. At the very beginning we started pre-ChatGPT, which was a very weird time. I think I did a lot of free advertising for OpenAI back then. It's like, yeah, there's this company and it's actually kind of cool. So one thing, this is Zurich-local, I won't name names, but one thing that I've heard from a lot of people in their 40s and 50s is that they're ghostwriting essays for their kids. And so, I think two and a half years ago, this person is telling me at this event: ah, last night at 11 PM my kid came to me like, hey, I didn't do the homework, can you write this essay? I don't remember what it was on. And I was like, ooh, do I have a tool for you? And I had, I think, probably like 12 execs from Zurich on my OpenAI account, so they could log on at 11 PM to ghostwrite essays for their kids, back before ChatGPT was around and they could do it themselves.

Speaker 2:

How did the schools like that?

Speaker 1:

I don't think the schools ever found out. I mean, I don't think the schools were aware that GPT-3 was even a thing, right? This was back in the day when you still had to write an application to get API access. And I don't think they ever found out. I mean, they would also not be happy with the parents ghostwriting the essays in the first place, so I think the mechanized automation doesn't change much about it.

Speaker 2:

So back then, though, people weren't even noticing that it was automated. Because at this point, like half the content that people show me is obviously ChatGPT.

Speaker 1:

Yeah, I wonder why no one has really been able to imbue it with a bit more personality, especially in writing. I mean, maybe we should try, but how hard would it be to actually fine-tune this on good writing, and not just, you know, the average writing it's mostly outputting? I'm quite sure it could actually get quite close.

Speaker 2:

I mean, that's kind of how Midjourney originally beat out DALL-E, right? It was just trained on better images. So why not just train it on better writing?

Speaker 1:

Yeah, yeah, no, I think if you curate it well, then yeah.

Speaker 2:

Have you guys been training any of your own models?

Speaker 1:

We haven't done any from-scratch training. We've done quite a good chunk of fine-tuning, especially.

Speaker 2:

You've been training. So what is your moat?

Speaker 1:

That's a very good question, and the answer is, of course, we don't have any moat. No moats, no moats.

Speaker 2:

Let's have a moment of silence for the moats.

Speaker 1:

No, I think we do a good chunk of fine-tuning, especially for email thread summarization, which is so gnarly. I don't recommend anyone ever work with email data.

Speaker 2:

What makes it gnarly?

Speaker 1:

It's so low in information density. I mean, we've worked with documents and knowledge bases, and email is just so low-density that if you actually try to embed it, or try to understand it just from the raw text, it's too much "Looking forward to it, great to hear from you, all the best, I hope you've been doing well," et cetera, et cetera, and so little actual transactional information. You have to do an incredible amount of scrubbing and cleaning in the first place if you want to put it in a data retrieval system and have it work at least decently well.
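As a toy illustration of that scrubbing, assuming nothing about SID's actual pipeline, a first pass might drop greeting, sign-off, and quoted-reply lines before anything gets embedded. Real email cleaning (signatures, HTML, legal disclaimers, threaded quotes) is far gnarlier than this.

```python
import re

# Lines that carry no transactional information: greetings,
# sign-offs, pleasantries, and quoted-reply lines.
BOILERPLATE = re.compile(
    r"^(hi|hey|dear)\b.*"
    r"|^(best|regards|cheers|all the best)\b.*"
    r"|^hope you('re| are)( doing)? well.*"
    r"|^>.*",  # quoted-reply lines
    re.IGNORECASE,
)

def scrub_email(body: str) -> str:
    """Drop greeting, sign-off, and quoted lines before embedding."""
    kept = [ln for ln in body.splitlines() if not BOILERPLATE.match(ln.strip())]
    return "\n".join(kept).strip()

# scrub_email("Hi Max,\nThe invoice is attached.\nAll the best,\nAnna")
# -> "The invoice is attached.\nAnna"
```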

Speaker 2:

So is all that scrubbing not your moat?

Speaker 1:

I mean, we've built an incredible suite around the tech to actually do this well. I think the funnest part, for me at least, thinking back to doing research: usually you think of a new mechanism, and then there's no way to test that mechanism, so you're also thinking up the benchmark for that new mechanism, and you're basically writing your own test and grading yourself on it. Whereas what we can do is, we have production traffic. I think we just crossed 12,000 end users connecting their data a week ago, or something like that.

Speaker 2:

Wow, congratulations.

Speaker 1:

And what we can do is, we have two different retrieval approaches, and we can pump some traffic over one retrieval approach and some traffic over the other, and we can actually see, hey, this one performs better. And we can see that in our back-end system without having to invent any sort of benchmark for it. So what we sometimes do is, if we see some interesting new approach pop up on Twitter or wherever, we'll just implement it in our back-end and see if it's actually any good. Oftentimes it's mmm, but sometimes it's decent.
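A minimal sketch of that kind of live comparison, assuming a stable per-user split between two retrieval arms; the retriever stubs and the logging here are placeholders, not SID's actual backend.

```python
import hashlib

def baseline_retrieval(query: str) -> list[str]:
    return []  # stand-in for the current production approach

def candidate_retrieval(query: str) -> list[str]:
    return []  # stand-in for the new approach spotted on Twitter

RETRIEVERS = {"baseline": baseline_retrieval, "candidate": candidate_retrieval}

def pick_arm(user_id: str) -> str:
    # Stable hash, so a given user always lands in the same arm.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "baseline" if bucket < 50 else "candidate"

def retrieve(query: str, user_id: str) -> list[str]:
    arm = pick_arm(user_id)
    print(f"retrieval_arm={arm}")  # in production: emit a metrics event,
    return RETRIEVERS[arm](query)  # then compare quality per arm later
```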

Speaker 2:

How were you improving your systems before you had that sort of user feedback?

Speaker 1:

I think the improvement starts with user feedback. Right? If you're already improving your system before you have users on it, you're probably not doing it correctly. You're probably, in the YC way, not building something people want.

Speaker 2:

So you solved the chicken-and-egg problem by getting the chicken before the egg. Yeah, just saying. How did you get those first users?

Speaker 1:

I think talking to people, right. I mean, we were the developers that would have used our product ourselves, so we were quite sure that if we could build an abstraction, it would be at least somewhat useful to a lot of people, and I think there's always an incredible amount of product iteration to actually get to something that's easy. But we knew that there was some need for it, and we knew what kind of developers had that need. It helped that we'd built some super popular open-source AI software that went viral, so we were instantly thrust into all these communities, with all these other developers playing around and struggling with the same issues. And I think it was split: half was inbound from Twitter and the community, and the other half is the YC batch. If you're selling AI infrastructure, and 160 out of 218 companies are AI companies, that really helps.

Speaker 2:

What was the hardest part of YC for you guys?

Speaker 1:

I think you almost profit most from YC, from what I see, if things are going badly. Then, I think, the advice helps most, and the partners jump in and try to help you and navigate you around pivot hell, or whatever you want to call it, where you're looking for a direction in which to pivot. But if everything's slowly incrementing and slowly improving, like it luckily was in our case, then most of it is: yeah, sounds good, just continue on. And we were sometimes panicking that we were doing something vastly wrong, or that we were optimizing for the wrong thing, but I think we were making most of our own problems during YC. What was the hardest thing during your YC time?

Speaker 2:

Nothing working.

Speaker 1:

Nothing working, OK.

Speaker 2:

This one day, we were just like... We tried to get users, we tried to get the product to physically work. Literally nothing worked. And they had great advice. They were like: find the thing that is working and grow from there. And that's now a lot of what I do. Who were your partners?

Speaker 1:

Dalton.

Speaker 2:

Same for us.

Speaker 1:

OK, yeah.

Speaker 2:

Yeah, Dalton was great.

Speaker 1:

No, no, we had Dalton and Pete. Pete Koomen, yeah. Both excellent. He's now a full-time group partner, so we graduated from visiting group partner to full-time.

Speaker 2:

Yeah, we had Liz, who had a kid right as the batch ended. It was great timing.

Speaker 1:

Yeah, friends of mine were in the Winter '23 batch, and they said she was more or less texting them from the hospital bed. She's doing some other YC thing now.

Speaker 2:

As a VC at another firm. All of the partners were so impressive the entire time. And sometimes I feel like they didn't understand what we were doing, but still, their advice completely changed our perspectives about everything. I think you go to group office hours and office hours, and it's mostly therapy.

Speaker 1:

It's mostly like, yeah. I think most of their advice is actually based on your reaction and what you tell them, and if you try to circumnavigate a topic and don't want to talk about it, they're very good at putting their finger on exactly that. I think that's a huge part of it, especially in the early stages; that's also when you need the therapy the most. When things are going well and things are compounding, then I think therapy is useful and interesting, but less important.

Speaker 2:

So has that been the most helpful thing for you as a founder?

Speaker 1:

In YC, I think that, and the community, and being able to get that high-quality feedback, being able to actually talk to them, and also kind of seeing, normalizing, all of the issues that you might run into in everyday life. A lot of things feel existential at the early stages of a company, and if you're surrounded by 100 other people also going through existential crises, then you realize: probably this is normal, and probably I should just adjust my level of what I believe is existential and learn to work with it. I think that's been the other part. And they are incredible at selecting people for the program, not just smart people, but actually also nice people, which is a rare combination, and genuine, and wanting to help. I think that's been awesome.

Speaker 2:

That's definitely true. Have you had any existential crises going through the program or even before YC?

Speaker 1:

Yeah, definitely. There's always stuff that goes really wrong. I mean, we pivoted quite a bit going in. We effectively changed our customer. We effectively threw out 75% of our code base. We ended up almost completely rewriting the component that we had before, because of course it's a different thing to do this as infrastructure versus inside a product, where you have maybe more predictable user growth.

Speaker 1:

Our CTO, the day we went live with our largest customer to date: we went live at like 9 AM, and then at 10 we had, not a thousand, but like 700, 800 people pour in over that first hour. And then he's like, yeah, I'm going to the gym for an hour. I'm like, what do you mean, you're going to the gym for an hour? What if stuff breaks? And he's like, no, nothing will break. And nothing broke, so I'm totally fine with it. But no, that was fun. And I think that's the other part, right, if you're building infrastructure for people. And that's also the fun part: we can spin up, you know, extra instances to handle extra traffic, and it's just that much more optimization to actually get it to run well.

Speaker 1:

And we were still incredibly surprised at how bad the existing tools are for just serving an LLM in a co-located, fast fashion; it was incredibly difficult. We're sensitive to the response time of the entire pipeline, so it's important for us to co-locate our LLMs with our actual data. We first tried SageMaker, because we're on AWS. Then we switched over to GCP and tried Vertex AI, and we were getting insanely high latencies, insanely low performance. And then we went ahead, and good friends of ours in the batch, they're offering this as a service, they helped us before they launched to actually get our instances to run well. And then we had our own, just, you know, bare-metal VMs that we ran it on, and it was just so much faster than what these built-out services could offer. Which I think is incredibly weird, that you can outperform Google and Amazon in, like, an afternoon.

Speaker 2:

So you're working with LLMs that aren't just GPT-3 and GPT-4, right?

Speaker 1:

Yes, yeah, no, absolutely. Those would be way too slow and expensive. I think the largest one we've played with is seven billion parameters, and I don't think we'll go much over seven billion, just from a pure performance and speed perspective. We're latency-critical, right, so it's very, very important for us to actually make that go well.

Speaker 1:

And it's one thing if our customers are waiting for OpenAI; they shouldn't also be waiting for us.

Speaker 2:

So who are your customers exactly?

Speaker 1:

I think they split across two different groups at the moment. One is the AI writing companies. For example, Type AI: they have an AI document editor, and you can imagine, if you tell it to, you know, write a slogan for Anarchy, and it doesn't know what that company actually is, it'll almost by default fail at that task. So what they do is, when a customer sends in a request, they first ask our API for extra context on that query, then they use it in the downstream generation, and they use OpenAI models for that.
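That flow in code, reusing the hypothetical fetch_context sketched earlier; the generation call is written against the current openai-python chat API, and the prompt format is just one plausible choice.

```python
from openai import OpenAI

client = OpenAI()

def write_with_context(task: str, user_token: str) -> str:
    # Step 1: ask the context API (hypothetical fetch_context from above)
    # for facts relevant to the request, e.g. what "Anarchy" actually is.
    context = fetch_context(task, user_token)
    # Step 2: hand the retrieved context to the generation model.
    prompt = "Context:\n" + "\n".join(context) + f"\n\nTask: {task}"
    out = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

# write_with_context("Write a slogan for Anarchy", user_token="...")
```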

Speaker 2:

Yeah, you just came back, and you're going back after a few months in San Francisco. What was your wildest experience in San Francisco?

Speaker 1:

I mean, you're never short of wild experiences. We live close to the Castro, and, you know, there are just regularly naked people walking around. And then there are so many people lying on the ground that you sometimes, you know, have to circumnavigate them.

Speaker 2:

Circumnavigate them. The people on the ground.

Speaker 1:

To get to where you're going. And yeah, I haven't been to a Safeway in San Francisco without something weird happening.

Speaker 2:

So what was the weirdest thing that happened at a Safeway?

Speaker 1:

Yeah, there was one guy who jumped into, you know, the deli section, where they had all the open stuff. I hope they threw it all away.

Speaker 2:

To make a sandwich? Uh-huh. Did you stand there and watch as he was jumping into the big thing?

Speaker 1:

I was slowly reverse-walking while someone on the intercom was saying, you know, security to the deli, security to the deli.

Speaker 2:

Security to the deli. And the man who jumped in?

Speaker 1:

No, there was no longer anyone coming. But oh, another one: I was filling out the 83(b), I was posting the 83(b) elections. I went to the local post office, and I was sitting there, you know, filling out some addresses on the envelopes, and next to me this guy walks up with a package. He hands it to the guy at the counter. The guy asks, you know, what's inside? And he says: human remains. And there is absolutely no reaction from the post guy. He's like, yeah, sure, that'll be $21.22, right.

Speaker 2:

Does that not happen in Switzerland? How do you ship your dead bodies in Switzerland?

Speaker 1:

It is, um, I hope not by regular mail. I was wondering. It's probably ashes, right? But hopefully. "Probably" is the important word here; it's not just, yeah, like, one hand. Um, and yeah, during my time at the post office, that wasn't the weirdest thing that was happening. There was this one other lady walking around.

Speaker 2:

It's a very eventful hour.

Speaker 1:

Yeah, there, that's almost every hour. So there was this lady walking around with an oil painting of a guy, and she kept referring to him as her husband, and she was having a fight with her husband. This post office, it's in the Castro. I'll send you the address later.

Speaker 2:

Uh-huh. It sounds like a good post office.

Speaker 1:

She was just walking around in circles, and the post guy didn't give a fuck, whether it was about the human remains or the lady walking around.

Speaker 2:

And this is how you find your customers in San Francisco, right?

Speaker 1:

Oh yeah, you get them at the post office. Yeah, she definitely has a startup as well. Of course, probably.

Speaker 2:

You know, it used to be the case, I would be driving around in San Francisco, like, early days of Lyft and Uber. The Lyft and Uber drivers were very talkative, because they weren't doing it as their career. They were just like, this is fun. That was Uber's original pitch, and every Uber driver I had spoken to was building a startup. It was literally like on Silicon Valley.

Speaker 1:

No, we used Waymo and Cruise as much as we could.

Speaker 2:

I avoided, I avoided founders in cars. What about YC founders?

Speaker 1:

So we tried to get the AI to move us around, which was actually a lot of fun, and excellent.

Speaker 2:

And then, scared?

Speaker 1:

No, not at all. I think, like always, some weird stuff happened. On the 4th of July, it kept thinking that we were in a collision because of the fireworks going off. So it uses sound cues; I'm imagining it uses sound cues. And I think the lady on the other end, you know, was already saying: okay, you know, this has happened a few times, it's probably the fireworks. And I love, also, if you've ever been in a self-driving car and there's been an issue, they have to ask if you're okay. I don't know if it's, you know, government policy or company policy, but they ask if you're okay three times in a row. And three times in a row I have to say: no, everything's fine. And by the third time you're like, okay, yeah, I've said it before, I don't need anything else.

Speaker 2:

Well, now it's: you're asking me too many times, this is my emotional damage.

Speaker 1:

No, no, no. But yeah, really good, really reliable. And with Waymo you could actually do a lot of fun stuff. You could have yourself driven to Safeway, and then you could go shopping, and the Waymo would just circle the block until you were done shopping, and then you would get back in it and it would take you back home.

Speaker 2:

These are electric?

Speaker 1:

They are, they are. One of our friends... so Waymo and Cruise used to be free until, what was it, mid-July, end of July, something like that, and a friend of mine, I think, had 478 free rides over that free-trial period. And yeah, it would show, you know, if you're meeting someone for coffee, whether they'd have the Waymo circle or have it, you know, do something else, depending on how much time you have.

Speaker 2:

So you can see other people's, or could you only see your own?

Speaker 1:

Ah, no, no, no. Just, if you're meeting up with someone else arriving in a Waymo, then they could have it circle, or not.

Speaker 2:

So when you were spending your YC time in San Francisco, did you ever leave San Francisco?

Speaker 1:

Oh, we definitely planned to leave San Francisco. We went to the retreat in Sonoma, which was fun.

Speaker 2:

And that was fantastic. Yeah, that was a crazy two days. It was incredibly well organized.

Speaker 1:

I think that is true. I feel like I met everyone.

Speaker 2:

And I thought I met everybody, and then halfway through I was like, I still don't know who you are. And it was also great for iterating on the two-line description. Yeah.

Speaker 1:

Because everyone's first question was always: what do you do? So you'd be answering that question probably 200 times a day, and you could switch it up, and sometimes you'd say, you know, a bit crazier stuff, and then you'd see how people react. You're like, yeah, maybe that was a bit too crazy, or that was a bit too weird, or that was a bit too boring.

Speaker 2:

What was your craziest two-liner?

Speaker 1:

That's probably something around AGI and AGI doom, which was much more...

Speaker 2:

Were you bringing about the Doom or preventing it?

Speaker 1:

So this is an internal co-founder dispute. Some people on our team are very much AI doomers; some others are not. I'm in the not camp.

Speaker 2:

You're in the not camp?

Speaker 1:

Yeah, definitely.

Speaker 2:

Well, that's really interesting. A split team; you cancel each other out, right?

Speaker 1:

Yeah, yeah. Someone just says, you know, after 10 years there'll be nothing after that, so we might as well enjoy the journey until then. And then some others are like, yeah, actually this will be a lot more fun in five or ten years. But yeah, let's see. So my signal for how close we actually are is how much non-core stuff OpenAI releases. Like, the thing about being in OpenAI's shoes: if you actually believed you were a few more orders of magnitude of compute away from something like AGI, then you wouldn't care about ChatGPT Enterprise, right? That would be, like, at the 98th position on your list. You'd be like, yeah, let's get to AGI, and then kind of all of our other problems go away. And the more stuff they release, the more cautious I am about the timeline. So every single time there's a new press release from OpenAI, I'm like, yeah, this feels like a company where product managers are coming in, and they actually don't believe that they can get there on a near-term timeline.

Speaker 2:

So would you say that you guys are not going to solve AGI because you're releasing products?

Speaker 1:

Yes. No, I don't think it's our explicit goal yet, right? Maybe tomorrow. No, what we're doing is effectively, you know, data analysis. What we would really like are reasoning-only foundation models. Ones where, if the data says the capital of France is London, then the LLM should output that as well, without having any opinion or bias towards the right answer itself. And I think, in the future, if you want to have data-first, you know, analysis models, you're probably going to need models scrubbed of any sort of knowledge to actually solve that well. And, if you ask me, I think that might also be the ideal way to actually get closer to solving AGI, because knowing what the capital of France is, you know, is nice, but it doesn't actually figure into intelligence. My geography teacher would be sad, but it's memorization; it's not intelligence.

Speaker 2:

So is this, like, how you would say hallucinations are going to be solved?

Speaker 1:

I actually, I didn't go into it deeper. I actually saw an interesting paper from the SRI on this recently, yeah. Were you involved?

Speaker 2:

No, I was not involved in the paper. I think that was my lab.

Speaker 1:

Yeah. Do you know Luca?

Speaker 2:

Yeah.

Speaker 1:

Yeah, I talked to him, from LMQL.

Speaker 2:

Yeah.

Speaker 1:

The guys, I really liked them. Also, yeah, I think I sold their product to, well, I sold it to more YC AI companies than I sold my own product to. It was because everyone was trying to get it to output JSON, and every single time, you know, they'd look at their codebase and it was like 30% parsing JSONs and trying to, you know, nicify JSONs that an LLM output. And it's like, yeah, you can just enforce it with a grammar, instead of the silly way of doing it. And I think since then a few other tools have popped up to do this, but back in, what, March, April, they were mostly the only ones.
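For contrast, the crude approach being replaced looks something like the loop below: validate after the fact and retry. Grammar-constrained decoding, which is what LMQL does, instead masks invalid tokens during generation, so malformed JSON can never be sampled in the first place. The `llm` callable here is hypothetical, just a prompt-to-text function.

```python
import json

def generate_json(llm, prompt: str, schema_hint: str, max_retries: int = 3) -> dict:
    """Keep asking the model until its output parses as JSON.

    `llm` is a hypothetical prompt -> text callable. This is the
    "silly way": parse-and-retry, rather than enforcing a grammar
    during decoding the way LMQL does.
    """
    for _ in range(max_retries):
        raw = llm(f"{prompt}\nRespond with JSON only, e.g. {schema_hint}")
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output, try again
    raise ValueError("model never produced parseable JSON")
```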

Speaker 2:

So you think that tools like that are going to solve hallucinations?

Speaker 1:

I think... no, I think you have to dig deeper into the model. You're probably much better poised to answer this, but the way I understand the SRI mechanism, it tries to more or less take a sentence, extract the entities, and then see how sure the model is about these entities actually having that relationship. I think, sampling, this is, I didn't read it, but I think it's something like that. Or something like, you know, the approach that Anthropic has released, of more or less nuking certain information centers and killing them off if, you know, we feel like they're not essential for reasoning.

Speaker 2:

I have a hard time knowing what is a promising direction of research, because every time I feel like I've centered down on something, the whole direction of research changes.

Speaker 1:

I think for me, the big, big question is: how long can we still get these kinds of gains out of just more data, more params, and more compute? I think that will very much determine the timeline that we have. If the scaling were to hit a wall at some point, then I think we're in big trouble, if you're on the e/acc team, because it's unsure, right, whether the transformer 2.0 is around the corner to address this. Because if you look back at, I'd say, probably pre-transformer neural nets, it's a lot of artisanal science. It's really smart people in research labs around the world thinking of a new mechanism to improve things: you know, a new kind of loss, or a new kind of layer, or a new kind of something, to actually address it and to get some more performance on a specialized task.

Speaker 1:

And then the transformer really gave us an industrial process, where it's like, yeah, you can just throw more stuff at it, and if you can convince investors to give you more money, and if you can build scrapers that are good enough to get more data, you can probably create better outcomes. And that has never really been true before. And the question is, you know, are we in a golden age right now where this works, like zero interest rates, and does that era, you know, end at some point?

Speaker 2:

So is there anything about the future of AI that worries you?

Speaker 1:

I think we're now in a very weird time where spam hasn't yet fully taken over. I think it's getting there, and when I'm putting stuff into Google these days, I'm pretty sure half of the results are AI-generated, or at least partially AI-generated. Or, you know, that's just my vibe.

Speaker 1:

There's no way to actually measure that. And I think, if, let's say, I had absolutely no ethics and I was trying to shill a product to you, I could conceivably create a Wikipedia-sized website reviewing every single product on the planet, and doing so incredibly fairly and in an unbiased manner, and then just giving your product or my product an insanely high review on it.

Speaker 1:

I could effectively spin up an infinite number of websites that self-congratulate each other, that do incredibly well, that look incredibly reasonable and are incredibly neutral from every single perspective, except the one that I care about or that a government cares about. And just from the cost perspective: if you actually generate this with something as small as a seven-billion-parameter kind of model, I'd wager you could do this quite well, and it's probably not even in the thousands of dollars, probably more in the hundreds of dollars, to generate a Wikipedia-sized website. And I think that part really scares me, and I'm unsure whether the other side has really kept up. Like, have Google's algorithms gotten that much better over the last four years, to actually stem that tide and that potential?
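A rough back-of-envelope supports the hundreds-of-dollars figure. Both inputs below are assumptions for illustration, not numbers from the episode.

```python
# English Wikipedia is roughly 4.5B words, call it ~6B tokens (assumption).
wikipedia_tokens = 6e9
# Assumed self-hosted generation cost for a ~7B-parameter model, $ per 1M tokens.
cost_per_million_tokens = 0.10

total_cost = wikipedia_tokens / 1e6 * cost_per_million_tokens
print(f"~${total_cost:,.0f} to generate a Wikipedia-sized site")  # ~$600
```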

Speaker 2:

I think maybe at a certain point a human would notice, yeah, and a human would be able to just put in something that stops it, right?

Speaker 1:

How do you mean, a human would?

Speaker 2:

Well, if you created a Wikipedia-sized website that fools Google's algorithms, a human at Google might notice that you're doing that.

Speaker 1:

Yeah, but how many people? The cost is so low. How many people at Google would you need? Because humans are really bad at looking at information at scale.

Speaker 2:

What's something that you look forward to in the future of AI?

Speaker 1:

I think just being able to focus on what matters. If I do a very, very honest accounting of my day or my week, most of it is doing stuff that doesn't actually move the needle, that doesn't move me closer to any of my internal goals. I watched Iron Man back in the day, and the part that I liked more than the suit was Jarvis: just having that ability to create at almost infinite speed, being able to focus on only the things that matter, and being able to disregard everything that doesn't. I think that's my dream.

Speaker 2:

Max, thank you so much for coming on the Accelerometer. It was really wonderful having you. Thank you.
