Dr. Mirman's Accelerometer

Shrinking LLMs: Kyle Corbitt, CEO OpenPipe (YC)

March 13, 2024 Season 1 Episode 17
Speaker 1:

In this particular customer's case, they really care about speed and cost. The information they're sending, the transcript, isn't particularly sensitive. There's no personal information on there besides just kind of the numbers, and there's no way to link those numbers to an individual. So I don't think that was their primary motivation.

Speaker 2:

Hello and welcome to another episode of the Accelerometer. I'm Matthew Mirman, CEO and founder of Anarchy. Today we've got the pleasure of being joined by Kyle Corbitt, founder and CEO of OpenPipe, a tool for fine-tuning models. Thanks for coming on, Kyle. Yeah, happy to be here. Can you tell us a little bit more about what OpenPipe does?

Speaker 1:

Yeah, so we make it extremely easy to move from just an OpenAI prompt to your own fine-tuned model. The way we do that is we capture all of the requests that you're sending to OpenAI, as well as the responses you get back, put those into a dataset, and then use it for fine-tuning.
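
As a rough sketch of what that capture step could look like (this is not OpenPipe's actual SDK; the wrapper, file path, and example prompt below are invented for illustration), the idea is to wrap each chat-completion call so the request and response are also appended to a JSONL dataset in the chat format that OpenAI-style fine-tuning expects:

```python
# Minimal sketch (not OpenPipe's SDK): wrap chat-completion calls and append each
# request/response pair to a JSONL file that can later be used for fine-tuning.
import json
from openai import OpenAI  # assumes the openai-python v1 client

client = OpenAI()
LOG_PATH = "captured_requests.jsonl"  # hypothetical location for the dataset

def logged_chat_completion(**kwargs):
    """Call the chat API and append the exchange to a training dataset."""
    response = client.chat.completions.create(**kwargs)
    record = {
        "messages": kwargs["messages"]
        + [{"role": "assistant", "content": response.choices[0].message.content}],
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Usage: every production call now also grows the fine-tuning dataset.
logged_chat_completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Extract the balance from this transcript: ..."}],
)
```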

Speaker 2:

Why would you want to go from an OpenAI prompt to your own model?

Speaker 1:

The primary reasons, I mean there are several, but the biggest ones are cost and latency. You can get much, much lower cost. It really depends on what you're trying to do, right? For certain types of prompts, certain types of activities, you just need the best, most capable models you can get. But what we're seeing is that there are a lot of folks out there who are using OpenAI's endpoints, because they're really convenient, for tasks that are way below the difficulty of what OpenAI's general-purpose models can do, and so they can get much faster and much cheaper completions with smaller models.

Speaker 2:

What sort of improvements have you seen?

Speaker 1:

So cost-wise, we can come in at between one-tenth and one-fiftieth of what you're paying right now, depending on basically how small a model you can get away with. And latency-wise, we're looking at about half the latency right now. That's actually something that's high on our priority list; that's just the native improvement we're getting with a pretty naive way of serving. We can actually drop that by another 2-4x from where we're at right now.

Speaker 2:

What gave you the idea to go after this?

Speaker 1:

I've been interested in fine-tuning, and machine learning in general, for a really long time.

Speaker 1:

In fact, in undergrad I did a bunch of research in that area. This was in the 2013-2014 time period, and the problem at the time was that the state of the art was everyone building models from scratch; this concept of fine-tuning, or transfer learning, didn't really work yet. I ended up starting a company at that point, but I didn't do it in machine learning, because it felt like you had to have these huge datasets. Nobody had really figured out how pre-training worked, so it only really worked if you had huge labeled datasets. Anyway, watching the ecosystem evolve over the intervening nine years, at some point in 2018-2019 it became obvious that we were going to be able to do things with much smaller amounts of data, because we'd figured out how to use unlabeled, unstructured data for pre-training. So, yeah, I'm trying to make that accessible to people.

Speaker 2:

That's really cool. So can you give us an example of one of the prompts that you might be using the system for?

Speaker 1:

As one example, we have a customer who works on behalf of consumers. You can think of it sort of like Mint: basically, they will pull your credit card history, your payments, how much you have due, things like that, and put it in a dashboard for you. The way they do that is through bank IVR systems. They'll call your bank on your behalf, go through the phone tree, through the automated system, and get the balance and everything. Then they take that transcript, put it through speech-to-text, and previously they were using GPT-3.5 to extract structured information; now they're using a fine-tuned model.
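
As a rough illustration of that extraction step (the field names and prompt here are invented, not the customer's actual schema), the call is essentially: feed the speech-to-text transcript to a model and ask for a fixed JSON structure, which works the same whether the model behind the endpoint is GPT-3.5 or a fine-tuned replacement:

```python
# Hypothetical sketch of extracting structured fields from an IVR transcript.
# The field names are invented for illustration; a real schema would differ.
import json
from openai import OpenAI

client = OpenAI()

def extract_balance_info(transcript: str, model: str = "gpt-3.5-turbo") -> dict:
    """Ask the model to return account fields as JSON and parse the result."""
    prompt = (
        "Extract the following fields from this bank IVR transcript and "
        "reply with JSON only: current_balance, amount_due, due_date.\n\n"
        + transcript
    )
    response = client.chat.completions.create(
        model=model,  # swap in a fine-tuned model ID once it's trained
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

info = extract_balance_info(
    "Your current balance is $1,250.00. Your minimum payment of $35 is due on June 3rd."
)
print(info)
```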

Speaker 2:

So in the case of a bank, is one of the reasons they might not want to use GPT-3.5 the data privacy side of this, or is it entirely just speed for them?

Speaker 1:

In this particular customer's case, they really care about speed and cost. The information they're sending, the transcript, isn't particularly sensitive. There's no personal information on there besides just kind of the numbers, and there's no way to link those numbers to an individual. So I don't think that was their primary motivation. We do have other customers where that is really important to them, and we provide a self-hosted option as well that can run within your own cloud account. For certain customers that's really important, because they don't want to send their data to us or OpenAI or anyone else.

Speaker 2:

Can you give us an example of one of those customers, or is that too private?

Speaker 1:

Yeah, not publicly. None that we've announced yet.

Speaker 2:

Are there any customers that you're particularly excited about?

Speaker 1:

I think the general class of application that I'm most excited about is actually one where we don't have any customers yet. In the same way that there are a lot of business models and use cases unlocked because OpenAI released these really incredible models that didn't exist a year ago, I think once people realize the power of these smaller fine-tuned models, and specifically the whole pipeline where you can label data with the larger, more powerful models and then, just in time, fine-tune a smaller model for a certain task, we'll see incredible use cases that no one's building right now, because people don't realize how cheap this is and is going to be. Things like: let's read every single comment on Reddit, or a certain selection of subreddits, every single day, and extract certain pieces of information or classify them and say, hey, these are the ones we care about. So I'm really excited for that customer, but I haven't seen them yet.

Speaker 2:

The Reddit classification customer?

Speaker 1:

Yeah. I want someone who, every single day, scrapes all the new information on the Internet and uses OpenPipe to classify all of it. We'll make that feasible for you.
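
That label-with-a-big-model, fine-tune-a-small-model pipeline looks roughly like the sketch below. Everything in it is hypothetical (the label set, the example comments, the file name); it only shows the shape of the loop, with the resulting JSONL handed to whatever fine-tuning backend you use:

```python
# Hypothetical sketch of the label-then-distill loop: a large "teacher" model labels
# raw comments, and the labeled pairs become a fine-tuning dataset for a small model.
import json
from openai import OpenAI

client = OpenAI()
LABELS = ["product_feedback", "support_request", "irrelevant"]  # invented label set

def label_comment(comment: str) -> str:
    """Use a large teacher model to classify one comment."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Classify this comment as one of {LABELS}. "
                       f"Reply with the label only.\n\n{comment}",
        }],
    )
    return response.choices[0].message.content.strip()

def build_training_file(comments: list[str], path: str = "distill_dataset.jsonl"):
    """Write (comment, teacher label) pairs in chat fine-tuning format."""
    with open(path, "w") as f:
        for comment in comments:
            record = {"messages": [
                {"role": "user", "content": comment},
                {"role": "assistant", "content": label_comment(comment)},
            ]}
            f.write(json.dumps(record) + "\n")

# The JSONL can then be used to fine-tune a smaller model (OpenAI's fine-tuning
# API, or an open-source trainer for a Llama/Mistral-class model).
build_training_file([
    "This update broke my login flow.",
    "Anyone watch the game last night?",
])
```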

Speaker 2:

That's really cool. What's the most interesting thing that a customer has done with your system so far?

Speaker 1:

We have one customer that, I think, is crazy, and honestly we didn't know this would work until they started and it did. They contacted us and said, we're interested in summarizing and extracting key information from government regulations. So I asked them, okay, give me a little more detail: what types of regulations, which governments? And they said, oh, all of them. They have this dataset they've already scraped of basically every national and state-level government worldwide, millions of pages of regulations, and there's certain information they want about each one, like what types of people does this affect, basically stuff they can then index and expose to their customers. And they want to classify all of it, in all of the languages. So I told them, that sounds ambitious.

Speaker 1:

We were using Llama 2, and we were also going to try Mistral, and I said, I don't actually know if it's going to work in all these languages, but let's just give it a try. So they generated some training data, they built a model, and it's actually working. They're running it right now. It's going to take about a month to get through all of them, but they're basically classifying every regulation in the world, which I just think is super cool, that that's a thing that's possible now.

Speaker 2:

Do you know what they're going to use those regulations for?

Speaker 1:

So this is a new area they're expanding into. Currently, they help multinational companies figure out if there are local events going on in a certain area that might affect their business that they should be aware of, and I think this is an expansion of that: telling them if there are new regulations that might affect them.

Speaker 2:

So you just came through YC, right? How was that experience for you?

Speaker 1:

It was fantastic. I'll actually tell you a story that I haven't told publicly before. We applied to YC after the batch had already started, because we started this company in July and the batch had started at the end of June. So we applied anyway, super late, literally after they'd already started, and YC was kind enough to give us an interview.

Speaker 1:

But at the time I was based in Seattle, and I'm starting this company with my co-founder, who's also my brother. We were both up in Seattle at the time. So we had an interview, we talked to Harj, the partner we worked with at YC, and we went through it. I felt like it was pretty good. And then he got back to us that evening and said, look, you guys are awesome, I think what you're doing is great, but the batch already started, you're not in the area, and this is our first batch we're doing back in person, so I really think it's probably not a good fit. But why don't you talk to us again in a few months, and if it's going well, maybe you can join the next batch, and that'll give you time to work out the logistics.

Speaker 1:

So I was pretty bummed about that. The rejection is never fun. But I talked to David, my brother, and we were like, okay, this seems really important, we really want to do this. So we just emailed him: well, what if we were in San Francisco, like, tomorrow? Could we join? And he didn't get back to us that night.

Speaker 1:

It felt like a long shot. I wasn't even sure, honestly, if that was the real rejection reason, because sometimes people say one thing and mean another. But he got back to us the next day and was like, yeah, I guess so. So we packed everything and flew down to SF. I think it was a couple of days after that by the time we arrived, because he took a day to respond. But we just started, and it was really formative. It was clearly the right decision, both doing YC and also being in San Francisco. Honestly, we wouldn't have done that without that impetus, and it was so critical to finding our early customers, just being around other people working in the same area. If you're building an AI dev tool, particularly a tool for other people working in AI, being in this area is really, really handy.

Speaker 2:

No, you've got to be in San Francisco for that. I think that's a really cool story, because it shows how flexible YC can be, which is not something I really understood until the last time I went through YC. I feel like a lot of other people must not realize that either: they're looking for people who are willing to bend the rules a little bit and not treat it like a formal process.

Speaker 1:

Yeah, yeah, so it worked out for us.

Speaker 2:

Yeah, so what has been the hardest point of your startup journey so far?

Speaker 1:

We've had a lot of hard technical decisions, and product decisions too, and probably trying to guess where the world is going to be three to six months from now is such a hard problem that it's the hardest one. I'll give you one example. As we were designing our APIs, we were trying to figure out: should we be purely OpenAI-compatible, a drop-in replacement for OpenAI, or should we do something a little more general that could also sit on top of Anthropic, or Bard if they ever release it, or some of these other third-party providers? That actually had a pretty big implication for the way we designed our schema, the way we designed our training process, everything: are we going to copy this or not? And it really came down to a guess: is OpenAI going to continue to be dominant? If they are, it makes sense to just maintain compatibility with them, even if we lose some customers who don't use them. So that was a really tough decision.

Speaker 1:

We decided to go with full OpenAI compatibility, and this was back in July, when, I don't know if you remember that time, but it did start feeling like Anthropic was getting more attention. I think, up to this point at least, that decision has really paid off for us.
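
In practice, full OpenAI compatibility means switching is roughly a configuration change for the customer: keep the existing OpenAI client and call shape, and point it at a different host and model. The base URL, key, and model ID below are placeholders, not real endpoints:

```python
# Sketch of what "drop-in OpenAI compatibility" means for the caller.
# The base_url, api_key, and model name below are placeholders, not real values.
from openai import OpenAI

# Before: requests go to OpenAI.
client = OpenAI()

# After: the same client and the same call shape, pointed at a compatible host
# serving the fine-tuned model. Only the base_url, api_key, and model change.
client = OpenAI(
    base_url="https://example-finetune-host.invalid/v1",  # placeholder URL
    api_key="MY_PROVIDER_KEY",                            # placeholder key
)

response = client.chat.completions.create(
    model="my-fine-tuned-mistral-7b",  # placeholder model ID
    messages=[{"role": "user", "content": "Summarize this transcript: ..."}],
)
print(response.choices[0].message.content)
```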

Speaker 1:

Most people are still on OpenAI, and most people are migrating to us from OpenAI, so I think that was the right call. Another one was when we first launched. The goal was always to do basically what we're doing now, but we started with just a prompt studio. That was the first product we built and the first one we had users for: you could import a bunch of your actual data, write a bunch of different prompts, and look at the completions and compare them side by side. Trying to decide how much effort to put into that, how much to support it, and then ultimately making the decision to fully jettison it and go all in on fine-tuning, those were tough choices. I think we made the right call when we dropped it, and in retrospect we probably should have just started in July with the fine-tuning stuff. But navigating that transition was pretty hard as well.

Speaker 2:

So you mentioned that getting your name out there at the beginning, getting the word out about your product, was difficult. How did you approach it?

Speaker 1:

We found that what's worked best for us so far is basically just writing interesting content and then getting it shared. We wrote a guide to fine-tuning Llama 2 using just open-source libraries and got over 1,000 upvotes on Hacker News for it. That was back at the beginning of August, and I still get people who say, hey, I remember you wrote that, it was super useful to me. So that made a big difference. And then within the YC community we've just been really active getting our name out. We've done a couple of posts there; for example, we did a comparison between Llama 2 and Mistral on our customer datasets, which was pretty well received. Just letting people know that we exist and that we have a solution if you're running into this problem.

Speaker 2:

So what do you think is the difference between Llama 2 and Mistral?

Speaker 1:

Mistral is just a stronger model. A lot of people started off saying, well, let's look at the areas where Llama 2 is stronger than Mistral, and honestly, at this point I just think Mistral is better across the board. It's really interesting. It feels like we've got these alien intelligences and we're just probing them and trying to figure them out. But it really does seem like there's this factor of general intelligence that these models exhibit, and one that's better at one thing is very often better at other things. The one exception is language coverage: if a model has only been trained on English, it doesn't matter how smart it is, it's not going to do that great on multilingual benchmarks. But other than that, there's such a strong correlation between performance on different benchmarks that it's actually pretty surprising to me.

Speaker 2:

Do you think that if AI does become sentient and decides to take over humanity, it's going to be because we spend so much time, you know, quizzing it and giving it SATs?

Speaker 1:

Like it's just going to hold that against us, you're saying? Oh dear, I really hope not. Yeah, I hope not.

Speaker 2:

What do you think is going to be the reason that it decides to take over the world? Because it's got to take over the world, right? I mean, Anthropic believes it's going to take over the world.

Speaker 1:

Yeah. I guess I have nuanced views on this, but I think if you have an intelligence that's far smarter than any human, I would expect that at some point it's going to be the one calling the shots. When I hear people have this argument about, well, what's the actual mechanism by which it's going to do it, that seems like such a silly argument. If you've got a colony of 50 gorillas and there's a human there, the human's going to figure something out. If you let the human live and keep making more humans, it doesn't matter that the gorillas are way stronger. Intelligence matters a lot, even if we don't know the exact mechanism. So, anyway, I hope it likes us. I think it will. I'm optimistic that way, but I guess we'll find out soon enough.

Speaker 2:

So you do actually believe that it'll be in control, and yet you're still working on something to improve AI, essentially.

Speaker 1:

I would start with the caveat that we're only working with really small models; I don't think Mistral 7B is going to take over the world. I think I would feel more conflicted working on the cutting-edge foundation models. That seems scary. At the same time, I do think there's a real competitive dynamic there, and I totally understand where Anthropic, for example, is coming from, where they're really worried about this but their position is sort of "better us than them," which sounds like a cliché and just a justification, but I think there's some validity there.

Speaker 2:

I mean, with Moore's law, don't you think that the models we're going to be working with in the open source in a couple of years are going to be just as big as GPT-4?

Speaker 1:

Yeah, I would hope so. But I'm not worried about GPT-4 taking over the world either. Or GPT-6.

Speaker 2:

I guess maybe it's eight years down the line that you guys have to worry about this stuff.

Speaker 1:

Yeah, I don't think it's going to be a situation where there are many hyper-intelligent AIs and we're all just playing with them, but they don't have a large amount of power and control. I don't see how you would get to that point. It seems like the first one that emerges is going to have the run of the field, is my guess at least.

Speaker 2:

Do you plan at all in your systems to start thinking about AI safety?

Speaker 1:

At our stage, not really, to be honest. Well, okay, AI safety is such an overloaded term. In some areas, yes, a lot; in other areas, no. In the sense we're talking about right now, where it's basically making sure the model has controls around it, not really, not at all. It really comes down to model size: it's just not there, it's just not that dangerous. On the other side, there are real concerns around things like, are we going to leak our training data at inference time? That kind of AI safety stuff, yeah, we think about a lot: how do we make sure that our customers are being thoughtful in the way they train and use our models?

Speaker 2:

That is an interesting question. How do you approach that?

Speaker 1:

First of all, right now we're very hands-on, so I'll have at least one call with basically everyone who's using our platform in production. A lot of it is just me. That's obviously not scalable to infinity, but right now it's a pretty good way for me to talk to customers and figure out what they're using the models for and what the risks are.

Speaker 1:

Particularly things like training data leaks, and also prompt injection if they have prompts that contain sensitive information. These are things I can help walk them through. Concretely, one really simple solution that we've done with a couple of clients is that we have an in-house PII redaction module which we use specifically on the training data. We'll make sure that, if you do have any private or sensitive information in your training data, we remove it. So even if the model does memorize its training data, it's not going to throw that out at inference.

Speaker 2:

So basically, you have one model that's learning what the training data explicitly was, and then it learns to identify that in the outputs of the model when that model is running?

Speaker 1:

Not exactly. It's a preprocessing step before you even run inference. It's just masking out any PII within the training data before you even use it for training.
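
As a toy stand-in for that preprocessing idea (not OpenPipe's actual module; the regex patterns and file names are just for illustration), you scrub each training example before it ever reaches the trainer, so the model never sees the raw identifiers:

```python
# Toy stand-in for a PII redaction pass over training data (not OpenPipe's actual
# module): mask obvious identifiers before the examples are used for fine-tuning.
import json
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def redact_dataset(in_path: str, out_path: str) -> None:
    """Scrub every message in a JSONL chat dataset before fine-tuning."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            record = json.loads(line)
            for message in record["messages"]:
                message["content"] = redact(message["content"])
            dst.write(json.dumps(record) + "\n")

# The fine-tuning job then reads the redacted file, so even a model that
# memorizes its training data has nothing sensitive to regurgitate.
redact_dataset("captured_requests.jsonl", "captured_requests.redacted.jsonl")
```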

Speaker 2:

Okay, yeah, that sounds very important to have. What's something that you're optimistic about in the future of AI?

Speaker 1:

I think there are so many jobs that can't exist right now that will exist, because we're just going to be so much more effective. There are so many products that can't exist right now that will exist, because we're going to be so effective. I really do think search is going to be so much better, retrieval is going to be so much better. It's going to be so routine to fine-tune one of these models and spend a few bucks on it. Read through my entire web browsing history since the beginning of time.

Speaker 1:

I remember there was some article I read about, I think, some Mongolian prince; I don't remember anything more than that. Go through everything and find it for me. You'll just be able to have a model that does that. It takes a couple of minutes, it costs a few cents. Anything around information retrieval and search is going to be way, way more effective. Also creation: these models can code on the fly, they can create special-purpose workflows for you on the fly. Anything that's repetitive at all, or that could be considered repetitive if you squint at it, is just all going to be completely gone.

Speaker 2:

As a startup, where would you say you guys are right now? What has been your top priority over the last few weeks?

Speaker 1:

Right now it's definitely about selling. It's about finding customers and getting them onboarded. We still have a lot of product work to do, of course, but we're at a place now where our product works pretty well. We've had people onboard successfully, and I hadn't even done my intro call with them before they had a model serving in production. That, for me, was a huge milestone. It happened last week, actually: someone, without ever talking to us, had a model deployed in production. So at this point our product works pretty well and we're selling. We have a really long roadmap as well, but getting more users is definitely the top priority.

Speaker 2:

So you mentioned before that you came down here with your brother. Did I hear that right? Yep, that's correct. So you're building this startup with family. What's that been like?

Speaker 1:

It's been really good, actually. My brother is also really technical. We've spent a lot of time together, grew up together obviously, and working together is a different relationship than we've had in the past, but it's been really fantastic. He's really smart, he works really hard, he's very personable, much more personable than I am. I didn't choose to work with him because he's my brother; I chose to work with him because he was the best co-founder I knew. So it's been great.

Speaker 2:

That's really awesome. So you guys have never worked on a project together before?

Speaker 1:

I mean, not to this level of intensity. We have a seven-year age gap; I'm older. I actually worked at YC previously and lived in the Bay Area for about seven years, and one summer while I was working at YC, he came down and lived in my garage while he was at university. He called it his summer internship and was just working on a bunch of startup ideas. I was still working full-time, so I was helping out part-time. So we got a little taste of it there, but definitely never to this intensity.

Speaker 2:

What were you working on at YC?

Speaker 1:

At YC I started as a software engineer, and then, about halfway through my time there, I ended up leading the Startup School team. That's basically all of the tech and product facing founders who haven't gone through YC yet.

Speaker 2:

What do you think about Startup School?

Speaker 1:

I think it's a fantastic resource. I used Startup School extensively, even after I left YC, to track our progress, and before working with my brother I was even using the co-founder matching product we had at Startup School, which was really cool. I met a lot of really great people. I think it makes a really big difference for people outside the Bay Area. If you're here, honestly, you don't really need Startup School. Maybe I shouldn't say that because it was my product, but you're just so immersed in the culture, and that's really what we were trying to replicate with Startup School: making it so that if you were outside the Bay Area, you could find other like-minded people and grow. It's almost about the worldview, both feeling like you can make a change in the world and then the tactical steps for how you do that. That's just not very evenly distributed outside of this area.

Speaker 2:

You said you were not in the Bay Area when you decided to come down. What has the transition been like for you?

Speaker 1:

We started the company in Seattle. I really love Seattle; I'm from Seattle originally. It's a beautiful city, and it's got a great technical ecosystem as well. So if you weren't going to be in the Bay Area, Seattle is not a bad place to be. But the transition has been really stark, honestly. We actually started on a completely separate product, an agent, back before everybody was doing agents, back in March of this year, and we were doing that from up there from March until July, which is when we got into YC and came down here.

Speaker 1:

There's a great meetup in Seattle, the AI Tinkerers meetup, which I'd recommend to anyone who's in the Seattle area. It's a fun group of people, and there are a few other startup meetups and things. But you come down here, and every single night there's something interesting happening. I shouldn't say all the best founders are here, but the concentration of really strong founders is so high, and you just hear about things earlier. You're at parties and someone will mention the rumors about whatever OpenAI is about to drop. I feel like there's a reason why the companies that are moving fast are here: that larger ecosystem gives you a bit of an advantage.

Speaker 2:

Did you go to the e/acc rave on Monday?

Speaker 1:

No. I thought about it, but it's not really my scene, I guess.

Speaker 2:

I mean, I wasn't expecting it to be a rave. I show up and I'm like, this is the strangest networking event I've ever been to. Yeah, people are dancing, not networking. What do you think about e/acc?

Speaker 1:

Yeah, it's interesting. I feel like I'm e/acc adjacent. There are a lot of people in that community who I love and respect; I'm a GitHub sponsor of Teknium, who's a really strong member of that community. At the smaller model sizes, I'm 100% with them: let's get it out as fast as possible. But when they take the next step of extrapolation and say there should be no restrictions on anything, let's just move as fast as possible, I don't think so. I think that's extrapolating too far. And I understand where they're coming from, because I'm generally very techno-optimist; I'm generally in favor of not restricting a lot of the things that we do restrict, regulation-wise. So I totally understand where they're coming from. I guess I just think that the extinction risk of this specific technology is high enough that I don't share their opinion on AI specifically.

Speaker 2:

What regulations do you think we should have?

Speaker 1:

Honestly, I think the part of this latest government thing where you have to report training runs over a certain size is a very reasonable place to start. I think we should know who's building these really big models and what they're planning on doing with them. And once we get to a point where it feels like these models do have the capability of acting independently and have reached roughly human parity or superhuman levels, then at that point I would say we should probably pause. I'm very in favor of a pause at that point: let's study these, let's use them in a controlled environment, and let's see what impact this has on us. I don't think we're there yet. I don't think any of the models that are out right now should be regulated, but I don't know that we're that far from that point either.

Speaker 2:

How will we know when we have models that are at superhuman or even human-level intelligence?

Speaker 1:

Yeah, that's a fair question. I will say that I really appreciate the way OpenAI has gone about this, compared to, say, Google, where they're just like, hey, it's too dangerous, we won't release it at all. I do think OpenAI's release of, say, GPT-4 helped us answer that question for GPT-4, because I don't think you could have answered it just by giving it a battery of tests or by red-teaming it internally. You probably did have to let people put it out in the world, see what they could do with it, try to build agents, try to make it autonomous, and see how far they could go. So I think that's the best answer we have right now: have people try really hard to use it to do things autonomously, and see if it's capable of doing that in practice.

Speaker 2:

So I think we have time for one more question. Do you have any advice for aspiring founders who might want to go to YC, or might just want to build a company?

Speaker 1:

Hmm. I think I have continually underestimated the returns to ambition. In my life there have been times when I've worked on smaller projects and times when I've worked on bigger projects. One of the first things I tried to build with AI following the GPT-4 release was a translation service for document translation, and it worked fine, and it was great, but I decided I didn't want to turn it into a big company. So I ended up selling the technology to an existing human-powered translation service, and they've got it live now as kind of their AI version.

Speaker 1:

But the work required to build that and then start selling it was not really less than the work I'm doing right now, and the upside for what I'm building now is just so much higher. The business is already more successful than that other one was, despite taking a similar amount of work. So I would just say: if there's a big problem out there, just go for it. Don't hold yourself back, don't feel like you're not the one to do it for whatever reason. Just find the biggest, most important thing that feels like it's missing from the world, and build it.

Speaker 2:

Well, thank you so much for coming on, Kyle, this was fantastic. Yeah, thanks for having me.
