Dr. Mirman's Accelerometer

Secrets of LLMOps with Noa Flaherty, CTO Vellum (YC W23)

December 20, 2023 Matthew Mirman

Discover the frontier of AI innovation as Noa Flaherty from Vellum joins us to unravel the complexities of Large Language Model Operations (LLM Ops), an exciting convergence of engineering and business strategy shaping the future of enterprise AI solutions.

Episode on Youtube

Accelerometer Podcast
Accelerometer Youtube

Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn

Speaker 1:

This is Noa from Vellum. Very excited to have Noa on the show today.

Speaker 2:

Yeah, thank you.

Speaker 1:

Yeah, Noa's been doing a lot of really cool stuff around LLMs. What do you call it?

Speaker 2:

Just the general category. You can call it LLM Ops, but we really think about it as building a platform that helps enterprises build LLM-powered features end-to-end and accelerate the pace at which they can do so.

Speaker 1:

That's really exciting. Yeah, this is like an entirely new field.

Speaker 2:

It is. I mean, I'd say the closest analogy is probably MLOps, which is not so much a new field, but we're taking a lot of the same principles and applying them to large language models, AI, and some of the new stuff that's been coming out.

Speaker 1:

Was anybody doing anything like this when you started?

Speaker 2:

Not really.

Speaker 2:

There were a couple of companies, I think one that had been through YC, but it was a very, very new field. I mean, if you think about when we started, it was like January, but really this whole craze started with ChatGPT being released the November beforehand, so that kind of wave started pretty close to when we did.

Speaker 1:

What got you into it?

Speaker 2:

It's a wild journey, I mean. Ultimately, I was at Dover, which was a recruiting tech company founded through YC in 2019. And in early 2020, they had access to GPT-3 in private beta, and so we were using GPT-3 back in those early days to do things like classify incoming candidate replies to emails as to whether or not they were interested in the job, and felt many of the pains of what it took to make these models reliable, test them, regression test them, tweak the prompts, prevent things from breaking once you go into production. And that was kind of the birth of it all: seeing some of that, and seeing these models get better and better, but the problems persist.

Speaker 1:

So what were you doing before Dover?

Speaker 2:

Before Dover, I was at a company called DataRobot in Boston, and I was on the MLOps platform team, building kind of similar tooling but for the world of machine learning and data science. So we were doing things like helping companies detect drift, track accuracy over time, latency, service metrics for your more traditional classifiers, multi-class classifiers, time series models, things like that.

Speaker 1:

So basically, you've been doing MLops since before MLops was a term.

Speaker 2:

Like on and off, yes.

Speaker 1:

Were you, like, the first person to have it on your resume?

Speaker 2:

I wouldn't go that far, no.

Speaker 1:

Did you look around at other resumes, like, I gotta add this one?

Speaker 2:

I'd say while I was at DataRobot, that's when MLOps really started to become an industry. You saw other really successful companies capitalize on that phrase, like Weights & Biases. There's been a lot of success these days in the world of MLOps, and it is a whole field of its own, for sure.

Speaker 1:

Did you always know that you wanted to be an entrepreneur?

Speaker 2:

Pretty much, I would say. In high school I always kind of straddled the balance between being technical and engineering, but also seeing the value side: it doesn't matter how cool a thing you build is if people don't find it useful, or if you can't sell it, or if you can't help people learn about it. I'd say that urge has kind of ebbed and flowed throughout my career, but it's always been there.

Speaker 1:

You could have executed on that urge at a company, right? At a company, at another startup.

Speaker 2:

Yeah, and I'd say I did, in different forms. Most of my career has been spent at startups, and even within the larger startups, on teams that kind of acted like startups within them. I'd say through Onshape, the first company I was at, through DataRobot, then through Dover. At each of those I was playing many roles, wearing lots of hats, and kind of having the training wheels on to start my own thing someday.

Speaker 1:

So you would describe not starting your own thing as having training wheels on.

Speaker 1:

That kind of implies that you need to, right? That you need to work somewhere else first.

Speaker 2:

That was my path, the path that I chose. It's certainly not the path that everyone takes, and a lot of people take other paths and have a lot of success with them. But yeah, my take was I wanted to learn the go-to-market motion, and I did that through Onshape.

Speaker 1:

Were you sitting there thinking, like, I'm here learning the go-to-market motion?

Speaker 2:

To some degree, yeah. I would kind of purposefully put myself in go-to-market situations just out of college. I didn't call it that at the time. I was like, huh, I should learn how to sell something. I should figure out what it takes to convert people on a marketing landing page.

Speaker 1:

Did you learn how to sell something?

Speaker 2:

I did, I'd say so.

Speaker 1:

What was that process like?

Speaker 2:

Mostly just becoming friends with sales folks, becoming friends with marketing folks and genuinely trying to be helpful. If a sales engineer was out sick, I'd volunteer to help do a sales demo and hop in and cover for them and do stuff like that yeah.

Speaker 1:

How many sales demos were you doing?

Speaker 2:

There was a good couple of months period where I was probably doing 10 a week, something like that, eight a week.

Speaker 1:

Did you find that more fulfilling than the engineering work?

Speaker 2:

I really enjoyed it. I think I love being able to do a bit of both. I think if I'm doing either one for too long, I get a little antsy. There is something very fulfilling about preempting what people's objections might be, deconstructing the path of the conversation, and showing them what will both prevent them from objecting to a solution, but also genuinely be useful and valuable to them. There is something fun about that for me.

Speaker 1:

Do you do a lot of that now?

Speaker 2:

Yeah, totally. Definitely.

Speaker 1:

Do you think that's necessary for a founder, to do a lot of this?

Speaker 2:

Once again, I think everyone probably has their own path and it depends who your co-founders are. For me personally, most of my time, I'd say, is spent on product and engineering. I learn what we should build and how we should build it by seeing, like, what causes prospects faces to light up when I'm doing them a demo of something. So I think, for me, I treat it as like a feedback loop where I can pitch something, test a message, float an idea that I'm thinking about prioritizing on our road map, see how people react to it and then take those learnings with me.

Speaker 1:

So you set up your feedback loop to be very in-person, like a very or maybe not in person, but very face-to-face.

Speaker 2:

Yeah, and these early stages that we're in, I think that's super useful.

Speaker 1:

Yeah, so that was an intentional decision?

Speaker 2:

Very intentional, yeah.

Speaker 2:

You know, there are different approaches. We could go wide distribution, try to do free sign-ups and everything, and be much more quantitative and data-driven. But who knows, that might be a strategy that we take someday.

Speaker 1:

Yeah.

Speaker 2:

But there is something very like impactful about trying to sell someone something and seeing how they respond and iterate based on that.

Speaker 1:

How did you find your co-founders?

Speaker 2:

The three of us worked at Dover together. We had worked together for like two and a half, three years. We were among some of the early employees there, and so we were both friends and coworkers, got along well, and were able to work remotely together for a long time.

Speaker 1:

Did you all leave your job like at the same time to start this, or is it like one by one?

Speaker 2:

No, no, no. Yeah, it's a fun story. I think, like, Akash and Sid... Sid was the impetus for a lot of it, where he, I think, had floated to me over the course of multiple months: hey, you know, have you thought about starting your own thing? He did a similar thing with Akash.

Speaker 1:

Yeah, to Akash: you're going to start your own thing. You're going to do it.

Speaker 2:

That's right.

Speaker 2:

And then, I think, you know, they came to the conclusion that they wanted to start something before I did. And then they approached me. I was like, wait a second. We work really well together. We have complementary skill sets. The timing feels right. Yeah, let's do it.

Speaker 1:

Has it always been an easy path for you guys?

Speaker 2:

No, definitely not.

Speaker 1:

What have been some of the hard moments?

Speaker 2:

Hard moments? Early on in YC. I guess, for those that don't know, we applied to YC with a very different idea and learned about two weeks in that we needed to do a hard pivot and change direction. So that was an emotionally challenging time. I think we were very fortunate in that we quickly landed on an idea that we were really excited by, which is now Vellum, and that it was the right idea for us. But yeah, that was a tough time for a few days there, for a few days.

Speaker 1:

Like it was the hardest two days.

Speaker 2:

Oh, it was... it was like two months. Days of existential crisis, like two and a half weeks of hardcore validation and worry, and then... But even that is a very short time period.

Speaker 1:

Yeah. What was your first idea?

Speaker 2:

You know, we come from product engineering backgrounds, and we saw time and time again that it was very hard for product people to be super data-driven and see what the common insights and patterns and feature requests and bugs were that kept coming up from not just any customer, but the customers they cared about. And so our goal was to help bring data to that mission by basically crawling Gong call transcripts, sales transcripts, customer success check-in call transcripts, customer support tickets, and using LLMs to surface the repeated patterns that were coming up from customers, and then slicing and dicing that by CRM data. So maybe there's a high-value contract coming up for renewal and you want to get ahead of that and see what kinds of things that customer has been asking for and prioritize that in your roadmap.

Speaker 1:

And what led you to pivot away from that?

Speaker 2:

Yeah, great question. One of the greatest learnings and pieces of feedback that we got through YC is this concept of sell before build: don't try to build the thing, go try to sell it before you spend too much time writing code for it. And we did that, and we couldn't sell the thing. There was a lot of casual interest, and people were like, yeah, I would use that. And then you'd ask, okay, here's the Stripe credit card link, and then, crickets. So we learned pretty quickly that although it's useful and interesting, it's not the commercially viable product that we wanted to build and sell.

Speaker 1:

So for your second thing, you went out, you went selling first?

Speaker 2:

We did. We did a lot of LinkedIn outbound, user discovery calls, pitching, understanding what the problems are, pitching our ideas for solutions, getting verbal commitments that they would pay for such a solution. And after we did that and organized our thoughts and spotted the patterns, we were like, cool, game time. Let's do this.

Speaker 1:

What was it like when you made your first sale for that new product?

Speaker 2:

I still remember it was one of maybe this wasn't literally our first, but one of the first was to a batch mate of ours during YC. It was for $750 a month. That customer ended up investing a little bit in us, which was cool. But Sid and I were sitting next to each other in front of our laptop on this call and I just say with a straight face, that'll be $750 a month. And he's like, okay. I'm like cool, I'll send you the Stripe link right after this. And he's like okay, close the call, close the computer, turn to Sid. I'm like holy shit, we did it. And we're all excited and clapping.

Speaker 1:

So it was a very glorious moment, yeah.

Speaker 1:

Are they still a customer?

Speaker 2:

They're not still a customer. It ended up being, in those very, very early days, you're going in lots of different directions with where the product might go. And we still keep in touch. Great, great friend and investor, very excited about what we're doing. But ultimately it was a different product vision than the path we ended up going down.

Speaker 1:

You say that like, you know, it's an ex: we still keep in touch.

Speaker 2:

That's right.

Speaker 2:

We ended it on good terms, that's right.

Speaker 1:

How did it feel letting them go as a customer?

Speaker 2:

Yeah, that is sad, right? When your product vision differs, changes routes, from the vision you sold some of your early customers. It's sad, but ultimately the right thing for the business.

Speaker 1:

How did you make that decision?

Speaker 2:

Pretty simply, I'd say. You know, in those days we were very, very keen on fine-tuning as a solution, and you know that's making a resurgence now and is yet again interesting, but at the time it ended up not being the right tool for the job in most of the cases we were exploring. And so for some of our early customers, we were helping them build these fine-tuned models, and we decided that just should not be a priority of ours right now.

Speaker 2:

And so the answer is kind of matter-of-fact, I guess: hey, you're happy to keep the model, here's the data that was used to train it, but unfortunately we won't be able to keep doing this for you, at least for some time.

Speaker 1:

You're the CTO. How was that decision made?

Speaker 2:

It's less of an engineering decision and more of a product and business strategy one. We effectively ran an experiment where, for two months, we went hard on trying to sell, primarily sell, not build, fine-tuning, found some really interesting use cases for it, and delivered on those. But ultimately, at least in our experience, what we're seeing is that a lot of the market is still trying to do the basics right. A lot of software companies are just trying to use the basics of AI to power pretty simple features, and there's a lot of lower-hanging fruit for them right now. There are companies out there that are going to be very successful focusing on that niche, more mature part of the market that does need fine-tuning today. So I think, with a lot of these things, it's kind of just periodic reevaluation of where the market's pulling us.

Speaker 1:

Yeah, so the market pulled you to being the CTO?

Speaker 2:

Oh no. I mean, you asked how the decision was made. Were you asking how it was decided that I became the CTO? Oh sorry, I misunderstood.

Speaker 1:

I requested it.

Speaker 2:

I thought you were asking how it was decided that we deprioritize fine-tuning.

Speaker 1:

I see.

Speaker 2:

So, Sid and I are co-CTOs.

Speaker 1:

Really?

Speaker 2:

Yes.

Speaker 2:

So Akash is our CEO, Sid and I are co-CTOs, but we each have our specialties and our areas of responsibility and focus.

Speaker 1:

So what would you say your specialty is?

Speaker 2:

Product engineering. Just really working closely with customers and prospects and translating what it is that they're trying to achieve.

Speaker 1:

What do you think engineering is, if not product engineering?

Speaker 2:

I mean everything's product engineering at the end of the day, I suppose, right.

Speaker 1:

I specialize in the engineering part of the engineering. I guess you can have internal like data analytics if you're an enterprise.

Speaker 2:

Yeah, ideally that's all serving some product that you're selling as a business, right? But yeah, that's all Sid's job, the internal analytics. I mean, Sid is like a wizard at back-end infrastructure, the ML side of things, so we complement each other quite well there.

Speaker 1:

What aspects of ML are you most excited for?

Speaker 2:

I mean, the easy answers, which are still true: I'm excited for these models to get better with their reasoning capabilities. I think multimodality is continuing to be really interesting in what it can do. So those are some of my gut reactions, despite them maybe being generic. But I think there is a lot there, right? Truly, as these models get better, their reasoning capabilities improve, and they're able to draw meaning from different mediums, that's going to open up more and more and more.

Speaker 1:

Is your platform already set up for multimodal models?

Speaker 2:

Not now.

Speaker 2:

No. You know, being the size that we are, being very focused is something that I believe in. Ultimately, we're keeping our ear very close to the ground and, again, going where the market pulls us, and if the market pulls us to be more compatible with multimodality, we'll do that.

Speaker 1:

Okay, so going a little bit more towards the machine learning side, do you keep up with the modern research?

Speaker 2:

Personally, not as much as I ideally would. I think there's a lot of great work being done. Of course, you know, I spend my mornings reading AI newsletters and keeping up with some of the highlights.

Speaker 1:

And shit is happening at a crazy pace right now.

Speaker 2:

Yeah, I mean honestly like I could use like some LLM agent. That's like scouring the internet for those things and surfacing highlights for me.

Speaker 2:

Yeah, maybe we dogfood Vellum and do that. I take a very pragmatic approach to a lot of this, which is: what are prospects asking about, what are our customers asking for, and what's preventing them from getting their use cases into production? That's where I spend the majority of my focus. But every once in a while we try to resurface and look around, and then look ahead, and then go back down to what our customers are asking.

Speaker 1:

You ever get like on the phone with a client and they're just like here's this new paper. Do you have this implemented?

Speaker 2:

Yeah, papers, new feature releases, competitor drops, all the things. I mean, that's one of the coolest parts about our job, I'd say: we get to interface with all of these AI companies who themselves are very smart and keeping up to date on things, and it's just a cool front-row seat on what's going on.

Speaker 1:

How do you deal, at least on a product side, with the amount of new things going on?

Speaker 2:

There are a couple of ways, and both the excitement and the challenge for us in our industry is being able to have good intuition around what's a passing fad versus what's here to stay and should be built into the platform. How do we decide? Again, it's primarily driven by customer demand and what use cases within companies it's unlocking.

Speaker 1:

What recent things have customers asked for, like, we've got to get this into our system?

Speaker 2:

So for a long time it was our workflows product, where we just heard over and over and over that people needed to string together multiple prompts, or chain prompts together, or invoke business logic between them. But the problem is that it's really hard to do that at a rapid rate of experimentation, as well as test and see at each step along the way what's going right and what's going wrong. So we just felt compelled to build a solution for that.
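
To make the chaining he describes concrete, here is a minimal sketch of the pattern; the helper names and prompts are hypothetical stand-ins, not Vellum's API. The point is only that several prompts are strung together with ordinary business logic between them, and every intermediate output is something you can inspect and regression-test.

```python
# Hypothetical sketch of chaining prompts with business logic between steps.
# call_llm() is a stand-in for whatever model client you use.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its completion."""
    raise NotImplementedError

def summarize_ticket(ticket_text: str) -> str:
    return call_llm(f"Summarize this support ticket in two sentences:\n\n{ticket_text}")

def classify_urgency(summary: str) -> str:
    return call_llm(f"Classify the urgency of this issue as LOW, MEDIUM, or HIGH:\n\n{summary}").strip()

def draft_reply(summary: str, urgency: str) -> str:
    return call_llm(f"Write a customer reply. Urgency: {urgency}. Issue summary:\n\n{summary}")

def handle_ticket(ticket_text: str) -> dict:
    summary = summarize_ticket(ticket_text)   # prompt 1
    urgency = classify_urgency(summary)       # prompt 2
    if urgency == "HIGH":                     # ordinary business logic between prompts
        return {"summary": summary, "urgency": urgency, "action": "escalate_to_human"}
    reply = draft_reply(summary, urgency)     # prompt 3
    # Each intermediate value can be logged and regression-tested on its own.
    return {"summary": summary, "urgency": urgency, "reply": reply}
```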

Speaker 1:

So you built a solution for that. Customers liked it?

Speaker 2:

So far, yeah, it's been great.

Speaker 1:

That's really cool.

Speaker 2:

Yeah, it was cool. We partnered with a couple of our existing customers early on, developed it with them, and made sure that it was solving their specific use cases. So, for example, one of them has built a wellness chatbot whose entire LLM backend is powered by Vellum workflows, and it's been really cool, just playing whack-a-mole, fixing every issue, adding every feature needed to make that end-to-end possible.

Speaker 1:

What's one of the craziest things that you can talk about that a customer has used your product to do.

Speaker 2:

Yeah, one that I like to talk about is generating rap lyrics.

Speaker 1:

Like, was it geohot?

Speaker 2:

No, no, I probably can't say names or anything. But yeah, it's actually one of our highest-traffic usages, generating original rap lyrics at scale for consumers.

Speaker 1:

I'm now imagining it's Kanye West or something.

Speaker 2:

Can't name names.

Speaker 1:

All of Kanye's lyrics are AI generated.

Speaker 2:

That wouldn't be the craziest thing in the world. So that's cool, and that's actually led to some interesting additional customers that are trying to generate rap lyrics in other languages.

Speaker 1:

You don't have any of these rap lyrics here with you.

Speaker 2:

I don't.

Speaker 1:

That's too bad. I was hoping for a rap.

Speaker 2:

That would be a great tee-up, though, if I just was also, coincidentally, a great rapper.

Speaker 1:

Of course. I mean, if it's AI generated, you know, you can memorize the raps. I'm sure you can do terribly even with the memorized raps.

Speaker 2:

That's where monitoring is important. See what outputs this LLM is generating in the rap.

Speaker 1:

Observability.

Speaker 2:

That's right.

Speaker 1:

Are the lyrics good?

Speaker 2:

It's been a little while since I've checked, but last I did, they were, and the usage of it is going up.

Speaker 1:

The good lyrics are driving a lot of usage for that company.

Speaker 2:

Yeah. But I mean, that's kind of, I guess, a good example of the fact that AI can be used for so many different things, whether it's generating raps or data extraction or summarization. I mean, yeah, we see a lot of it.

Speaker 1:

What uses did you expect for AI before this boom that did not pan out?

Speaker 2:

I mean, the ones that I expected are largely the ones that I still see a lot of. I guess, you know, your blog post generators, sales email generators, chatbot for X. I think as we started, those were already pretty hyped up, and they continue to be quite hyped up, and a lot of them are successful. I think making them really good is harder than people usually think. Building a prototype of one of them is quite easy.

Speaker 1:

What is something that excites you about the future of AI?

Speaker 2:

To me, it's like if every person or every profession had an expert assistant that can help them do that thing. That to me is just very exciting. Like, if I'm a plumber and I can have something on my phone where I can take pictures of things or describe what I'm seeing, and have it provide contextual advice for the task at hand, backed by the knowledge of thousands of plumbers before me, that's really cool. And I think you can apply that same concept to most professions and most industries, helping us all do better work, more efficiently, higher quality, but also gaining knowledge ourselves through that process.

Speaker 1:

Is this something that you're currently building?

Speaker 2:

I view our role as providing the tooling and platform to make it easier for other companies to do that reliably and more quickly than they would have otherwise. So we ourselves are not building towards that goal directly, but my hope is that we are enabling many more companies to do that than would have existed otherwise.

Speaker 1:

If that existed, assuming this perfect version of this pocket AI existed, would there be other companies?

Speaker 2:

I don't think one, like... I mean, you know, here's my prediction, I guess. We're getting into prediction mode. I think there will be, for a long time, many companies that are building verticalized versions of that for specific industries and use cases. I think we're a long way off before that gets consolidated into one company that's providing the AI assistant for everything.

Speaker 1:

What leads you to that conclusion?

Speaker 2:

I mean, there are some professions where maybe the knowledge is generic, but there are a lot of professions in which the knowledge is contextual, based on the data or experiences of people, that these models aren't trained on themselves.

Speaker 2:

So I think, you know, when I think about building a good AI app, and if you're a customer of Vellum you've probably heard this in a demo, it requires four big things, or four pillars of AI, as I would call it. One is data: data that's unique to you, your company, your use case, whatever it might be; things that the model might not have been trained on.

Speaker 2:

Second is experimentation: being able to quickly iterate on prompts or workflows or chains, or whatever it might be, or on the data itself that's being fed into these prompts in order to power the experience. Third is lifecycle management: being able to make changes with confidence and reliably, and observing and monitoring what's happening as a result of those changes. And then fourth is optimization: you take all that monitoring data, you take what you're seeing in the world, and feed it back into the first one, data, creating a feedback loop to make an AI application better and better over time. So, back to that first one, data. I guess the long-winded answer here is that any data that's unique to that profession or that scenario or use case is not going to come all from one big company.

Speaker 1:

What if there's one big company that owns all the professions?

Speaker 2:

What do you mean? One company?

Speaker 1:

That owns all the carpenters, the plumbers, everything. Exactly.

Speaker 2:

Yeah, I guess, if we're speculating way off into the future... yeah, sure. I mean, the future's been happening fast. I suppose companies like Google or others that have a wealth of this stuff could be well positioned for some of the more generic roles, but ultimately, every enterprise is going to have data that you can't find on the internet.

Speaker 1:

How do you keep this data private when you have these models that will just spew information?

Speaker 2:

Well, it's less about the model. I guess it's more about where the model is running, how it's using that data, what you're doing with the information that's being spewed out.

Speaker 1:

I mean, if a company exposes a model trained on its private data, you could essentially get that data from the company by just asking the model to generate.

Speaker 2:

Yeah, it's dependent on the use case. If that company exposes their model to the public internet and end users that are prompting it, then yes. But in many cases the output of that model will be used to serve the people within that company, or direct customers of that company, and not necessarily the public.

Speaker 1:

So the mode is companies that are using models internally on internal data?

Speaker 2:

Or using the output of that model to power user experiences, both directly and indirectly. The direct use cases, the explicit use cases, are things like your chatbots, where your end customers are chatting directly with the model. But my belief is that most usages of AI are going to be the more mundane stuff, like classifying emails in the Dover recruiting case, or transforming, like extracting structured data from unstructured text and using that to power some downstream piece of software.
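
As a rough illustration of that "mundane" pattern, assuming a generic, hypothetical `call_llm` helper rather than any specific vendor API: the model is asked for strictly structured JSON, which is then validated and handed off to ordinary downstream code.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion client; returns the model's text output."""
    raise NotImplementedError

EXTRACTION_PROMPT = """Extract the following fields from the candidate's email reply.
Return only JSON with keys: "interested" (true/false) and "availability" (string or null).

Email:
{email}
"""

def extract_reply_fields(email: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(email=email))
    data = json.loads(raw)  # in production you would validate the schema and retry on bad output
    if not isinstance(data.get("interested"), bool):
        raise ValueError("model did not return the expected structure")
    return data

# Downstream software then consumes plain structured data, not free text, e.g.:
#   if extract_reply_fields(email)["interested"]:
#       schedule_interview(candidate_id)   # hypothetical downstream step
```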

Speaker 1:

What goes into building some of these production grade apps?

Speaker 2:

It depends on the use case. There are a lot of different architectures depending on what it is that you're trying to do. If we take a chatbot, for example, because we see a lot of those chatbots, the end goal is: a user chats in and gets back an answer, or maybe some action is performed as a result. So what goes into that? Basically, you need knowledge, some knowledge base powered by the experts in this field. You need to be able to have the AI use that knowledge and produce answers. You need to be able to have the AI know when to try to elicit more information to be able to provide a useful recommendation. And so, in this instance, they used a series of prompts with retrieval-augmented generation mixed in, with some classifiers along the way.

Speaker 2:

So, for example, a user chats in. You have a prompt up front saying, hey, is this something that I, as an AI, am well equipped to answer, or should I either decline to answer or hand off to a human? So oftentimes there are classifiers up front to discern whether or not the AI should even be involved in the process. Then you move on to a following step where maybe you are pulling the relevant knowledge needed to answer that question. Oftentimes that means vector databases and semantic search, but sometimes that just means good old-fashioned DB queries based on user metadata: retrieve past conversations with the user, and do that deterministically with a Postgres query. Do semantic retrieval for the topics that we know about, include that in a prompt that has really good reasoning and can produce a factually correct answer, and then you might have a final prompt that applies the tone of voice that, say, a wellness expert might use.
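
Pulling that together, a minimal sketch of the pipeline shape he describes might look like the following; `call_llm`, `semantic_search`, `fetch_past_conversations`, and `escalate_to_human` are hypothetical placeholders, not the customer's actual implementation. The sequence is a gating classifier, retrieval (semantic plus a deterministic database query), a reasoning prompt, and a final tone-of-voice prompt.

```python
def call_llm(prompt: str) -> str:                                  # placeholder model client
    raise NotImplementedError

def semantic_search(query: str, top_k: int) -> str:                # placeholder vector-DB lookup
    raise NotImplementedError

def fetch_past_conversations(user_id: str, limit: int) -> str:     # placeholder Postgres query
    raise NotImplementedError

def escalate_to_human(message: str, user_id: str) -> str:          # placeholder human handoff
    raise NotImplementedError

def answer_chat(user_message: str, user_id: str) -> str:
    # 1. Gating classifier: should the AI answer this at all?
    verdict = call_llm(
        "Should an AI wellness assistant answer this message, or hand off to a human? "
        f"Reply HANDLE or HANDOFF.\n\nMessage: {user_message}"
    ).strip()
    if verdict == "HANDOFF":
        return escalate_to_human(user_message, user_id)

    # 2. Retrieval: semantic search over the knowledge base, plus a deterministic DB query.
    knowledge = semantic_search(user_message, top_k=5)
    history = fetch_past_conversations(user_id, limit=3)

    # 3. Reasoning prompt: produce a grounded draft answer, or ask for more information.
    draft = call_llm(
        "Using only the context below, answer the user's question. If information is "
        "missing, ask a clarifying question instead.\n\n"
        f"Context:\n{knowledge}\n\nPast conversations:\n{history}\n\nQuestion: {user_message}"
    )

    # 4. Tone pass: rewrite the draft in the voice of a wellness expert.
    return call_llm(f"Rewrite this answer in the warm, plain tone of a wellness coach:\n\n{draft}")
```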

Speaker 1:

So you would say that there isn't one solution that fits all in this case?

Speaker 2:

No, very rarely.

Speaker 1:

Okay. Is there a solution that you see being applied more than others?

Speaker 2:

I'd say probably the simplest form is what I see applied the most, which is retrieval into a single prompt. That's probably where most people start, and that is pretty easy to build. Yeah, you can prototype that and demo it quite well, but rarely is it sufficient for actual use cases in production and handling the edge cases that you're likely to run into.
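
For contrast, the "retrieval into a single prompt" baseline might look roughly like this, again with hypothetical `call_llm` and `semantic_search` stand-ins: one retrieval step feeding one prompt, with no gating, routing, or follow-up logic, which is why it demos well but struggles with edge cases.

```python
def call_llm(prompt: str) -> str:                           # placeholder model client
    raise NotImplementedError

def semantic_search(query: str, top_k: int) -> list[str]:   # placeholder retrieval step
    raise NotImplementedError

def naive_rag_answer(question: str) -> str:
    # Retrieve a handful of relevant chunks and stuff them into a single prompt.
    context = "\n\n".join(semantic_search(question, top_k=5))
    return call_llm(
        "Answer the question using only the context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
```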

Speaker 1:

Is there something that people could be doing better if they just spent a little bit more time, did a little bit of research?

Speaker 2:

You know, I might be biased here, myself being in the tooling layer, but I think you should probably not spend all your time doing a ton of research; you should probably try to find off-the-shelf tools that help you accelerate the process. I mean, you know, we're one of the players in the space, and I think there's a lot of ways we can help there, but not everybody needs to be a semantic search expert.

Speaker 2:

You know, use one of the many frameworks out there that help with retrieval, whether it's us or others, that have great defaults, and at least start there. Use that to build your own. And my advice to founders in the AI space is: focus on your customer and the value you're providing, not the cool, shiny tech that backs it.

Speaker 2:

Yeah. If you can start by proving that you can deliver the value, fast-track what it takes to get there. Then you can optimize the tech to make it even better later.

Speaker 1:

Is there something that you see a lot of people doing wrong at the moment, or a lot of misconceptions that you're trying to correct?

Speaker 2:

Well, trying to build everything yourself is one of them. Probably the second, and one interesting thing I've observed, I mentioned this at one of the AI meetups during the YC reunion weekend, is that a lot of people, especially in enterprises, emotionally feel this need to jump straight to fine-tuning a model. They feel like: I need a moat, I need something unique to me, I need a fine-tuned model, I have all this data. And it's more of this emotional desire to be unique, and less about their customer and accelerating the rate at which they deliver value to that customer.

Speaker 2:

So, it could very well be that a simple prompt with GPT-4 does better than a fine-tuned model, Certainly to begin with. What do I think people do wrong? I think they get too focused on the tech and not their customers.

Speaker 1:

What's some hack that you guys figured out during Y Combinator that just really helped you out?

Speaker 2:

I mean, some of these are cliche, maybe. One hack is to actually not just listen to, but act on, the feedback from our partners. I think a lot of us founders sometimes think that we're the exception, and the thing is, you need to be the exception in at least one or two areas, so it's easy to make the excuse that, hey, for this piece of advice, I'm finally the exception. But very rarely were we actually the exception, and we learned that the hard way. So, we talked about selling before building. That was one where we convinced ourselves for a long time: hey, for this product to work, people need to feel it, they need to use it, they need to... You know, we came up with all kinds of excuses, like it's the economy, all kinds of crazy stuff. But once we finally listened and just stopped building and started selling, we learned pretty quickly that it wasn't going to go anywhere. So one hack was just actually listening to and acting on the advice that we were being given.

Speaker 1:

How did you guys make the decision to apply to Y Combinator?

Speaker 2:

Great question. The company that we were at had gone through Y Combinator, and you know, we had heard great things from them, great things from people in our network. We'd been in the tech space for a while, so Y Combinator was always a thing, and so it kind of just felt right. It was a really good forcing function to start this thing. I think getting in, getting the funding, that was the juice we needed to really make the jump and become entrepreneurs.

Speaker 1:

So you hadn't quit your jobs before applying to Y Combinator?

Speaker 2:

No.

Speaker 1:

Oh, that's cool. Yeah, a lot of founders that I know quit their jobs before. Actually, you're the first person I've spoken to where, I guess, Y Combinator took that risk on it.

Speaker 2:

Yeah, I guess so.

Speaker 1:

Why do you think that Y Combinator chose you guys?

Speaker 2:

Great question. You know, I think we have a track record of working together for a long time. We've worked in tech for a while. We have diverse backgrounds in what we've done and what we've worked on. So it's worked out well so far for us, and I think they saw that.

Speaker 1:

Do you ever feel like they picked wrong?

Speaker 2:

No.

Speaker 1:

Like, I'm perfectly confident. I'm amazing at this, Like you know.

Speaker 2:

I think, yeah, we were meant to do this, and it feels cool to have been recognized for that and to have been given that chance, and I feel confident in that.

Speaker 1:

Yeah, what's the most gratifying thing about being a founder?

Speaker 2:

One thing that I personally love is having a company say, I will buy what you're selling, but I need these things. And then for us to just turn it around in a day or two, or a couple of days, and it's built, and then they pay us and we're like, wow, we did it. Having something that people want, but that they want even more of, and then us being able to deliver on that and them being happy about it, that's really, really gratifying for me.

Speaker 1:

How much would you say you work?

Speaker 2:

Quite a bit.

Speaker 1:

Yeah, yeah.

Speaker 2:

Well, I mean, everyone has their different spectrums. For me, you know, I probably wake up at 7, 7:30 in the morning, start answering customer Slack messages and teammate messages right away. I'm probably at my computer working by 9:30 or 10.

Speaker 1:

Literally, you're in bed, like, on your phone?

Speaker 2:

Like, I probably shouldn't respond to this until after I finish my coffee. Yeah. And I'm probably working, like actually at my computer working, until probably 8 or 9 p.m., and then a bit more Slack until 10:30 or 11, and then, yeah, repeat.

Speaker 1:

You got married recently, right?

Speaker 2:

I did get married, just in June, yeah.

Speaker 1:

Congrats.

Speaker 2:

Thank you.

Speaker 1:

Yeah, how does your wife feel about that schedule?

Speaker 2:

Yeah, she's very understanding and supportive. She actually lived with Sid and Akash and me during YC for four months, and man, we might not be alive without her and her support during that time. So, yeah, she's a former founder herself. She works at Toast, you know, as a technical lead there, the restaurant software. And so she gets it.

Speaker 1:

Yeah, I can imagine.

Speaker 2:

She helps, you know, make sure that I stay alive.

Speaker 1:

Yeah. Do you think you would have been able to do any of it without her?

Speaker 2:

No, definitely not, Definitely not.

Speaker 1:

That's really wonderful. Yeah, so is there anything that you want the world to know about YC, about being an entrepreneur?

Speaker 2:

Yeah. I'd say this AI hype wave is one of the most exciting waves I've seen in my career, and I think a lot of people feel similarly. I genuinely believe that this is as transformational as the internet.

Speaker 2:

Yeah, so it's an incredible time to build a company, to take a risk, to go to events and meet people and find who might be your co-founder, or think about who you've worked with, because chances are you already know your co-founder. And yeah, if there was ever a time to build a company, take a risk, it's probably now.

Speaker 1:

Well, thank you for coming on, Noa.

Speaker 2:

Yeah, thank you, Matthew.

Speaker 2:

Thanks for having me.

Speaker 1:

I hope I've inspired you.
