Dr. Mirman's Accelerometer

Patent Pioneering and AI Integration with Evan Zimmerman

January 26, 2024 Matthew Mirman
Speaker 2:

Hello and welcome to another episode of the Accelerometer. I'm Dr. Matthew Mirman, CEO and founder of Anarchy. Today we're going to be discussing AI and law with Evan Zimmerman. Evan is the founder and CEO of Edge, another YC startup from my batch, which is working on AI patent assistance. Legal AI is something close to home for me, as I grew up in a legal family. I personally couldn't imagine building a startup to sell to my parents and brothers, but I'm curious to know what it's like. To start, Evan, can you tell us when you think we're going to be seeing an AI lawyer in court?

Speaker 1:

First of all, thanks for having me. It's a funny question to ask, because when you think about court, it's very different from litigation. Litigation includes the whole process of getting up to court, and actually, in most countries, you settle before you get to court, so the main application of AI before you get to a court is going to be things like document preparation. One of the main areas where legal tech has been successful is e-discovery, so helping to make that more efficient. As for court itself, I think that's going to be quite a while off, but I do think there's an application for AI as a co-pilot, because one thing a lot of people don't realize is that you make a lot of decisions in the moment. Someone says something that you don't expect, and you have to decide, sometimes during a recess, what you're going to do in response. So you might see something there where you get a suggestion of a strategy from an LLM that knows about the case, where all the evidence is context. Context windows are going to have to get way bigger, but stuff like that. I will say, though, that the literal answer to your question, I actually don't think it's going to come as a product, but as a test case. So this is really cool.

Speaker 1:

For example, DocuSign, which is one of the original legal tech startups. People don't think of it that way, but it was, and there was this huge debate at the beginning about whether people would actually accept e-signatures. That was one of the biggest limitations they had in getting people to adopt their product, and so they set up mock trials. They literally paid retired judges and showed how you would defend an e-signature if it came up in court. And they were at risk, they could have lost, but they won, and that made people incredibly comfortable, because they were like, okay, here's how I defend this if it comes up, and that's kind of the end of the story. That's how you got e-signatures. I think you're going to see people doing similar things with LLMs and the law, where they do these mock trials, have judges ask them questions and see how they would defend the use of this in court. And then people go, oh, a judge would accept this. Now I feel good.

Speaker 2:

So are you going to do that with your own system? Set up mock trials for your patent assistant?

Speaker 1:

So we aren't doing mock trials. One idea I have thought about is a mock prosecution. So here's the way that a patent works. Just like litigation, patents are kind of a whole process. It starts with the disclosure, which is where the inventor describes what they're doing. Then you have the drafting process, which is when you write the patent and send it to the patent office. Now, sometimes the patent office approves it with no changes, but usually they don't, and that process of negotiating with the USPTO and dealing with their objections is called prosecution. And the last step, once you actually have the patent, is sort of the life of having it and potentially litigation.
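The four stages Evan walks through can be sketched as a simple ordered sequence. This is a hypothetical illustration for readers following along, not Edge's actual code; the names are made up here:

```python
from enum import Enum
from typing import Optional

class PatentStage(Enum):
    """The lifecycle stages as described above, in order."""
    DISCLOSURE = "inventor describes the invention"
    DRAFTING = "attorney writes the application for the USPTO"
    PROSECUTION = "negotiating the patent office's objections"
    GRANTED = "the life of the patent, including possible litigation"

def next_stage(stage: PatentStage) -> Optional[PatentStage]:
    """Return the stage that follows, or None once the patent is granted."""
    order = list(PatentStage)  # Enum iteration preserves definition order
    i = order.index(stage)
    return order[i + 1] if i + 1 < len(order) else None
```

For instance, `next_stage(PatentStage.DRAFTING)` yields `PatentStage.PROSECUTION`, mirroring the hand-off from the drafting attorney to the negotiation with the examiner.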

Speaker 1:

So for us, doing a mock trial doesn't really make much sense because in litigation the only thing that actually matters is your litigator's ability to prove that the invention reads on the claims.

Speaker 1:

And then there's like how good is the patent?

Speaker 1:

And that's just an examination process of the quality of the patent.

Speaker 1:

So to actually get there, what you would really want to do is a mock prosecution: saying, hey, I'm producing these patents, do I think they're good? And I think that might be a good idea. But in our case we have a patent attorney who's reviewing it very closely. Patent attorneys review patents way more intensely than litigators review briefs, because each one differs a lot more, whereas with a brief, a lot of times half of it is boilerplate. The thing that I look at more is that, because patents are eventually made public and we know which patents are written with us and which ones aren't, we can calculate, or we will be able to calculate in a year or two, your percentage likelihood of getting a patent with us versus without us. Also, people care about how many rejections they get and how long it took to get approved, and we'll be able to calculate the improvement there. For me, that's actually a much bigger deal, because the proof is in the pudding.

Speaker 2:

What stage exactly of the life cycle of getting a patent does Edge currently live in?

Speaker 1:

Edge currently lives in the first two steps, disclosure and drafting. So the very first thing that we do: typically, when you have an invention (if you're a very sophisticated company like Microsoft, you've done this part in-house), someone who's a lawyer sends an engineer or a scientist a form where they describe what they did, and then talks with them to make sure that form is filled out well. Then they take the contents of that form and literally write a document summarizing it. That's disclosure. Then there's drafting, where you take that disclosure and turn it into a patent, the thing you send to the USPTO. Those are the two things we help with today.

Speaker 2:

So basically, you've got an AI which is telling you things like, you're not being precise enough, or you're not giving us enough information.

Speaker 1:

Yeah, or sometimes the opposite, like you're being too precise. Too precise in the sense that, let's say, you wrote something in Python. You will have engineers who will say, I wrote this in Python, but being in Python is not material to the innovation. You're not patenting something in Python, you're patenting an algorithm. Another example you see is someone will describe what they did. Let's say there's a 10-step process. They'll describe all 10 steps, but sometimes only step eight is the invention, and so it'll be like, do you really need to describe all these steps? Which of these things are actually novel? And so on.

Speaker 2:

How did you get into it?

Speaker 1:

Yeah, so I got into it because, fundamentally, patents have been part of my entire career. I've been involved with starting and investing in startups, and every single one that worked actually had intellectual property. I had a VC firm and we sold a company to Walmart for $200 million, and it had a patent, a software patent, which is very rare; they're actually very hard to get. A consumer product company that I started, pretty much all the profits came from patent licensing, not even the product itself. And then, of course, my dad, by the way, is an inventor. He has over 30 patents.

Speaker 1:

So I've seen this throughout my whole life, throughout my whole career, and I saw that this market is way more important than most people realize for the success of a business. One thing that I also saw, and I'm sure this relates to something you're going to ask later, is that even though patent attorneys actually have to be engineers, so you would think they're more forward-thinking, and they are, they actually have some of the worst tooling of anyone in the legal industry. And I thought, oh, this is kind of a really cool opportunity to make a difference. So that's something that's just incredibly cool. And also it's a big challenge, by the way, because unlike a lot of legal tech, which is just text generation, this is the most multimodal area of the law.

Speaker 2:

So you mentioned that the majority of startups that succeeded, or at least the ones you'd worked with that succeeded, had patents, which raises the question: do you have patents for your current patent startup?

Speaker 1:

Oh, believe me, we are working on it. Right now we're just moving so fast that we haven't had time to come up for air, but we are going to, for sure, file for at least one patent, and you can guess what tool we're going to use to create that patent.

Speaker 2:

Yeah, I'm hoping that you use your own tool, dogfood a little bit.

Speaker 1:

Oh yeah. And also we're thinking of making a blog series, since we have a newsletter, like, follow Edge along to see what it's like to use it to file a patent. Or maybe a video series, a YouTube series. Because the thing is, when you're talking about confidentiality for patents, it only matters until you file the patent. Then you can do whatever you want. Your patent is not destroyed by disclosing it before the USPTO publishes it. People don't, because there's not a great business reason to do it. In our case there is, because there's marketing value in it.

Speaker 2:

So you mean that there's a business case, a marketing case, in disclosing your patents?

Speaker 1:

Absolutely, and that's actually the thing about patents: marketing is part of their function. People think of them as what they are, which of course is a huge part of the value of the patent. But patents also serve a very important marketing function. First of all, they look cool, those diagrams. People literally print posters of them. But separate from that, it signals to people that you're innovative. It signals to investors some level of defensibility. People don't normally think about this as marketing, but it actually increases your value to an acquirer, because, think about it, bigger companies don't like messing with IP. Sometimes they may make a business decision to say, come and get me, but that's less common than you think. They're actually more respectful of IP than smaller companies, because they know that if something happens, they look worse in front of a jury, and so a big judgment is more likely.

Speaker 2:

So if you're a small company thinking about getting a patent, what would you advise?

Speaker 1:

I would advise a few things.

Speaker 1:

So this is probably going to be surprising from a guy with an AI patent startup, but I definitely think you should get an attorney.

Speaker 1:

Our job is to make the process way more efficient, and the reason why you should talk to an attorney is because, even though I obviously love patents, they are not the right answer for everything. There are multiple types of intellectual property, including trade secrets, and one of those may be the best solution for you. So, first of all, decide whether it makes sense to get a patent. A lot of attorneys will even tell you that on a first call; they won't even necessarily charge you. The next thing I would say is that, once you've actually decided, engage with an attorney. Definitely ask if they're using Edge and, if they're not, tell them that you want them to use it, because it will make them more efficient. Understand the pricing and make sure that you understand what the process is, because some people get unhappy when they don't understand how prosecution works, for example, and then they get really upset when they're like, oh my God, I got an office action, and they have to spend more money responding to it, and then what they'll do is abandon the application.

Speaker 2:

So you guys are primarily working with the lawyers' side, right? Yeah, and helping with the disclosure.

Speaker 1:

That's right. So the way that we think about this is we think of a patent as a bundle of features, and we basically have different bundles that you can buy; some bundles the companies might want to buy, but most of them the lawyers will want to buy. I do want to make one very important distinction, though, which is that big companies have in-house lawyers. So if you're an Oracle or a Thermo Fisher, you'll have lawyers who are doing the same work that outside counsel do, and you'll want to buy the whole package, whereas if you're a small startup in the biotech space and you only expect to file a few patents a year, you're going to buy our slimmed-down in-house package that just helps you with the disclosure and the management. But that's right, we are mostly focused on the attorney.

Speaker 2:

So can you provide an example of what one of the interactions with your system would look like?

Speaker 1:

Yeah. So I'll go over two examples. First, say it's the disclosure. The way that it works is, it could be that your attorney is paying for it and sends you a disclosure form, or your in-house counsel just has it, and you have a portal you can log into anytime you have an idea. The disclosure form is literally a form. It just asks you to describe the invention and what's novel; some people want to customize it. And so basically what you do is you talk to it, and then the assistant will actually ask you very specific questions to make it better. And, by the way, if you think a question was bad, you can just delete it, so you don't have to answer it.

Speaker 1:

Then, instead of you having to write the first draft of the disclosure (remember, the disclosure is a summary of the Q&A), we write the first draft for you and you can edit it. The reason we do that is because we interviewed a lot of inventors, and the biggest complaint that we got was, I hate having to spend so much time writing. A lot of people become engineers because they don't like writing. Then you have something in a portal that can be reviewed by your counsel. The next thing, I would say, is, okay, you actually have the disclosure. So how does that work? The disclosure becomes the brains for our assistant. It's just like a human, right? If you said, patent something, my first question would be, well, patent what? So you give it the disclosure, so it has its brains.
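The disclosure flow Evan describes (a form, follow-up questions, deletable questions, then a generated first draft) could be sketched roughly like this. The class and method names are hypothetical, and in practice the final drafting step would hand the assembled prompt to an LLM:

```python
from dataclasses import dataclass, field

@dataclass
class DisclosureSession:
    """Collects the inventor's Q&A; bad follow-up questions can be deleted."""
    answers: dict = field(default_factory=dict)

    def answer(self, question: str, text: str) -> None:
        """Record the inventor's answer to a follow-up question."""
        self.answers[question] = text

    def delete_question(self, question: str) -> None:
        """The inventor skips a question entirely; it leaves the record."""
        self.answers.pop(question, None)

    def draft_prompt(self) -> str:
        """Assemble the remaining Q&A into a prompt for a first-draft summary."""
        parts = ["Write a first-draft invention disclosure from this Q&A:"]
        for q, a in self.answers.items():
            parts.append(f"Q: {q}\nA: {a}")
        return "\n\n".join(parts)
```

The point of the sketch is the ordering: the Q&A is collected and pruned first, and only the surviving answers feed the draft, so a deleted question never shapes the disclosure.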

Speaker 1:

Then, every patent attorney is different, but most of them follow the following order. They'll write the claims first and make sure that the client is happy with the claims. Then they'll go to a critical part of the spec. The way patents work is you have claims, which are the thing that you're actually protecting, and then most of the bulk of the patent actually explains what the claims are. It's called the enablement, because it enables people to understand the claims. So after the claims, they will usually do the drawings. Some people will make a first draft of the drawings first, to help them write the claims, but even then they'll make a final version of the drawings right after the claims. From there they'll usually go on to the big writing part, and then the easy parts, like the summary and the background; that's usually last. The main way this is done today is, frankly, in Microsoft Word and Visio.

Speaker 1:

The way it works with us is way, way easier. First of all, it's not this big bulky thing that's meant to serve everyone. It's specifically made for patents. We have made it really easy to reference things so that you don't make mistakes. You can ask the assistant to give you first drafts of the things that you actually want.

Speaker 1:

Now, one thing that we don't do is the press-a-button, get-a-patent thing, because, as you can tell, there's a review step at each point and there are dependencies. We want to make sure you're happy before we draft the whole thing for you; otherwise you have to review 30 pages and you're like, well, I hated the claims the whole time. We don't want to waste your time, so instead we do it piece by piece. For the claims, we will draft you a first draft. Or, if you want us to make something more specific, you'll pick claim 13 and say, I want five dependent claims to XYZ, and it'll do it for you.

Speaker 1:

Moving on to the figures, we have a really easy figure editor, and soon we're going to be rolling out AI tools that will help you make the figures without needing a draftsman. And writing explanations of the figures, that's incredibly tedious: press a button, get a first draft, and on and on. That's how we think about it. We're giving you drafts and recommendations and edits based on your direction. You can write what you want to write and you can edit what you want to edit. You're still in control as the attorney.

Speaker 2:

What's been the hardest part of building this so far?

Speaker 1:

It is not what most people think it is. Most people think it's something with a vector database, or, oh, there are so many patents, how did you wrangle that database? The hardest part by far, and I'm going to say this, by the way, to all of the people who are working on AI application startups, is user design. People forget, because AI is so freaking magical. They're like, oh, the AI will handle everything for me. But actually the design is critical, because think about a person: garbage in, garbage out. If you give someone bad directions, you'll get a bad result.

Speaker 1:

Artificial intelligence is no different, and so we think constantly about how to design it to, by default, elicit good context from people with as little effort as possible, because if it takes too much effort to give context, then you haven't actually saved any time relative to just writing it yourself. On top of that, we want it to feel like a pleasurable experience. We want every aspect to feel as good as the AI feels, and one thing that I tell some of our customers is, I want you to enjoy using our software. I want there to be things that delight you, like the first time you used an iPhone. That's the experience I want people to have, because that shows that we care, and that's going to give people confidence to give us the benefit of the doubt if something happens.

Speaker 1:

Like a bug. And again, that actually makes the product easier to use. We've had people say, oh man, this is better than GPT-4; multiple customers have reached out to us to say that. And yeah, there are things we do on the back end to optimize it, but I don't want to bother them with that. The thing that I would say, if they asked how we did it, is that one of the most important things we did is design it to help you give the assistant more help. That's by far the most important thing.

Speaker 2:

Design it to get the user asking more questions, or giving better answers?

Speaker 2:

So when I was starting my startup journey, I was actually looking at building for lawyers, because that's a market I have access to. But what was really surprising to me was that I was speaking to my mother, and she had all these things that she wanted AI to be able to do, and you'd say, okay, well, let's build that. And then you'd say, okay, how do we get to other people who would want this? How do you actually find evangelists for your thing? Because lawyers, unlike startups, are very often in an adversarial position, and I found that a lot of the lawyers I was talking to didn't have good ideas about how to get to other lawyers.

Speaker 1:

How do we handle that? Oh man, that's a great question. And, by the way, one of the things that you said is really true: a lot of the legal tech founders I know have a family member who has a law firm or is a lawyer, and that's actually their first customer. In my case that's actually not the case. I do have lawyers in my family, but none of them are in patents. The closest that I got was talking to the patent attorney my dad has worked with, who loves the product.

Speaker 1:

I think that some of it is what you do in every industry, right? You put out great content. You engage with other attorneys; lawyer influencers are absolutely a thing. But this is one of the things that's cool about patents: it's not as adversarial, and even in the adversarial context, a lot of times they know each other, because your opposing counsel today could be your co-counsel tomorrow. I think the reason why you're seeing what you're seeing is actually that one of the reasons lawyers are bad buyers of technology is that their operating expenses are very low and very limited. If you think about it, there aren't many things that you need to be a lawyer. It's not like starting a construction company; think of how many things you have to buy just to get started. You get used to a supply chain, and we don't think of buying software as a supply chain, but it is. In their case, you could actually get by with a typewriter for certain jobs. It would be annoying, but you actually could.

Speaker 1:

They don't buy technology as often, so they aren't as used to a procurement process, and not just for technology, I should say, for really anything. They aren't talking to each other a lot about the tools. It's not that they don't know how to reach other lawyers, it's that they're not really used to talking with each other about this. And that's one of the things about AI: it's actually really changing the game there, because so many lawyers are looking at this in a way that they have never looked at technology before. They're talking to each other the way that you would see buyers talking to each other in, say, the marketing industry. And because they're doing it themselves, it's not like they're asking people in other industries, how do you procure software? They're going to come up with their own process. It's going to be different. I'll tell you.

Speaker 1:

One big difference that we've seen is ethics councils. A lot of law firms, especially ones that get bigger, have ethics councils that tell them what they should or shouldn't do, and they use this for technology too. These ethics councils are not at every firm, but at some firms they're part of the process, and so we've actually had to do a customer education journey, and so far we've been pretty successful at getting people to understand the differences. But that's not something you're going to see elsewhere. Even pharma, they have ethics councils, but not for the procurement process; the ethics councils in pharma are part of the study design process.

Speaker 1:

So it's an exciting time in legal tech, because it is a huge industry. Globally it's like a trillion-dollar industry, $300 billion in the US alone, and they're building a procurement process they never had before. And I think this ends with them buying more, even outside of AI. Technology is kind of a little bit like a drug: once you get used to it, you kind of can't stop. It's like cars in that way too, right, or watches. So I think once they take AI as their gateway drug, you're actually going to see them buy a lot more technology, because they're going to realize that they were underestimating the value they were going to get from it. But it's going to be a unique procurement process for sure.

Speaker 2:

What do these ethics councils think about AI safety?

Speaker 1:

Oh, AI safety. They don't ask about safety at all, because it just hasn't come up.

Speaker 2:

This conversation hasn't entered the lawyer realm.

Speaker 1:

But it's also because they think of themselves as a guild, right? And so there's sort of a presumption that if you're a lawyer, you're reasonably trustworthy. That's why the safety conversation hasn't entered. There are tons of lawyers thinking about AI safety for other people using AI, things like misinformation and fraud. They're more worried about how good it is. There was a horror story early on, this was GPT-3, so way earlier back, where a guy used it for a brief, and there was a case cited that wasn't even real, and he ended up getting disbarred. So you get questions about quality. The other big question you get is privacy, because ChatGPT has been a real blessing and a curse. Mostly it's been a blessing, because it popularized all this, but it's been a curse in that ChatGPT is a consumer product, and a lot of enterprise users used it expecting the same level of protection, which you obviously don't have in any consumer product. I mean, think about Google: you have way less privacy with Google than you do with, say, Algolia. So there were examples, and Samsung is the most notorious one, the one I've been asked about the most: they uploaded code trying to get help from ChatGPT, and eventually they found the code could be reproduced because it was used in training data. I've been asked about this horror story like five times now, and part of our customer education is saying, listen, ChatGPT is a consumer product. It's the same thing with Facebook: don't post something on Facebook you wouldn't want someone to see. And then I ask them, do you feel the same way about Dropbox or whatever cloud service you're using? They're like, I don't. And I'm like, okay, why don't you? And they're like, well, it's sold to enterprise users, and they have a certain level of understanding about the privacy promises they're getting.
And we make a lot of commitments to our customers on privacy, a lot of commitments that, by the way, some of our competitors are not willing to make. And because we do that, they're like, okay, I get what you're doing. We've had people talk about their ethics councils making exceptions for us, even while their default is still no in some of these situations. And I will say one last thing for people who are selling not just to lawyers but to any of these enterprise customers in conservative industries: there is a lot of customer education you have to do, because the field has moved so fast.

Speaker 1:

I had to explain to one client what a context window was, to explain part of what we do for our customers' privacy protection, and they were like, what's a context window? And these are the people who are supposed to be making the decision on what's allowed, and they didn't know what a context window was. I'm saying this not as in, oh man, these guys were so stupid. I'm saying, how fast is the field moving that the people making the decisions don't know what is now basic terminology? The reason is that they made their decision four months ago, when this wasn't part of the major terminology, when it wasn't even on their radar. And you have to have a lot of sensitivity to people who are making these very high-stakes decisions that are supposed to be durable and stand the test of time, in a field where there are literal breakthroughs every month. That's a very hard position to be in.

Speaker 2:

So I love that you bring up privacy. Given the recent executive order, I'm wondering whether you guys use your own LLMs or open-source LLMs, and how you guys think about privacy.

Speaker 1:

Yeah. So we have some things that are in-house, built on open-source stuff, and some things that we use providers for. The way that we think about privacy is that everyone is using some level of provider in some way. Even if you're using, say, Llama 2, you're still hosting it on Azure, right? So I think about it more in terms of what we can totally control, and we are very rigorous about what we do with encryption and sandboxing. First of all, we don't train on any customer data, and we make that commitment. But beyond that, I mentioned context windows; the example there was, I was explaining to our customers that we don't even share the same customer's information across context windows. We only carry it across context windows if it's in the same session. So if, while working on a patent, we decide to open up another agent and it needs context, then yeah, that will have access to it. But let's say you're a patent attorney and you have five matters you're working on for clients: client one's patent will know nothing about client two through five's patents, because we just don't use that context at all. We also think a lot about qualifying vendors, because, again, everyone's using something. If I'm using Asana for task management (by the way, we use Notion), it's actually holding client confidential information; if you have bug tickets, that tells you something about who your customers are. So we think about that kind of stuff. We think about what the best practices are and how we qualify vendors. And the last thing I will say on that is, I'm not sure if you've ever read the cybersecurity reports from some of the big companies, but every year these guys all come out with reports: Microsoft, IBM, McAfee used to, but they got acquired.
They all say, in every one of them, that a majority of breaches actually happen because of user error, because of operational problems, and so we think about that kind of stuff too: we use password managers and so on. One of the things we're actually working on now is a document that we can literally send to these ethics councils, explaining key terms, explaining how we interact with them and our security posture, because we want people to feel comfortable about that aspect. We're super serious about it.
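The isolation rule Evan describes (context is shared within a session, never across a customer's other matters) might look something like this in outline. The names here are illustrative, not Edge's actual implementation:

```python
from collections import defaultdict

class SessionContextStore:
    """Context is keyed by (client, session): an agent opened within a
    session sees that session's material, but nothing belonging to any
    other client or matter."""
    def __init__(self) -> None:
        self._contexts = defaultdict(list)

    def add(self, client_id: str, session_id: str, text: str) -> None:
        """Attach material to one client's one session."""
        self._contexts[(client_id, session_id)].append(text)

    def context_for(self, client_id: str, session_id: str) -> list:
        # Only this session's material is ever returned; other clients'
        # keys are simply never consulted.
        return list(self._contexts[(client_id, session_id)])
```

The design choice is that isolation falls out of the key structure itself: there is no code path that reads across `(client, session)` boundaries, rather than a filter applied after the fact.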

Speaker 1:

And the reason why I'm saying this, to bring it back to your direct question, is that I think we sometimes have these red herrings where we think X is inherently more secure than Y for a certain reason. Most people would assume that an open-source model like Llama that you fine-tune is more secure, because it's on your server and you control it. And I would say, well, that aspect of it is more secure. But what if your security is worse than Anthropic's? Then you've actually introduced a risk. What if you're doing some tuning and you have a junior engineer who made a mistake, and now you're training on CCI, client confidential information? Now you've introduced a risk. And security, as you know, is something that constantly evolves, so when new issues come up, you have to reassess.

Speaker 1:

You then have to go back and say, okay, am I still happy with our posture? Are there new questions that I need to reach out to my vendors and ask? Are there new security practices that we need to build? Are there new engineering tools that we have to fix, or bugs we have to patch? Maybe there was some zero-day that came out and now we have to update a security tool we were using, and so on. So I think all of that matters more for privacy than provider versus open source.

Speaker 2:

What's something that you're worried about in the future of AI?

Speaker 1:

You mean for the world or for our business? Either. I mean, in terms of the world, I published this article like two years ago that I called Dumb AI. I think that the Nick Bostrom Superintelligence thing is complete hogwash. I think it's really a shame that so many of our brightest minds have been taken in by the trance of this book, which I think is completely insane. Have you read it? Oh yeah, I read it. And there was another book that came out that same year, and I think that's one of the reasons Superintelligence had such a big impact. The other book didn't reach the public consciousness as much, but everyone who read Superintelligence also read it, and it was even more extreme. It was the one that introduced the world to the paperclip thing. Which book was that? I've got to remember. I don't remember off the top of my head, but I'll find it and send it to you.

Speaker 2:

I've been making my way through Superintelligence. It's a difficult read. It's difficult because it's stupid, yeah.

Speaker 1:

He talks about this idea of oracles and genies, and I'm like, this is such a dumb way to think about it.

Speaker 2:

It makes a lot of assumptions. Every line is and they could be doing this, so they will be doing that, and thus we should be worried about it.

Speaker 1:

Yeah, and for me, the thing that's even worse is this: look, in scenario planning you do make assumptions, but you also assess the likelihood of those assumptions. And those assumptions are usually not magic fairy dust. It's not, oh, what if we had an omniscient being? That is not a thing that is possible. If we have an omniscient being, I don't care about anything else in this book. So my theory behind Dumb AI is that I actually worry more about testing procedures. It's interesting that you ask about safety.

Speaker 1:

My biggest fear with AI actually is a safety issue. It's that I think we don't have adequate use-case-based testing, and so I worry about us deploying systems or agents in critical systems where we have not fully understood some of the applications or some of the risks or some of the problems, because we just didn't think about it, and it becomes too hard to fix because explainability is so early. A lot of my Dumb AI fears will go away when explainability gets better, but they'll still kind of be there. The other thing connected to the Dumb AI idea is that I think there's a lot more correlation of risks than people consider. It's like the difference between meiosis and mitosis: meiosis is inefficient, but it reduces risk by creating more heterogeneity, so you can't have one random flaw come along and strike everything out.

Speaker 2:

Well, thank you so much for coming on. It's been great having you. Thank you so much for having me. Thank you.
