Dr. Mirman's Accelerometer

Unlocking Custom AI with Nickel’s Oscar Beijbom

Matthew Mirman

Unlock the secrets of creating powerful, custom AI models with minimal data! Join us as we sit down with Oscar Beijbom, the visionary founder and CTO of Nickel, to discuss how his company is revolutionizing image and text classification. You'll learn how Nickel's innovative approach allows businesses to train AI on their own data effectively, even with just a handful of examples. Discover how foundation models and state-of-the-art techniques can be leveraged to tailor AI solutions that precisely fit unique business needs, such as monitoring the health of indoor garden plants through smart classification.

Oscar also demystifies the common misconception that vast datasets are essential to get started with AI. He explains how Nickel utilizes pre-trained models and parallel training to find the best fit for specific tasks, ensuring high-quality results with minimal input. This episode is a treasure trove of insights into the future of custom AI solutions and the critical role of high-quality data, drawing fascinating parallels with the successes of large language models and diffusion models. Don't miss this enlightening discussion that promises to broaden your understanding of AI's potential in transforming industries.

Accelerometer Podcast
Accelerometer Youtube

Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn

Speaker 1:

I just don't think one-size-fits-all models are going to work. If you start a new company, you want to do everything manually. It's okay, right? Just roll up your sleeves and do it.

Speaker 2:

Hello and welcome to another episode of the Accelerometer. I'm Dr. Matthew Mirman, CEO and founder of Anarchy. Today we're going to be discussing narrow AI and open-world AI with Oscar Beijbom, founder and CTO of Nickel, a custom classification company. Can you tell us a little more about your work at Nickel?

Speaker 1:

Right. So at Nickel, we take what is arguably one of the simplest and most straightforward, but also one of the broadest, use cases of AI and machine learning, which is classification of images and text, and we put the simplest possible API wrapper around it. Essentially, all we ask you to do is provide some example inputs and outputs. We take care of training, finding the best model for your problem, and deploying it. We also help you annotate your data, iterate on it, and catch any issues as you go into production.
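As a rough illustration of the workflow Oscar describes, where the caller's only job is to supply example inputs and outputs, here is a self-contained toy in Python using scikit-learn. This is a sketch of the general idea only, not Nickel's actual API or implementation; the example texts and labels are invented.

```python
# Toy version of the "just provide example inputs and outputs" workflow
# for text classification. A service like the one described in the
# episode handles model selection, training, and deployment; this
# sketch only shows the shape of the data a caller supplies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The caller's entire job: example inputs paired with desired outputs.
examples = [
    ("I love this product", "positive"),
    ("Terrible, I want a refund", "negative"),
    ("Works great, thanks!", "positive"),
    ("Broke after one day", "negative"),
]
texts, labels = zip(*examples)

# Train a simple classifier on those examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Classify a new, unseen input (likely ['positive'] here).
print(model.predict(["this thing is fantastic"]))
```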

Speaker 2:

So what exactly does custom classification mean?

Speaker 1:

It means that you're classifying your own data into your own set of categories. For example, one of our customers sells indoor gardens; it's an IoT product instrumented with a bunch of cameras and an internet connection. If the plants are wilting, they want to send a little signal to the customer so they know to water them. That's an example of custom classification, because if you send an image of that garden into a pre-trained classification API, for example the Google Vision API, it'll just tell you "plant" and maybe "room", the sort of very generic labels associated with the picture. But that's not what they're interested in; they're interested in these particular aspects of the plant. So custom classification means you're training an AI on your own data to do exactly what you want it to do.

Speaker 2:

Do most people who come to you have enough data, right off the bat, to get something like this working?

Speaker 1:

Right, yeah, so that's a common misconception. When I started doing machine learning, and I'm actually old enough that it's a bit embarrassing to say, but I started about 20 years ago, it certainly was the case that you needed thousands or tens of thousands of examples to get something real going. That's not the case anymore. What we do at Nickel, and what I think any reasonable actor would do, is take models, whether for images or text, that are pre-trained on vast amounts of data, so they already have a sense of the world and its general concepts, and then fine-tune them on the particular problem, again on your particular inputs and outputs. In that case you can get away with very little data. We require just two examples per class to train the first model, and we typically see that five or ten examples per class is good enough to go into a first production version.
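A minimal sketch of the technique Oscar describes, fine-tuning a pre-trained backbone on a handful of labeled examples, might look like the following in PyTorch. This assumes torch and torchvision are installed, and uses random tensors standing in for real photos; it illustrates the general approach, not Nickel's implementation.

```python
# Transfer learning with a frozen pre-trained backbone and a small
# trainable classification head, suitable when you have only a few
# examples per class.
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pre-trained on ImageNet, so it already
# "has a sense of the world".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained weights; with so little data, we only
# want to train the small head we add next.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head for two custom classes,
# e.g. "healthy" vs. "wilting".
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# x: a tiny batch of images (random tensors as placeholders),
# y: their integer class labels (two examples per class).
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([0, 1, 0, 1])

model.train()
for _ in range(10):  # a few passes is often enough at this scale
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```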

Speaker 2:

So would you say that what you've built amounts to essentially a foundation model for vision that you're then fine-tuning for specific cases?

Speaker 1:

Yes, that's correct. And in that sense we're an aggregator. We have some foundational models that we train in-house, but we also source from other platforms and from academia. Any open-source model, we can just pull into our infrastructure. We keep a whole catalog of them, and we train them all on your data in parallel. We spin up, I think, over a thousand nodes instantly, train them all, check which one works best, and that's the one we pick for you.
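The selection logic behind "train a catalog of models and keep the best one" can be sketched in a few lines of Python with scikit-learn. Nickel does this at much larger scale, in parallel across many nodes; this toy loops sequentially over a few candidates on synthetic data just to show the idea.

```python
# Train several candidate models on the same data and keep the one
# that scores best on held-out examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a customer's labeled data.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

catalog = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}

# Fit every candidate and score it on the validation split.
scores = {}
for name, model in catalog.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_val, y_val)

# The winner is the model you'd deploy.
best = max(scores, key=scores.get)
print(f"best model: {best} (validation accuracy {scores[best]:.2f})")
```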

Speaker 2:

I know in the case of LLMs and diffusion models, having high-quality data turned out to matter a huge amount, even for the foundation models. One of the big reasons people cite for Midjourney being more loved than DALL-E is that they decided to train it on beautiful pictures. Is that something you've thought about at all?

Speaker 1:

It's an interesting question. I think it's a little different when you're trying to train something completely general, sort of an open-world AI; we can talk more about that later. At Nickel, because we are training narrow AIs, that is, AIs for a particular use case, the most important thing, the rule if I may say, is that you train on data that reflects the data you'll see in production. So if your goal is to generate aesthetically pleasing images, you should make sure to train on that. But if your goal is to generate…
