Dr. Mirman's Accelerometer
Welcome to "The Accelerometer," a cutting-edge podcast at the intersection of technology and artificial intelligence, hosted by Dr. Matthew Mirman. Armed with a Ph.D. in AI safety from ETH, Dr. Mirman embarked on a unique journey to join the accelerationist dark side and found his YC-funded company, Anarchy. In each episode, Dr. Mirman engages with brilliant minds and pioneers in the YC, ETH, and AI spheres, exploring thought-provoking discussions that transcend the boundaries of traditional AI safety discourse. "The Accelerometer" offers listeners a front-row seat to the evolving landscape of technology, where Dr. Mirman's insights and connections with intriguing individuals promise to unravel the complexities of our rapidly advancing digital age. Join us as we navigate the future of AI with The Accelerometer.
Revolutionizing Interspecies Communication: Sarama CEO Praful Mathur
In this episode, we engage with Praful Mathur, the founder of Sarama, a groundbreaking startup focused on interspecies communication using advanced AI and machine learning. Praful shares how his company is leveraging deep learning and neural networks to decode animal languages, starting with dogs and aiming to communicate with dolphins by 2035. He discusses the use of artificial intelligence in unlocking the secrets of animal behavior and consciousness, and how this innovative technology could transform our relationship with animals. The conversation also explores the challenges of scaling such pioneering pet technology and the potential implications for industries like factory farming. Join us to discover how cutting-edge AI solutions are making the dream of talking to animals a reality.
Episode on youtube
Accelerometer Podcast
Accelerometer Youtube
Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn
The language was created out of a need. When you talk about interspecies communication, what you're really looking to do is have interspecies understanding.
Speaker 2:Hello and welcome to another episode of the Accelerometer. I'm Dr. Matthew Mirman, CEO and founder of Anarchy. Today we're going to be discussing multimodal AI infrastructure with Praful Mathur, founder and CEO of Sarama, an interspecies communication project. Praful, can you tell us a little bit more about your work?
Speaker 1:Yeah, so ultimately, we've been working on interspecies communication for a while. In 2019, I started a group in LA to work with dolphins and other marine mammals, and then what we found out over the course of the pandemic is that it's much easier to work with companion animals, because they're in front of you and you don't have to chase them through the ocean. And secondly, interspecies communication is much easier if you can modify behavior simultaneously and also have some ground truth. So that's where we ended up working.
Speaker 2:What do you mean by modify behavior simultaneously?
Speaker 1:So dogs are known to have a part of their brain that just understands human speech, and the reason for that is because it co-evolved with us. They are very easy to educate and teach new behaviors, and vice versa: when they do something, we also modify our behavior accordingly. That's true for dogs, cats, horses, or any animals we have in our home, except for maybe smaller animals like snakes. But even then, you're trying to understand your pet the best you can and then find a way to make their lives much more meaningful.
Speaker 2:So when you say simultaneous, you really mean the behavior of the human being modified at the same time as the animal's?

Speaker 1:Correct. So when you talk to dog trainers, the number one thing they say is that their title is a misnomer, because they're mostly training the people on how to behave around their dogs, so that they're not inadvertently reinforcing negative behavior that they don't want to see. So ultimately you'll have people who go, I don't like my dog jumping on the couch, but then call their dog to the couch immediately as they sit down, and those are the types of things that we see. And when you talk about interspecies communication, what you're really looking to do is have interspecies understanding, because we can speak to dogs and they can speak back to us, but where we are lacking the context is: what do they mean when they have those behaviors every time?

Speaker 2:So the interspecies communication here: my understanding of human-dog or human-cat communication is that a lot of it is nonverbal, right? A lot of it is visual. So does the work that you're doing incorporate those cues?
Speaker 1:Yes.

Speaker 1:So if you had asked me a couple of months ago, the answer was no, because we were mostly working on audio models.

Speaker 1:But today we are expanding out to video models, which allows us to do body-pose detection and body-language understanding. The challenge is that nonverbal or verbal communication needs to be calibrated in some key way, and so the way that we work with people is that we try to find a baseline, a ground truth, and then we evolve the model. That way we're both understanding dogs on a universal basis, which is motion detection or pose detection, but we also have very specific models: we host user-specific models and then bring them down to the device. That way, for your specific dog, when they bark that way, it means "I want to go out." And then it's about how we expand their vocabulary, and there are companies like Fluent Pet that have done an incredible job on vocabulary expansion, which is why we really like their team and have been working on how to expand that for our product as well.
Speaker 2:How do you measure accuracy of your systems here?
Speaker 1:It's one of those things where, for a particular dog, it's much easier than for other dogs. Some dogs, like my dog Kai, will continue to bark until she gets what she wants, whereas Cash, the other dog, is a lot more flexible. And so we've ground-truthed a lot of our systems on Kai. With Cash, on the other hand, we need to take body language into account a lot more, which is why it's been a little bit slower to work with his corpus.
Speaker 2:So what sort of metrics are you measuring?

Speaker 1:In terms of user growth, or in terms of the communication? In terms of communication, ultimately, with Kai, when she barks and we give her what she wants, she is fairly satisfied. When we don't give her what she wants, she will continue to bark, and so that one is fairly straightforward. The main metric is the time it takes to get her to stop barking. The part where we're looking to expand is the number of words she can employ during her barks, and that's where we're leaning on a lot of the work that Fluent Pet has done. We haven't created a strong benchmark there yet.
Speaker 2:So people who are using the app, do they provide this feedback to you?
Speaker 1:Yeah, so the next version of our app is going to allow us to get thumbs up, thumbs down from people. Currently, we've just been trying to make sure that the camera is stable, so in another week or two, we'll be there.
Speaker 2:So when you say that you have a different version or that you have an evolved version of the model for individuals, is that like a fine-tune of an existing model or how does that work?
Speaker 1:It's a fine-tune and a vector similarity. That allows us to get a very lightweight understanding of which vectors we're interested in, and then, once we know what part of the cluster we're looking at, we can identify the model we need to understand that type of behavior.

Speaker 1:So, ultimately, if there is a bark that is in the realm of wanting to go out but we haven't deciphered it, then we start to say, okay, this is what our hypothesis is, this is what our confidence is, and once people act on that, they tell us whether it was right or wrong. Now, obviously, if it's something like needing to go to a specific park or a beach, that may be much more difficult, but what we're working on, and this is where expansion happens, is that we then ask clarifying questions back to the dog. It's very similar to how we're trying to understand consciousness in LLMs: you try to figure out the train of thought and reasoning, and that's where some of the research we've been looking at follows what people are doing in the ML space.
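The routing step Praful describes, comparing an embedding against cluster centroids and attaching a hypothesis and a confidence, can be sketched in a few lines. Everything here (the labels, centroids, and example embedding) is an illustrative assumption, not Sarama's actual pipeline:

```python
import math

# Hypothetical sketch: route a bark embedding to the most similar cluster
# by cosine similarity, returning a hypothesis label and a confidence score.
CLUSTER_CENTROIDS = {
    "wants_out": [0.9, 0.1, 0.0],
    "alert": [0.1, 0.9, 0.0],
    "attention": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def route_bark(embedding):
    """Return (hypothesis, confidence) for a bark embedding."""
    scores = {label: cosine(embedding, c) for label, c in CLUSTER_CENTROIDS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

hypothesis, confidence = route_bark([0.8, 0.2, 0.05])
```

In a real system the centroids would come from clustering actual bark embeddings, and the chosen cluster would select a fine-tuned model checkpoint rather than a plain label; the owner's "right or wrong" feedback then becomes the training signal.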
Speaker 2:So, just a question about dog communication. I read somewhere that we found an enclave of apes in Africa that was able to speak in three-word sentences. What sort of language are dogs able to speak in?
Speaker 1:So dogs can use two-to-three-word phrases, especially with things like Fluent Pet, where they have buttons they click on the floor and can compose phrases. Now, these phrases can become quite complicated, and they do mean things that are non-trivial. There's a dog who saw the NBC crew and used "eye device" to mean camera. There was a famous dog, Obani; this was not recorded, but she said "bird lost home," and everyone thought the bird was just flying in a weird direction, but it turned out there was a bird in the chimney. So they're able to reason about things. There was a dog that talked to a person and said "you're sick," and the person was very confused until a couple of hours later, when she fell very ill.

Speaker 1:And so dogs are able to reason about a lot of things, they can use two-to-three-word phrases, and they're able to understand time as well. Now, a lot of this hasn't been validated empirically; there's evidence, but it's anecdotal. There isn't enough to satisfy institutional demands around whether this is conversational or not. And so our thought is: let's go deeper and find the limits of where this can go.
Speaker 2:Are you interested in producing more of that evidence and publishing papers?
Speaker 1:We're interested in providing the data set. There are other people who will do much better than us at publishing the papers.
Speaker 2:Have you considered working with those people at all?
Speaker 1:Yes, we are already starting work with a few universities. It's more about timing, and our priority right now is to grow the core user base, because the more users we have, the easier it is for them to make the deductions they need for their papers.
Speaker 2:Yeah, that makes a lot of sense. So you mentioned before that you were working on dolphin research. How did you get involved in that?
Speaker 1:So what happened is I was in logistics for a long time and I wanted to start tackling the challenge of climate. A lot of the climate impact comes from ships, trucks, and the supply chain itself, all the way from manufacturing to consumption; that's where climate impact is mostly concentrated, and a lot of the brunt of climate falls on the ocean. So I started to review a lot of studies and papers, and I was on this TED playlist, and one of the ocean talks was by a woman named Dr. Denise Herzing, who talked about how she has enough data to start understanding the language of Atlantic spotted dolphins. So as a founder, the first thing you do is try to find that person, track them down, and get in front of them. They decided that they wanted to be the first to publish, which comes back to the whole publishing world, which is why I'm a little bit negative on it, even though multiple people on our team have published papers in Nature and Science.

Speaker 1:But there is this impetus that you need to be the one to get the research out, and if you're not the one, then you lost. For me, it was more about who is actually doing this work and who's open about it, and through another founder I ended up meeting the group called Orca Sound. Orca Sound is this open-source, collaborative group working on understanding orcas, the southern resident killer whales up in the Puget Sound area. They've recorded the J, K, and L pods, and they have enough data. So through their data set, we started to do some analysis of how you identify signal, how you reduce the noise, how you do source separation, and then we created a good understanding of the pipelines that are required for interspecies communication.
Speaker 2:So what would you say is the most challenging part of understanding dog language at the moment?
Speaker 1:The ground truth, right. Because, to your point earlier, you might have a dog that barks a lot and it's fairly noisy: they bark every time something's outside the window, every time the doorbell rings. Sometimes those barks convey information; sometimes a bark is just attention-seeking. So our first task is to tell people how much of their dog's barking is attention-seeking behavior and how much of it actually conveys information. The way we figure that out is we start to identify the highest-likelihood barks: if a bark comes up quite often in many different contexts, it's very likely attention-seeking, whereas if a bark is very distinguished and very context-sensitive, we know it's conveying information. From there we start helping people realize, okay, what do you need to do to train the behavior so this can become more varied?

Speaker 1:The other thing is that people generally want their dogs to bark less. For a long time we said, okay, we don't want to encourage people's dogs to bark more. But what's interesting is there's a niche of people who are very interested in communicating with dogs verbally; because we speak, they want their dogs to speak, so that actually hasn't been as much of a challenge. Now, there are people whose dogs don't bark and who don't want their dogs to bark, so we have to do some more behavioral analysis. And then there are the button dogs; we're going to do some interesting work with them. I think between buttons and barks there are going to be some interesting dynamics that arise here.
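The attention-seeking versus information-carrying distinction described above can be phrased as an entropy test over the contexts in which a bark type occurs. This is a hedged sketch, not Sarama's method: the threshold and the context logs are invented for illustration.

```python
import math
from collections import Counter

def context_entropy(contexts):
    """Shannon entropy (bits) of the contexts a bark type appears in."""
    counts = Counter(contexts)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def classify_bark(contexts, threshold=1.0):
    # A bark spread evenly across many contexts is likely attention-seeking;
    # a bark concentrated in one context likely conveys information.
    return "attention-seeking" if context_entropy(contexts) > threshold else "informative"

# Invented logs: one bark heard almost only at the door, one heard everywhere.
door_bark = ["door", "door", "door", "walk"]
generic_bark = ["door", "couch", "window", "walk", "food", "window"]
```

The threshold would have to be calibrated per dog, which matches the point above about needing a per-dog ground truth before any classification is trustworthy.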
Speaker 2:For the dogs that are barking: before people come to your app, are they able to tell the difference between the types of barks themselves?
Speaker 1:You can roughly tell the difference between three to five, but when you start getting much more nuanced, it gets harder. And this is the same thing everyone talked about with the buttons. People go, if you need buttons, you don't know your dog and you're a bad person. But what people really found out was that with the buttons you can extend what your dogs are able to convey. And there's something interesting about language, which is that when you have a word, you can start thinking about it. So there are two directions: you can think about things and then convey them, or you can start to expand your thoughts based on what you can convey. We've seen this in a lot of different places, including in human language.

Speaker 1:In Nicaragua, before the Sandinistas took over, there wasn't pervasive education for everyone. Once the Sandinistas took over, they said, okay, it's universal education, and all the deaf children went to a school for the deaf. At first they had the sign language they used with their parents, which was generally limited to a certain set of things: I want to go out, I want to eat. Once they started making friends in school, they created more of a universal language. It had grammar and other features that allowed them to become more sophisticated. Then, with every iteration, the language became more complex. The first group of children were able to standardize. The second group were able to create grammar.

Speaker 1:The third group was able to create tone and different ways of conveying information, and it allowed them to convey much more. So much so that the professors, teachers, and principals had no idea what was being said, and they brought in a linguist to actually decode the language. And once they started to identify this, it became interesting that the language was created out of a need. We see the same thing with other languages as well. But yeah, my hypothesis here is that with dogs, once they start to be able to convey new words, they'll start thinking about those words, and then, once they're able to standardize with other dogs, the language is going to evolve very quickly, which is why I'm more interested in verbal communication rather than button communication.

Speaker 2:So do you also see inter-dog communication?

Speaker 1:Inter-dog communication is mostly nonverbal today. You do see inter-dog communication within breeds. Goldendoodles, when they bark at each other, generally have a clearer sense of one another, whereas a Goldendoodle barking with a German Shepherd, for instance, is much different, and vice versa. I have a friend who has a large German Shepherd, and just a small growl from him will elicit fear in my Goldendoodle; and vice versa, she can bark and he gets annoyed. So a lot of this comes down to some standardization across breeds, which is what I really, really want to see.
Speaker 2:Even though there's not that sort of standardization yet, is that something that you aspire to be able to decode?
Speaker 1:Correct. Right now it's very similar to what I mentioned before: there's a language at home. The next challenge is how you get a language between dogs, and this is where we want to start working with daycare centers and places where there's a structured environment for a lot of dogs to come and interact.
Speaker 2:So earlier you mentioned that your multimodal AI infrastructure was something you spent a lot of time on, and something that, in fact, businesses wanted to use. What was the difficult part about building that?
Speaker 1:So what's really easy to do is use an OpenAI model and connect to it through an API. The harder part is being able to fine-tune models, either on the device or on a hosted service, and then being able to identify that particular person's model and download it back to them. Now, it's pretty straightforward to put a model in an S3 bucket. But when their dog updates, or they have an updated understanding of their dog's language, or we get an updated understanding automatically, then we have to continue to fine-tune it, update the model, and download it again. Then we have to version it and see how those different versions actually impact the inference.

Speaker 1:Before, people gave us thumbs up and thumbs down. It became harder with video, because there are different sections and we need to know whether we got each one right or wrong. Ultimately, when things are going well, we want to ensure that's reinforced, and when things are going poorly, we want to identify where we need to make an adjustment. So a lot of this came down to not just how you host it, but how you evolve, iterate, and fine-tune it over time, which is something that took a lot of work to just get running.
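The hosting loop described here (fine-tune, version, let the device pull the newest model) can be sketched with an in-memory stand-in for the object store. The key layout `models/<user>/v<N>` and all names are assumptions for illustration, not the actual service:

```python
# In-memory stand-in for an S3-style object store holding versioned,
# per-user fine-tuned models. All names are illustrative assumptions.
store = {}

def _versions(user_id):
    prefix = f"models/{user_id}/v"
    return [int(key[len(prefix):]) for key in store if key.startswith(prefix)]

def upload_model(user_id, model_bytes):
    """Store a newly fine-tuned model under the next version number."""
    next_v = max(_versions(user_id), default=0) + 1
    store[f"models/{user_id}/v{next_v}"] = model_bytes
    return next_v

def latest_model(user_id):
    """Fetch the newest model for a user, as a device would on sync."""
    versions = _versions(user_id)
    if not versions:
        return None, None
    v = max(versions)
    return v, store[f"models/{user_id}/v{v}"]

upload_model("kai", b"weights-v1")   # first fine-tune
upload_model("kai", b"weights-v2")   # model updated after new feedback
version, weights = latest_model("kai")
```

A production version would also record which model version produced each inference, so a thumbs-down can be attributed to the right checkpoint before the next fine-tune.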
Speaker 2:Yeah, that sounds challenging. Potentially it could be useful to other types of companies.
Speaker 1:It's funny you mention that, because what happened is I was showing this to a friend of mine, another founder, at an event, and he said, I want this. I said, okay, well, you know, it's coming out in a bit; what kind of dog do you have? He said, no, I want the ability to run these models. I was still confused: are you building a dog app? And he said, no, I want the ability to run models the same way you're running them, on device, on video and audio. He said, I've tried to do this and it's a really negative experience trying to run inference remotely and bring it back down. And he said, also, even if you just look at the LLM case, I spend a ton of money on OpenAI alone; it would be much easier if I could just fine-tune something. And so that was when it became very obvious. Then I spoke to a few other founders; they all had the same challenge, and very quickly we were able to get quite a few customers.

Speaker 2:So is that the part of the business that you're excited about?
Speaker 1:I'm very excited about the interspecies communication. But from a revenue perspective, B2B is stable and the easiest way to drive a lot of revenue. So the goal is to take that revenue and drive it toward interspecies work, because I think we will solve the problem of talking to animals, from dogs to dolphins, by 2035. But we need about one to two million a year to get from dogs to dolphins.
Speaker 2:Yeah. Do you see dogs and dolphins communicating very often in the field?
Speaker 1:Funny you mention that. There are many videos on YouTube of dogs swimming with dolphins in the open ocean, and I don't understand why people are okay with it, but it's very common; or, I mean, uncommon, but it's happened more than once.

Speaker 2:You know, I was in Hawaii recently on a snorkeling boat and I was just shocked: a pod of dolphins came up to us, like, yeah, we like the boat, we're going to swim along with the boat. They followed us for a good 45 minutes and played with us. And I can see that dogs would want to swim with them.
Speaker 1:It is interesting how dolphins, orcas, and other very large predators have become very accustomed to people. In many cases there have been incidents, and it's unknown if they're attacks or not; I know people are very reticent to say anything one way or the other, but it does seem that this is a different type of behavior than we've ever seen.
Speaker 2:Yeah, I think I saw that with the orcas attacking fishing boats after COVID, right?
Speaker 1:Correct. I mean mostly they went after the rudder. They also damaged the hull. In some cases they've sunk ships. So it's very different behavior than we've seen in recorded history.
Speaker 2:Yeah, I heard an explanation that during COVID the oceans were a lot quieter than usual, and the orcas were finally able to communicate and figure out, oh, it's human boats that are the problem, let's go after the human boats and stop that.

Speaker 1:That's the thing that's very fascinating about orcas specifically: how they're able to communicate. Now, they have similar types of communication with sperm whales, humpback whales, and different types of dolphins. Some of the largest superpods that have been seen were of multiple types of dolphins, like Atlantic spotted, bottlenose, and spinner dolphins, with multiple-thousand-member pods, which we thought only humans were capable of. So they're conveying real information, and they're able to do it across pods, within their pod, and across generations. That's the part that we're really excited to understand. So orcas are my next main mission, but right now it's all about dogs, cats, and horses.
Speaker 2:I feel like orcas must have a data paucity problem. I mean, we have a lot of recordings on the internet of humans speaking to dogs, right, and there's probably even a lot of labeled information out there of humans talking to dogs. But how much labeled information, or even just recordings in general, could there be of orcas?
Speaker 1:So this is the interesting part, and it's why I shifted from orca communication research to dog communication research. There is an avenue of communication which is underexplored, because we're so focused on unsupervised learning: could we create ambassadors with these captive orcas? Many people want them to be free, right, and so there's this interesting disconnect. There's a group of people who want to put all these orcas in sanctuaries and leave them to live out the rest of their lives; I think they'll do some research, but it's mostly like a retirement place. And then there's a group of people who want to monetize the behaviors that they can get out of orcas.

Speaker 1:I have a middle ground, which is: can we work with orcas to become our ambassadors with other pods?

Speaker 1:So this goes back to John Lilly in the 70s and 80s, who talked about a dolphin serving as the chair of the ocean at the UN.

Speaker 1:And so the only way you can get that is if you have an ambassador, and an ambassador is an individual who has an understanding of multiple cultures, I mean, unless you have a lot of money and you get an ambassadorship because you've been nice to the president.

Speaker 1:Generally, large-country ambassadorships, or the diplomatic arm, are set up for people who understand both cultures very, very well, at least among their staff. It's very rare that you want somebody going into India who has no idea that there are multiple languages and different cultures. And so that's where I would like to see things happen. There are a few groups that are going to put money behind interspecies communication next year, and my advice to them is: let's look at a way that we can remove orcas and large animals from captivity, but also allow them some productive capacity, so that you give a real reason to SeaWorld, for instance, to basically sink their business on land so that they can continue some level of continuity in the oceans. But it's a tough ask. I think that SeaWorld will continue to lose public share value until somebody can go in and buy them.
Speaker 2:So what worries you the most about this business?
Speaker 1:Until recently, when we got the B2B angle, it was about sustaining revenue, because consumer is notorious for churn. The main thing that I'm concerned about now is, if we show that this is real, the pushback we'll get from some of the most powerful interests. Because if you show that there's consciousness in animals, there's an entire multi-trillion-dollar industry, some of the most powerful entities in the world, that would like that to be reversed. I mean the factory farming world, right? Anywhere in the world, they have a disproportionate impact on local politics.

Speaker 1:In the EU, there was a recent article about how a bill with overwhelming public support got overturned by very deeply embedded lobbyists through very undemocratic methods, and you can see this in Brazil, in the United States, in the EU, in India, in any country in the world. Those are the people who have a desire to maintain this world. Most people think it's a perpetual thing, but it's only existed for less than 100 or 200 years; it was in the industrialization era that we really got these massive factory farms. They're actually detrimental to society and to our climate. That doesn't mean everyone needs to go vegan, but those are the people I'm concerned about. My concern is being able to advance this in a way that can continue to get the kind of research results that we want.
Speaker 2:You know I'd never considered that, like the factory farming lobbyists might be against interspecies communication. Do you think that could be part of the reason that it's been so overlooked as a scientific field for the past 30, 40 years?
Speaker 1:I think it comes down to, I forget who talked about it, but it was about how society has changed. We kind of believed at one point that everything was possible; the last time we started to work on interspecies communication was when we'd landed someone on the moon. It was optimistic determinism, right: we knew that there was a future that was real and we could get there, and there was this boundless energy. And we lost that for quite a bit of time. Now, because of a lot of the advancements in machine learning and computer science, and the fact that Silicon Valley has produced an enormous amount of wealth and there's a lot of unexplored territory, it's interesting again. But if you had even talked about this a couple of years ago, you would have been thought of as somebody who was insane. I know, because I talked to many of the people, and they did sound a little bit outside of their element. But now people don't think anything is impossible.
Speaker 2:Yeah, I guess LLMs and AI were in the same boat a few years ago, right? People thought that a machine that talked like a human was impossible.
Speaker 1:And the last couple of years have shown people in society that things we thought were impossible are real. I mean the fact that we have reusable rockets, the fact that we have the ability to launch satellites into space, the ability for these communication systems to stay up even in a war. We're seeing multiple wars occur right now, and there's often communication from both sides in these areas, and you're able to get this despite all of the key positions that want you to see a particular side. I think what a lot of people are seeing is that what they thought was impossible is possible. Even something as simple as Uber or Airbnb becoming global brands in less than a decade was unimaginable to people.

Speaker 2:So, going back a little bit to this idea of it being problematic if we discover that dogs or, you know, food animals, for example, are conscious: just knowing that they can communicate, is that enough to know that they're conscious?
Speaker 1:No. What is more important is that they can communicate with us, not just in general. Cows communicate with one another; sheep communicate. They can communicate dangers to one another, who to trust and who not to trust. There are a lot of complex behaviors that we've already observed, but we've been able to downplay them by saying, well, they're really automatons that have evolved to elicit noise. So, I mean, it's communication, but we dismiss it as not real. So it has to be between people and the animals, even if it's something simple. But it goes back to this whole thing: we knew a couple of years ago that LLMs could get to human level, because in the 70s and 80s we'd seen things like ELIZA, which was so convincing that the researcher's assistant continued to use it as a therapist, and that was with very ancient technology. So we knew that we were able to beat the Turing test for quite a while. The challenge was people had to experience it, and once you experience it, it's very hard to go back.

Speaker 2:No, the idea that animals are automatons doesn't carry a lot of water, given that we have such a hard time even mapping the neurons of small animals like nematodes. But on the flip side of that, we have models like LLMs, which seem to be intelligent like humans. They can do a lot of tasks; they're arguably general intelligences, and, you know, people have problems with the word AGI, but to me they seem generally intelligent. Yet there's still some question of: are they conscious? And the more you play with them, the more you're convinced, no, this thing is not conscious, it's just reacting in a certain way. So I'm just wondering: what do you think it would take to convince people here?
Speaker 1:I mean, I think that some group of people is already convinced, which is a similar type of thing. So I agree with you: I think that LLMs are not conscious, and LLMs are generally intelligent, but there are a lot of places where they underperform. Even simple types of inference, these systems are incapable of making today. Reasoning is something that is very difficult to even know if it exists, but we can see that there's some slight ability. I did this recently: I gave the prompt enough information to understand what it should come out with, but it wasn't able to reason, because it was something it had never seen before. But there are other places where it is able to reason about different elements, as long as it's within the scope of language. And so what I would say is that for animals it's going to be very similar, right? Some people are convinced, like I myself, that animals are incredibly conscious: they have the ability to reason, they have the ability to understand complex scenarios, what they're able to do and what they're not able to do. I mean, you will find dogs that'll bark at people until they give a treat, but they won't bark at a baby to get them a treat. Right? Now someone will say that they've seen a dog bark at a baby. I mean, yes, but the thing is that dogs are fairly capable of this higher-order functioning.
Speaker 1:The reason why you don't believe it is because you haven't personally experienced it.
Speaker 1:You are looking at the dogs in a way that, okay, well, they want a treat, so they're going to bark. But then it's hard to explain why, and you go, well, they've seen me give a treat before, so they're using that. But then you're like, okay, at some point there's reasoning, right? It comes down to shifting the way we have constructed the narratives around animals and our relationship with them. A lot of it comes from the early part of the century, when we were determined to show that humans were the superior intelligence, and that certain humans were more intelligent than other humans, and that's why we created all these tests: to ensure that there was a hierarchy of intelligences. But what most researchers are now finding is that there's a general continuum, and it comes down to individuals rather than species, because some dogs are going to be smarter than some people, and some people are going to be smarter than other people.
Speaker 2:What's some advice that you would have to people who are aspiring to work in this area of research?
Speaker 1:Honestly, the main thing is just get started. If you have a particular area that you're interested in, reach out to me. We generally have the scope of all the research that's happening in the space, because it's a fairly small world, and the people doing the most interesting work are not the ones you read about. They are the ones who are at home playing around with some new model, doing something that is slightly different than what everyone else is doing, and they're applying it in a very controlled setting. It goes back to how the Wright brothers beat all the other initiatives that were incredibly famous and had a lot of money: they were very methodical, they played it step by step, they did things in small increments, and eventually they got a flying machine. But when you look at some of these bigger initiatives that have millions of dollars and a lot of PR, you could very quickly go, well, they're going to solve it; they've already worked on it with whales and dolphins. I can tell you confidently, knowing everybody in the space: no one's even close, and the people who are going to get closest are the ones taking very small, iterative, methodical steps, not giant leaps where there's some foundational model at the end of it. That's not going to be the way to do it.
Speaker 2:Well, Praful, thank you very much.