Dr. Mirman's Accelerometer

Revolutionizing Consultancy: Cyqiq with Tino

Matthew Mirman

Discover the revolutionary tool that's changing the game for management consultants everywhere! Tino, the mastermind behind Cyqiq, joins us to share his story of creating a product that automates the tedious work of data analysis, allowing consultants to shine where they do best—solving complex problems. With a beginning rooted in Zurich ideation sessions and a leap into Berkeley's Skydeck accelerator, Tino's tale is one of innovation and serendipitous product-market harmony, captivating consultants on a global scale.

Full Episode on Youtube

Accelerometer Podcast
Accelerometer Youtube

Anarchy
Anarchy Discord
Anarchy LLM-VM
Anarchy Twitter
Anarchy LinkedIn
Matthew Mirman LinkedIn

Speaker 1:

Hello and welcome everybody again to the Accelerometer. Today we have Tino, who is running a company called Cyqiq. He also comes from Switzerland and started the company in Zurich, I think. Very excited to have him here. There aren't many Swiss founders, so I would love to hear a little bit more about what that experience was like. Why don't you tell us a little bit about your journey?

Speaker 2:

Yeah, for sure. The journey of Cyqiq started in November last year. At the time it was just a couple of friends. We started off like a lot of people: we wanted to start a company, but we had no clue what exactly we were doing. I had actually started a company before. We had an idea of the broad strategy we wanted to follow, but we didn't really know what we wanted to build. We spent a lot of time ideating about different topics that we're actually good at and ultimately ended up at consulting, at least the consulting industry. Long story short, we came to Berkeley because one of my co-founders was still doing their masters there. Through the university we got in contact with Skydeck, which is an accelerator program here, got into Skydeck, founded the company, started building the first versions of our product, failed miserably a couple of times along the way, then finally, a couple of months ago, found the use case that we're actually sticking to and have been building ever since. We have our first customers and are now in the seed stage of fundraising.

Speaker 1:

That's really exciting. What's the use case that you're sticking to?

Speaker 2:

We looked at the management consulting industry as a whole. From our past experiences in this space, and because a lot of our friends are consultants, we realized that the core of what management consultants do is solving complex problems and coming up with creative solutions. That's really cool. Unfortunately, to get there, you actually spend a lot of time on data analysis, using Excel sheets and repeatedly going through reports, interviews, survey data and the like. That just takes a ton of time. It's not very hard and it's also not very fun. We built a tool that analyzes survey data and expert interview data and consolidates it into reports. These reports are then used in, for example, commercial due diligence, but also other projects like transition implementation, restructuring, et cetera. We help prepare the management consultants so they are actually ready to work on the complex problems and solutions, rather than having to do a lot of the pre-work themselves.

Speaker 1:

I've never worked with a management consultant. Can you tell me a little bit? Can you give me an example of one of the complex problems that a management consultant would solve?

Speaker 2:

Let's start with one of the use cases our clients are using it for right now: it's called commercial due diligence.

Speaker 2:

If you're not familiar, commercial due diligence happens in any larger M&A transaction, a merger or acquisition, where a big company buys a usually smaller company or a similar-sized company. There's a lot of due diligence that you do because you want to figure out whether the company you're buying is actually what they claim to be. Now, if this is a larger transaction, you would usually hire a team of consultants to actually go through all of the data in this organization to make sure they are what they claim. Commercial due diligence is a subset of due diligence that typically involves a lot of qualitative data. It's usually talking to experts in the industry, interviewing them about things like: where is the market heading? What are the trends in the industry? At the same time, you also talk to a lot of employees of the company that's being acquired to understand what the cost centers are, who's in charge of what, and how the operations look in this organization.

Speaker 2:

Because it's an M&A transaction, this process can only last two to three weeks, because that's the no-shop period of the transaction. The no-shop period is basically a window where the seller of the organization is not allowed to take other offers from buyers, and in that time you have to make your buying decision. That is why these projects have to happen so quickly. As you can imagine, you do all these interviews with these experts, you do surveys with the company's clients to understand whether or not they're happy with the product, and then you have to analyze all of this data and put it into a report. That takes a lot of time because it's unstructured text data. Large language models are actually uniquely well positioned to solve this problem, because you have a very structured process, you have a lot of qualitative data, and you have a predefined output, which is this report. That means you can use this technology while still providing the guardrails to make sure that the output you provide is actually what you expect.
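To make the idea of a predefined output with guardrails concrete, here is a minimal Python sketch of how qualitative interview text might be consolidated into a fixed-schema report with a large language model. The schema, prompt, model choice, and function names are illustrative assumptions for this write-up, not Cyqiq's actual implementation.

```python
# Hypothetical sketch: consolidating expert-interview transcripts into a
# predefined report structure, then validating the output so the model stays
# inside the "guardrails" described above.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REPORT_SCHEMA = {
    "market_trends": "list of short bullet points",
    "key_risks": "list of short bullet points",
    "notable_quotes": "list of verbatim quotes with the speaker's role",
}

def summarize_interviews(transcripts: list[str]) -> dict:
    """Turn raw interview text into a dict matching the predefined schema."""
    prompt = (
        "You are assisting with a commercial due diligence report.\n"
        f"Return ONLY valid JSON with these keys: {json.dumps(REPORT_SCHEMA)}\n\n"
        "Interviews:\n" + "\n---\n".join(transcripts)
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    report = json.loads(response.choices[0].message.content)

    # Guardrail: reject output that drifts outside the expected structure.
    missing = [key for key in REPORT_SCHEMA if key not in report]
    if missing:
        raise ValueError(f"Model output is missing sections: {missing}")
    return report
```

The point of the validation step is simply that the predefined output gives you something concrete to check the model against, which is harder to do in open-ended use cases.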

Speaker 1:

What sort of things prevent these transactions, these mergers, from actually going through?

Speaker 2:

There's a lot of things that can happen. Often it's just that the strategy team looks at the market through these expert interviews and realizes the market is not developing the way they hoped it would, and you decide to pass on the deal. More often than not, a lot of this pre-work has already been done and the due diligence is more of a formality. If you don't find anything along the way that's a really big red flag, it would typically go through.

Speaker 1:

Would you say that your software helps find these red flags?

Speaker 2:

To a certain extent. Right now we won't replace the commercial due diligence consultant completely. I don't think it's responsible to just let a large language model kind of...

Speaker 1:

You surface them, the flags?

Speaker 2:

Exactly, you surface them, thank you. Our job is to point the consultant in the right direction, to basically show: hey, look, here's the data, these are the correlations that seem non-obvious or interesting, and here's where you should dig deeper. And then the obvious stuff you don't have to focus on so much, because we've already consolidated it into a report for you.

Speaker 1:

So when you guys started this, were you working as a consultant?

Speaker 2:

I was.

Speaker 2:

Back in a past life, I was working as a financial due diligence consultant.

Speaker 1:

OK, so this hits home for you. This is a problem that you specifically had.

Speaker 2:

Yeah, I spent a lot of hours going through Excel sheets, copying and pasting data from various annual reports, trying to build my own workflows to make everything match, and for every project I would do more or less the same thing over and over again. That's how we came to the idea of automating this, because we realized: OK, look, although every company is different, if you abstract away from it, the process of how you go about it is a playbook. We can follow this playbook, we can actually build a system that follows this playbook, and we can make sure that the technology we're using remains within the scope of this playbook and doesn't completely go off the rails. So for us, it makes a lot of sense to stick with these well-structured projects for now.

Speaker 1:

You weren't doing mergers and acquisitions, but then you went into M&A after. How did you make that decision?

Speaker 2:

Yeah, so I personally actually didn't go into M&A. I went into venture capital afterwards for a little bit, which actually also requires a lot of due diligence.

Speaker 1:

So then I've got to ask: what's my moat?

Speaker 2:

My moat.

Speaker 1:

What's my moat? That's always what you ask, right, if you're a VC. Every time you sit in an interview it's like: what's your moat? What's our moat? Tell me, what is my moat?

Speaker 2:

Yeah, it's a great question. We're still working on that. No, I'm kidding.

Speaker 1:

No, you've got a moat. You've already built your moat.

Speaker 2:

No, look, we haven't built our moat. I think we're too early to have a really defensible moat.

Speaker 1:

How much would it cost to build your moat?

Speaker 2:

Around $3 million in the next 18 months.

Speaker 1:

OK, so you've hired some contractors, you've got the prospective site.

Speaker 1:

There are no large rocks underneath where you're going to be building the moat. Yeah.

Speaker 2:

OK.

Speaker 1:

So $3 million is quite a bit to build a moat with. I'd expect that's like a nuclear-powered moat.

Speaker 2:

Yeah, yeah, no. Our core thesis is that for any AI startup out there that works on the application layer, UI and UX are going to be the only real differentiator. Ultimately, everybody is more or less using the same models right now. Of course you can fine-tune and build your own data set to a certain extent, but with what we've seen in the developments in generative AI recently, it's hard to predict whether that's really defensible in the long run or whether models are just going to get so good that the fine-tuning itself won't make a huge difference. We believe that right now you can differentiate if you manage to build a workflow around a specific use case that fits the existing work, which in our case is what consultants do, very, very closely. That's actually what we've done.

Speaker 2:

It hasn't cost us a lot of money so far. But to actually make this a defensible, large business, we have to take this workflow that we've built and repeat it in many other projects across the consulting industry. For us the biggest risk is that we don't make the jump from, say, commercial due diligence into transition implementation projects, and that's still going to require a lot of exploration and a lot of interacting with these consultants to actually build something that holds.

Speaker 1:

How are you going about doing that exploration?

Speaker 2:

Yeah, so originally we started with just interviews. I think at this point we've probably interviewed over 200 consultants, all the way from partners at McKinsey to freelancers running their own management consulting practice. These are usually ex-consultants who realized they get paid more if they don't work for a firm and just offer their services directly. But now we've transitioned to design partnerships and paid pilot projects. On the one hand, we realized people are a lot more committed to working with us if they're actually paying us money.

Speaker 2:

It's weird, because we don't really care about the revenue at this stage; we just want to build something that works for them. But if we offer it for free, people just don't put effort into it. As soon as we charge them, they're actually more willing to work with us. So we're like: okay, we'll take your money, but please also spend time with us. Now, with every client we have, we hold a two or three hour workshop every week, and we try to release new updates and versions of the product every week as well. The core of all of this is really understanding how they use it. Because, again, building something that fits existing workflows is the only way, I think, you can differentiate yourself in today's environment in the short run. In the long run it's a different story, but in the short run, for us, I think that's what makes the difference.

Speaker 1:

So where did you find your first 200 consultants to interview?

Speaker 2:

Oh man, a lot of places. LinkedIn is obviously a great source; we did a lot of outreach on LinkedIn. Coming from a consulting background, and having been to business school, we have a lot of friends in the consulting industry, so initially that was very easy. We also started building a newsletter, which you can find on the Cyqiq site if you're interested in following us, and through that we've been able to generate a lot of interviews as well. Now, as we've gotten more mature, it started as just randomly messaging people, and now we have proper outreach campaigns on LinkedIn, and these still seem to work surprisingly well.

Speaker 1:

You ever do any like cold calling?

Speaker 2:

Yeah, we've done some cold calling. It's a miserable experience.

Speaker 1:

It was a miserable experience. What happened?

Speaker 2:

I mean, usually you get voicemail and then they never call you back, but then you still have to keep at it. It's not scalable, right?

Speaker 1:

Hello, I am a robot. Please pick up the phone for my sales goal.

Speaker 2:

It's not a really scalable way to reach out, because every time you still have to pick up the phone, you have to dial the number. It usually takes you two or three tries to get the right one, and then no one picks up. And then you leave your voice message, and out of, I don't know, 20 or 30 companies, if you're lucky, three or four get back, which is still a 10% conversion rate, which is not horrible. But at the same time, as I said, right now everything is email campaigns and LinkedIn campaigns, and it's working fairly well for us. At this point, actually, I guess it's a bit of a luxury problem in a sense, but we have enough clients in the pipeline and enough pilots on board that we're actually at capacity. We can't serve more people than we have right now, so we're not putting a lot of effort into outreach at the moment.

Speaker 1:

Yeah, that makes a lot of sense.

Speaker 2:

I mean, it's lucky, I guess.

Speaker 1:

That is definitely a champagne problem.

Speaker 2:

Yeah, I mean it's not going to stay like this, but like right now we're riding that high.

Speaker 1:

Yeah, what is something that's been particularly hard for you?

Speaker 2:

I guess one of the hardest things for us, and this is going to sound really, really stupid, was actually figuring out what we wanted to do. So many times we thought we had a really good idea. We built the first version of a product. We were super excited about it. People told us, yeah, we definitely want to use this.

Speaker 2:

And then you try to sell it and nobody wants to buy it, and you put it out there for free and still nobody uses it, and that's kind of devastating. It's devastating if you do it three, four, five times in a row, and at some point you're just kind of over it: you know what, maybe people just don't get me or I don't get people, but whatever we're doing is not working. So it's been all the more rewarding when you actually find something that people want. This initial process, and I mean it's an ongoing process, but I feel like once you actually feel the traction for the first time, you've kind of proven to yourself that you can do it, and it's easier after that. Even if we still do stuff that fails, that people don't want, that's okay, because we know if we do it enough, we'll get back to what people actually want.

Speaker 1:

What makes that so difficult?

Speaker 2:

I mean, I guess, the fear that you're kind of wasting your life.

Speaker 1:

Yeah. Does that go away, though?

Speaker 2:

No, I still have it.

Speaker 1:

You'll have paying customers, you'll be making 10 million a year, and I feel like you'll still have that.

Speaker 2:

Yeah, yeah, no, it's fair. I don't know. I always picture myself becoming like a startup coach.

Speaker 1:

Startup coach.

Speaker 2:

And that's like really not where I want to be.

Speaker 1:

So this is your failure plan: becoming a startup coach.

Speaker 2:

Yeah, I don't want to be too negative about it, but I do feel like I've met a lot of people that just don't really know what to do anymore, and then they become a coach. I don't want to generalize; I'm sure there are people out there that can add a lot of value, but a lot of the people I've met just don't add a lot of value, and I don't want to become that person: "I've been trying to build a startup for the fourth time now, it never worked out, but here, let me share my advice, there's always some learnings along the way." I don't know.

Speaker 1:

Did you feel that fear of being unsure you were on the right path while you were working as a consultant?

Speaker 2:

I guess not so much, because I always knew that this was kind of a temporary thing.

Speaker 1:

So this feels permanent.

Speaker 2:

This feels very permanent.

Speaker 1:

So it's somehow both permanent and you're not sure that it's the right thing to do.

Speaker 2:

Yeah, no, but it is, right? Because you're like: okay, I'm committing to this now. I just worked for an entire year without a salary, pretty much, and I'm committing to doing that for five more years if it has to happen. And I'm doing this all on a, statistically speaking, one to two percent chance that my company might hit the Series A and I might be able to pay myself a salary. Not even get rich, just make a living. That just sounds like not a wise financial decision.

Speaker 2:

It just seems super risky, a lot can go wrong, but for some reason I still can't not do it, and that sometimes stresses me out a bit. I'm going to be honest, sometimes I wake up and, you know, you have your friends... Actually, one of my initial co-founders left and went to work for McKinsey.

Speaker 1:

How was dealing with that, with losing a co-founder?

Speaker 2:

Yeah, that was a rough time. I mean, he was more than a co-founder, he was a really, really good friend. And I don't want to say "was", he still is.

Speaker 1:

I mean, you leave the startup life, that's it.

Speaker 2:

Yeah, yeah, it's like: yeah, man, sorry, you really messed up. No, I mean, that was hard on all of us. We talked about it a lot. I think for him at the time it was the right decision, and we obviously gave him as much support as we could, and he still supports us as much as he can. So it's great, everybody loves everyone, but it still happened, you know.

Speaker 1:

Yeah, so obviously that was a rough time. Love isn't enough to keep you building a startup together.

Speaker 2:

Yeah, yeah, it's a lot of work. It's a lot of work, it's a lot of friendship.

Speaker 1:

It's a lot of friend-relationship. That's all you need to build a startup.

Speaker 2:

Yeah, yeah, I mean honestly, I think that is an important part of it. Just having a team that has your back, and you have to really like the people around you. I mean, my co-founders and I have been living in the same house for like eight months now. We've usually only had two rooms and one sofa, so yeah, there's no privacy. You work together, and you know how this is, you don't do much else, right?

Speaker 2:

So every day you wake up, you go to the gym with your co-founders, you eat dinner with your co-founders, you go to work and you go to sleep. Yeah, that's intense. You have to have a good team. I think that's the biggest, no, one of the biggest parts.

Speaker 1:

What do you think gave all of you co-founders the conviction that this was real, that you were ready to move to San Francisco? Was it getting that first customer?

Speaker 2:

No, we came here way before our first customer. Yeah, we've known each other for over six years.

Speaker 1:

Yeah, I mean, I've known people for 12 years. I haven't built startups with them.

Speaker 2:

Yeah, that's true. But over those six years we've worked together, we've studied together. We know how we work together, and we trust the work we do with each other a lot. We'd been talking about starting a company for probably four or five years at that point, and then at the end of last year the timing just aligned. For the first time, all of us were in a position where we were like: I could start something new now, and we could just build this. That was the first time we all had that time on our hands, and it just made sense. It was never really a question; we never really decided should we do this or not. When everything lined up, it was kind of obvious: yeah, we're just going to do it.

Speaker 1:

We are going to.

Speaker 2:

We'll figure it out, we'll figure it out along the way, and we're still figuring it out.

Speaker 1:

Had you decided who had what role?

Speaker 2:

It kind of naturally happened.

Speaker 2:

We didn't sit down together on the first day and say these are the roles that we have. But Richard had the most tech experience of any of us. He was in charge of building, or deciding how we build, not necessarily what, but how and where and when we build. So he took on the role of leading the technical part fairly early on. And then Nuno and I went in, I guess, a bit different directions. Nuno completely took on the core of customer exploration and how to do it, and he got really, really good at it. He also has quite a hand for designing stuff.

Speaker 2:

And my background was that I've been a co-founder of a startup before. I've seen a little bit of the startup world, I've gone through the process of fundraising before, and for that reason I guess I had a bit of an idea of what was coming. I think, at least at the beginning, I was able to give a little bit of comfort that we're heading in the right direction, and that put me in a position where I was managing at least parts of the strategy. But we still consider ourselves very equal. We have a very flat hierarchy.

Speaker 2:

We vote on things. We take decisions together.

Speaker 1:

Yeah, does that ever cause problems?

Speaker 2:

I mean, sometimes. But we have three people, so there are no stalemates.

Speaker 2:

Yeah, in that sense it makes sense. But obviously, sometimes you don't agree, and then someone has to take a decision, or you vote, and then not everyone is always happy with that, but you learn to deal with it. At the same time, sometimes you discuss things that don't need to be discussed that much, and other times you feel like you have to move forward and you don't discuss something that you should have discussed, and then that backfires as well, because everybody's like, well, why didn't we discuss this? So finding that balance is really hard sometimes. But I think as a team we all agree that typically, when we work on something together and take decisions as a team, the output is generally better than when one of us does it alone. That's why we do it as much as possible.

Speaker 1:

What's the biggest technical challenge you guys have had working with AI?

Speaker 2:

OK, as I said, we're working on the application layer, so we're using a lot of stuff that's already out there, but because everything's so early, nothing's documented. At the beginning, I mean, now it's a bit different, but at the beginning, even just figuring out how to use things like LlamaIndex or LangChain and actually get them to do what you want was tough. Now the biggest challenge we're facing is probably one a lot of people face: the data pipeline.

Speaker 2:

We deal with a lot of unstructured data. It comes from various sources, often Excel sheets, and every Excel sheet is organized differently. How do you build a system that can ingest all of these different Excel sheets and documents, each with their own separate structure, and put them into a somewhat standardized format?
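As a rough illustration of the normalization problem being described, here is a short pandas sketch that maps differently labeled Excel columns onto one standard schema before stacking the sheets together. The column aliases, target schema, and function names are made up for this example rather than taken from Cyqiq's pipeline.

```python
# Illustrative only: coerce differently structured Excel sheets into one
# standardized table by mapping known column aliases onto a target schema.
import pandas as pd

COLUMN_ALIASES = {
    "year": ["year", "fiscal year", "fy"],
    "segment": ["segment", "business unit", "division"],
    "revenue": ["revenue", "sales", "turnover", "net sales"],
}

def normalize_sheet(path: str) -> pd.DataFrame:
    """Read one Excel file and rename its columns to the standard schema."""
    df = pd.read_excel(path)
    df.columns = [str(col).strip().lower() for col in df.columns]
    renamed = {}
    for target, aliases in COLUMN_ALIASES.items():
        for alias in aliases:
            if alias in df.columns:
                renamed[alias] = target
                break
    df = df.rename(columns=renamed)
    # Keep only the columns we could map; anything else needs manual review.
    return df[[col for col in COLUMN_ALIASES if col in df.columns]]

def build_dataset(paths: list[str]) -> pd.DataFrame:
    """Stack many heterogeneous sheets into one standardized DataFrame."""
    return pd.concat([normalize_sheet(p) for p in paths], ignore_index=True)
```

In practice the hard part is exactly what the guest describes: the alias table never covers every layout, so sheets that fall through still need a human or a model to interpret them.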

Speaker 1:

What pieces of technology or what ideas are you excited about in the future?

Speaker 2:

I mean, generally I'm curious where large language models are going to go. I'm super excited about OpenAI recently releasing the vision side of things. Obviously it will be good at OCR, but at the same time, for us the question is again: can we now look at an Excel sheet and understand its structure? These are the types of things where I have no idea how capable these models are right now. We haven't tested the newest release yet, but I'm really excited to see where that goes, and I think overall I'm just excited about integrations with large language models.
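For the spreadsheet-structure question, one approach is to ask a vision-capable model to describe a sheet's layout before any parsing is attempted. The sketch below shows how that could be wired up with OpenAI's image input; the model name, prompt, and function are assumptions for illustration, not something the guest confirmed using.

```python
# Hypothetical sketch: ask a vision-capable model to describe the layout of a
# spreadsheet screenshot (header row, units, data region) before parsing it.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_sheet_layout(screenshot_path: str) -> str:
    """Return a plain-text description of the sheet's structure."""
    with open(screenshot_path, "rb") as image_file:
        image_b64 = base64.b64encode(image_file.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe where the header row, units, and data region "
                         "are located in this spreadsheet screenshot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```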

Speaker 2:

Obviously, the plugins for ChatGPT were already a game changer. I'm sure you've used things like Code Interpreter, and suddenly you have a chatbot that is not just answering questions, which gets a bit old fairly quickly, but something that can interact with a lot of different technologies across different levels of interaction. That's also ultimately what we're working on. I talked a lot about automating workflows, and being able to build integrations and give a certain autonomy to systems is super interesting, a little bit scary, but super interesting. I'm really excited about all the possibilities that nobody's come up with so far. I'm sure there's going to be some mind-blowing stuff happening in the next five to ten years.

Speaker 1:

In what way is it scary?

Speaker 2:

I think it's scary because the technology is so new. I mean, it's been around for a bit longer than a year, but it's only had public recognition for about a year, which means it's too young to have had proper testing. And I think there are a lot of parts where people don't really know what's going on. There are probably a few select people in the world that really understand the deep inner workings of these models, and everybody else is like: ooh, nice input, it did something creative.

Speaker 2:

And now, if you give that access to things like your bank account and it can start making, I don't know, purchasing decisions, what happens? How does advertising change, for example? Are advertisers going to start advertising towards large language models? Imagine I build a product, and that product uses a large language model. I'm using an agent, and that agent is able to build integrations to different products itself and can decide which product to use. So are you now advertising to the agent or to the end user, if the large language model is taking the decision on which product to use for a certain job? Or, if you go more into the political aspect of things: I think Sam Altman was quoted a couple of months back saying, basically, that he wants to go in a direction where everybody, on their own device, has their own personalized large language model. Now what does that look like? Is this going to be the social media echo chamber on steroids? Do you get your own feed of what you want to hear, and who's controlling that, and to what extent?

Speaker 2:

This is still okay right now, but what if you have slightly skewed worldviews and your personal AI just reinforces those, to the extent that everything becomes super polarized? I think those are questions still to be answered.

Speaker 1:

I'm really intrigued by this idea of advertising to the large language model. The implication there is, of course, that somebody's going to come out and create a marketing agency designed for large language models.

Speaker 2:

I think that's a realistic possibility. It was actually a friend who brought this thought to me, and it was pretty mind-blowing. I was like, yeah, you're right. As a developer, I choose which integration I want to work with for a certain task. If I have multiple options, which database do I choose? There are multiple out there. But now, if I'm giving that freedom to a large language model, what is it going to choose, and how do I advertise to that? It's interesting.

Speaker 1:

On that wonderful note, thank you so much for coming on the Accelerometer.
