Customerland
Customerland is a podcast about …. Customers. How to get more of them. How to keep them. What makes them tick. We talk to the experts, the technologies and occasionally, actual people – you know, customers – to find out what they're all about. So if you're a CX pro, a loyalty marketer, a brand owner, an agency planner … if you're a CRM & personalization geek, if you're a customer service / CSAT / NPS nerd – you finally have a home.
Empowering Non-Tech Users with AI Solutions
In this episode, we talk with Mark Ogne, CEO of Symplexity AI, about a practical approach to integrating AI into business operations. Mark, a seasoned tech and marketing executive, shares his journey into AI and explains how Symplexity AI goes beyond consumer-facing tools like ChatGPT to help businesses manage large volumes of unstructured data effectively.
We discuss common challenges with AI-generated content, such as achieving emotional depth and relevancy. Mark explains Symplexity’s method for training AI on detailed client-specific data to produce more sophisticated, contextually relevant content—avoiding complex prompt engineering and enhancing personalized creativity.
Symplexity AI's tools are designed for ease of use, making them accessible even to those without a technical background. Mark highlights how the platform supports SMBs by automating routine tasks, allowing employees to focus on strategic goals. Join us for insights and strategies on using AI to improve content creation, sales, and customer experience.
Using my platform. The key difference is that our clients are operating directly on top of OpenAI rather than through a chatbot, which is the consumer interface, and so we allow unbridled access to the full robustness of AI.
Speaker 2:Today on Customerland, Mark Ogne, who is CEO of Symplexity AI, and I'll let him do a better introduction than I'm sure I ever could on what Symplexity is. But Mark and I had a conversation several weeks ago where I got to find out really what he's doing and what his plans are for this really neat platform. So I thought this is an ideal place to kind of expand on that, explore it and bring it to this little corner of the world. So, Mark, thanks for joining me. I really appreciate it.
Speaker 1:Right on, Mike. Thank you very much. I'm looking forward to getting to know your following better, and hopefully this is a cause for helping to engage with some of them.
Speaker 2:Well, yeah, I think so. It's a fascinating idea you've got. And, well, I won't editorialize just yet, except to say that after our call a couple of weeks ago, I kind of had to sit back and go, huh, he's right, I never thought about it that way. And the use cases are many and big. So, with that, what the heck is Symplexity AI? What is it?
Speaker 1:So it is what I would call an expert-trained custom language model. Taking a step further back as to how I got here, from a selling person to an operating executive to marketing, I've been around this block three or four times now, and I think I've always found myself to be an innovator. I've won awards and things like that around website personalization in the 2000s, employee advocacy and social media marketing in the 2010s, account-based marketing in the late 2010s, and I've been CMO multiple times in early and later stage companies. And the really good part about leaving a role is when you have something really cool to go play with, and so this was the impetus for me. I had been dabbling with the different language models and things like that, and all of a sudden I had the time to really dig down deep. And what I found is not many people really understand some basic, fundamental things, like ChatGPT is not the same thing as OpenAI. When I say that to people, some will contradict me and say, well, no, it's the same company. It's that ChatGPT, and what we look at for Claude and all the other tools, they're a consumer interface into an enterprise product.
Speaker 1:I went through my Gladwellian 10,000-hour tour here. When I mentioned I'm an innovator, part of that is just probably that I'm pig-headed and I'll just keep shooting at something until I finally figure it out. That's probably actually a trait that I have; I'm just tenacious that way. And I had a consulting project where we were going to work on messaging and website and things like that, and I thought, wow, this would be a great time to start using a code interpreter, or the ability to upload files; I use ChatGPT a lot.
Speaker 1:And so I thought, fantastic, I'll look up all these podcasts and, you know, whatever webinars and things like that, find the transcripts, load them into the AI solution and have it interpret them for me. And two months later, I still hadn't figured it out fully, because ChatGPT or any of the other tools will only allow you to load so much. Well, they'll physically allow you to load it. I've loaded 50-page documents before, and I'm here to tell you that a one-hour transcript is 30 to 40 pages of unstructured text, and it'll read the first three to four pages. You think it read the whole thing, and it didn't. So I've got my own experience with that.
Speaker 2:That's a really interesting one. I don't think anybody really understands that, no, and it won't tell you it, won't you know?
Speaker 1:In fact, I would try to trick it and say, like, tell me what the very last word in this transcript is, and it would say, okay, wait, let me go think.
Speaker 1:And then it's going to run something different, which is, okay, I'm just going to go to the very end, and it'll say, like, you know, done, or whatever, and it's like, okay, but I could not get it to read the whole thing. And so I started looking at OpenAI developer forums. I'm a technical business person, and technical people laugh at me when I say that, but I actually connected some Python from Git to OpenAI APIs in Slack last fall, about a year ago now, and had this epiphany: wait a minute, I don't need to use ChatGPT, meaning I can go directly to the AI. And the reason I thought that was something interesting is because in the developer forums I'd hear people talk about, you know, pushing megabits per second into OpenAI, and I thought, oh, that's got to be horrible. I can't even put, you know, 6,000 characters in a prompt without having it freak out. They've got to have hallucinations that are just ridiculous. And they don't. They don't. What's the difference?
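For readers who want to see what "going directly to the AI" looks like in practice, here is a minimal sketch, assuming the official `openai` Python package with an API key in the environment. The model name and file path are placeholders, and this is an illustration, not Mark's actual code.

```python
# Minimal sketch: send a full transcript straight to the OpenAI API
# instead of pasting it into the ChatGPT interface.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the whole 30-40 page transcript as plain text (path is a placeholder).
with open("transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any large-context model
    messages=[
        {"role": "system", "content": "You are an analyst summarizing interview transcripts."},
        {"role": "user", "content": f"Read the entire transcript below and summarize the key themes:\n\n{transcript}"},
    ],
)

print(response.choices[0].message.content)
```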
Speaker 2:What are they doing that the rest of us, regular people, don't know about?
Speaker 1:So that is what drove me into that technical challenge of, can I get away from the GPT, the chat part, I mean. So when you pay by the month, you get throttled, you get constrained, because AI is... it's really super expensive. I mean, one time I read about Meta and how they were going to get 600,000 processors, and somebody said, yeah, but they're a sustainable, renewable energy company. I said, okay, well, what does that really mean? I calculated it, and it was like 800 acres of solar panels, 120 major windmills and enough battery volume to fill up the Dallas Cowboys stadium, which holds over 100,000 people, and still have 20 million cubic feet of battery in the parking lot. These things consume so much power, and just the hardware itself is incredibly expensive. So, $20 a month.
Speaker 1:What they give you, think of it as a person's brain space. It's called a context window, but what I call brain space. They'll give you 8 to 16, maybe 32K of brain space, and then when you fill that up, it tries to get rid of the excess. And that's where hallucination tends to occur, because in the pay-by-the-month model they have to try to make money out of this, with so much money going into power and equipment and things like that. So they use, one, a smaller brain, and two, the way that they get rid of the excess thought is first in, first out. It's cheaper to do it that way. So if you said the world's best thing in the very first prompt that you entered into the system, it'll be the very first thing that it gets rid of when you hit that brain space, hit that context window.
Speaker 1:The other part that's very human-like is how much can I speak out and how much can I hear back. So I mentioned before, like, hey, it'll take a 40- or 50-page document of a transcript, but it'll read the first four or five pages. Why is it doing that? Because, one, to absorb all of that content takes a larger context window than it's able to use. But two, it's expensive. It's expensive to have it ingest all of that information. And then it'll give you back, I don't know, 700 words at most; you try to go to 2,000 and it just starts speaking gibberish. So it's the brain space and the communication back and forth, the context window and then the thread volume, I guess you could call it. Now, operating directly into OpenAI, or into a large language model: first, what I see right now as the biggest context window, let's even just say ChatGPT in a paid account, is 32K; in OpenAI it's 128K.
Speaker 2:So it's a lot bigger. Now, just for additional context, you know, the 32K versus 128K, is that what we're inputting, or is that the literal brain space?
Speaker 1:That's the literal brain space.
Speaker 1:Yeah, I think the metaphor is good, but they call it a context window. It's how much can I keep in my head at one time? One, it's a lot more. But, even more importantly, the way it gets rid of excess context in the OpenAI direct relationship is using an algorithm of recency and frequency. So the stuff that I'm not talking about is the first stuff it gets rid of. So you get far more out of it.
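To make the two trimming behaviors Mark contrasts concrete, here is a toy sketch in Python. This is not how OpenAI actually manages context internally (that isn't public); it only illustrates the difference between dropping the oldest turns first and dropping the least recently referenced ones.

```python
# Toy illustration of the two context-trimming strategies discussed above.
# Not OpenAI's real implementation; just the difference in behavior.

def trim_fifo(turns, max_tokens):
    """Drop the oldest turns first until the budget fits (the pay-by-the-month behavior described)."""
    kept = list(turns)
    while sum(t["tokens"] for t in kept) > max_tokens:
        kept.pop(0)  # the very first thing you said is the first thing to go
    return kept

def trim_by_recency(turns, max_tokens):
    """Drop the turns least recently referenced first (the recency/frequency idea)."""
    kept = sorted(turns, key=lambda t: t["last_used"])  # least recently used first
    while sum(t["tokens"] for t in kept) > max_tokens:
        kept.pop(0)  # discard what the conversation hasn't touched lately
    return sorted(kept, key=lambda t: t["order"])  # restore original order

history = [
    {"order": 0, "tokens": 500, "last_used": 9, "text": "core positioning brief"},
    {"order": 1, "tokens": 400, "last_used": 1, "text": "small talk"},
    {"order": 2, "tokens": 600, "last_used": 8, "text": "persona details"},
]

print([t["text"] for t in trim_fifo(history, max_tokens=1100)])        # loses the positioning brief
print([t["text"] for t in trim_by_recency(history, max_tokens=1100)])  # loses the small talk
```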
Speaker 1:Is it possible to hallucinate in OpenAI? For sure, in any system like that you can. Is the likelihood there? Not for the type of stuff that we do.
Speaker 1:Like, you know, I figured out a way to scrape an entire website five levels deep with one prompt. It came back with 650 pages of text, every bit of text on that website, about 1.1 megs. Loaded it in, boom, it was just great. It handled it like a champ. There was no hallucination, there was no forgetfulness, that type of stuff. Why? Because I pay for it to do that. I'm paying by the compute rather than by the month, the monthly fraction. So in that world, whatever I want to give it, it'll process it, it'll process the heck out of it. But I pay for it. And so I say "I" in the royal sense, meaning using my platform. The key difference is that our clients are operating directly on top of OpenAI rather than through a chatbot, which is the consumer interface, and so we allow unbridled access to the full robustness of AI.
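Mark's platform does this through a single prompt; as a rough idea of what a depth-limited scrape involves under the hood, here is a small breadth-first crawler sketch. The URL and libraries (`requests`, `beautifulsoup4`) are assumptions for illustration, and a real crawler would also respect robots.txt and rate limits.

```python
# Rough sketch of a depth-limited crawl that gathers every bit of on-site text.
# Assumes: `pip install requests beautifulsoup4`. The start URL is a placeholder.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_depth=5):
    domain = urlparse(start_url).netloc
    seen, queue, pages = set(), [(start_url, 0)], {}
    while queue:
        url, depth = queue.pop(0)
        if url in seen or depth > max_depth:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        pages[url] = soup.get_text(separator=" ", strip=True)  # the page's raw text
        for link in soup.find_all("a", href=True):
            target = urljoin(url, link["href"])
            if urlparse(target).netloc == domain:  # stay on the same site
                queue.append((target, depth + 1))
    return pages

site_text = crawl("https://example.com")  # placeholder URL
print(f"{len(site_text)} pages, {sum(len(t) for t in site_text.values())} characters of text")
```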
Speaker 2:So, on the one hand, that would indicate there's more potential there. You can do a lot more stuff, a lot more complex and bigger stuff, is what I'm hearing you say. And, I think I'm hearing you say this, you can correct me if I'm wrong, you can prompt it with work to do that requires a lot more than just a few-sentence prompt for a simple task.
Speaker 1:So it's fun. I'll grab a couple of things you said there. One, I reject the concept of prompt engineering. Prompt engineering is a necessary thing within the consumer chatbot because you're trying to get it to do things that it's not inherently wanting to do. So when you first log into ChatGPT, what are the prompts it suggests? What outfit should I wear to a party? You know, how do I text my neighbors to come over for a barbecue on the weekend? Now, that's not helpful for you.
Speaker 1:In writing for your subscribers, in fact, the stuff that you have to write about is so multidimensional that it just consumes a lot more context window. You have to look at who's the audience, the persona, the type of company, the situation, which country are they from? There are so many different dimensions. And then there's the impact of externalities, like the economy or whatever. As a chatbot, it's not good at doing that, because you can't give it enough context about what you're trying to write about. Now, the LLM has hundreds of billions of, you know, data points of stuff; it could tell you about quantum physics and how to raise a crop in the desert, but it's really awful at understanding the depth and the needs of your audience versus another publication and their audience, and what your strategy is. It's awful. It doesn't decompose those ideas well at all, and so what we found... sorry, go ahead.
Speaker 2:Yeah, no, I'm thinking of specific examples, or occasions from my world, where I've run into those very hurdles and tried to figure out, how can I end-run this thing? How can I duct-tape a solution together? How can I get ChatGPT to give me something that's useful here? And I think what you're telling me is that, under the current format of how I'm accessing it, it just can't do what I'm asking it to do.
Speaker 2:A groundswell starts small, quietly, building into something powerful, unstoppable. That's also how market momentum works. If you're launching a startup, introducing a new product, rebranding or rolling out a new service or initiative, it's not enough to simply show up. You need to build momentum strategically. Specializing in go-to-market strategy, Groundswell works with organizations at every stage, creating custom plans that help products and brands break through the noise and grow, ensuring a launch that doesn't just happen, it sticks. From understanding your target audience to perfecting your positioning, Groundswell's approach ensures you're not just catching the market's attention, but keeping it. With expert guidance, your product moves from launch day to long-term success, turning that initial wave of excitement into sustained growth. If you're ready to take your product, service or brand to market, it might be time to think about a groundswell strategy. Visit groundswellcc to learn more.
Speaker 2:A couple of years ago, we produced a report where we researched the Fortune 100, went into their 10-Q and 10-K filings to look for language that would indicate that they're serious about CX and customer centricity. Not a very scientific approach, but we were looking for the language: if they're real about it, our theory was that it would show up in some of their investor and board materials. It was a lengthy process, but it was ultimately very revealing. So, I'm a fairly new ChatGPT user. This was maybe, I don't know, four or five months ago, six months ago, something like that, and I thought, how clever am I? I'm going to ask ChatGPT to do the very same thing. And I laid it all out there in some very clear and concise instructions, and it came back and said, to the effect of, okay, here's how you do it. And I came back and said, no, I want you to do it. Oh, sorry, let me get to it.
Speaker 2:So over the next, it was probably four or five days, I went back and forth with ChatGPT saying, no, I want you to do it. It kept trying to shirk it off on me, and I kept saying, no, this is your job. At one point it said, okay, I understand, I'm going to go do the work, and I think it was two or three days later it came back and said, okay, I've got your results here, and it gave me about two paragraphs of completely wasted time. It just was not up to doing this kind of work because of the size and complexity of it, and I've since tried to duct-tape and band-aid and stretch and twist prompts into it to do other kinds of complex tasks, but have found no real solution at all.
Speaker 1:So the nature of the challenge isn't artificial intelligence, it's the manner in which you're connecting to it and trying to use it. And it's, to be really candid, because you're paying by the month. I mean, I have days where, just me working in one of my instances, I could spend twenty dollars a day in compute. I'm pushing big files around, asking questions, but I can get back, you know, 2,000-word chunks of content that are so robust, so...
Speaker 1:Human quality. You know, it blows people away. I mean, frankly, it really blows them away. And I'm looking at it for content these days for a couple of different reasons. One, I think that's what a lot of people are thinking of it for.
Speaker 1:It's a good use case: creation of content or strategy ideas, things like that. Two, it has a low perceived risk. I'm not putting up your company's financial statements; lower perceived risk. And three, the upside is so huge.
Speaker 1:And why is that? It's because writing good content is hard. The average human can put two to three ideas together simultaneously to analyze something and write. But oftentimes the stuff that you need to write about has eight or 10 or 15 different dimensions, and unless you have, what do they call it, the unconscious competence on a topic, you're going to fail at it.
Speaker 1:And so the vast majority, and I say this as a marketer, as a person who's, you know, run content teams and writes a lot of content, most marketing content is, you know, anywhere from awful to not great, and that's being kind. I say that in a self-deprecating manner. I mean, it's hard to do, and, you know, people talk about highly personalized content as being elusive. And it's exactly this point: to get highly personalized, you have to look outside in and understand the nature of that persona and their situation and their needs, their challenges and objectives, and then how what you do provides value, and in what areas. It's so complex for the human mind to do. But AI, it's not smart, I don't think, but I think it's just built to look at things multidimensionally, and it does that really well. And so in this use case of content and strategy, it's mind-blowing what it can do.
Speaker 2:Yeah, I would love to just see what that looks like in the real world. I play with ChatGPT quite frequently, and I think it's really good at outlines. It's really good at putting together an outline which gives me a framework to write from. It is terrible, and I mean terrible, at writing with any kind of a human feel; you can see right through this stuff. It's got no heart or mind behind it. It's got a lot of data, but, you know, that's not a human there. And I've had a lot of fun, frustrating fun, trying to get it to do otherwise, but it's just not going to, you know.
Speaker 1:So, a couple of different things. You know, the amount of instructions and training data and prompt that you can push into that system, it's tiny compared to what you need to get it to write well. Now, in writing well, there are several different things. It knows so much about everything, but it understands the world in a very vanilla way. It doesn't know how to really decompose it. And, you know, we talked before about your audience and their need, the persona, whatever; it doesn't really do that well. And so what we do is we train a corpus of data that it knows how to digest well, that's specific to a client and their particular audiences and needs and capabilities and solutions and all this type of stuff. We take the work on up front so a client can just chat with it, just talk. I mean, no prompt engineering at all. We also do a lot of work around deciphering writing style, voice, things like that, and negatives, a lot of words to avoid. You know, we define audiences and personas at a really good level. It can digest that really well, but you can't do it with a couple of pages and a few thousand characters of a prompt. And so what we get, and I showed you a couple of weeks ago, is you start to have a conversation with it and you say, oh hey, tell me about, you know, this really minute point here. Wow. Let's, you know, outline a 2,500-word article on this topic, and it gives you a very robust, super detailed, very granular and on-message response back. Now, context is something that I talk about a lot, and not just the context window.
Speaker 1:But this is that idea of, when we decompose a client's information and put it up there, we're giving it context to the situation: what you as a writer exactly need, who you're talking to, why, what. We connect all that stuff together for the LLM so it doesn't have to think about it. Now, functionally, what it's doing is the data, the query and instructions go up, it shops for information, pushes it into OpenAI, and we allow OpenAI to look outside of our information for about 20% of its aperture. So we're not relying upon it to know about your readers and their needs on a particular topic; we tell it all about that. So it's coming at it with your point of view already, which allows us to get around what you were just talking about.
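Symplexity hasn't published how this works internally, so treat the following as a rough guess at the general pattern Mark describes: search the client's corpus first, pack the most relevant chunks into the request, and instruct the model to lean mostly on that material and only partly on its general knowledge. The corpus contents, the crude keyword scoring, the 80/20 split and the model name are all illustrative assumptions.

```python
# Illustrative sketch of "shop for information, push it into OpenAI" with the
# client corpus supplying most of the model's aperture. Not Symplexity's implementation.
from openai import OpenAI

client = OpenAI()

# A pre-digested client corpus: audience definitions, personas, solutions, voice notes, etc.
corpus = {
    "persona: loyalty marketer": "Cares about retention economics, program ROI, member lifetime value.",
    "solution overview": "The platform automates routine content and analysis tasks for SMB teams.",
    "brand voice": "Plain-spoken, practical, no hype.",
}

def shop_for_information(query, corpus, top_k=3):
    """Crude keyword-overlap retrieval; a real system would likely use embeddings."""
    words = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [f"{name}:\n{text}" for name, text in scored[:top_k]]

def answer(query):
    context = "\n\n".join(shop_for_information(query, corpus))
    instructions = (
        "Ground roughly 80% of your answer in the client context below; "
        "use general knowledge only to fill the remaining gaps.\n\n" + context
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "system", "content": instructions},
                  {"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

print(answer("Outline a 2,500-word article on loyalty program ROI for our readers."))
```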
Speaker 2:So talk to us a little bit about the specific use cases. You know, I know in prior conversations you've kind of targeted sales and marketing operations as key users. But how might one of those groups use the platform, and what do you anticipate? What are they seeing as a result?
Speaker 1:So I have several different use cases right now. In fact, just this morning I signed an agreement with an SEO company. The person I'm working with there is frazzled. I mean, they have the blessing of a lot of work to do, and it's just really hard and time consuming to come up with the creative brief, to write a page, to do this. And we started doing some demo tests, and it's so easy now, because we're training it on the corpus of data for their client. It's not trying to guess; we're not trying to tell it how to think. We've already told it up front.
Speaker 1:And so content creation, short and long form, video briefs for short videos, you know, scene selection and scripts and things like that. Translation to five or six, you know, common languages. Fantastic at it. Now, the more specific you get with it, the more differentiated it becomes. So if you're talking to it and saying, hey, you know, this persona, that industry, this need set, this solution, this challenge, whatever this environmental variable, the more specific you get, the more it stands out, because the human mind can't do that.
Speaker 2:Yeah, I'll just say, with ChatGPT it's just the opposite result, right? The higher the specificity of the input, the more generic the output.
Speaker 1:Right, right, I agree. That's my 10,000 hours of, you know, Gladwell experience there. Yeah, been there, done that. The other thing that's interesting is sales strategy, or sales enablement. So we instruct an instance differently; we instruct it for the salesperson. We have a couple of formats. They could have a conversation and get their answer, or they can just select buttons.
Speaker 1:It's from this industry, that firm size, this role. Oh, it's a different role? I'm going to type the role in. You know, their challenge is this, my objective in this meeting is that, and it'll come up with: hey, here's the top three topics this person is likely interested in, here's prompting questions to see if they are interested in them, and if they are, here's some discovery questions. It guides them through a conversation of what this person might be most likely interested in. And my experience with this is, in any organization, the top 10% of sellers understand about 50 to 70% of what the company can do. It kind of drops to the X axis pretty quickly, and as you get further out on the tail, the more people either have one or two things that they know how to talk about, and then the broken clock is right.
Speaker 1:Twice a day, that occurs: they get a sale. Or, frankly, I've seen people just make stuff up, and then it's like, okay, can't we do that? I thought we could. Oh, we've got to figure out how to do it. Whereas if you use this, you could have anybody on the team become, you know, 70 to 90 percent knowledgeable, and not that they'd have to go into months of learning; they could just describe their situation in a very detailed manner and get a great answer back.
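The "select buttons" flow Mark sketches is essentially a structured form that gets turned into a detailed request behind the scenes. A hypothetical sketch of that translation step follows; the field names, wording and model name are made up for illustration, and in Mark's platform the request would sit on top of the trained client corpus.

```python
# Hypothetical sketch of the "select buttons" sales-enablement flow:
# a structured form becomes a detailed request behind the scenes.
from openai import OpenAI

client = OpenAI()

meeting = {                                   # what the salesperson clicks or types in
    "industry": "regional healthcare",
    "firm_size": "500-2,000 employees",
    "role": "VP of Patient Experience",       # free-typed when not in the button list
    "challenge": "low post-visit survey response rates",
    "objective": "secure a discovery workshop",
}

prompt = (
    "For a meeting with a {role} at a {industry} company ({firm_size}) "
    "whose main challenge is {challenge}, and where my objective is to {objective}, give me:\n"
    "1. The top three topics this person is likely interested in.\n"
    "2. Prompting questions to confirm interest in each.\n"
    "3. Discovery questions to go deeper if they are interested."
).format(**meeting)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; in practice this would run against the trained client corpus
    messages=[{"role": "system", "content": "You are a sales-enablement assistant for this company."},
              {"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```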
Speaker 2:So the quality of the output is based on Symplexity AI's ability to ingest specific language so that it builds context for the company, the use case, the audience, the whatever, all of that. To me, being a non-technical person, that sounds like a heck of a lot of work and time involved to create that learning. In reality, what does it look like? You know, if someone who's listening to this is thinking this could be a lifesaver, which it sure sounds like it could be, what's involved in integrating it into their workflow? I mean, of course, costs aside, but, you know, how long would it take a mid-market marketing company to be actively using Symplexity?
Speaker 1:I'm one of the founders, and a lot of this stuff, I actually did the beating-my-head-on-a-wall-until-we-figured-it-out type of thing. What you're talking about here, it's an interesting challenge, because it's not a technical challenge and it's not a business challenge. It's in the middle. And still today...
Speaker 1:I don't see any documentation of how to organize training data that performs nearly as well as what we've been able to get, at all. So, to the nature of your question, our goal is that you come to me with a need. We have a series of questionnaires to go through, then we ask for data, we sign an NDA and all that type of stuff, and we do the work for you. Most people that we talk to have some level of anxiety over AI, or a lack of knowledge, and it just seems like, holy cow, this is ominous, I don't know what to do. So we just say, look, we'll run a demo. Okay, cool, that works great, here's how you use it, and we do the onboarding. It's typically a couple of days of work on our part. But you just described multiple situations where you spent days fruitlessly.
Speaker 2:Oh yeah.
Speaker 1:So I think one of the things that's under-calculated is the opportunity cost of your time working with tools that aren't going to get you where you need to go. You can't out-prompt them, and prompt engineering shouldn't be a thing. The thing is context, and that's that training information. And then there's another layer that sits just in front of that, where we program it to understand what you're trying to accomplish, who you're trying to talk to, what output formats you're looking for, what your writing style and brand voice are, and all of this stuff goes adjacent to that. So you, just as a human, approach it and start, I like to say, talking to it or having a conversation; you just start typing, like, hey, tell me about, you know, this aspect of that. Oh, okay, cool. Oh, I like that idea here, let's talk more about that, tell me. And it's super easy. That's the whole thing. We try to make it so easy that anybody of any technical capacity could just type some English and get a human-quality, on-brand response back. That's our goal.
Speaker 2:You know, we talked about this a little bit in our earlier conversation, but it brings to mind all of these studies that are coming out lately about how much companies are about to budget for investments into AI.
Speaker 2:And it's this giant, you know, big, big chunks of technology budget, big, big chunks of marketing budget being pointed at AI, big, big chunks of supply chain, anything that requires a lot of computation. But I think one of the things you've brought up in these conversations is the very human hesitancy to actually engage with this. It's one thing to deploy AI as a data analyst, to analyze your stuff, huge quantities of data, and come out with something useful on the other end. But as a non-technical person, as a non-data analyst, there's enormous power in AI if you can figure out what to do with it, and it sounds like your approach here is specifically geared toward getting this tool set in front of the broader audience so that it can be useful. I'm paraphrasing, but that's kind of my big takeaway from our couple of conversations.
Speaker 1:You know, when I was beating my head up on the consumer chatbots, I just finally stopped and asked myself: if this is all that AI is, why is this even making noise right now? It's interesting, but it's only modestly interesting for a business to look at. And that's where I kind of, after many, many, many hours, figured this whole thing out. Now, one of the things that you're heading towards in the conversation is that today most organizations do not have a policy. They do not have, you know, the conversation going on internally about how should we use it, what should we do. I wrote an article on MarTech a month ago where I brought up a process that we use. The first idea is the plan. The second one is the use case, the need. So I think these are iterative cycles where you have to go, okay, use case and need, use case and need. Okay, great, we figured out where we can create value in the organization. Third step, the goal. And the goal has to be effectiveness. Everybody talks about efficiency, but efficiency is a byproduct of what AI can do. If you don't focus on effectiveness, you're just going to scale garbage. Focus on what it is that creates the output you need and solve that, and it will become more efficient on its own.
Speaker 1:The next one is the challenge. The challenge is context, and that's really what I have focused on with our organization: the tools should be built to wrap around you, and not you around them. And today that's the biggest challenge with AI adoption: people think that they have to, you know, go to prompt engineering school and get a degree in something. It's like, no, you're just using the wrong tool; it's not built for you. And then the path is the last one, and there you have native applications like ChatGPT, Claude, things like that. You have embedded solutions you'll find in your major, like, uh...
Speaker 2:Gemini, et cetera.
Speaker 1:Yeah, well, so the first one, native: Gemini and all that, Claude, ChatGPT, right. But embedded solutions would be, like, you know, in your systems of record and systems of measurement around your company, there's AI that starts to natively get involved or embedded into that system. And then the third is purpose-built, and that's what we do with Symplexity. But those are the three kinds of channels that you can start to explore.
Speaker 2:Let's try to make it real. I don't know if this is even practical or feasible, but just having seen it kind of over your shoulder in a prior conversation, I'm fascinated by it. I think, for what we do here in our operation, it could be transformative. I'm going to invite anybody who's listening to this, who's as intrigued as I am, to visit Symplexity AI's site, or maybe we set up some sort of a working demo where anybody who's on our site can access your site, for nothing else than just to see what it is, or something like that, because this is what everyone knows ChatGPT to be, unthrottled.
Speaker 1:Right.
Speaker 2:And that's a big deal. That's a really big deal. With that, Mark, thank you for this. I really appreciate it. I think you're on to something that's certainly a big deal, a huge deal for SMBs, who finally have some way to access this giant tool brain and do something useful with it. But on an enterprise level, I'm just thinking of the dozens and dozens of marketing operations and CX operations that I know of, where they're deploying AI, but it's for giant computational purposes, not using it as powerfully as they could to get some of the day-to-day grunt work done. As you said, it becomes about efficiency, but it allows us worker types to really focus on outcomes and not so much on outputs.
Speaker 1:Right on. Hey, Mike, it's been fantastic. I'm really happy that you invited me here, and it's always great to talk to you.
Speaker 2:Well, thank you, so let's do this again.