The Interconnectedness of Things
Welcome to "The Interconnectedness of Things," the podcast where we explore the seamless integration of technology in our modern world. Hosted by Dr. Andrew Hutson and Emily Nava of QFlow Systems, each episode delves into the dynamic interplay of enterprise solutions, innovative software, and the transformative power of technology in various industries.
With expert insights, real-world case studies, and thoughtful discussions, "The Interconnectedness of Things" offers a comprehensive look at the technological threads that connect and shape our world. Whether you're a tech enthusiast, a business leader, or simply curious about the future of technology, this podcast is your guide to understanding the interconnectedness of it all.
Uncovering AI Pitfalls and Opportunities in Business (Ft. Matt Martinez from DragonOps)
Discover the untold truths and common missteps of AI implementation in business with our special guest, Matt Martinez, a seasoned cloud engineer at QFlow and founder of DragonOps. Alongside Dr. Andrew Hutson, Matt uncovers the frequent pitfalls and misconceptions that businesses face when jumping on the AI bandwagon without clear objectives or quality data. We delve into the role of FOMO and competitive pressures that often drive companies to prematurely adopt AI, leading to suboptimal results and wasted resources.
Join us as we shed light on the real-world use cases of generative AI in software engineering and beyond. Matt shares eye-opening anecdotes about the misconceptions non-technical managers have regarding tools like ChatGPT, emphasizing the irreplaceable value of human expertise. We discuss the practical challenges of using Gen AI for coding, including the necessity for continuous adjustments and the inherent limitations of these models, stressing how human oversight is crucial in ensuring AI-assisted solutions are effective.
Lastly, we navigate the complexities of integrating AI tools with business processes to generate meaningful insights. From the challenges of data preparation to the importance of well-structured and interconnected data, we provide a comprehensive overview of how businesses can effectively harness AI. With a focus on transcribing and summarizing meetings to improve efficiency and communication, we explore the future potential of general artificial intelligence.
To learn more about how DragonOps can help your business, visit https://www.dragonops.io/
About "The Interconnectedness of Things"
Welcome to "The Interconnectedness of Things," where hosts Dr. Andrew Hutson and Emily Nava explore the ever-evolving landscape of technology, innovation, and how these forces shape our world. Each episode dives deep into the critical topics of enterprise solutions, AI, document management, and more, offering insights and practical advice for businesses and tech enthusiasts alike.
Brought to you by QFlow Systems
QFlow helps manage your documents in a secure and organized way. It works with your existing software to make it easy for you to find all your documents in one place. Discover how QFlow can transform your organization at qflow.com
Follow Us!
Andrew Hutson - LinkedIn
Emily Nava - LinkedIn
Intro and Outro music provided by Marser.
Welcome back to The Interconnectedness of Things, a podcast brought to you by QFlow, where we dive deep into the technology shaping our world today. I'm your host, Emily Nava, and whether you're curious about AI, digital transformation, or how these innovations impact industries like government and healthcare, you're in the right place. Today we have a special guest joining us, Matt Martinez, a cloud engineer here at QFlow and the owner of DragonOps, a tech firm with a strong focus on AI and software development. Matt brings a wealth of knowledge in AI implementation and how businesses can, and sometimes shouldn't, leverage AI to get ahead. Matt, it's great to have you on the show.
Matt:Thanks, Emily. I am super pumped to be a part of the show and to discuss AI with you and Dr. Hutson today.
Emily:As are we. Yes, Dr. Andrew Hutson is here as my co-host. He will be interjecting with his own nuggets of knowledge here and there. Want to say hi, Hutson?
Dr. Hutson:Hey, how's it going? I am ready with nuggets.
Emily:All right. So today we're going to explore a topic that's super important for anyone looking to stay competitive in the digital age: how people are misusing AI, or missing out on huge opportunities, when it comes to AI for business use cases. So, the age-old question: how can I use AI efficiently? Let's start by talking about some of the most common ways businesses misuse AI. What's your take on that, Matt?
Matt:Sure. So, you know, AI is all over the place, and with all the hype out there right now, it's really hard to blame anyone for wanting to just dive in without much direction or much of a plan. The problem with that is AI isn't magic. It needs a clearly defined objective, a well-scoped problem, and high-quality data, and it needs to be tied into your core business objectives. If you fail to do any of those things, then you're not necessarily going to get the insight or the results out of it that you want. So it's just very easy to jump into AI without being aware of the right and the wrong way to harness it.
Dr. Hutson:That's a good point, Matt, and it almost feels like we need to step back in time a little bit. You talked about the hype of AI, and I used to work with someone who would call everything a hype train, and apparently it was a good thing, and I would get all upset, like, well, no, that's hyperbolic. Why would you want to be on a hype train?
Emily:Because it's fun.
Dr. Hutson:Because it's fun. AI seems to have caught the imagination of a large majority of people because of the great demo-ability of generative AI, and so, people being able to interact with this, they start to jump to conclusions about what it is actually capable of doing without truly understanding it. And I think, if we take this back in time a little bit, the past 20 years or so, the effort of building data centers and getting data organized and getting to a source of truth, that gold standard, this seems to be the logical progression of things that, technically, were already AI before ChatGPT ever came on the scene. It just has a novel probability matrix that it's using to say, "This is the next word part that should probably come next," but that's it. It's not intelligent, it doesn't think. Based off all the training data that it has, this is probably the next right thing to say. But then, unfortunately, you also get hallucinations.
Emily:Yeah, unreliable answers.
Dr. Hutson:Yeah, and you get degraded performance over time too. So I think the wrong way to use it is to use it as a replacement, or for companies to use it out of context of their work. And so then the question becomes: how do you get the context into the system? And I think that brings up questions like security and privacy, which, Matt, I know you know everything about, security and privacy and locking it down.
Matt:Oh sure, yeah, that's a whole other topic for conversation, and I completely agree with you. Right now a lot of these businesses are prone to just dive in without defining exactly how they need to be using it or training it, and it's just a mess from there. I also love that you said we need to kind of go back in time and hold the brakes on this a little bit. I completely agree with that as well. I like to think that a lot of these businesses jumping into AI without thinking about the right approach is kind of like me buying a new kitchen tool off Amazon every three days. It's all about the hype.
Matt:I'm hyped, I want to make a sous vide this or a slow-roasted that, and three days later I've bought the tool. I don't know how to use it, I didn't prepare for it, I'm not trained in it, so now I just have an expensive tool and no results. And companies are ending up with the same thing. They're overpaying and underplanning and ending up with a disproportionately low amount of results for their effort and their investment.
Dr. Hutson:Matt, why do you think that is? Is it FOMO?
Matt:Well, you know, I bet it's a few different things. FOMO is absolutely a part of it. We are hardcore riding the AI hype wave right now. I bet it's also a fear of your competitors. Everyone is using AI, and I think the implicit understanding there is: if you're not also using AI, and if you're not using it right now, that's a missed opportunity in business. You're not going to streamline your business, you're not going to generate that additional revenue or keep up with your competitors. So you need to jump on it now, whether or not it's the exact right time or the exact right approach, and I think that's the misconception that's biting a lot of people in the butt right now.
Dr. Hutson:Yeah, I think there's a lot of merit there. Sorry, Emily, were you going to say something?
Emily:I was just going to say that companies feel like they need to be using it, and so they just slap it onto a process, or slap it on somebody's work plate and say, "Figure this out, we need to be using this. You figure out how we use it." And that is a whole other issue of making the knowledge worker figure out how to use AI.
Dr. Hutson:It's a catch-22, right? I mean, there's the, well, I want to try to figure out how to use it, but I have to have the use case in order to justify bringing it in. Just this past week at the AI World Conference in San Diego, I got to hear from different folks around governance for AI and how it's seen as a black box. Without naming company names, just talking to general counsels for large companies and CIOs: they're getting confronted to bring in generative AI, specifically ChatGPT, into areas like coding or information governance, but everybody's afraid to do it, because they don't really know how it works. And they're very concerned around, I guess, the assumption that it has sentience and therefore would have some legal ramifications of liability, which I had never really considered before.
Emily:What do you mean by that?
Dr. Hutson:Sentience is, like, it's a thinking, feeling entity, a non-human thing that can think, feel, and do. And that assumption comes from your ability to type something into the chat and ask it if it's alive: "Do you think? Can you do something?" And I think, because it's trained on human language and human interactions, these large language models would be very likely to have a very human-like response, because that's what they were trained on. So it seems self-legislating, self-autonomous, like it is making these decisions without direction from a programmer. What do you guys think?
Emily:I think that calls into question some quality issues that people are running into when they're using these models that have been trained on unreliable sources.
Dr. Hutson:Yeah, go ahead, Matt, no you go ahead.
Matt:Sure, I was just going to say, I couldn't agree with you more, Dr. Hutson. There's this prevailing opinion that these Gen AI machines are self-legislating and even nefariously self-aware, et cetera. As someone who works with this stuff all the time and deploys it on a daily basis, I can say that's very far from the truth right now. It also sounds like we're talking about two different problems, two sides of the same coin: overuse due to hype and underuse due to fear. I see both of those, and you're absolutely right, Dr. Hutson.
Matt:You also mentioned overuse of ChatGPT in software engineering, and that's something I see all the time. I wish I had a nickel for every time I heard a non-technical project manager say something like, "All of you developers need to be using ChatGPT to develop your code." That's a big misconception. It's a result of the hype, and it's not really how it works. When you use Gen AI like that, it has to be tied to objective-based business goals, and it has to be with human intervention. There have to be human checks, and it has to be this pairing. It's not going to be a silver bullet or a catch-all; it's a balancing act.
Dr. Hutson:Yeah. I have my own personal anecdotes about using it for coding. Do you have any yourself?
Matt:Yes. I think using ChatGPT as a source of code in software engineering is kind of like an unpaid intern in a kitchen or something like that. Yes, you need them, you're training them, they're helping you, but they need far less responsibility than I think most people are giving them right now. You know, 100% of the output that's given to you by Gen AI still needs to be pored over by a human. Obviously not at scale, when you're talking about millions or billions of data points, but for things like software engineering, where the problems you're asking Gen AI to solve can be represented by code blocks of one, two, three hundred lines of code? Yeah, you need to be supervising it very carefully, because this code is going to go into production and be used for all kinds of business use cases. What about you?
Dr. Hutson:Yeah. For me specifically, I can see the merits of using something like that as a starting place. If I know nothing and I'm trying to figure out a new concept, whatever that might be, could be coding, could be anything else, it could be a place to help me get started. But here's what I have found.
Dr. Hutson:First of all, when I started using this back before it was ChatGPT, I was using GPT-3 from the command line with Python on my local M1 Mac. The amount of tweaking that had to happen even to get something intelligible back was difficult then, and I would think the height of its capability was when it was first released. Ever since then, I've seen diminishing returns in the quality of the solutions returned. I've gotten more specific with my prompts, more targeted than I was before, and gotten worse results, really having to go through lots of iterations, to the point that I just stopped. I'm like, it's not doing it for me. I need to go someplace else to figure out how this actually works so I can get it done.
Matt:That's very interesting. I'd be very curious to learn more about those use cases. It's faster for me to ask a Gen AI assistant for 500 lines of code and then correct the 50 that are glaringly or obviously wrong than it is to write those 500 lines myself. And fortunately, at least for me, who spends a lot of time writing infrastructure code and backend automation code for the cloud and DevOps, that appears to be the case: Gen AI can usually get around 90% accuracy for what I need, and I end up actually saving a considerable amount of time just correcting the pieces I can pretty quickly tell aren't correct, rather than writing everything from scratch.
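The generate-then-correct loop Matt describes only works with a human-maintained gate in front of production. Here is a minimal sketch of that gate in Python; `accept_generated_code`, the sample `add` function, and its test suite are all invented for illustration, not anyone's actual tooling:

```python
import os
import subprocess
import sys
import tempfile

def accept_generated_code(candidate_source: str, test_source: str) -> bool:
    """Run a human-written test suite against AI-generated code.

    The generated module is accepted only if every assertion passes;
    anything that fails goes back to a human for correction.
    """
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "candidate.py"), "w") as f:
            f.write(candidate_source)
        test_path = os.path.join(tmp, "test_candidate.py")
        with open(test_path, "w") as f:
            f.write(test_source)
        # A non-zero exit code means at least one assertion failed.
        result = subprocess.run([sys.executable, test_path], cwd=tmp,
                                capture_output=True)
        return result.returncode == 0

# A "generated" function with an off-by-one bug, and its corrected form.
buggy = "def add(a, b):\n    return a + b + 1\n"
fixed = "def add(a, b):\n    return a + b\n"
tests = "from candidate import add\nassert add(2, 3) == 5\n"
```

The point of the sketch is Matt's ratio: the model drafts the bulk of the code, but a human still writes the checks and corrects what fails them.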
Dr. Hutson:I think that's the key difference. So, when using these models, if I come at it as an expert already and I need something to enhance or speed up what I already know how to do, then yes, I think it's a novel enhancement to my workflow, all day. To think that it can replace someone that doesn't have those knowledge, skills, and abilities, I think, is not even close to the realm of possibility, and I have speculation for why that is. I think it mostly comes down to its source training set. How often have you and I come across folks that have wacky ideas about coding, logic, and best practices? People will publish anything out on the web without verifying it. How many different articles and forums have you been on, before we had ChatGPT, that would tell you a solution that didn't actually work, and you had to scroll 50 lines down to some comment to even get close to something that could actually operate?
Matt:That is spot on. I couldn't agree more. You and I see this all the time, having worked together for years now. In fact, I actually worked for a client who ran into this specific problem a few years back. They were using machine learning for smart agriculture solutions out in California, for things like yield prediction, water management, stuff like that. They were faced with two paths. Path one was hire a senior engineer to help maintain and write better code. Path two was keep their current junior engineer and implement ChatGPT to write the majority of their code. They chose that second option, and it ended up going very badly for them. To this day, three years later, they're not able to push a ton of new code. They're forced to consistently deal with tech debt, because they pushed code that their junior engineer, just at that point in his career, didn't understand. So I completely agree with you.
Dr. Hutson:Yeah, and I think that little anecdote applies across many other places. So Copilot is like the new thing built into Visual Studio Code that everyone wants to talk about, but then there are other systems that can be trained up, like Mistral. And then there's the classic story of ChatGPT: a bunch of lawyers going and asking it to give evidence for their legal argument, and it made up court cases. So that's a cool way to check your work there, guys.
Matt:Classic lawyer move, that one, and it was absolutely delightful, yep.
Dr. Hutson:And so that's how I think about AI in the workplace and where folks go wrong with it. That was a very long answer to the very first question. Emily, do you feel like we covered it well?
Emily:I think you more than covered it, and I really loved the anecdotes and the personal experiences. I tend to use ChatGPT as kind of like that intern. I like to say they're a paid intern, though, because unpaid internships are just not it. But I totally agree with everything that you both have said. So I want to shift the conversation a little bit. We've talked about how not to use AI in business: we don't want to over-automate, we don't want to rely on it too much, because there's the quality issue. But there are some opportunities that businesses are maybe missing out on. What do you think those areas are?
Matt:Sure. So I think businesses can miss out in one of two main ways. Either, as a whole, an industry hasn't made the advancements it needs, or some companies in an industry are using AI for specific, targeted use cases and some aren't, and the ones that aren't are missing out on those use cases. And it spans industries.
Matt:I mean, you can talk about demand forecasting in a multitude of industries: doing things like analyzing historical data and market trends to reduce overstocking or understocking. You can talk about using demand forecasting to help with scaling and cost optimization in the cloud and in software, or to help with staffing levels in any number of industries. You can talk about education. My wife is a former teacher, and we're still very much figuring out how to improve things like intervention for at-risk students, or curriculum development, or personalized learning with AI. I think the big takeaway here is that in many of these cases it's not so much a missed opportunity as this: we've started to harness the power of AI, but we're not done harnessing it. We're not done revolutionizing how to use these tools in every industry.
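The demand-forecasting idea Matt mentions boils down to predicting the next period from history and comparing stock against that prediction. A deliberately minimal sketch, with invented function names and thresholds; real systems layer seasonality, trend, and external signals on top of a baseline like this:

```python
def moving_average_forecast(history, window=3):
    """Forecast next-period demand as the mean of the last `window` periods.

    A simple baseline forecaster over a list of historical demand figures.
    """
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    recent = history[-window:]
    return sum(recent) / window

def stocking_decision(history, on_hand, window=3):
    """Flag overstocking or understocking relative to the forecast.

    The 1.5x / 0.5x bands are arbitrary illustrative thresholds.
    """
    forecast = moving_average_forecast(history, window)
    if on_hand > 1.5 * forecast:
        return "overstocked"
    if on_hand < 0.5 * forecast:
        return "understocked"
    return "ok"
```

For example, with recent demand of 10, 12, and 14 units, the baseline forecast is 12, so holding 30 units would be flagged as overstocked.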
Dr. Hutson:Yeah, I agree with that bet on "we're not done," and I also wonder if we're making some assumptions that we aren't being explicit about. So one of the things I like to do is level-set on a definition of AI, and I kind of like the one IBM put out recently, maybe it wasn't that recent: AI is a series of steps that would normally require human intelligence or intervention. With a definition like that, we have to be honest with each other that AI is not new. If anybody has a smartphone, welcome, you've been using artificial intelligence. Anybody ever used the GPS and a map? There you go: it tells you where to go, turn left and right, it's helping you out. That's a classic example of something we're already using.
Dr. Hutson:Now, where it gets interesting, and where there's been a huge barrier, are these predictive things that you're talking about, Matt. And that's been difficult because, while we have gotten very good at storing data, we are very bad at relying on it, verifying it, and getting it back in a way that we can make a decision on. It's not unheard of to ask a company how many terabytes of data they have. Twenty years ago, a terabyte probably wasn't in most people's lexicon. Now we're at petabytes and zettabytes. The amount that we can store is enormous, and it's created this new problem: we need to be able to sift through it, and that's becoming harder and harder. When I think about how we're going to get into the next phase of using artificial intelligence tools, I think it is smart to focus on how we are organizing our information to aid a computer in sifting through the copious amounts of data with the same quality we would train a human to do it. It doesn't mean we're creating sentience, I want to be clear on that, but we have to be able to explain our decision-making as humans, organize the information in a way that can replicate those decisions, and then programmatically put that into an algorithm, for example, to have a predictable decision we can rely on.
Dr. Hutson:All those things are really difficult to do, and they have nothing to do with ChatGPT and everything to do with how a company is able to externalize the corporate knowledge that it has, how it's able to organize and store it, and then how it's able to use it, either within its current staff or within an AI model.
Dr. Hutson:And if you can't even get your staff up to speed because everything is tribal or locked in somebody's head or full of rock stars, you're never going to be able to get it from a computer.
Dr. Hutson:In fact, the only people I know that have truly gotten value from data are called quants, quantitative analysts, in the financial sector, working for hedge funds. They're able to do things like: if the temperature in Nebraska next to the oil pipeline drops by three to four degrees, that will slow down production of oil, which will increase the price of oil over a duration of three to six weeks; therefore, I should make this move on oil barrels so I can make X number of dollars. That level of correlation of the most minute data to make a decision, which, by the way, they will never tell anybody how to do, because they are advantaged by being the only ones who know how, because they can move markets and pricing as a result. The rest of us have to realize that that's what it takes to actually get human-level decisions we would rely on. Now, if you want bad human decisions, welcome, we already have those.
Matt:But I don't think that's what we want to use AI to do.
Dr. Hutson:I don't think we're like, "Hey, let me use ChatGPT to make a wrong call." Yeah, can I have some broken code, please?
Emily:But I think a lot of people don't realize that ChatGPT can be wrong. They just think, like you said, it's this all-knowing sentient being that has all the knowledge in the world. But humans have flaws, and AI was created by humans, so it's going to be inherently flawed.
Dr. Hutson:So the only way to make that better is to give your best knowledge to the training and sift out the false positives and the false negatives to the extent possible, so that you can have higher confidence and reliability in whatever outputs those algorithms produce. Not an easy thing.
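Sifting out false positives and false negatives, as Dr. Hutson puts it, is conventionally measured with precision and recall. A small sketch of those two numbers over binary predictions (the function name is invented for illustration): precision drops as false positives creep in, recall drops with false negatives.

```python
def precision_recall(predictions, labels):
    """Compute precision and recall for binary predictions vs. true labels.

    precision = TP / (TP + FP): how many flagged items were actually right.
    recall    = TP / (TP + FN): how many right items were actually flagged.
    """
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

Tracking both numbers, rather than raw accuracy, is what tells you which kind of error your model, or your training data, is actually making.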
Emily:All right. So we've talked a lot about the misuses, the missed opportunities, and how businesses should be relying on AI. I want to talk about integrating AI into your existing workflows or software. What's the best way to go about that? I'll throw that one to you, Matt.
Matt:Sure. So something Dr. Hutson covered a moment ago is that one of the most common pitfalls we're seeing right now is this eagerness to jump into AI without proper preparation around your data. We talked about data siloing, which is absolutely a big thing. Companies will adopt these AI products that rely on multitudes of information, but these companies are large; they have different departments with data sets that have different structures, were built by different teams, and are disparate and don't work well toward a cohesive, single AI-powered goal. We see that all the time.
Matt:Many businesses aren't failing to at least try to integrate their tools and their processes with AI; the failure is often in the integration. So even if the data isn't siloed, you're running into structural issues, which is another thing Dr. Hutson mentioned a moment ago. You have to get all these things right. You have to have unified data with purpose and meaning that's clean, and then you have to tie it to core business objectives. For example, you can use all the expensive AI analytics software tools out there, but if you haven't spent enough time planning how you're going to use these tools to forward business objective X and business objective Y, then your insights won't be as helpful as they could have been, and you're going to end up back at the drawing board. Or worse, you'll have harmful insights that actively lead you in the wrong direction.
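As a toy illustration of why unified, consistently structured data matters: even the simplest programmatic "sifting," an inverted index, only works once the data has been pulled into one shape. Both helpers below are invented for the example; siloed departments with incompatible formats never get as far as the first function.

```python
def build_index(documents):
    """Map each word to the set of document ids containing it.

    `documents` is a dict of {doc_id: text} -- i.e., data that has
    already been unified into a single consistent structure.
    """
    index = {}
    for doc_id, text in documents.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    if not sets:
        return set()
    result = sets[0]
    for s in sets[1:]:
        result = result & s
    return result
```

Once the index exists, a lookup touches only the matching documents instead of scanning the whole store, which is the basic bargain behind organizing information before asking any tool, AI or otherwise, to draw insight from it.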
Emily:And do you think people are expecting the AI to know how to sort through their data sets and clean their data for them? Yeah, Hutson's nodding. So yeah, that can be very dangerous.
Matt:I bet it's a good mix. I bet you'll find plenty of people out there who think that AI has the ability to draw insight from incomplete or bad data sets, when in fact it doesn't, and you'll have people who have no idea what it's capable of and are using it just to use it, and they're going to end up with the same result.
Dr. Hutson:I think there was an old Einstein quote about that, right? Keep doing the same thing over and over again, expecting a different outcome.
Matt:einstein quote about that right. Keep doing the same thing over and over again, expecting a different outcome. There we go. Yeah, einstein loved to wax about chat gpt it was his big thing back it was a big thing for him.
Dr. Hutson:That and quantum mechanics, his two favorites. Oh, I wanted to build on some of the stuff you said there. So let's bring it to a specific use case that I have seen get rapidly adopted and improve the work of everyone at our company, and that's the simple thing of taking the transcript from a recorded meeting: using AI to convert the audio into text, and then the text into a chat, so that you get an initial summary of what happened in that meeting and you're able to interrogate it for more detail. The other neat thing it does: when you arrive at a meeting late, rather than stopping everyone and saying "get me up to speed," you can keep your mouth shut and read the summary of what happened while you weren't there, so that the meeting can continue and you can be informed. These are small, subtle injections of using a large language model to help solve common gripes.
Dr. Hutson:We'll say I'm like what was on that meeting? Who said what? What was I supposed to do All that stuff? I'm like what was on that meeting. Who said what? What was I supposed to do All that stuff? And it's been to such an extent that people can't imagine or it becomes jarring to join a different kind of meeting that doesn't offer that, because they become so accustomed to it in such a short amount of time.
Matt:I couldn't agree more. One of the things I've been focusing on so far is the pitfalls that happen when you don't integrate AI with your existing processes the correct way, but I love what you're talking about, and that's been absolutely wonderful. In fact, for meetings that weren't recorded by this tool, I've seen leadership turn on recording for the last 90 seconds just to be able to dictate what was spoken about, so that they can have the power of this tool at their fingertips. I have to guess that half of the Confluence articles and Jira tickets we've made in the last several months were powered by this tool.
Dr. Hutson:I wouldn't doubt it. I've certainly used it for summaries of reviews; we've made Jira tickets from those meetings, like you said. What that highlights, in this one series of use cases, is that when you have a specific purpose and the tool you are using is targeted at that narrow use case, it can be really effective without anybody getting mucked up in "Am I using AI or not?"
Emily:Because no one says "AI," but that's what they're using, whether they know it or not, and it doesn't really matter. And I think that's the mark of the correct way to use AI: you don't know you're using it.
Dr. Hutson:Correct. And I think, if we take that nugget, I knew I was going to bring nuggets today, if we extrapolate that out to knowledge work in general, then we can start to come up with some principles or guidelines: if we are intending to get value from a large language model, we should first start with simple use cases, and we should reinforce it with our own learning and data. Which creates a new problem. Why would a company share its proprietary data, its knowledge, skills, and abilities that it's externalized into some system, with a third party like Meta or OpenAI or Anthropic? Why would they? I wouldn't.
Emily:They wouldn't, and their lawyers would not want them to do that under any circumstances.
Dr. Hutson:You're exactly right. I mean, I sat through those panel discussions of lawyers saying it's a black box: you can't pass compliance if you're sending your stuff over there, you can't rely on those answers, don't do it. And when you start to hear that, you start to think, okay, now we're just getting into an era of how, not if. How can we effectively do this while upholding compliance standards, while protecting privacy and ensuring security? The only answer that I have for that, and I'm sure there are others, is you have to local-host it. You have to do it yourself.
Dr. Hutson:Now, thankfully, many of these models that are out there to be used are open source, and there isn't a cost associated with using a model that's been pre-trained. The question then becomes: how do I get what Emily knows, what Matt knows, and what Hutson knows into that model in the right way, so that when I need something from it, when I ask it a question or ask it to do an action that normally one of us would do, it does it with enough reliability that we could enhance our work? I always say this: we need another Emily. Well, we can't have another Emily, so we've got to multiply Emily by a billion. How do we do that? And I think the better we get at this combination, how we externalize what we know, combined with the models and tools that have been published, the higher the likelihood that we would get reliable value consistently from these tools. What do you guys think?
Matt:Well, I, for one, couldn't agree more. What you're describing is a very delightful, super magical trend that we're seeing more and more of: companies using self-hosted large language models to get that kind of insight and value out of data they wouldn't otherwise export or share with large companies, like you described. It's an incredible tool and a really cool trend to watch, and it's encouraging. People are often intimidated by what it takes to self-host, but options like the public cloud are becoming, if not always more affordable, at least more scalable and workable, in a way where you can scale up and down and make it fit within your budget. That's actually something we do at DragonOps right now, a lot of LLM-powered data and insight management, and it's great. It's one of my favorite things I see people doing with LLMs right now.
Dr. Hutson:Well, that's awesome. I want to know more about that. You're using LLMs or other AI approaches within your own tool to help people leverage the cloud?
Matt:Oh sure, yeah. And you know, you said it: one of the biggest problems is that people want this level of insight, but they can't, or don't want to, or are afraid to, share their data to get it. So one of the things we do at DragonOps: everything we deploy as a platform services company lives in the customer's own private AWS accounts, and then we can deploy cost-optimized and scalable versions of LLMs with an easy interface, so people can grab the kind of insights you're talking about without having to export anything, do anything hard, or risk their data or their data governance.
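To make the "walled garden" idea concrete, here is a minimal sketch of querying a self-hosted model inside your own network, so proprietary context never leaves it. It assumes an Ollama server on localhost and a model named llama3; those are illustrative choices, not a description of DragonOps's actual deployment.

```python
import json
import urllib.request

def build_private_prompt(question, context_snippets):
    """Bundle proprietary context with the question; nothing here leaves the local network."""
    context = "\n".join(f"- {snippet}" for snippet in context_snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

def ask_local_llm(prompt, host="http://localhost:11434", model="llama3"):
    """Send the prompt to a self-hosted model via Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]
```

Because both the data and the model sit behind your own firewall, the compliance objection about "sending your stuff over there" largely disappears.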
Dr. Hutson:Yeah, that's so perfectly positioned for the moment, because right now so many people want to leverage these things, but when companies get serious about it, they can't really get it past governance or compliance. So making it incredibly easy for them to spin these models up without the worry of anything going outside of their, I guess the right term is their walled garden, it stays in their own stuff, that's kind of the magic sauce your company brings into the world, which is awesome.
Matt:Oh sure, yeah. We're trying to solve two of the main problems around AI right now: people don't know how to use it, and people are afraid to attach their data to it. So we're trying to help with both of those things. I couldn't have said it any better than you did. You're awesome, man.
Emily:Yeah, that's amazing. I had a question about building your own LLM using just your company's data. Can you explain more about what kind of data you would need to build your own LLM? Is it your emails, or is it your notes? Can you tell me more about that?
Dr. Hutson:I would love to. This is something we've given considerable thought to as a company, and, I'll say, as an individual thinking about knowledge work for the past two decades. The number one thing people have to do in order to leverage these models is understand the relationship and context of their information. The challenge is how to do that at scale, because there are really four growing domains of information we have to sift through as humans. The first is very, very personal. You mentioned email and notes. You and I like Apple products, so let's take Apple Notes as the place where I put all of my notes. Matt, I know, is a big fan of Logseq, another great place for notes. And all three of us use email; we'll just talk personal, not professional, yet. Just taking that corpus of data and getting it organized is monumental. The habits and discipline that have to go along with getting those few things organized, for an individual, is a lot.
Dr. Hutson:Now, if we grow out to the next tier, that's the groups we are part of. Could be our family, a church community, a sport we play, a board gaming convention, Anthrocon in Philadelphia, who knows? These are the groups we interact with, where there's information we need to collect, sift through, and connect with the personal information we're working with every day. If we go further out still, beyond those groups, there are the organizations to which we belong: the companies we work for, or even location-based ones like our city, our state, our country. Things happening at an org level that are not incredibly intimate but still affect us in some way. And then the final area is the world. Apple did a really good job simplifying this down to two groups; I use four because I think it's better than two. The two groups Apple talks about are yours and the world's, so your information is held private.
Dr. Hutson:To use Apple Intelligence, because apparently AI now means Apple Intelligence if you work at Apple, and anything that's not Apple, which would be the world, requires your consent and approval before your information is sent out, and only when you choose.
Dr. Hutson:It isn't done automatically, but that's still sending your data out to an LLM hosted by another company to get something back, rather than keeping it within your walled garden. As we think about the future of knowledge work, these four domains need to be managed in a scalable way for anyone to actually get value from them, and that's been the really big barrier to meaningful progress on any of this. There are tools that hyper-focus on one of these areas, but not all four, and as a result it's difficult to get the right context, the right diagnosis, the right prescription, the right prediction from any of the AI models, because these things aren't interconnected. And the interconnectedness of things grows in importance, and yes, I am punny all day long, because I'm talking not only about this podcast but about the actual interconnectedness of things.
Emily:Not just the internet of things.
Dr. Hutson:That's right.
Emily:The interconnectedness of things.
Dr. Hutson:All of this has to come together in a way that is unobtrusive, that doesn't overwhelm, that becomes second nature, and once we're able to achieve that, that's when we'll really get some value from machine learning, predictive analytics, prescriptive analytics and even narrow artificial intelligence. Or, if people are correct in their assumptions, we're going to achieve general artificial intelligence in the next decade, which, based on what I'm seeing so far, I think is an optimistic target.
Emily:Matt, do you have something to add?
Matt:Oh sure, I mean, a couple of things. One, totally with you about general artificial intelligence. Anyone who's afraid of that becoming a thing in the next decade, go ask ChatGPT how many R's are in the word strawberry. Two, I'm with Dr. Hutson about making sure that if you want to get the most possible value out of LLMs right now, your data is accessible, strategically well thought out, and easily consumable and readable. The data folks at QFlow are amazing at putting so much forethought into the structure and the relationships of their data, such that if anyone's going to get a huge amount of success out of LLM usage, it's going to be the folks at QFlow.
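Matt's strawberry test highlights where a deterministic program beats a language model: counting letters is exact string work, not statistical inference. A one-liner settles it:

```python
# Counting characters is deterministic string work; no model required.
count = "strawberry".count("r")
print(count)  # → 3
```

Chat models can stumble here because they see tokens rather than individual characters, which is part of why the test is a fair gut check on near-term AGI claims.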
Dr. Hutson:Oh, that's sweet of you to say. That's definitely a big thing we're trying to help everyone solve, and as we think about a way forward and start helping others frame the problem, giving them that path and that solution, we hope to make it better. But that's probably another episode, if we want to jump into retrieval-augmented generation and knowledge graphs.
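As a teaser for that possible future episode, here is a toy sketch of the retrieval half of retrieval-augmented generation: score documents by word overlap with the query, then stitch the best matches into a prompt. Real systems use embeddings and a vector store; the documents below are invented examples, not QFlow data.

```python
def retrieve(query, documents, k=2):
    """Rank documents by how many words they share with the query (toy scoring)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, documents):
    """Assemble the retrieved context plus the question into a single prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are approved by the finance team.",
    "Weekly meetings are transcribed and summarized.",
    "The office plant needs watering on Fridays.",
]
print(retrieve("how are meetings transcribed", docs)[0])
```

Grounding the model's answer in retrieved context like this is exactly what makes well-structured, interconnected data so valuable.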
Emily:Listeners out there, if you would like to hear more about that topic, let us know. We'd love to hear some feedback from any of our listeners, but I feel like that's a perfect place to end today's episode. So I wanted to thank you, Matt, so much for being on the show today, and to give you a formal place to talk about your work at DragonOps and the amazing company you've built.
Matt:Sure, yeah. Well, first of all, you're very welcome, and thank you as well, Emily, and you, Dr. Hutson. It's been an absolute blast to talk about this stuff with you and to go on this journey with you. So, DragonOps: we're aware of these problems we've been talking about, where people want to jump on the AI bandwagon but don't know how, or are afraid of misusing it. We're there to find that sweet spot, to give you that guidance, and to help you use these tools the right way so you can get all the benefits other companies in your industry are gaining, without all of the trouble and hassle that comes with it.
Emily:Totally, and where can people find you?
Matt:Yeah, so we're right at dragonops.io. Hit us up and we'll get back to you.
Emily:Well, there you have it. Thank you all for joining me on this podcast. Thank you, Matt. Thank you, Dr. Hutson. Be sure to tune in to our next episode, where we'll dive into more cutting-edge topics, and if you enjoyed today's discussion, don't forget to subscribe to our podcast and share it with your network. Thanks for listening.