The Interconnectedness of Things

You're Doing AI Wrong: Lessons in Data and Decision-Making

QFlow Systems, LLC

Share Your Thoughts With Us!

In this episode of The Interconnectedness of Things, hosts Emily Nava and Dr. Hutson dive into the world of artificial intelligence with a fresh perspective. Dr. Hutson, CEO of QFlow Systems and AI expert, challenges common misconceptions about AI with his presentation "You're Doing AI Wrong." Recorded live after his talk at the AI World Conference, Dr. Hutson explains how AI, large language models, and knowledge management are evolving, while providing practical tips for integrating AI with real-world data. From a brief history of data to building your own knowledge graph, this episode is packed with insights on how to make AI work for you. Whether you're a tech enthusiast or a business leader, you won't want to miss this conversation on how to harness AI's potential the right way.

About "The Interconnectedness of Things"
Welcome to "The Interconnectedness of Things," where hosts Dr. Andrew Hutson and Emily Nava explore the ever-evolving landscape of technology, innovation, and how these forces shape our world. Each episode dives deep into the critical topics of enterprise solutions, AI, document management, and more, offering insights and practical advice for businesses and tech enthusiasts alike.

Brought to you by QFlow Systems
QFlow helps manage your documents in a secure and organized way. It works with your existing software to make it easy for you to find all your documents in one place. Discover how QFlow can transform your organization at qflow.com

Follow Us!
Andrew Hutson - LinkedIn
Emily Nava - LinkedIn

Intro and Outro music provided by Marser.

Dr. Hutson: Okay. Coffee. If you don't drink it, it's okay. Probably best not to start now. But for those of us that do, I have a little pro tip for Starbies. Here we go.


Emily Nava: Exclusive. Exclusive. Exclusive on the internet. Interconnectedness of things, exclusive. 


Dr. Hutson: Exclusive right here. All right. So a couple of pro tips. Number one, if you can bring your own cup, bring your own cup. You get extra points for bringing your own cup. You're saying, oh, I already knew that, Hutson. Tell me something I don't know. Okay. Fine. Let me raise the stakes a little.


Okay. Next, bring a cup that is quite large. Personally, I bring my green 32-ouncer everywhere I go. It's great for a little bit of ice water, but also great for coffee.


It's multi purpose is what I'm saying. Okay. Fill this up. They're like, oh, that's way too big. That's like a Trenta. 


That's expensive. No, no, no, no, no. Hold your roll. I'm a nitro cold brew person. Okay. 


Go ahead and order that tall or grande. And here's what you do. They give you the cup. It's not full all the way. You look at it and you say, can I buy my refill now? Mm-hmm. Boom. Right away. Of course you can. We give refills.


Emily Nava: And now you got a free refill. Free refills. 


Dr. Hutson: And a more full cup.


Emily Nava: And if you're not into cold brew, you can get a Pike Place. Absolutely. Get a hot coffee.


Dr. Hutson: Get a smaller size with a refill. That's just smart. And then you don't have to wait around to drink your coffee and then get the refill once your cup is done. Right? 


Emily Nava: Yeah, but wait, there's even more. 


Dr. Hutson: There's even more. Okay. So the kids, the kids these days, they're loving Starbies, but they don't drink coffee. They are little sugar diabetes patients waiting to happen. Okay.


Emily Nava: My personal favorite is a Java Chip Frappuccino, if I want a little treat.


Dr. Hutson: I am having like shakes right now. Just thinking about that. The jitters. Have you had one before? No. I mean, it sounds delicious. Let's be honest. 


Emily Nava: It's a cold chocolate chip cookie with ice cream in a cup. What? So good. 


Dr. Hutson: Is it caffeinated in any way? 


Emily Nava: I think there is like a touch of coffee. Okay. 


Dr. Hutson: So like coffee adjacent. Yeah. 


Emily Nava: Like do you want some coffee with your sugar? 


Dr. Hutson: Here's some coffee bean shavings on the top of your treat. So I was like, oh man, I am going to make out like gangbusters with the personal cups. I had all of them bring their own cup. They can order the thing, share it around, do that. Look at my points. Only counted for one. Only one personal cup counted.


Emily Nava: In the one person? Okay. This is unacceptable. Don't like that. 


Dr. Hutson: Yeah. This is unacceptable. I thought so too. So instead, the next time it was me and the Mrs., I did two separate orders. Picked them up at the same time, but they were separate transactions. Boom.


Got credit for both personal cups. So don't bulk order. You want to time it and things will work out. Okay. Those are our pregame tips for coffee before you hop into the podcast.


Emily Nava: And you can't get those anywhere else folks. You heard it here. 


Emily Nava: If you got a coffee using one of these tips, we would love to hear from you. We have a fancy new fan mail feature in our podcast episode description. If you go ahead and click the Share Your Thoughts With Us button, you can send us some fan mail. So we'd love to hear from you.


Alrighty. So welcome back to The Interconnectedness of Things podcast, where we dive into the intersection of technology, AI, and data management. I'm your host, Emily Nava.


And today we're joined by Dr. Hutson, our co-host here. He's a seasoned expert in AI, business intelligence, and workflow automation. He holds three graduate degrees, including a PhD from the University of Missouri. He's currently our CEO at QFlow Systems, where he's spearheading efforts on knowledge management through SaaS solutions with our platform QAction. So today I've invited Dr. Hutson on here to take us on a journey that challenges common assumptions about AI, with his curated talk titled "You're Doing AI Wrong." He'll share some lessons learned from QFlow's 20-year evolution and his personal experiences with LLMs, AI, and knowledge management. So if you've ever wondered how AI really fits into decision making or how to build a better knowledge foundation for your company, you're in for a treat. So let's jump right into it. Here's Dr. Hutson with You're Doing AI Wrong.


Dr. Hutson: Oh my gosh, what an incredible intro. We're going to have some fun with this one today. So all of this is based off of a talk I recently gave out in San Diego at the AI World Conference. And it was really interesting to talk through that with the attendees there. 


A lot of them centered around legal teams, compliance, and records management across private and public sectors. So it's a great intersection of folks that I got to interact with and share these thoughts. The great thing is what I'm about to talk about today is something none of them knew about. And in fact, it addressed many of their concerns as they're starting to think about AI. Kind of cool, huh? Super cool.


Emily Nava: And interesting that you brought something new. 


Dr. Hutson: But before we go too far, we're going to do a brief history of data. Okay. Just a little interactive. We're going to talk through this. I'm going to ask Emily great questions. I'm going to ask her to describe what she's seeing for our audio-only listeners. You will just close your eyes and imagine greatness. Here we go. So I'm pulling up an image right now. Emily, how would you describe this image?


Emily Nava: All right. So I'm seeing what looks like a stone with different markings etched into it. It looks like some kind of ancient language. Nailed it. 


Dr. Hutson: So this is a clay tablet that's been dried. Kind of looks like a stone. And it was etched with cuneiform, an ancient writing system used in ancient Sumeria. This is one of the first examples we have of writing something down so that it's recorded and you can look at it from a different time and place.


The reason this is a significant moment is that, based on our current understanding, up until we had this writing the only way to convey information was in person and vocally. You'd have to remember it all, and oral traditions passed down stories and learnings and commitments and all these different things. Well, what this now does is allow us to get it out of our heads, and whether we're there or not, we're able to communicate. And in fact, three, four thousand years later, here we are, knowing what this person wrote down.


Emily Nava: That's pretty cool. Do we know what it says? 


Dr. Hutson: I think this particular one is a ledger. So this is looking at numbers to understand what is owed. Oh, like a receipt. Like a receipt or a general ledger for a company. So here's my inventory and things like that. So you can always see like accountants are as old as civilization, it seems. Yes. 


Emily Nava: Shout out to the accountants out there. 


Dr. Hutson: So now here's another image. Describe what you see for our audio listeners. 


Emily Nava: Okay. So it looks like a screen print image, black and white. It looks like it's showing the printing press, possibly.


Dr. Hutson: Okay. Nailed it. So this is a wood carving that's been printed or screened, and it is of the Gutenberg press. So this is another significant moment in how easily we can share information. And in fact, the press itself was not the innovation. The piece that you're seeing here, the big wood screw that they're pulling and pressing down, they had that for a long time and it was used to make wine. In fact, that's how Gutenberg got his parts. Because he saw somebody pressing it out and he got an idea. The innovation was the repeatable font. So he and his team spent painstaking amounts of time hand-making little metal pieces of font that he could combine together to make words in different ways and then press down, and an S looked the same no matter where else it showed up. And so it made it extremely readable, and then more people had access to information than they had before. So this is a big step. Fantastic. Fun fact. Fun fact.


Emily Nava: The first font is called Blackletter because of the Gutenberg press. The type was covered with black ink, and so when they made a digital recreation of the font, they called it Blackletter.


Dr. Hutson: Great call out and you're absolutely right. Isn't that fun? That's what they spent a ton of time doing and that could be a whole podcast on fonts and communicating. But we'll move on to the next picture. All right. What do you think you see now? 


Emily Nava: This looks like a black and white drawing or sepia drawing of a typewriter. 


Dr. Hutson: It is. In fact, this is a rendering of the first typewriter, and I was amazed to learn that this is from the 1740s. I didn't think typewriters came around until the late 1800s at the earliest, but this had been around for much, much longer. The reason I bring up this example is to show the progression. So first we just spoke about everything, and that required synchronous activity. Then we found a way to get it out of our heads, and now I could disseminate information without being physically or temporally there at the same time. Now with the Gutenberg press, I could disseminate copies, and more people could get access to the information.


It just wasn't the one clay tablet. Now it's more portable. So rather than having an entire room to disseminate the information, I could have something a single person could carry. So now the power of me generating that information is much more personal. Kind of cool. Very cool. Okay. Next image. What do you see? 


Emily Nava: So it looks like an upright piano with a bunch of, it looks like clocks in the top of it. And then on the right, it looks like some kind of press, like paper press. Yeah, really close. 


Dr. Hutson: So this has a story. There was a fella taking the train in the late 1800s. And as he was sitting there, he was watching the ticket taker walking up and down, taking people's tickets. And he noticed that he was punching holes in the tickets to know where these folks were supposed to get off or if the ticket had been consumed. He had a method, and he thought to himself, wow, that's a really fast way to record information, and super simple.


I wonder if I could use that with my work. And it just so happened, he worked for the US Census Bureau and they had a big problem. The US population was growing, and the handwritten census that they had to keep was getting more and more impossible to do accurately. So this is the first census tabulating machine. And you're right about the press: it would press down, and those are dials that count based off of the holes punched in the cards, kind of like a Scantron when we were kids. This innovation turned into IBM.


Wow. This was the thing that made IBM, which I thought was absolutely fascinating, that the first tabulating machine then grew into some of the first computers. But this is a great example of there being so much information that we needed to collect that we needed a different way to collect it beyond typing it out.


So we had to invent tools to help us deal with the sheer amount of information. Okay. So, seeing all of this.


Why? Why do people spend so much time collecting data? At first it doesn't seem to make any sense.


Like, why do I need to do that? What benefit do I think I have? Like, is it going to help me hunt and gather better? 


I don't think so, but it could. Right? Now, so then why are we collecting all this data? Well, for me, it comes down to decisions. We've probably all seen the science fiction movies of people who are so brilliant because they can take all of these variables and seemingly predict the future. Right. It's like a superpower we all wish we had. And that's all born from us wanting to make the best decisions to advance ourselves in the world.


That's all. And whether it's all the way back when we were hunter gatherers, where do we find the best food? Where do we find the best hunting grounds? 


Where's the best soil for growing? All of that had to do with how can we do things better and we need information and data to help us do that. That's the core of all of this effort in my view. To advance our species? 


Yeah. At the end of the day, it comes down to this: we're able to adapt and master our environment in a way no other animal can, because we figured out that if we learn and collect information, we'll make better decisions. Survival.


That's right. And there are a lot of different thoughts and hypotheses and philosophies on thinking, how people make decisions, and the paths they go down to make those decisions. And for our talk today, I'm going to really break that down into two simple categories. The first is called case-based, or similarity. So that's when we're able to make a connection ourselves between something that happened before and something that's happening now. So we're able to match and say, oh, that's probably the same. You can say, last time I went and ran out in the rain, I got wet. So I'm pretty confident if I go out and run in the rain next time, I'm going to get wet, because the cases seem similar.


Okay. Now you get more complex with what the cases are over time, of course, so you can start to see those similarities. And then as you grow your cases, your examples, you'll start to see overlap and discover new cases that you hadn't known before. 


And additionally, what you'll start to do is move to the second tier of decision making: rules. I know water is wet. I know rain is water. I know rain will make me wet. And that will always be true. I don't have to go outside to confirm it. We do those same things as we develop from when we're little kids. That's all we're doing. We're trying to find the different cases and experiment. Will this work?


Will that work? And we start to build that corpus of knowledge as we go. And it takes years and years and there's people who are better at it and worse at it. 


But that's all they're doing. It's a process of learning. Okay. The thing, however, is you need context to build that. And knowledge needs context. The context in which we live has to do with what's very, very personal to you. 


Then it goes out to the group. You can think of your family, your neighbors, your community. Then it grows beyond that to your organization. 


That could be your company, your city, state, province, country. Maybe even continent. And finally that goes out to the world. So everything in the world, or even the universe, that is available out there helps you build your rules and cases. Now the proximity to you will naturally give it more importance.


So things that are on the other side of the world are going to feel more far away than something happening in front of your face. And that's just natural. That's how we think about these groups. And I make a nice little visual here of concentric circles that kind of show how they radiate out from you. So that's how we really think about the context by which your knowledge is grown. 


Emily Nava: And so each group of, or I guess each realm of knowledge is affected by the lesser realm of knowledge. So like your personal context is going to affect your context for world knowledge. 


Dr. Hutson: Absolutely. Nailed it. And that's why it really starts from you. And another word for that is bias. People can't get away from their bias. It's how they see the world, how they think about it, what they pay attention to, what they don't pay attention to, who they spend their time with. All of that is really impacting your decisions and your world. The other thing that is happening right now is all of that stuff that we did between the cuneiform and the tabulating machine and computers, all that stuff made the amount of information and data grow exponentially. And it makes it harder and harder every day to incorporate group, org, and world into your personal view. And so we use tools that help us sift through that. And we don't always really understand how that works.


And now our latest hype, the hype train, the exaggeration machine, is that AI is the solution. But is it? Well, let's find out. Okay. AI only works based on the data and the rules that it's been given. 


Emily Nava: Let's repeat that louder for the people in the back.


Dr. Hutson: AI only works based on the rules and the data that it's given. I think we today have an unrealistic understanding that AI is somehow sentient, thinking, and reasoning. It is simply a reflection of those that have built the rules and input the data by which to build the outputs of the AI. So that's a problem. If you're going to use an AI but you haven't given it any of your data, it will necessarily not include you. And if we agree that you are the most important context, at least from your perspective, out of you, group, and world, then if it doesn't include you, it's harder to use it to make decisions. The other thing that AI needs is context. So it needs good data put into it, but it also needs to know how that information is related. Because if it doesn't have the context from your point of view, from your group's point of view, but only has the context, let's say, from a billion internet pages, I'm sure it'll be able to derive some value from those, but ultimately it doesn't have your context. You have to do something more. And I think over the past couple of years, as generative AI has been part of our collective experience, you've likely already seen that it can do some wild stuff that doesn't make any sense.


And it can't really get what you're trying to ask all the time. No, no, no, no, no, no. I don't want it formatted like that. No, no, no, that's not what I mean by that. 


So, I'm going to go back to the video. So, if you have a data problem and AI needs context, it's likely not the solution unless you can build context with your data. And how do you do that in a way that AI can take advantage of it? It's a big question. A huge question. 


Emily Nava: Other than spending that time training AI how to do what you want, I'm not sure. Yeah. 


Dr. Hutson: Well, and I won't speak for you, but I'll speak for others that aren't on here. I don't know that everybody is an AI engineer. 


Emily Nava: No. I mean, that takes some incredible communication skills and knowing how to communicate with robots. Yeah. 


Dr. Hutson: And I mean, not to mention the statistical knowledge that they have to have. Do they know the difference between an n value and a p value? And probably there's that one listener that knew what I was talking about and everybody else has stopped listening by now. Yeah. But most people won't have that ability, nor should we expect them to, to take advantage of these things. But it is a problem: if it doesn't have this context, how can they truly take advantage of it in a meaningful and reliable way? And to give that answer, I like to break down a couple of concepts to help folks understand what I mean when I say things like data, information, and knowledge. So when I talk about data, and I have a great visual here, imagine in your mind just a bunch of ones and zeros. Okay, that's just data. If you go look at that, you're like, I don't know what the heck that's telling me. Okay.


Emily Nava: And that's an important distinction because data can mean a lot of different things to a lot of different people. So for this conversation, we're talking about binary code. That's right. 


Dr. Hutson: And to take that even farther, it's not necessarily only binary code, which is the visual that I use, but just any piece of data that has no definition or context. So in my example here, there's a bunch of ones and zeros, and then you have a number 80 and a number 120.


I highlight them. But do those numbers mean anything to you? 80 and 120? Probably a billion things or no things, right? Yeah, no clue. So that helps us then level up to information.


Information now gives those data points context. So it's a blood pressure of 120 over 80. Oh, okay. I see, oh, those two pieces of data are actually a measurement that pertains to blood pressure. Okay, well, that's information. But then you start to ask, why do I care that you have that measurement?


Emily Nava: Right. What can I do? Okay.


Dr. Hutson: Cool. And that's where we elevate to knowledge. So knowledge is taking pieces of information, okay, so remember, that's data with definition, and relating it to another piece of information, so that you can see that a blood pressure of 120 over 80 is considered controlled based off of the NCQA quality measure, where unmanaged blood pressure would be 140 over 90 or above.
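For listeners who want to see that climb in code, here is a minimal sketch in Python. It's an editorial illustration, not something from the talk: the field names and the is_controlled helper are made up, and the 140/90 threshold simply mirrors the quality measure mentioned above.

```python
# Data: bare values with no definition or context.
data = (120, 80)

# Information: the same values, now labeled so we know what they measure.
information = {"measurement": "blood_pressure", "systolic": 120, "diastolic": 80}

# Knowledge: information related to other information -- here, the threshold
# that a reading at or above 140/90 is considered uncontrolled.
def is_controlled(reading: dict) -> bool:
    """Relate a reading to the quality-measure threshold."""
    return reading["systolic"] < 140 and reading["diastolic"] < 90

print(is_controlled(information))  # True -> 120/80 needs no action
```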


And because you have that relationship back and forth, now it gives you the context you need: do I need to do something about my blood pressure? Mm-hmm. Right? Okay. So now we think about that in different terms. Okay, so we established data, information, knowledge. Let's think about that in terms of giving context to AI. So here are some new terms. A list. Emily, when you think of a list, what do you think of?


Emily Nava: A list, or it's hard to describe what a list is. I guess words or numbers in one column. 


Dr. Hutson: Yeah. At most they could be in sequence, but not necessarily. It's just that you see some lines and then words and letters that you can read, and that's typically a list, right? Right. But you don't know how they're related.


Yep. A taxonomy, then, is lists with hierarchies. So in the example we're showing here, grandparent, parent, child, and you indent as you go. Now you understand that there's a top-to-bottom relationship between those terms. And you can have multiple hierarchies. And probably the classic example is when we think about the classifications for animals, species, genus. That's a classic taxonomy example. There's a hierarchy. But what we find when we get into strict hierarchies or taxonomies is that not everything fits into a nice box like that. And there are relationships between these things that would violate the taxonomy. So it doesn't really work with the real world. It might work with a Dewey Decimal System, but spoiler alert, most libraries are leaving the Dewey Decimal System because it's outdated and they're making up their own.


Dr. Hutson: Sorry for all my Dewey fans out there. And the next evolution after taxonomy works just like it does with information and knowledge: if a taxonomy is information, then the knowledge equivalent is an ontology. That's where you can take the discrete pieces of information and start to relate them together in a meaningful way. So this ontological creation is the context that information needs. And spoiler, I guess not a spoiler, I mean, you listen to this podcast: AI needs that. So now the question becomes how do we get that to AI?
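For listeners who want a concrete picture of those three levels, here is a minimal sketch in Python. The items and relationship names are illustrative only, not from the talk.

```python
# A list: items you can read, with no stated relationships between them.
flat_list = ["orange", "fruit", "tree", "vitamin C"]

# A taxonomy: a strict hierarchy (parent -> children), one box per item.
taxonomy = {
    "food": {
        "fruit": {"orange": {}, "apple": {}},
        "vegetable": {"carrot": {}},
    }
}

# An ontology: typed relationships between concepts that don't have to fit
# a single hierarchy -- closer to how things relate in the real world.
ontology = [
    ("orange", "is_a", "fruit"),
    ("orange", "grows_on", "tree"),
    ("orange", "contains", "vitamin C"),
]
```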


Emily Nava: So as you were going through those three, the list, the taxonomy, and the ontology, I was thinking of the New York Times game Connections. And how it's just a four-by-four square of random words, and then you have to pick out why those words are related.


Dr. Hutson: Yeah, something humans are kind of good at, right? But man does it take a lot of resources and power to get a machine to do that. Yeah. And at the end of the day, we need to give the machine the ability to do that. Given that great example that you just shared of like what humans are good at, now that's really getting into the heart of our solution. Knowledge graphs. Now if you haven't heard of knowledge graphs, it's okay. 


This is like an insider thing that only people who know typically know. Knowledge graphs are a way for a database to capture ontologies in a way that other databases can't. Normal databases are built on constraints. 


So you can think of it as: I'm connecting two tables, and this one ID has to be constrained against the other. That's not how people think. That's fine for financial transactions. It's not fine for knowledge work. Things are connected in deeper, more robust ways than traditional relational databases can allow. Knowledge graphs come in to help you build that out. So given that a knowledge graph can capture your concepts as ontologies, there are some tips I can offer to help that go faster for you.
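As a rough illustration of what nodes, edges, and typed relationships look like in practice, here is a toy in-memory knowledge graph in Python. The class and the example facts are hypothetical, and a real system would sit on a dedicated graph database rather than a dictionary.

```python
from collections import defaultdict

class KnowledgeGraph:
    """A toy graph: each edge records not just that two things are related, but how."""

    def __init__(self):
        # edges[subject] -> list of (relationship, object, description)
        self.edges = defaultdict(list)

    def relate(self, subject, relationship, obj, description=""):
        self.edges[subject].append((relationship, obj, description))

    def about(self, subject):
        """Everything the graph knows about one concept."""
        return self.edges[subject]

kg = KnowledgeGraph()
kg.relate("orange", "is_a", "fruit")
kg.relate("research note 42", "cites", "orange", "nutrition source for a project")
kg.relate("project alpha", "owned_by", "Emily")

print(kg.about("orange"))  # [('is_a', 'fruit', '')]
```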


Emily Nava: So this is how you build a knowledge graph. That's right. Okay. 


Dr. Hutson: The first thing to start with is identifying the types and relationships of the data that you're going to use. That's the heart and that's the foundation. So it could be I have my notes, I have my projects, I have my research, I have areas of interest. All those things can start to build out how you would describe your world and how you would classify that stuff. Now I'm not saying that's easy, and I might have just overwhelmed 98% of the people listening with just that. So you need a way to make it a lot easier and more approachable, but at the same time this is the heart of what you need to build out that knowledge graph. And the relationships are really the definition and the quality of how you're identifying and connecting these types of information. So a relationship is not just "is it related," but "how is it related?" Are there other descriptions you can provide for that relationship that are unique to those two pieces of information? That's what a knowledge graph can help capture, but it needs a human to define it. For the most part, humans are intuitively good at this. But where they struggle is trying to use the terminology that the system wants them to use.


So I'm going to use things like types and relationships and nodes and predicates, edges. That doesn't mean anything to anybody, but they do know that an orange is a fruit. Right? They know that there's a connection between those two concepts and words. 


That ease just needs to be built in a way that's approachable for folks to adopt. The next thing, after you have the types and relationships and you can easily start to build those out, is to think about where your data is coming from. So to bring us back to our personal, our group, our org, our world: what are the sources of information that you find most useful and appropriate? Do you have any off the top of your head, Nava?


Emily Nava: Like my favorite sources of information? Yeah. I mean Google to start, and then reputable sources, like, I guess for news, NPR. Yeah. Or just like weather.com for the weather.


Dr. Hutson: Take a minute to think about it. So there's those places that we go to see what's going on socially, like Reddit and Facebook. But if you think about our sources, there's news, where we want to know what's going on. Maybe there's your local, I got kids, right? So there's the local school and what's going on there.


There's their separate activities. There's election information going on and how I sift through that. So there are all these different sources that might be more important to me than to another person. So it's really understanding how you can identify those and start to collect them.


Okay. Now, how do you store it? How do you grow it? And how do you improve it? And the only thing I can say is that it takes habit, discipline and consistency. Mm-hmm. Guess what people are not great with? 


Habits, discipline, consistency. No. Yeah. So for anyone to be insane enough to adopt something like this, they'd have to get some major benefit from it and have it be as least disruptive as possible. 


Mm-hmm. So how others are approaching this problem is having their knowledge graphs built behind the scenes as they go. So I'm not asking Emily to go build out a knowledge graph. I'm asking Emily: should this thing be added to your knowledge graph? Mm-hmm. Well, that's easy, that's a button. Cool. Yeah, do that. I can have a system give me recommendations based off of Christopher Reeve and some research we got from IMDB. Looks like he's acted before with Gene Hackman. Is that important to you? No.


Emily Nava: Should it be? Well, it's important to me. Okay, great.


Dr. Hutson: Yeah, it was a favorite old movie of mine when I was a kid, because they both made Superman: The Movie. It came out before I was born, but I still liked the movie as a kid. Yeah. And that's cool that it can make those recommendations for me, like the thing we're showing on the screen right now. But as good as knowledge graphs can be for helping out AI, they still have challenges. The first has to do with data integration. So we talked before about identifying those sources. The ability to actually connect them into your knowledge graph in a meaningful, seamless, updatable way can be difficult. More and more we're seeing how tools and SaaS products offer integrations with hundreds of different softwares out there. So the possibility, the potential, is there, but the way in which we want to integrate it has to be extremely personal, as well as acceptable to the group and to the org. What you're doing personally, like you just didn't care about the movie I like, doesn't disrupt you, right? I can still like the movies I like and it doesn't bother you. And now the knowledge graph knows what I like and knows what you like, but they don't have to be the same thing.


Yeah. The next thing is scaling, the speed by which we have to grow these knowledge graphs. And that can feel overwhelming. And it can feel like, well, if I want to, if I have to connect everything together and I've integrated it, then my knowledge graphs have to hold all that data and it's just going to get too big and then I'm not going to be able to use it and I'm going to get overwhelmed. 


That's exactly what's going to happen. So you have to be really particular about what you include and how you include it. And then finally is the quality. If anybody has used AI, generative AI specifically recently, they can see a diminishing return. 


Yes. For some reason you're going down a chat and it's just getting worse and worse as you go or workloads that seem to work really well in the past don't seem to work well now. And that all has to do with the sources of data as well as the reinforcement of that data. And so being a really good steward on the quality is super important for a knowledge graph. So here's some best practices. Start simple. Mm-hmm. Start with system generated data so you don't have to worry about different terminologies from different folks. That makes it more reliable. It keeps the quality up. 


Emily Nava: So kind of like how we, how we had to define what data meant because the word data can mean a lot of different things. Yeah. 


Dr. Hutson: Do I mean data like the binary thing, or do I mean Data like the character on Star Trek: The Next Generation? Right. Like who are we talking about? Okay. That's exactly right, and that's where the collaboration comes in, which is my next tip. So, it is very easy to build a PKM, a personal knowledge management system. However, the power of knowledge graphs in AI happens when collectively we can take advantage of it.


That doesn't happen in silos. And so being able to collaborate with others on how you build out the corpus of knowledge for your team and organization is absolutely critical for competitive advantage. The final tip is daily hygiene. Things change. New things are learned.


Other things become less important. Being able to keep that up to date in your knowledge graph is necessary. The problem becomes how, how do I do all these things? 


Because it's not easy just to do it and just like, okay, great podcast. But what do I do? Well, we haven't found it. 


We haven't found what to do there. But holy cow, we know that if we can get all this together, now we can use these knowledge graphs to feed into our AI models with something called RAG: retrieval-augmented generation. This is critical. This is taking all that stuff, all those things that your team has built, the cases you've built and the rules you've determined, and collecting that together in a knowledge-graph-style graph database. And then connecting that with those open source large language models that are already out there, so that they can self-reinforce and self-grow based on the rules that you've offered. But at the same time, you get the context of what's important to you. That's the next level.
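For the curious, here is a highly simplified sketch of that RAG shape in Python: pull a few relevant facts out of your own store, then hand them to a model as context. The retrieval is naive keyword overlap, the facts just echo examples from this episode, and the local model call is only a commented placeholder with a hypothetical name.

```python
facts = [
    "Christopher Reeve and Gene Hackman both appeared in Superman: The Movie.",
    "A blood pressure reading below 140 over 90 is considered controlled.",
    "Cuneiform clay tablets are among the earliest recorded ledgers.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank stored facts by crude word overlap with the question."""
    words = set(question.lower().split())
    return sorted(facts, key=lambda f: -len(words & set(f.lower().split())))[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who appeared with Gene Hackman in Superman: The Movie?")
# answer = local_llm.generate(prompt)  # hypothetical call to a locally hosted model
print(prompt)
```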


That solves the problem that so many people were dealing with. Well, that's cool that you have that out there. But what does it have to do with me? 


And now you get it. When you can combine these two things together, you get why it's important to you. So some other topics that have come up as I've given this talk really center around privacy, and legal, information governance, compliance, all these different things. Companies just aren't ready or willing, nor should they be, to just give away their knowledge graphs to these public language models.


GPT from OpenAI, for example, Anthropic, Microsoft, Google Gemini, Ollama. Meta. Meta.


That's right. They need a way that they can take what they have, build it within that little moat, keep their corporate data the way they need to keep it, incorporate information from external sources as well as internal to their team, and be able to leverage these large language models to help them accelerate without having to invest billions to train them. And then do that in a way that's incredibly easy for them to adopt, for the team internally to adopt. And you know, today that's what we're trying to build at QFlow.


At QFlow. That's right. We are building out the ease for your team to create a knowledge graph using local LLMs, so that you can take advantage of these tools without worrying about data privacy, without worrying about a lack of context for your team, and without worrying about the overhead of training everyone on how to build their own knowledge graph. We just do it for you. And that's part of your daily use.


Emily Nava: And I feel like that's the sign of good AI is when you don't know it's working. It just works. 


Dr. Hutson: That's right. When it just works, now you've got some great technology there. When it gives you superpowers, as I like to say. Yes. Helps you do more, a bicycle for your mind, as Steve Jobs liked to say. We're going to give you a rocket ship for your mind. There you go. There you go. All right. Well, that's a big talk, a long talk, but one I think is important. And I had a lot of fun covering it with you. I think you were great. Yeah.


Emily Nava: I particularly loved the history lesson on how humans have recorded and shared data through the millennia. And I'm, I am not of the if-you-know-you-know group, but I am starting to learn more about knowledge graphs and I'm just fascinated with the innovations that are coming. And I really think that QFlow is trying to turn it into a very practical thing for companies to use. So I love this stuff. I think it's so, so interesting. And hopefully all of our listeners do too. So if you enjoyed today's discussion, I do urge you to check out some of our other episodes.


And we will continue to post new content surrounding AI, machine learning, data management, knowledge management, and how they're reshaping the world of business right in front of our eyes. And if any of what Dr. Hutson said sparked any questions or feedback, use that Share Your Thoughts With Us link in the episode description. We would love to hear from you. Oh, I'd love to do like a listener email episode.


Dr. Hutson: Yeah, rapid fire lightning round. That would be so fun. Yeah. 


Emily Nava: So if you want your question to be answered by Dr. Hutson or myself or somebody else who has been interviewed on the podcast, send them in.