The Interconnectedness of Things

AI's Double-Edged Sword: Balancing Privacy, Security, and Progress (Ft. Tim Koehler, CEO of QFlow Systems)

QFlow Systems, LLC

Share Your Thoughts With Us!

In this episode of The Interconnectedness of Things, host Emily is joined by Tim Koehler, CEO of QFlow Systems, and Dr. Andrew Hutson to explore the complex landscape of ethical and secure AI usage. Drawing from decades of experience in IT and AI, Tim shares his journey from Y2K projects to pioneering knowledge graphs, emphasizing the importance of safeguarding data across technological waves. Dr. Hutson reflects on how his early struggles with organization led him to embrace AI as a tool for managing the overwhelming influx of information in the digital age.

The discussion dives into the promise and perils of generative AI, addressing the challenges of misinformation, diminishing returns in AI efficacy, and the essential role of knowledge graphs in contextualizing and unlocking organizational data. The trio also tackles the critical balance between data security, privacy, and innovation, advocating for local, open-source AI models to preserve both control and competitive advantage.

This engaging conversation sheds light on how ethical practices and cutting-edge technologies can work together to empower businesses in the era of AI.

About "The Interconnectedness of Things"
Welcome to "The Interconnectedness of Things," where hosts Dr. Andrew Hutson and Emily Nava explore the ever-evolving landscape of technology, innovation, and how these forces shape our world. Each episode dives deep into the critical topics of enterprise solutions, AI, document management, and more, offering insights and practical advice for businesses and tech enthusiasts alike.

Brought to you by QFlow Systems
QFlow helps manage your documents in a secure and organized way. It works with your existing software to make it easy for you to find all your documents in one place. Discover how QFlow can transform your organization at qflow.com

Follow Us!
Andrew Hutson - LinkedIn
Emily Nava - LinkedIn

Intro and Outro music provided by Marser.

Emily Nava  0:00  
Welcome back to The Interconnectedness of Things, the podcast where we explore the transformative power of technology and the future of work. I'm Emily, your host, and in this episode we're diving into the critical topic of ethical and secure AI usage. Joining us today, along with Dr. Hutson, is a very distinguished guest, the CEO of QFlow Systems, Tim Koehler. Together, we'll be discussing the responsibilities of business leaders in developing and deploying AI safely and responsibly. Welcome to the podcast, Tim; happy to have you.

Tim Koehler  0:48  
Thanks so much for the invite. I'm thrilled to be here, and I've been feeling a little FOMO in the early going. I know! Well, we wanted to make sure that we worked out all the kinks before we got you here. So now we've got it all figured out, and we're just so grateful to have you here. And of course, we always have Dr. Hutson here.

Emily Nava  1:11  
Want to say hi?

Dr. Hutson  1:15  
Yeah, hey, I'm here. Definitely not as important as Tim, so we're just glad to have his greatness on the call today. And Emily, you're awesome as always. Absolutely, well, we couldn't do it without you, Dr. Hutson, you and your crazy quips.

Emily Nava  1:32  
So before we get into the nitty-gritty of the ethics of AI, I want to take a step back and let our listeners know about your journeys, each of you, into the world of AI: what sparked the interest, and how you got here. So, Tim, do you want to go first? Sure, I'd be glad to.

Tim Koehler  1:58  
So I've given this a little bit of thought, and realized that I'm almost 30 years in now to providing IT solutions in one shape or another, from my time as a software engineer to now as CEO. Every step along the way, we've moved from one wave of technology to the next; it almost feels like a startup at every single phase. Can you imagine? 30 years of software. Just imagine that. Exactly. So whether it was working on a Y2K project on an AS/400 mainframe, to developing thick-client applications using Visual Basic, to web-based applications on the internet, and now all the way to knowledge graphs, graph databases, and AI, what I realize is that at every step along the way, the security and safeguarding of those systems has been paramount. So there's nothing changing on that front. Absolutely. Yeah, good point.

Dr. Hutson  3:16  
I totally agree with you on that, Tim. It seems like the longer you're in any industry, the more the similarities and patterns seem to complement each other; you might have slightly different approaches, but a lot of the problems remain the same.

And just for anybody who's listening, an AS/400 is like a Nintendo Switch that fills a room. So hopefully that's enough context there. That's not paper-based, right, Tim? On the AS/400 they weren't using punch cards? No, we were at a monitor, a green-screen monitor. Yeah, the old green screen. Nothing better. I spent a summer changing two-digit fields into four-digit fields, you know, to prevent the "oh my God" in the year 2000. Yeah, that was a big deal. Talking to a frontline worker! Oh yeah,

Unknown Speaker  4:19  
he was on it. That's awesome. For anybody who doesn't know what the Y2K problem was:

Unknown Speaker  4:28  
Apparently, whenever they made these computer programs, they thought: oh, we'll never need to worry about the first part of the year. The only thing that matters is the last part. And that was true from the 1960s to 1999,

Unknown Speaker  4:43  
but as soon as the year 2000 came, the systems couldn't tell the difference between 1900 and 2000,

Unknown Speaker  4:50  
which was a big fear. They made a whole show about it called Futurama. Check it out if you can.

Unknown Speaker  4:59  
It was kind of fun, and nothing collapsed. It was like a big story about nothing: nothing happened, and no one came.
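The two-digit date problem described above can be sketched in a few lines (a minimal illustration of the idea, not any real system's code):

```python
from datetime import date

# Y2K in miniature: a system that stores only the last two digits of a
# year cannot distinguish 1900 from 2000.
def two_digit_year(d):
    """Return the year the way many pre-Y2K systems stored it."""
    return d.year % 100

# Both centuries collapse to the same stored value:
print(two_digit_year(date(1900, 1, 1)))  # 0
print(two_digit_year(date(2000, 1, 1)))  # 0
```

Widening those fields from two digits to four, as Tim describes, is exactly what removed the ambiguity.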

Emily Nava  5:07  
Yep. Anyway, I remember I was just a kid, but it was a big, big deal in my neighborhood. Big deal. Just a pup.

Unknown Speaker  5:20  
I'd love to share a little bit about my artificial intelligence journey, if I could. Absolutely, please do. So, I don't have the tenure that Tim has shared, but I've certainly done my best to become immersed. And you know, when we think about AI, especially guys Tim's and my age, we think about Star Trek, and we think about Terminator, and all these applications of artificial intelligence as either being the pinnacle of society

Unknown Speaker  5:55  
like Star Trek, or some downfall of society, right? There are these two extremes. But all of that is nonsense; all of that is folks out in Hollywood thinking up stories. Where it really came down for me, how I started, was back in college. And for those that don't know, I have earned the title of dumb jock: I played football and definitely got more concussions than I should have, right?

Unknown Speaker  6:28  
And because of that, I was worried about forgetting things, and I spent a lot of time with different software tools trying to keep myself organized.

Unknown Speaker  6:40  
That was really the start of all of this. I had all of this information from two undergraduate degrees, and later two master's degrees and a PhD, while I was working full time, and I started a business during graduate school. I really never figured out how to not do things and just chill out, I'll say. And so I wanted stuff to help me do that, because I felt like there was a lot to get done. What I figured out was: the more I could be organized and retrieve that stuff, the better off I could be. That parlayed into data warehousing, data estates, data lakes, and then artificial intelligence. And it kind of breaks down into this: every single person in the world has the same problem. Fundamentally, you have the stuff that's really important to you; this can be anything from taking your blood pressure to wanting to remember a friend's birthday, right? Really important to you. Then that expands out to your community, your organization, your company, your school, whatever it is; there are the things that go along with that, different layers of closeness that you have to pay attention to. And then finally, there's everything that happens in the world: think about your country, and the politics, and the new president (we just had Inauguration Day yesterday), all of that stuff happening that you're trying to sift through. And the information age has only increased the speed at which we can generate and capture that information: the veracity, the truthfulness of it, which needs to be evaluated; the variety of information we can get, videos and images and text; and the velocity, the speed at which we get it. The three V's of what used to be called big data. And so as we're thinking about AI

Unknown Speaker  8:51  
and the ethics around it, it's a reinvigorated problem that's been there since Tim started. I mean, Tim, the work you were doing on AS/400s was around data; it was around making sure it was accurate, right?

Unknown Speaker  9:08  
100%. And that's what I love about the way you're thinking about the root problems and how we're going to solve them with the latest wave of technology. Because we live in a world now of short attention spans, immediate gratification, and the latest hype, right? We really have to be thoughtfully considering these problems at the fundamental level,

Unknown Speaker  9:37  
yes, to make the biggest impact. And so that's what I love about your journey and how you got here today. Well, thanks, man, appreciate it.

Unknown Speaker  9:48  
So both of you started talking about how we're getting so much data nowadays, things are moving too fast, and people have short attention spans.

Unknown Speaker  10:00  
The speed at which AI is advancing is just unprecedented. So with all of that, I imagine, come some significant risks. Thinking about the risks, but also the opportunities: what excites you most about where AI is going now, the current state of the AI industry?

Unknown Speaker  10:26  
I'll go ahead and start. Sure. So what I'm most excited about is really the convergence of technology today. AI and everything the mainstream is seeing with large language models, that's all really, really cool. But when you think about combining that with technology like a graph database and knowledge graphs, to really unlock the power of what this technology can do today, that's what gets me most excited. We're talking about bringing this to you, in your domain, with your data, and making it your biggest competitive advantage.

Unknown Speaker  11:23  
Totally. And you're talking in terms of enterprises? Absolutely. So businesses really building upon what we've been building for a really long time: we've been very thoughtful about creating relationships between your data, and now that we can adopt the latest technology available to create those relationships in flexible ways, and bring AI and LLMs along with it, we have the potential to deliver breakthrough solutions. I think we're only starting to scratch the surface of what's possible.

Unknown Speaker  12:13  
Totally, to try to solve the big data problem. Exactly: you've got this mountain of data. Now, what do you do?

Unknown Speaker  12:15  
And so, Tim, you got me pumped up there. I really want to hop in on what you're talking about, the convergence of these different tools. In your mind, why is the relationship between discrete pieces of data important?

Unknown Speaker  12:39  
This is the knowledge; this is what we bring: the knowledge that's locked away in different places within an organization. There's knowledge in systems. There's knowledge in the people who have the experience and how they operate with these systems. There's knowledge in the context all around you. It's your understanding of how all of this fits together. And now we have technology and tools available that can make this so much easier for you, so you don't have to put all your effort into filing things in the right place and categorizing them correctly. You could spend all day trying to sift through everything and figuring out what to do with it.

Unknown Speaker  13:35  
But, yeah, it's called a librarian, right? Exactly.

Unknown Speaker  13:39  
Now we can leverage the latest technology out there today that does it for you, and let it do the heavy lifting while you focus on what makes you passionate in your job.

Unknown Speaker  13:52  
Yeah, that reminded me of a talk that I sat in on in 2009, when I was out in Boston with Harvard Medical. There was a quote given to start off a conversation about how to include data at the point of care. And the quote, this was the president of HP, was: "If HP knew what HP knew, we'd be three times more productive."

Unknown Speaker  14:21  
And that quote has stuck with me this long because I thought it was insane that we didn't know what we knew. I just could not let that go. And what you're talking about is exactly the difficulty of not even knowing what you know, much less what an entire organization knows. And who is going to sit down and become their own librarian?

Unknown Speaker  14:47  
I don't know anybody that would do that. I sure as hell wouldn't.

Unknown Speaker  14:51  
And what happens then? Short attention span, jumping context. I'd rather look at my Facebook feed than go look at the latest ResearchGate publications or find out what the latest research is on whatever topic would be most important to me. Just give me what I want so I don't have to carry the cognitive load. That's right. Yeah, it's too much, and that's what we want to prevent. And so you have the problem you mentioned, Nava: too fast, too many different types, overwhelm. And now we have tools that seemingly can make that better. But tools like AI, specifically generative AI, have a point of diminishing returns: as your complexity increases, the value of these things decreases. And I'm not sure if anybody has stopped to ask why.

Unknown Speaker  15:55  
Why would something that could seemingly break the Turing test, that, as we all read in news reports, could pass the bar exam, have diminishing returns? How could that happen? Tim, do you have thoughts?

Unknown Speaker  16:15  
I'm going to pass that one back to you. You go first.

Unknown Speaker  16:20  
The reason is what you talked about before: the knowledge graph, the relationships between the data. All of these huge models, and the millions, if not billions, of dollars being invested in creating them, have the same fatal flaw: they are trained on available data. So the question you need to ask yourself is: is your data available to train it? If all of us are being honest, no, it's not. Some of it is, which is why it's kind of good. But when you start getting really specific, you're not getting that out of your head. You're not training anything online. How many of us write our own blog posts versus consume them?

Unknown Speaker  17:14  
How many of us think about the problems we solve on a daily basis and write that down? How many of us are publishing our notes online?

Unknown Speaker  17:21  
That was nobody,

Unknown Speaker  17:24  
And as a result, we're never going to get the value from these things. So the knowledge graph is simply another tool that helps communicate to a computer or an AI model the meaningful relationships between discrete pieces of data; nodes and predicates is the proper terminology. So this means, just like in your brain, a cow can make you think of milk, but it can also make you think of a burger. Milk and burgers aren't the same thing, but I know they're related.

Unknown Speaker  18:02  
Computers wouldn't be able to tell that unless you had a humongous relational database, which would then not be performant, right? So that's why the knowledge graph was made,
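The cow example above can be sketched as triples. This is a minimal illustration of the node-and-predicate idea, not QFlow's actual graph schema; all entities and predicate names are invented:

```python
# Knowledge as (node, predicate, node) triples, the core shape of a
# knowledge graph: "cow" relates to both "milk" and "burger", but
# through different paths.
triples = {
    ("cow", "produces", "milk"),
    ("cow", "source_of", "beef"),
    ("beef", "ingredient_of", "burger"),
    ("milk", "ingredient_of", "cheese"),
}

def related(entity, triples, depth=1):
    """Collect every node reachable from `entity` within `depth` hops,
    following edges in either direction."""
    frontier, seen = {entity}, set()
    for _ in range(depth):
        nxt = set()
        for s, p, o in triples:
            if s in frontier and o not in seen:
                nxt.add(o)
            if o in frontier and s not in seen:
                nxt.add(s)
        seen |= nxt
        frontier = nxt
    return seen

# One hop from "cow" reaches milk and beef; two hops also reach
# burger and cheese, the "related but not the same" connections.
print(sorted(related("cow", triples, depth=2)))
```

A graph database does the same traversal over billions of edges without the join explosion a relational table would suffer, which is the performance point being made above.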

Unknown Speaker  18:14  
and that's the breakthrough impact we're talking about: taking all of this knowledge and information that is currently locked away or forgotten, and making it systematic, always available, always up to date, so that insights can be created. Recommendations, predictions, so you can take the actions you need in your business.

Unknown Speaker  18:47  
Exactly right. And further, I would say that not having a knowledge graph creates three major problems, even when we're talking about connecting this up with AI. Number one: mass awareness without understanding. When we talk about AI coming out to the market, everybody's heard of it by now. Everybody's saying it: hey, look, I've got AI in my app, I'm going to call it Apple Intelligence. If anybody has used Apple Intelligence, they're stretching that last word quite a bit, because it's not really centered on solving a problem yet. So you have this issue of mass awareness without the understanding. It's really great that everybody knows about it. It's not really great that no one really understands it, because then they're going to jump to conclusions. I was giving a talk in San Diego last year with a bunch of records managers and lawyers (you guys remember me talking about this) where they literally, not figuratively, thought

Unknown Speaker  20:01  
ChatGPT was thinking, because it wrote "Thinking" on the chat.

Unknown Speaker  20:12  
That's a little scary.

Unknown Speaker  20:15  
What? How else would they think differently? And that's where the understanding comes in. If you really look at how transformer-based generation happens, it's all probabilities based on the training data: it's the probability that the next word part, the next token, is going to be accurate. So I don't know how we can call that thinking, when it's just "that's probably right." Which is also how you get hallucinations and all the other stuff. You'll get made-up lawyer problems. Or, sorry, not lawyer problems; well, you might get made-up lawyer problems. I was thinking more of case law: it'll make up case law that doesn't actually exist.
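That "most probable next token" mechanism can be made concrete with a toy sketch. This is an illustration of the idea only, nothing like a real LLM: the corpus and function names are invented, and a real model works over learned vector representations, not raw counts.

```python
from collections import Counter, defaultdict

# A tiny bigram "model": generation is just "which token most often
# followed this one in the training data", probability, not thought.
corpus = "the model predicts the next word the model predicts the next token".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_token("the"))
```

Note the model answers just as confidently for a word it has seen once as for one it has seen a thousand times, which is the seed of the confident-hallucination problem discussed here.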

Unknown Speaker  20:54  
That's really interesting to me: the perspective around those "thinking" prompts, giving the perception that that's what it's doing, so that's what people understand. And one of the things that really keeps me up at night is when inaccurate data is presented in a very confident, matter-of-fact way. Yeah, yeah. And that kind of leads to the second big problem,

Unknown Speaker  21:32  
which is the lack of training controls. So if I am hyper-confident, but I don't have controls over how the data was gathered or the model was trained... I learned this term listening to a podcast, I think it was This American Life, or something like that. Anyway, the term is "modern jackass," and I loved it, because it basically means: I've been exposed to a topic just enough to know of it, and I have the confidence as if I had studied it for 20 years, but I know nothing. Like you read a magazine article about architecture, and now you're going to give somebody a tour of Paris.

Unknown Speaker  22:18  
It's just this mismatch. And I think that's the biggest thing when we're thinking about the controls of AI: we will mistakenly put greater confidence in the result, especially when we don't know better.

Unknown Speaker  22:32  
Yep. And I think folks who have the experience know there are diminishing returns when using these models; we can see it. Not everybody can. The next generation of folks coming in, the ones living with these things, will take it as gospel, and they will further disassociate themselves from the responsibility of knowing how something is supposed to work. Rather than having this be an augmentation, they will make it a replacement, whether they're doing it consciously or not. That is the danger.

Unknown Speaker  23:10  
For sure.

Unknown Speaker  23:12  
So, go ahead, Tim.

Unknown Speaker  23:20  
It makes me laugh a little bit, because I remember when we'd sit around the dinner table, and my mom was the best. She would always drop some pearl of knowledge, and we would always say, "Interesting, hadn't heard that. Where did you hear that?" "Well, I heard it on the radio." We would always look around; it became a running joke. But because she heard these things on the radio, they had this sense of credibility that was just automatic, understood, and not questioned. Yep. So I think that's one of the keys: we have to continue to bring that questioning mindset, and trust but verify, right?

Unknown Speaker  24:07  
I think you're spot on. It's almost like we're at bookends of this. The, sorry, but the boomer generation, right? They had newspapers, and they trusted them. Then we started to get a lot of questioning around the 70s and 80s, so those who were born around that time learned: just because someone says it doesn't make it true. And now we're getting to the other side, where it's "well, I heard it on a podcast," or "I read it on Facebook or TikTok," right? We give way too much credence; just because it was generated, that must make it true. And so that's where the question then becomes, let me ask you guys this:

Unknown Speaker  24:55  
the divide between privacy and security that these things bring up. In a world where we know we need to be more forthcoming with what we know if the models are going to work better, and we have the capability to do that with knowledge graphs, how do we keep what is important to us secure? Or rather, not encrypted-secure, but the other side of privacy: I don't necessarily want Emily to know everything that I put into my notes, and I don't necessarily want my competitors to be able to take advantage of my intellectual property and the learning that I've done. So in a world like that, how do you guys think about marrying those two? Because it's a big problem that's coming up: I've got to keep my data secure, and as a result private, but then how do I get ahead? How do I take advantage of the models?

Unknown Speaker  26:06  
Yeah, so there are a few different ways that I can think of, and that we like to build into our methodologies, and even into the R&D we're doing on this next generation of AI. Your data is yours. We're going to be fully transparent, with full visibility into how we handle your data, and it's going to be kept inside your domain. We're not going to send it out to third parties. We're not going to sell it to anybody. That's what I was talking about earlier: your competitive advantage is yours. And then, within the context of your domain, we have the flexibility of access controls, and we're doing this not around arbitrary structures like the systems of the past, you know, "put it in this folder or this drive, and here's who has access to it." Now we're taking a far more dynamic and data-driven approach: just by the nature of the data itself, we're going to be able to configure who are the right people to have access to that information.

Unknown Speaker  27:26  
Yeah, and to build on that: the suggestion here is that we're handling security by using access controls that are metadata-driven rather than location-driven. That's right.
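A rough sketch of what metadata-driven access control looks like in practice. All names, tags, and the rule itself are hypothetical illustrations of the general idea (attribute-based access control), not QFlow's actual implementation:

```python
# Access is decided from attributes on the data itself, not from which
# folder or drive the document happens to live in.
def can_access(user, doc):
    """Grant access only when the user's clearances cover every tag on the document."""
    return doc["tags"] <= user["clearances"]

alice = {"name": "alice", "clearances": {"finance", "contracts"}}
bob   = {"name": "bob",   "clearances": {"engineering"}}
doc   = {"id": 17, "tags": {"finance"}}

print(can_access(alice, doc))  # True
print(can_access(bob, doc))    # False
```

The advantage over folder-based control is that when a document's nature changes, you retag the document and access follows automatically; nothing has to be moved.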

Unknown Speaker  27:48  
And then, from a privacy standpoint, we are leveraging the models somehow, but not releasing any of the company's data. So how are we doing that? Local and open-source models. I think it's the only way. Right, right. We're going to bring, with the platform, the ability to install these local models within your own instance, so to speak, so we don't have to send your data outside of your instance, out to the cloud, out to a third party. We're going to install those locally,

Unknown Speaker  28:41  
and we're also going to give you a choice of what you're able to leverage, which models you can choose.

Unknown Speaker  28:49  
That's right. Yeah, I think the key is our philosophical as well as practical insistence that these models are open source. When they're trained and provided, when they can be, they should be open. Now, we understand there are going to be some proprietary ones, for sure; that's just going to happen. But the open-source models give you the greatest option for connecting your knowledge graph with a pre-trained model, which can accelerate what you're already doing to stay on top of your data.
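One minimal way to picture the "keep it local" constraint being described: route every inference call through an allow-list of endpoints inside your own instance. Everything here is illustrative, not a real QFlow interface; the port shown is the default of the Ollama local model server, used only as an example of a model that runs on your own machine.

```python
# Hypothetical allow-list: only model servers running inside this
# instance. Prompts and documents never leave the local network.
LOCAL_MODELS = {
    "llama":   "http://localhost:11434",
    "mistral": "http://localhost:11435",
}

def resolve_endpoint(model_name):
    """Return a local endpoint, refusing anything that would send data
    outside the instance (e.g. a hosted third-party API)."""
    if model_name not in LOCAL_MODELS:
        raise ValueError(f"{model_name!r} is not an approved local model")
    return LOCAL_MODELS[model_name]

print(resolve_endpoint("llama"))
```

Swapping which open-source model sits behind an endpoint is then a configuration change, which is the "choice of models" point made above.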

Unknown Speaker  29:35  
The other insistence, and this is not necessarily popular, because we've seen other companies overtly violate this principle even when they said in the past they wouldn't, is that your data is truly your data when you're using our software. It's not a conveyor belt.

Transcribed by https://otter.ai