The Interconnectedness of Things

Should Government Agencies Use AI?

August 08, 2024 QFlow Systems, LLC

In this inaugural episode of The Interconnectedness of Things, hosts Emily Nava and Dr. Andrew Hutson delve into the critical question, "Should government agencies use AI?"

They discuss the current landscape of AI usage within government agencies, highlighting how federal bodies like NASA and the Department of Commerce are already leveraging AI for various applications. The episode explores the potential benefits, such as enhanced decision-making and improved efficiency, while also addressing the risks, including data privacy concerns and the reliability of AI outputs.

Dr. Hutson emphasizes the importance of accountability frameworks like those proposed by the U.S. Government Accountability Office (GAO) and the emerging role of Chief AI Officers in federal agencies. The hosts conclude with a look at future possibilities for AI in smaller agencies like the USDA, speculating on how AI could revolutionize data-driven decision-making in agriculture and beyond.

Timestamp Summary:

  • 0:07 – Introduction of hosts Emily and Dr. Andrew Hutson; overview of QFlow Systems.
  • 1:02 – Introduction of the main topic: Should government agencies use AI?
  • 1:58 – Discussion on the U.S. Government Accountability Office's AI framework.
  • 3:01 – Concerns about AI reliability and the phenomenon of AI "hallucinations."
  • 4:56 – The importance of monitoring and performance evaluation in AI systems.
  • 7:58 – The creation of Chief AI Officers and their role in federal agencies.
  • 8:34 – Discussion on the current and planned AI use cases across federal agencies.
  • 9:40 – Highlighting NASA as the leading agency in AI use cases.
  • 12:58 – The debate on data privacy and the use of proprietary vs. public data in AI models.
  • 14:57 – How businesses and government agencies can leverage AI models and knowledge graphs.
  • 16:53 – Speculative discussion on AI applications in smaller government agencies like the USDA.
  • 20:05 – The challenges of data collection and the integration of AI in decision-making processes.

About "The Interconnectedness of Things"
Welcome to "The Interconnectedness of Things," where hosts Dr. Andrew Hutson and Emily Nava explore the ever-evolving landscape of technology, innovation, and how these forces shape our world. Each episode dives deep into the critical topics of enterprise solutions, AI, document management, and more, offering insights and practical advice for businesses and tech enthusiasts alike.

Brought to you by QFlow Systems
QFlow helps manage your documents in a secure and organized way. It works with your existing software to make it easy for you to find all your documents in one place. Discover how QFlow can transform your organization at qflow.com

Follow Us!
Andrew Hutson - LinkedIn
Emily Nava - LinkedIn

Intro and Outro music provided by Marser.

Total recording length: 00:22:14


{ 0:07 }

Hello and welcome to the very first episode of The Interconnectedness of Things, as we dive into the exciting world of tech and its profound impact on various industries and our personal lives. I'm Emily. I'm the head of marketing at QFlow Systems.


Speaker 2:  { 0:24 }

Hi, I'm Doctor Andrew Hutson. I hold my PhD from the Missouri Institute of Informatics. I've done research into clinical decision making as well as data visualizations. I am thrilled to start this podcast with Emily as we explore how AI will affect our lives in ways we can't even imagine.


{ 0:47 }

We both work for QFlow Systems, where we make document management, knowledge management, and workflow automation tools for use in all industries, with a special emphasis on federal government and healthcare.


Speaker 1:  { 1:02 }

And what a brilliant segue that is. We're talking about government today. Should government agencies use AI?


Speaker 2:  { 1:12 }

I mean, it's a big question that's coming up right now, not even just for government, but for anybody. Should this be something we use? What are the guardrails that should be put in place? Are government and Congress even prepared to put regulations around it? You've probably read the news articles of big companies like Apple asking for some type of guidance and regulation for the use of AI, to help protect not only corporate interests but also individual privacy and security concerns. Have you ever heard of the US Government Accountability Office?


Speaker 1:  { 1:52 }

I've heard of the name floating around, but for our listeners out there, how about you explain that?


Speaker 2:  { 1:58 }

Oh man, great answer.


{ 1:59 }

I hadn't heard of them before very recently. But these guys, it is a government agency. You can go to the website if you want, gao.gov, and they've attempted to put a framework around artificial intelligence, specifically around accountability. The four pillars of their framework are governance, data, monitoring, and performance. The interesting thing here is that it didn't say reliability in those four pillars, which is something I often challenge on AI: I want to understand how I, as a user, can rely on the outputs of the AI. I mean, is this just guessing? Should I just assume that it's going to have my best interest at heart because of, you know, its soul?


Speaker 1:  { 2:53 }

Probably not, and you definitely want your government to have accurate data.


Speaker 2:  { 3:00 }

Oh, for sure.


Speaker 1:  { 3:01 }

Right. So I've used ChatGPT before, and ChatGPT has gotten wild over the last year or so. And so I would be horrified to learn that the federal government was using ChatGPT.


Speaker 2:  { 3:17 }

Especially for big decisions, right? And just to be clear about what you're saying, that it's gotten wild: there's a specific term called hallucinations, for when you ask a question of a generative AI model and it gives you back something that's just completely inaccurate. The classic example is someone asking Google Gemini how to make their pizza stickier, and it suggested Elmer's glue. And obviously I don't think it's a great pizza topping, but man, it was sticky, and it was how you make something stickier. So why wouldn't that work?


{ 3:53 }

But that's the hallucination, and it can be subtle, where you don't even realize it's the incorrect thing because you don't have the background and knowledge to know that. That's where the danger could come in. It's not always going to be obvious and in front of your face.


Speaker 1:  { 4:11 }

Yeah, and that seems like a slippery slope too.


Speaker 2:  { 4:14 }

Oh yeah, for sure. So that's a problem that can come up, as these things would reinforce the wrong answers and then we would start to be convinced that the wrong thing is the right thing. You can even see this in human interactions too, when you tell a less-than-truthful anecdote and then it starts to spiral and get out of control. We can see the same thing happening with us humans. To dive in a little bit more about how I think they're going to approach reliability using this framework: it kind of centers on their monitoring pillar, where they say they want to ensure reliability and relevance over time.


{ 4:56 }

And the only way to do that is through continuous performance evaluation and assessing the sustainment and expanded use of the data that is building out and relating to that model. So having that pillar is really important. The thing that it doesn't say explicitly, that we've certainly discussed in our back-door meetings, is how do we get the context of what we know how to do? The context of the rules by which we know we operate for our business. Or the procedures for my financial institution that would want to make decisions on investments, or the healthcare industry that would want to find the best intervention for a diagnosis.


{ 5:54 }

All these things we as humans have developed over time in a framework in our minds, through trial and error, through education. And as a result, we have a portion of folks that have specialized. If you think about the doctor who specialized in ontology (oncology, excuse me), that's exactly what you'd want if you get cancer. Someone who has studied and understands how that works, so you can get the best recommendation. The same is true for AI, and the reason why is that there are new ways of interconnecting pieces of information that started out in the social media realm but have now come in to help out with these AI models. To say: I as a human have built out the meaningful relationships between pieces of information.


{ 6:55 }

And now I can connect that to a pre-trained model to increase the likelihood that what is produced from the model is in line with the objective truths I have developed inside of my own knowledge graph or graph database. So that's kind of cool to think about, and the technology has not been called out explicitly yet on GAO's website, but there is a concerted effort, at least in the US federal government, to put some type of structure in place so that this gets our attention. And even earlier this year, on March 28th, 2024, ai.gov became the place to go to find out about advancing governance, innovation, and risk management for AI at agencies across the federal government.
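
The idea described here, connecting a curated knowledge graph to a pre-trained model so its answers stay in line with facts you trust, can be sketched in a few lines. Everything below (the tiny triple store, the `retrieve_facts` helper, the prompt format) is a hypothetical illustration of the general retrieval pattern, not any specific product's API:

```python
# Minimal sketch: ground a language-model prompt with facts pulled
# from a hand-built knowledge graph (stored as subject-relation-object
# triples, the "meaningful relationships" discussed in the episode).

KNOWLEDGE_GRAPH = [
    ("QFlow", "product_category", "document management"),
    ("QFlow", "target_sector", "federal government"),
    ("QFlow", "target_sector", "healthcare"),
    ("GAO framework", "pillar", "governance"),
    ("GAO framework", "pillar", "data"),
    ("GAO framework", "pillar", "monitoring"),
    ("GAO framework", "pillar", "performance"),
]

def retrieve_facts(question: str) -> list[str]:
    """Return triples whose subject appears in the question text."""
    q = question.lower()
    return [
        f"{s} -[{r}]-> {o}"
        for s, r, o in KNOWLEDGE_GRAPH
        if s.lower() in q
    ]

def grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the model answers from known truths."""
    facts = retrieve_facts(question)
    context = "\n".join(facts) if facts else "(no matching facts)"
    return f"Known facts:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("What are the pillars of the GAO framework?"))
```

In a real system the substring match would be replaced by entity linking or embedding search, but the shape is the same: retrieve trusted relationships first, then let the pre-trained model generate against them.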


{ 7:58 }

And they've also started to name Chief AI Officers, a new role that we haven't seen before, which is trying to pair the data side (because these folks typically started as a Chief Data Officer) with the use and accountability of AI models as they enter into the day-to-day practice of our federal agencies. And I found out kind of a cool fact: how many planned or current artificial intelligence use cases are in existence right now in the federal government?


Speaker 1:  { 8:34 }

Well, that was going to be my next question. So you're telling me that there are federal agencies right now using AI, and there are multiple, apparently.


Speaker 2:  { 8:45 }

That's great deductive reasoning there, absolutely. The fact that I asked the question with "current and planned" implies that there may be something current happening at the federal government.


{ 8:55 }

So the count right now, according to gao.gov, is that 20 of 23 agencies have self-reported about 1,200 current and planned artificial intelligence use cases.


Speaker 1:  { 9:15 }

Interesting.


Speaker 2:  { 9:16 }

Right now, there are so many of them. Now here's another bit of interesting trivia, and maybe you can take a guess: what do you think the number one agency is for AI use cases?


Speaker 1:  { 9:31 }

My mind went to cybersecurity first for some reason. Warm, warmer... cybersecurity.


Speaker 2:  { 9:40 }

I don't think there's a cybersecurity agency, but I I do like where your head's at on that.


Speaker 1:  { 9:45 }

Is it Homeland Security?


Speaker 2:  { 9:47 }

Great guess. DHS, the Department of Homeland Security, is about midway through.


{ 9:54 }

They've identified 21 use cases. The biggest by far is NASA, with 390 reported use cases, seconded by the Department of Commerce for these AI model use cases. And again, these are either planned or current. Of that total 1,200, there are 520, maybe less, planned that are coming up, and maybe 228 in total in production. Now there's another subset that says their use is not provided, probably for security reasons: we can't tell you if we're using it, which makes sense, right, across the board with these agencies. But we can see that the question that you started the conversation with, of should government use these, is very relevant, because they already are.


{ 10:51 }

And I think, to ask that question, there may be an assumption that we haven't said out loud: when we say should they use AI, were we thinking that the use of AI is a relatively new occurrence? I think some people think it is, and that was probably largely influenced by the rapid, explosive growth of ChatGPT into the zeitgeist of our concepts of technology.


Speaker 1:  { 11:21 }

Yeah, it hit the mainstream.


Speaker 2:  { 11:22 }

Yeah, it hit like everybody knew it. And then it was made in a way that was approachable, so that others felt like they could use it. They didn't understand it, maybe, but they knew they could probably get some value from it. And so now the question is bringing that to the forefront of, well, should that be in use by a government agency? And I think it's important to call out that generative AI has been at least 10 years in the making. ChatGPT didn't start recently; they've been working since about 2012 to go pull data, train, and reinforce.


{ 11:58 }

And I think some folks maybe inflate or conflate the SaaS idea with AI companies, and they're very different, because the time it takes up front to do the research, science, and model tuning is very time-consuming and resource-intensive. You can see it in some of the models out there today: I think Meta recently just dropped Llama 3.1, which had 405 billion parameters. It likely cost $60 to $80 million in compute cost alone to generate. That knowledge of how expensive it is to build these things is why it's very hard for other companies to just decide to get into the AI model game. And most of what we're going to see is: how do we leverage the pre-built models?
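
The compute bill behind a frontier model can be sanity-checked with the common rule of thumb that training takes roughly 6 FLOPs per parameter per training token. The 405-billion-parameter figure is from the episode; the token count, sustained throughput, and GPU-hour price below are assumed round numbers for illustration only:

```python
# Back-of-envelope training-compute estimate using the rough
# heuristic: total FLOPs ~= 6 x parameters x training tokens.

params = 405e9   # Llama 3.1's reported 405B parameters
tokens = 15e12   # assumed training-set size (~15T tokens)

total_flops = 6 * params * tokens  # ~3.6e25 FLOPs

# Assumed sustained throughput and rental price per GPU-hour:
flops_per_gpu_hour = 400e12 * 3600  # ~400 TFLOP/s, held for an hour
usd_per_gpu_hour = 2.0

gpu_hours = total_flops / flops_per_gpu_hour
cost_usd = gpu_hours * usd_per_gpu_hour
print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost_usd / 1e6:.0f}M in compute")
```

Under these assumptions the estimate lands around $50 million, the same order of magnitude as the $60 to $80 million figure mentioned, which is the point: the barrier to entry is the up-front compute, not the serving.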


Speaker 1:  { 12:57 }

Right.


{ 12:58 }

And that makes me think of data privacy, and that's a huge issue. I think that's a lot of where people are worried about generative AI being used in government agencies, and that's probably where the Chief AI Officers are going to come into play. And so whenever you're talking about these different models, they've largely been, correct me if I'm wrong, but the data has been captured from public information.


Speaker 2:  { 13:35 }

That's exactly right. There could be some proprietary stuff that's put in there, depending on who owns it. For example, Slack was just in the news recently: Salesforce, because they bought Slack a couple years ago, is going to use the data in Slack to help train a model. So sometimes it could be proprietary, as in, we're the only ones that have that set.


{ 14:00 }

X is certainly doing that, formerly Twitter. There's a new one, I think it's called Grok or x.ai, I can't tell which one it is, but those would be examples of using proprietary data to inform a model. But by and large, most of the training set has been publicly accessible data. In fact, there's a thing called The Pile, which has been in the news recently, that was curated by a not-for-profit company, funded by for-profit companies, to pull the data. And then they could use the pool of this data to train models. And some YouTube creators are upset because their videos are being used and added to The Pile. You'll probably get some explosive sound effects with that. The Pile. The Pile.


Speaker 1:  { 14:51 }

You have...


Speaker 2:  { 14:53 }

...to train all their models.


{ 14:57 }

But when we talk about the privacy component: the generation of the models will get better over time because of the inclusion of publicly accessible data. But what can't be included is the context of your business, or of you yourself. The only way right now, if you want to use those third-party models, is to either accept their terms and send your data to them, or find a way to leverage those models behind a wall, if you will, that only you can access. So you can leverage the models as a baseline and then add your own data on top. That can seem difficult for folks, but it's a lot cheaper than trying to build the model from scratch yourself.


{ 15:56 }

So what we're seeing is more and more open-source models, and then folks not quite understanding where to take that next step. And the next step comes into what we've referenced before, which is knowledge graphs that hold your specific knowledge, combined with a local version of the models that you are then reinforcing. Building on top of that, to me, is the next phase of AI, as we get better and better foundational models, we'll say. I think Apple, when they were talking about artificial intelligence, or Apple Intelligence, were grouping things like ChatGPT into world knowledge, so generally available, but your specifics within that were kept private. And I think that's what companies are going to move towards, as well as government agencies.


Speaker 1:  { 16:53 }

Yes.


{ 16:53 }

And I think that's an important component too, because my mind's going: it's not surprising that NASA is using AI the most. But I'm thinking of those smaller agencies, like the USDA or GSA, you know, any of these smaller agencies, and how might they even use AI?


Speaker 2:  { 17:28 }

Totally, totally speculative now, right? But let's take the Department of Agriculture, USDA, as an example, OK? So they are trying to coordinate the producers of America, whether that be livestock or agricultural activities or any of the ancillary work that's done for those ranches and farms, right?


{ 17:58 }

The power that's needed at the next level for the USDA is being able to determine region-specific anomalies that require some type of intervention, or data collection for soil composition. Better meteorological models, now that global warming is causing less predictable behavior. The efficacy of different agricultural approaches. So rather than plowing a field, now you would leave the field alone and use a different tool to get the seeds in the ground, to prevent the topsoil from blowing away because of big storms, as an example.


{ 18:44 }

These approaches are very much aided when a knowledge graph can be built to give producers better insight and prediction, if not prescription, on what to go do with their particular farm. Some use cases could be as simple as, hey, a natural-language question to the USDA to figure out what subsidies I might be eligible for in my specific region. Today that's not super available, and it might take a lot of looking into. But if you can create that knowledge graph and tie it into a large language model, so you can use human questions to interrogate it, then perhaps that becomes a lot faster and easier for producers. Then they can get the support that's required, and the system becomes more efficient. All of this is speculative, but you can certainly see the power of knowledge graphs, private LLMs, and geographic information systems combining to create something really powerful for producers.
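
The subsidy-eligibility idea floated here can be reduced to a toy sketch: match a plain-English question against a small eligibility table standing in for the knowledge graph behind an LLM. Every program name, region, and rule below is invented purely for illustration; a real system would use the USDA's actual program data:

```python
# Hypothetical subsidy lookup: match a plain-English question against
# a small eligibility table (a stand-in for a real knowledge graph).

SUBSIDIES = [
    {"name": "Drought Relief Grant", "region": "midwest", "activity": "livestock"},
    {"name": "Cover Crop Incentive", "region": "midwest", "activity": "row crops"},
    {"name": "Irrigation Upgrade Loan", "region": "southwest", "activity": "row crops"},
]

def eligible_subsidies(question: str) -> list[str]:
    """Return program names whose region and activity both appear in the question."""
    q = question.lower()
    return [
        s["name"]
        for s in SUBSIDIES
        if s["region"] in q and s["activity"] in q
    ]

print(eligible_subsidies("I raise livestock in the Midwest, what am I eligible for?"))
```

The keyword match is doing the job an LLM would do in the speculated system, turning a human question into a structured query, but the payoff is the same: the producer asks in plain language and gets back only the programs relevant to their farm.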


Speaker 1:  { 19:52 }

Yeah. And this is data that they're likely already gathering.


{ 19:57 }

It's just connecting those dots and being able...


Speaker 2:  { 20:00 }

To.


Speaker 1:  { 20:01 }

Analyze that data easily.


Speaker 2:  { 20:05 }

And that's an effort that I think everyone has been on for at least three decades. How do I capture data? How do I organize it in a way that I can then take advantage of it? And what we're fighting is that that's hard from an actual practice standpoint. It's hard to have the discipline to get your data right, to have it be reliable, to have it calculated as you expect. I mean, you have a whole department at USDA on statistics that is just there to collect self-reported forms from producers, to try to come up with some type of trending. And it's manual, sometimes it's on paper. And it's still a bit difficult and dated. Now we're going to ask for the introduction of AI models into that, and that could be really hard and seemingly daunting.


{ 20:56 }

So the easier we can make that approach, the more it becomes second nature to populate a knowledge graph with the connected relationships between bits of information, both direct and indirect, mind you, not just direct relationships, and the better off everyone's going to be on the power of decision-making that they get. It's another big tipping point in going from an information phase to an intelligence phase in our use of data, not only from a collection standpoint but from an execution standpoint.


Speaker 1:  { 21:37 }

And that wraps up another episode of The Interconnectedness of Things. We hope you enjoyed exploring today's topic as much as we did. Remember, technology is more than just tools. It's a thread that connects us all. If you found value in our discussion, be sure to subscribe, leave us a review, and share the podcast with others who are just as passionate about tech and innovation as you are.


{ 21:58 }

Thanks for joining us on this journey through the interconnectedness of technology. Until next time, stay curious and keep exploring.



 ---   End of transcript   ---