Congress confronts security risks as it seeks to expand Hill’s AI use
More than 100 congressional offices are already using artificial intelligence for everyday tasks — such as writing constituent correspondence, handling member scheduling and drafting legislation.
And lawmakers and staff alike are hungry to find more ways to harness AI.
That could include ways to ease the workload of overburdened staffers, help with research, write bills and summaries and extend constituent outreach capabilities. Essentially, the Hill is eyeing ways to build staff capacity without actually expanding the payroll.
Congress may be notorious for lagging behind as the world embraces new technology — from then-Sen. Ted Stevens calling the internet a “series of tubes” back in 2006 to lawmakers’ slow-footed approach to adopting email back in the 1990s.
But lawmakers are determined that when it comes to AI, things will be different.
“AI won’t replace humans,” Rep. Bryan Steil (R-Wis.), the House Administration Committee chair, said in an interview. “But humans that use AI could replace those who aren’t using AI.”
Still, even lawmakers who favor innovation know AI comes with risks. There are concerns that an overreliance on AI could lead to cybersecurity problems, from national security risks to the mishandling of private constituent data. Officials admit it may be a while before AI can be leveraged for anything involving sensitive or personal information.
“We’re talking about balancing the risks that come with any new technology to make sure we have appropriate safeguards in place and to make sure we’re leveraging the benefits of AI and protecting ourselves from any downside risk,” Steil said.
To that end, Congress is working to build early guardrails for AI use. The House’s Chief Administrative Office is expected to unveil a draft policy for AI use across the House in the next two to three months, according to Deputy CAO John Clocker, who said at a committee hearing Tuesday that while AI has “transformative potential,” offices have to be “extraordinarily cautious before we integrate AI tools.”
“Adversaries will also use these tools to try to harm the House,” Clocker warned the Administration Committee.
The House’s policies will be based on the National Institute of Standards and Technology’s AI Risk Management Framework, but tailored by CAO for what Rep. Barry Loudermilk (R-Ga.) referred to as the “very complicated ecosystem” of the House.
While the House may adopt broad guardrails for AI usage, management of each office will remain up to individual members and their appetite for innovation, experimentation and risk. Some lawmakers already have ideas for ways to harness it for themselves and their staff.
Rep. Morgan Griffith (R-Va.) envisions being able to listen to audio versions of reports or bill text on his drive to Washington, he said at the committee hearing. Rep. Norma Torres (D-Calif.) wants to know how AI can help her district staff wade through overwhelming loads of constituent casework while protecting people’s personal information.
Congressional use of AI is in its early stages, but so are major AI programs outside the public sector. The current plans to put it to work for lawmakers represent a rare example of Congress adopting a technology while it’s still being honed and developed.
“It does hallucinate,” Clocker said of AI’s inconsistencies. “It is confident, even though it is hallucinating.”
Already, more than 200 staffers in 150 House offices, plus committees, are participating in a pilot program using ChatGPT+ for everyday tasks, such as scheduling, constituent correspondence and bill summaries.
Currently, the most popular use of ChatGPT+ is to produce a first draft of testimony, a statement or a speech, before staffers bring it to its final form — editing the AI version to integrate voice and verve that AI can’t yet achieve.
While the House has been in talks with other generative AI platforms, including Google’s Bard and Microsoft’s Copilot, OpenAI is the only company so far that has committed to protecting House and member data, including pledges not to use that data to train its models or share it with other customers.
The CAO’s office is evaluating other providers. But until another provider accepts the House’s terms for data protection, the paid license version of ChatGPT+ remains the only House-approved AI tool.
The Senate is not as far along in its experimentation with AI use, though the upper chamber did establish a working group late last year and has issued some guidance to offices involved in pilot efforts. The chamber’s top cybersecurity officials determined that OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Copilot pose a “moderate level of risk if controls are followed.”
For now, Senate officials have limited use of the technology to research and evaluation purposes — and only using non-sensitive data.
By Katherine Tully-McManus