In an eerie first in congressional history, an artificial intelligence program mimicked a senator to open a hearing on artificial intelligence this week. The demonstration by Sen. Richard Blumenthal, D-Connecticut, of how convincingly A.I. could recreate both his voice and the kind of wording he would use highlighted some of the real dangers this technology poses absent standardized guidelines governing its use. As Blumenthal noted, it could just as easily have been programmed to mimic him endorsing, say, the surrender of Ukraine. The apparent bipartisan support for some sort of A.I. regulation is encouraging.
Today's emerging A.I. systems aren't merely new ways of using existing computer technologies; they are something entirely new: programs that learn, grow and ultimately create content that no human specifically gave them. The audio of the fake Blumenthal's opening remarks at Tuesday's Senate subcommittee hearing on technology, for example, was generated by A.I. tools: ChatGPT composed the words, and voice-cloning software trained on recordings of Blumenthal's floor speeches supplied his voice.
The same technology can synthesize remarkably realistic video images of people saying things they never said. It doesn't take much effort to imagine what disinformation mischief could be done with an A.I.-generated video of, say, a sitting president. A favorite scam against the elderly, sending them emails claiming to be relatives making desperate pleas for money, could conceivably be augmented with videos that would look and sound like their own grandchildren. More mundane threats include the possibility that entire categories of jobs involving research, writing and communication could be lost to A.I. programs. Perhaps even (gulp) editorial writers.
And those are just the concerns about A.I. doing what humans tell it to do. The ultimate risk of machines that learn is that they could outgrow their human taskmasters. Hollywood has already imagined, many times, where that could go, from "2001: A Space Odyssey" to "The Matrix," and now technology ethicists are pondering the same question.
In a welcome departure from the confrontational way Congress and Silicon Valley usually interact, Tuesday's star witness, OpenAI chief executive Sam Altman, not only acknowledged the potential dangers of his company's ChatGPT program and other A.I. systems but urged Congress to work toward a regulatory framework. "I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," Altman told the subcommittee. "We want to work with the government to prevent that from happening."
Some want a dedicated government agency created to address the issue, which sounds appropriate. Regulation could include licensing of A.I. systems, restricting the source material made available to them during training, and requiring that A.I.-generated content always be identified as such. These programs are useful tools that must not be allowed to become weapons.
REPRINTED FROM THE ST. LOUIS POST-DISPATCH