The Case of Scarlett Johansson’s Voice Certainly Resonates at AI Global Forum at Seoul

Parley participants seem to agree on the need for curbs to protect property like the actress’s astonishing voice.

Scarlett Johansson at Cannes, May 24, 2023. Joel C Ryan/Invision/AP, file

The South Korean and British organizers of the second AI Global Forum agree: the actress Scarlett Johansson’s claim to have recognized an imitation of her voice on ChatGPT was not only valid but also an example of the safety issues besetting the wild, untamed world of artificial intelligence.

“That was misuse of AI,” the South Korean minister of science, Lee Jong-ho, responded without hesitation when asked by the Sun what he thought of Ms. Johansson’s charge that her voice was imitated by OpenAI without her knowledge or approval for ChatGPT’s Sky. It exemplified, he said, the need for curbs on how AI “could be handled for our society.”

Beside him at the closing of an intense all-day meeting at Seoul, the head of the British team at the forum, Michele Donelan, told the Sun, “In that particular case, my understanding is a company imposed her voice.” Ms. Donelan, Britain’s secretary of state for science, innovation and technology, said “the general point is intellectual property” — one of the most worrisome issues in the rapid advance of AI.

The case of Ms. Johansson struck a responsive chord at the forum, successor to the first AI Forum hosted by Britain at Bletchley Park in November. Emissaries from two dozen nations and 14 companies all signed on to promises to do everything possible to prevent abuses of AI, to avoid copycatting and plagiarizing the material of others, and to stop making up information and ideas at a speed beyond the ability of mere mortals to control.

“Ensuring Digital Rights for a Digital Society of Shared Prosperity,” said a banner proclaiming the overall purpose of the conference.

The idealism and good intentions of the pledge, however, raised the question of how governments and corporations, which compete, wage trade wars, threaten one another, and often fight real wars, can ever cooperate effectively beyond mere words exchanged in the leafy confines of the Korea Institute of Science and Technology, a research institution in Seoul.

Ms. Donelan appeared rather defensive when asked how the flossy language of what was called “the Seoul Declaration” could stick in the real world of bitter, sometimes violent rivalries. “Global regulation was never the purpose,” she said. “We produced tangible solutions working together. Further consultations are needed, but that will be done by different forums.”

Mr. Lee and Ms. Donelan, like most of the government representatives and corporate executives, looked to a future where the emphasis might shift from “safety” in the use of AI to “action” to stop abuses in earnest. “Action,” they said, would be the focus of the third AI Summit and ministers’ session to be held next year in Paris.

The head of policy for a British think tank, the Centre for Governance of AI, Markus Anderljung, predicted “fraud and what-not will be big going forward” but believed companies would agree on what might be “intolerable risks” that would necessitate controls. “If our system can increase the activities of cyber criminals,” he said, “that will be an intolerable risk.”

Regardless, “AI systems will be much more vitally used,” said Mr. Anderljung. “Society will need to adapt. Cyber attacks will be much cheaper. We will need spam filters on our homes.”

An Oxford professor of government, Robert Trager, saw “many ethical challenges” that go beyond anything anticipated by officials and bureaucrats. “We have no idea where to look for the answers,” he told the Sun after speaking out in a conference session. “I see lots of serious problems.”

The conference organizers, though, looked ahead with much greater optimism.

“We wish to establish global AI leadership,” said Mr. Lee. “The Seoul declaration and agreement adopted three major goals — safety, innovation and inclusivity.”

Ms. Donelan said, “We shape the future where AI is a catalyst for a safer world.” Certainly “there could be risks and side effects,” she said. “Our knowledge needs to be improved around the risks.”

