Oppenheimer was haunted by the atomic bomb: “I have blood on my hands.”
When Nolan’s film arrived in theaters this summer, the debate over how to develop AI safely and responsibly was at a fever pitch in Washington. From President Biden’s White House meetings with top CEOs to Senate hearings, tech executives and lawmakers invoked Oppenheimer’s struggles to lend moral weight to the debate over emerging technologies, seeing the story as a way to make complex stakes legible.
But Silicon Valley’s fascination with Oppenheimer left Nolan with “ambivalent” feelings.
“It’s great that scientists and engineers from all walks of life are looking at history, looking at that moment in time, and worrying about the unintended consequences,” Nolan said in an interview at the Hay-Adams Hotel in Washington. “But I also think it’s important to keep in mind that the nuclear threat is a unique threat to humanity.”
Nolan said the atomic bomb was a singular “force of destruction,” and that policymakers need to deal with it differently than with tools such as artificial intelligence. He cautioned against ascribing “god-like” attributes to the technology in a way that allows companies and governments to avoid responsibility.
“We need to look at it as a tool, and there needs to be accountability for the people using the tool and how that tool is used,” he said.
Some engineers warn of an “apocalypse” scenario in which AI becomes capable of thinking for itself and threatens to wipe out humanity. Those warnings have resonated on the world stage: AI safety became a major focus of an international gathering of leaders at Bletchley Park, the historic British site where Allied codebreakers deciphered secret German messages during World War II.
But Nolan warned that focusing on those hypothetical outcomes distracts from problems that businesses and policymakers could tackle now.
Fixating on the most extreme scenario, he said, lets everyone off the hook for nearer-term harms.
Already, AI systems are being trained on his work and other Hollywood films to generate images and videos, and Nolan said policymakers need to address the fact that such systems are taking work away from people.
“I think when you look at the broader picture of where this technology is going to be applied or where it’s going, you’re distracting from things that need to be addressed right now, like copyright law,” he said. “They are not all that exciting or interesting to talk about … but there are immediate implications for employment and compensation that need to be addressed.”
Nolan said Oppenheimer’s story suggests how difficult it will be to regulate artificial intelligence. ChatGPT has accelerated the race among top companies to develop and deploy AI systems, and policymakers around the world are only in the early stages of catching up. In Congress, lawmakers have formed groups to develop bipartisan legislation addressing the technology amid heavy lobbying from the tech industry.
Oppenheimer was largely unsuccessful in addressing the risks of his invention. Nolan said the physicist was “devastated” by the failure of his efforts to stop the development of the hydrogen bomb. Oppenheimer’s attempts to work within the political system to create change largely failed, especially after his security clearance was revoked over alleged ties to communism.
“I sympathize with those on the cutting edge of AI who look at Oppenheimer’s story and see it as a cautionary tale, in part because I don’t think it provides many answers,” he said.
After the war, Nolan said, atomic researchers were elevated in popular culture and achieved greater fame than any scientist in history. But ultimately, they found themselves excluded from the political system.
“When politicians need inventors, they have a say, but when they don’t, they have less of one,” Nolan said. “Oppenheimer’s story points out the many difficulties and pitfalls of this kind of problem.”
The fact that inventors cannot ultimately decide how their technology is used bodes poorly for the many tech executives, researchers, and engineers who have spent much of this year educating policymakers in Washington about artificial intelligence. OpenAI CEO Sam Altman, Tesla CEO Elon Musk, and top AI researchers from institutions such as the Massachusetts Institute of Technology have testified at public hearings and spent hours talking with lawmakers in private meetings amid discussions of new AI legislation.
The modern political environment presents new challenges, especially as the companies developing AI systems accumulate greater political influence in Washington.
“I’m concerned that we haven’t yet escaped the manipulations of a technology industry that leaders in Washington consistently tell us they don’t understand well enough to regulate,” Nolan said. “We have to get out of that mode immediately.”
Nolan said that when he set out to make a film about the 20th-century scientist, he never expected it to become so relevant to this year’s technology debate. He discussed AI frequently during the “Oppenheimer” media blitz, and in November he received a Public Service Award at a Federation of American Scientists event alongside policymakers working on artificial intelligence, including Sen. Charles E. Schumer (D-N.Y.), Sen. Todd C. Young (R-Ind.) and Alondra Nelson, former acting director of the White House Office of Science and Technology Policy.
“In making a film about Oppenheimer, I never expected to spend so much time talking about artificial intelligence,” Nolan said.