Tech companies agree to White House safeguards for AI security.

White House meets tech leaders, seeks commitments to ensure AI is used equitably & openly; some push for more.

July 22, 2023

Last month, President Biden met with experts and leaders in artificial intelligence in San Francisco to discuss the field’s promise and risks, and Vice President Kamala Harris met with consumer protection, labor, and civil rights leaders to discuss how to harness the power of AI while protecting people from harm and bias. The White House is clearly taking artificial intelligence and its potential risks seriously, and it has now announced a meeting with leading technology companies to discuss developing AI in a responsible and safe manner.

Google, Meta, Amazon, Microsoft, OpenAI, Anthropic, and Inflection met with the Biden Administration to discuss implementing the AI Bill of Rights framework laid out in 2022. The framework aims to protect the public from the dangers of emerging artificial intelligence, and the White House press release states that “Today’s announcement is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.”

The companies have reportedly agreed to voluntary safety, security, and trust commitments. “It’s a big deal to bring all the labs together, all the companies,” Inflection CEO Mustafa Suleyman told the Associated Press. “This is supercompetitive and we wouldn’t come together under other circumstances.” He added that the “red-team” tests the companies agreed to represent a significant commitment, even though they are voluntary.

However, not everyone is convinced that these measures are enough to ensure public safety when it comes to AI. “A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” AI Now Institute executive director Amba Kak told the Associated Press. She believes a much more wide-ranging public deliberation is necessary, one that will likely raise issues the companies almost certainly won’t voluntarily commit to because doing so could affect their business models.

The only way to know whether the White House’s initiative on artificial intelligence will succeed is to see what comes of these agreements. AI is increasingly being used across industries, so it is important that it is deployed in an equitable and fair manner. As the technology industry continues to grow, we must make sure that public safety and accountability are not ignored in favor of commercial interests.
