Former Google executive cautions that highly advanced AI technology could be used as a tool by terrorists.

Mentions of North Korea, Iran and Russia were made.

13 February 2025

Eric Schmidt, the former CEO of Google, has voiced concerns about the dangers of artificial intelligence (AI) falling into the wrong hands. In an interview with the BBC, Schmidt issued a sobering warning about the "extreme risk" posed by terrorists or rogue states using AI for their own nefarious purposes.

According to Schmidt, governments need to take a more active role in regulating private tech companies, because there is a very real possibility of AI being used for "evil goals". He specifically named North Korea, Iran and Russia as countries that could potentially use AI to create biological weapons. "The real fears that I have are not the ones that most people talk about AI - I talk about extreme risk," Schmidt said, highlighting the gravity of the situation.

Having been a part of Google for over 16 years, Schmidt has seen firsthand the potential of AI and its impact on society. However, with private companies leading the way in AI development, he stressed the importance of careful monitoring and regulation by governments. "It's really important that governments understand what we're doing and keep their eye on us," he said, emphasizing the need for transparency and accountability in the development of AI.

Schmidt's comments come in the wake of a recent summit on AI in Paris, where the UK joined the US in declining to sign a communique on the future direction of the technology. The declaration, signed by 57 countries, aimed to promote "inclusive and sustainable artificial intelligence for people and the planet". However, the UK expressed concerns about the lack of practical clarity on global governance of AI and the need to address difficult questions regarding national safety.

When questioned about the decision not to sign the communique, Communities minister Alex Norris denied any influence from the US administration and said that decisions were made based on what was best for the British people. "That's what we've done in this situation, as we would do in any situation - global or domestic," he clarified.

The development of AI has the potential to change the world, but it also carries significant risks that cannot be ignored. As Schmidt points out, it is crucial for governments to understand and closely monitor the progress of AI to prevent it from being used for malicious purposes. Its future lies in the hands of both private companies and governments, and it is vital that they work together to ensure its responsible and safe development.
