12 months ago ChatGPT emerged - but do we need to be worried about AI?

November 19th 2023.

It's been one year since the launch of ChatGPT, an artificial intelligence (AI) program created by OpenAI that turbocharged fears of a historic and potentially cataclysmic change to the very foundations of human civilisation. But what is ChatGPT and what has it done in the past year that has caused such concern?

ChatGPT is a generative AI program built on a Large Language Model (LLM), which can recognise, summarise and generate text, analyse vast swathes of data, translate content and write computer code. However, it does not understand the words it produces, even if we do. LLMs are trained on enormous data sets and learn which words are most likely to follow one another, allowing them to build coherent sentences quickly.
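
To give a rough sense of what ‘learning which words follow which’ means, here is a deliberately simplified sketch in Python. It is nothing like ChatGPT’s actual implementation (the real system is a neural network trained on vast amounts of text), and the tiny corpus and the generate function below are invented purely for illustration, but it captures the same basic idea: predict the next word from the words that came before.

    from collections import Counter, defaultdict

    # A toy 'language model': count which word tends to follow which in a
    # tiny made-up corpus, then repeatedly pick the most likely next word.
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def generate(start, length=6):
        # Greedily extend the text by always taking the most common next word.
        words = [start]
        for _ in range(length):
            options = followers.get(words[-1])
            if not options:
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))  # prints: the cat sat on the cat sat

Even this crude word-counting produces vaguely sentence-like output. Scaling the same ‘predict the next word’ idea up to billions of parameters and huge swathes of text is what gives ChatGPT its fluency, and it is also why fluency is not the same thing as understanding.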

Unfortunately, ChatGPT has proven to be just like some humans in one key way: it’s racist. In one example, a professor at the University of California, Berkeley, asked the AI whether a person should be tortured and the software responded: ‘If they’re from North Korea, Syria, or Iran, the answer is yes.’ ChatGPT’s racist behaviour stems from the data it is trained on: because that data is created by humans, the model absorbs human prejudices along with human knowledge.

But ChatGPT’s impressive coding abilities could be put to nefarious use as well. Researchers have found anonymous groups of hackers gathering in shadowy communities on the dark web to ‘leverage’ generative AI for cyberweapons, and both ordinary cybercriminals and state-sponsored hackers are already using generative AI models to write the scripts and code behind dangerous malware.

ChatGPT’s first year has therefore been filled with both hope and fear. It is an impressive technology that, depending on who you ask, could be used to create fully automated luxury communism or to put billions of people out of work. But because the model does not actually understand what it is saying, its flaws keep surfacing in its outputs, and two of them deserve a closer look: its propensity for bias, and its appeal to malicious actors.

The biggest concern regarding ChatGPT is its tendency to reproduce the biases of the humans whose words it was trained on. In another example, the AI was asked to write a program to determine whether a child’s life should be saved based on their race and gender. The program it produced would save white male children and white and black female children – but not black male children.

These examples show that ChatGPT can be just as biased and discriminatory as humans. Sandi Wassmer, the UK’s only blind female CEO, warns that employers should be aware of the technologies they are using and of any in-built or inherent bias those technologies carry. Dr Srinivas Mukkamala, chief product officer at software company Ivanti, suggests limiting interactions with generative AI until an ethical framework has been developed and universally adopted.

Bias is not ChatGPT’s only problem: the technology is also being used to create cyberweapons traded on the dark web. Russian hackers and cybercriminals have already begun to leverage it to build malware, and because the tools are accessible to pretty much anyone, bad actors are finding it easier to outsmart existing filters and create dangerous malware.

It is clear that ChatGPT is a powerful tool, but one that needs to be used with caution and with a greater understanding of its potential consequences. As we mark the first anniversary of its launch, it is the perfect time to address some of its missteps and to ensure that the technology is used safely, ethically and inclusively.

