AI previously classed as “too dangerous to release” is now out

Artificial intelligence is one of the most advanced domains of today's technology, and in many fields using AI is far more efficient than doing the same task by hand (and therefore by a paid employee).

The AI model in question is known as "GPT-2", and its sole purpose is to analyze an input piece of text and then predict the words that might follow it. The algorithm is capable of putting together long strings of text that might fool anybody into believing they were written by an actual human being.
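GPT-2 itself is a large neural network, but the core task it performs, predicting the next word from the words that came before it, can be illustrated with a toy model. The sketch below is a simple bigram frequency counter, nothing like GPT-2's actual architecture, and all names in it are invented for illustration:

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(model, start, length=10):
    """Repeatedly predict a likely next word to extend the text."""
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        # Pick the most frequent continuation. GPT-2 instead samples from
        # a learned probability distribution over its whole vocabulary,
        # conditioned on the entire preceding text, not just one word.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model reads the text and the model predicts the next word"
model = train_bigram_model(corpus)
print(generate(model, "the", length=5))
```

A real language model differs mainly in scale: instead of a lookup table over word pairs, it learns statistical patterns across long spans of text, which is what lets it produce the convincing multi-paragraph output described here.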

The reason for worrying

It turned out that GPT-2 was so good at its job that it could easily be used malevolently by anybody who desired to do so, which was a major concern at the time.

One of the potentially realistic scenarios is the generation of “synthetic propaganda”, meaning that the algorithm could quickly generate long texts promoting a certain extremist group, for example.

“Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” stated OpenAI in a blog post from February.

They added: "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

AI on the loose

However, after some time had passed, OpenAI released a larger but still limited version of the model, and now the full version is publicly available.

Many researchers are extremely worried about the potential effects of the AI. It can generate fake news, impersonate other people, and produce spam persistently and convincingly.

Experts advise people to be more careful about which online articles they choose to trust because, at the moment, there is no reliable countermeasure for detecting AI-generated fake news.
