What's the truth about Facebook's "scary" artificial intelligence story?

 The media has seized on a story about artificial intelligence supposedly getting out of hand and taking control of our lives. The British newspaper the Mirror said that experts were wary of the "danger of robotic intelligence" after an artificial intelligence system at Facebook developed its own language. Similar reports appeared in newspapers such as The Sun, The Independent and The Telegraph, and other online news sites covered the same story. It read like a science fiction movie, and The Sun published a series of frightening images of robots. So is it time to fear the machines and prepare for the end of the world at their hands?


Probably not. Although some brilliant scientific minds, such as Stephen Hawking, have expressed fears that artificial intelligence may one day threaten humanity, there is nothing in the Facebook story to be afraid of.

Where did the story come from?

Last June, Facebook published a blog post about important and exciting research into chatbots, programs that hold short text conversations with humans or with other bots. The science site New Scientist and others covered the news at the time. Facebook had run an experiment in which bots negotiated with each other over the ownership of virtual goods. The goal was to understand the role that language plays in such negotiations, so the bots were programmed to use language and the researchers observed how it affected their success in the bargaining.


Contrary to the movies, humans and robots don't try to kill each other

A few days later, some news reports picked up on the fact that, in a few cases, the bots' dialogue seemed, at first glance, to be nonsense. One exchange went as follows:

  • Bob (bot): "I can I can anything else."
  • Alice (bot): "Balls have zero to me to me to me to me."

Although some reports suggested that at this point the bots had invented a new language in order to escape human control, the better explanation is that the neural networks were simply modifying human language so as to interact more efficiently. "In their attempts to learn from each other, the bots began chatting back and forth in a derived shorthand," as the technology news site Gizmodo put it. It is not new for artificial intelligence systems to stray from English as we know it in order to perform better on a given task.
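
To see how repurposing ordinary words can make machine-to-machine chat more "efficient", here is a minimal illustrative sketch, not Facebook's actual system: two scripted agents encode quantities by repeating an item's name, which is compact and unambiguous for the programs but reads as gibberish to a human. The functions and the encoding scheme are invented purely for illustration.

```python
# Toy illustration only -- not Facebook's negotiation bots.
# Two scripted "agents" repurpose English words as a counting code:
# an item word repeated N times means "I want N of that item".

def encode_offer(items):
    """Encode an offer by repeating each item word once per unit,
    e.g. {"ball": 3, "hat": 1} -> "ball ball ball hat"."""
    return " ".join(word for word, count in items.items() for _ in range(count))

def decode_offer(message):
    """Recover the item counts by counting the repetitions."""
    counts = {}
    for word in message.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

offer = {"ball": 3, "hat": 1}
message = encode_offer(offer)
print(message)                          # "ball ball ball hat" -- nonsense to us
print(decode_offer(message) == offer)   # True: perfectly clear to the other agent
```

The point is only that text which looks meaningless to us can still be a perfectly regular code between two programs.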

Google has reported that its translation system did something similar during its development. "The network needs to encode the semantics of the sentence," Google said in a blog post.

Perhaps the reason the story has attracted so much interest in recent days is the public disagreement between Facebook CEO Mark Zuckerberg and tech entrepreneur Elon Musk over the potential risks of artificial intelligence.

Fear of robots

But the way the media covered the story says more about our cultural anxieties and how machines are portrayed than about the facts of this particular case.


Zuckerberg recently discussed the dangers of artificial intelligence, and we must also admit that cinema has long portrayed robots as evil creatures. In reality, artificial intelligence now occupies a large place in scientific research, and the systems being designed and tested are increasingly complex. One consequence is that it is often unclear how neural networks produce their outputs, especially when two AI systems interact with each other without much human intervention, as in the Facebook experiment.

This opacity is one reason for the controversy over the dangers of artificial intelligence in systems such as autonomous weapons, and it has made the ethics of artificial intelligence a rapidly developing field of study, since this is a technology that will directly affect our lives in the future.

But the Facebook system was used for research purposes, not for a public application, and it was cancelled because it produced something the research team was not interested in studying, not because they thought their experiment threatened the existence of humanity. It is also important to realize that chatbots in general are difficult to develop. Facebook recently decided to scale back its chatbot platform after finding that the bots on it failed to handle about 70 percent of user requests.

Of course, chatbots can be programmed to sound more like humans, and that may fool us in some cases, but it is a stretch to think they are capable of conspiring against us.

