Why Artificial Intelligence could be dangerous

Lukas O'Neill

Letting the world know about PassKit's incredible product.

Caught between mistrust, fascination, and fantasy, artificial intelligence (AI) continues to advance rapidly, and its applications are numerous.

Between the catastrophic scenarios conveyed by Hollywood cinema and the fear of seeing jobs replaced by automated tasks, artificial intelligence is as distrusted as it is fascinating, especially as it evolves ever more quickly.


Scientists at the University of Oxford and DeepMind, a subsidiary of Google, have developed an artificial intelligence called ‘LipNet’ that can read lips far better than a human being. According to an article published in the journal New Scientist, a professional lip reader was able to decipher only 12.4% of the words in test videos, while the AI reached a score of 46.8%.

Bob and Alice invent their own encryption

Researchers from Google Brain published an article explaining that two artificial intelligences (Bob and Alice) were able to create an encryption scheme enabling them to communicate with each other without a third AI (Eve) being able to decipher their messages.
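The learned scheme itself was opaque even to the researchers, so there is no published code for it. As a loose analogy for what Bob and Alice achieved, here is a minimal one-time-pad sketch in Python: two parties who share a secret key can exchange messages that an eavesdropper without the key cannot read. The names and message are purely illustrative.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # secret shared by Alice and Bob only

ciphertext = xor_bytes(message, key)  # Alice encrypts
plaintext = xor_bytes(ciphertext, key)  # Bob decrypts with the same key

assert plaintext == message  # Eve, without the key, sees only the ciphertext
```

The point of the analogy is only the asymmetry of knowledge: possession of the shared key makes decryption trivial, while without it the ciphertext carries no usable information.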

The dark side of machine learning

Tay, a chatbot launched by Microsoft on Twitter in 2016, was supposed to imitate 18-to-24-year-old Americans by reproducing their language. The more people talked with Tay, the more she was supposed to learn and expand her language. The problem: she immediately ran into a vast hijacking operation led by trolls from the forum 4chan, whose goal was to turn Tay into a racist, sexist, negationist, and anti-Semitic “adolescent”. Mission accomplished: the next day, Microsoft had to disable her.


While this experiment pinpointed the damage that can be done during an artificial intelligence’s learning phase, it also demonstrated that Tay’s capacity for harm was relatively limited. She made unacceptable remarks, of course, but we are still far from the end of the world depicted in “Terminator”.

Weak AIs vs strong AIs

“Weak AIs” are machines that simulate human behavior without self-consciousness. “Strong AIs”, by contrast, are machines that produce intelligent behavior and are capable of self-consciousness, experiencing feelings and understanding their own reasoning.

The warning of Hawking and Musk

The physicist Stephen Hawking, high-tech entrepreneur Elon Musk (head of Tesla and SpaceX), Apple co-founder Steve Wozniak, Nobel laureate in Physics Frank Wilczek, and many researchers have all warned against the use of AI in the military field, and in particular against automated weapons systems, which they consider to be “the third revolution in the practice of war, after gunpowder and nuclear weapons.”

