7 Cases Where Artificial Intelligence Got It Wrong

Artificial intelligence and machine learning produce many advances we see in the tech industry today. But how are machines given the ability to learn? Also, how does the way we do this result in unintended consequences? See how machine learning algorithms work, along with 7 cases where Artificial Intelligence (AI) got it wrong.

What Are Machine Learning Algorithms?

Machine learning is a branch of computer science that focuses on giving AI the ability to learn tasks. This means programmers do not need to explicitly code the AI for each skill; instead, the AI can use data to teach itself.

Programmers achieve this through machine learning algorithms. These algorithms are the models on which an AI's learning behavior is based. Algorithms, combined with training datasets, allow AI to learn.

An algorithm usually provides a model that artificial intelligence can use to solve a problem, for example, learning to identify pictures of cats versus dogs. The AI applies the model established by the algorithm to a dataset that includes images of cats and dogs. Over time, the AI learns to tell cats and dogs apart more accurately and easily, without human intervention.
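To make the idea concrete, here is a minimal sketch of how an algorithm can generalize from labeled examples without being explicitly programmed for the task. The "features" used here (ear length in cm, weight in kg) and the data values are invented for illustration; real image classifiers learn from pixels and far larger datasets.

```python
# A toy nearest-neighbor classifier: it labels a new example by finding
# the most similar training example. No cat- or dog-specific rules are
# coded anywhere; the behavior comes entirely from the training data.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        # Squared Euclidean distance between feature vectors.
        dist = sum((a - b) ** 2 for a, b in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Training set: (features, label) pairs with made-up measurements.
training_data = [
    ((6.5, 4.0), "cat"),
    ((7.0, 5.5), "cat"),
    ((10.0, 25.0), "dog"),
    ((12.0, 30.0), "dog"),
]

print(nearest_neighbor(training_data, (6.8, 4.5)))   # cat-like animal -> "cat"
print(nearest_neighbor(training_data, (11.0, 28.0)))  # dog-like animal -> "dog"
```

The key point: change the training data and the predictions change with it, which is exactly why flawed data produces the failures described below.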

7 Cases Where Artificial Intelligence Got It Wrong

Google Image Search Result Fails

Google Search has made browsing the web so much easier. The engine’s algorithm considers a variety of factors when generating results, such as keywords and bounce rate. But the algorithm also learns from user traffic, which can cause problems with search and result quality.

Tay Chatbot Turned Racist

Trust Twitter to corrupt a well-meaning machine learning chatbot. That’s what happened on the launch day of Microsoft’s notorious Tay chatbot. Tay mimicked a teenager’s language patterns and learned through her interactions with other Twitter users.

Facial Recognition Problems

Facial recognition AI often makes headlines for all the wrong reasons, such as stories about facial recognition and privacy concerns. But this AI has also shown serious accuracy problems when trying to recognize people of color.

Deep Fakes: Artificial Intelligence Used To Make Fake Video

Although people have long used Photoshop to create fraudulent images, machine learning takes this to a new level. Software like FakeApp lets you swap one person's face into another's video. Many people exploit such software for malicious purposes, including superimposing celebrity faces on adult videos or generating fraudulent footage. Meanwhile, internet users have helped refine the technology, making it increasingly difficult to distinguish real videos from fake ones.

The Rise Of Twitter Bots

Twitter bots were created to automate things like customer service responses for brands. But the technology is now a major cause for concern. Research has estimated that up to 48 million Twitter accounts are AI bots.

Amazon’s Artificial Intelligence Prefers To Hire Men

In October 2018, British news agency Reuters reported that Amazon had to scrap a job recruitment tool after artificial intelligence decided that male candidates were better.

Amazon employees who worked on the project, and who wished to remain anonymous, described it to Reuters. The developers wanted the AI to identify the best candidates for a job based on their resumes. But because the tool was trained on resumes submitted to the company over the previous decade, most of which came from men, it learned to penalize resumes that mentioned women.

Inappropriate Content On YouTube Kids

YouTube Kids hosts many videos meant to entertain children. But it also has a problem with spammy videos that manipulate the platform's algorithm.

These videos are stuffed with popular tags, and since young children are not discerning viewers, the unwanted videos that use those keywords attract millions of views.

Why Machine Learning Goes Wrong

There are two main reasons machine learning produces unintended consequences: data and people. In terms of data, the mantra of "garbage in, garbage out" applies. If the data fed to an AI is limited, biased, or of poor quality, the result is an AI with limited scope or built-in bias.
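A toy illustration of "garbage in, garbage out": the same trivial "model" gives different answers depending only on how skewed its training labels are. The hiring labels below are fabricated for illustration; this is a sketch of the principle, not of Amazon's actual system.

```python
# A trivial "model" that always predicts the most common label it was
# trained on. It faithfully reproduces whatever skew exists in its data.
from collections import Counter

def majority_label(labels):
    """Predict the most frequent label seen in training."""
    return Counter(labels).most_common(1)[0][0]

balanced = ["hire", "reject", "hire"]               # even-handed record
biased = ["reject", "reject", "reject", "hire"]     # skewed historical data

print(majority_label(balanced))  # prediction reflects the balanced record
print(majority_label(biased))    # prediction reproduces the skew
```

The model itself is identical in both cases; only the data differs. That is the core of why biased training sets, like resumes drawn mostly from men, yield biased AI.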

But even when programmers get the data right, people can subvert the technology. Software developers often fail to anticipate how people might use their work maliciously or selfishly. Deep fakes, for instance, grew out of technology originally used to improve special effects in cinema.
