AI failures of 2016

AI has seen a renaissance over the last year, with developments in driverless vehicle technology, voice recognition, and the mastery of the game “Go,” revealing how much machines are capable of.

But with all of the successes of AI, it’s also important to pay attention to when, and how, it can go wrong, in order to prevent future errors. A recent paper by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, outlines a history of AI failures which are “directly related to the mistakes produced by the intelligence such systems are designed to exhibit.” According to Yampolskiy, these types of failures can be attributed to mistakes during the learning phase or mistakes in the performance phase of the AI system.

Here are TechRepublic’s top 10 AI failures from 2016, drawn from Yampolskiy’s list as well as the input of several other AI experts.

The Positive and Negative Effects of Artificial Intelligence on Our Lives

Artificial Intelligence affects our lives in both good and bad ways. It helps us by reducing the amount of work we have to do. A downside is that it cannot think for itself; it still needs a person or some other intelligence to control it.

Among its advantages, Artificial Intelligence can work without rest, sleep, or food. It has some learning capability of its own, but not enough to think independently. Computer programs that use Artificial Intelligence to supply both the knowledge and the reasoning of human experts in a given field may well become the consultants of the future. They have already been used in areas as diverse as mineral exploration and computer manufacturing, and as the technology develops further it could be used by non-experts as well.

Artificial Intelligence can also resemble the way a mouse works. A mouse can be used to draw lines, to point, and to circle objects to be moved, transposed, or edited; once a command has been selected on the screen, a click of the mouse button activates it. In this way, the computer combines human input with its own processing.

Some disadvantages are that the computer cannot think for itself and always needs some other intelligence, such as a person, controlling it, which leaves it vulnerable to malfunctions and shutdowns. Another disadvantage is that all language must be turned into numbers, ones and zeros, before the computer can understand it. Problems can also arise when a computer becomes capable of learning on its own: situations where our personal information is changed by such a system would change our lives too.

Exploring the risks of artificial intelligence

“Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.”

These words, spoken by Neil Armstrong in a 1969 address to a joint session of Congress, fit squarely into nearly every decade since the turn of the century, and it seems safe to posit that the rate of change in technology has accelerated exponentially over the last two decades, especially in artificial intelligence and machine learning.

Artificial intelligence is making a dramatic entrance into almost every facet of society, in ways both predicted and unforeseen, causing both excitement and trepidation. This reaction alone is predictable, but can we really predict the risks involved?

It seems we’re all trying to get a grip on what may be coming, but information overload (yet another side effect we’re struggling to deal with in our digital world) can ironically make forming an informed opinion more challenging than ever. In the search for some semblance of truth, it can help to turn to those in the trenches.

In my ongoing interviews with more than 30 artificial intelligence researchers, I asked what they considered to be the most likely risk of artificial intelligence in the next 20 years.

The results of the survey, shown in the graphic below, draw on 33 responses from different AI/cognitive science researchers. (For the complete collection of interviews, and more information on all of our 40+ respondents, visit the original interactive infographic here on TechEmergence).

Two “greatest” risks bubbled to the top of the response pool (most respondents are not in the autonomous-robots camp, though a few are). According to this particular set of minds, the most pressing short- and long-term risks are the financial and economic harm that may be wrought, along with the mismanagement of AI by human beings.

Dr. Joscha Bach of the MIT Media Lab and Harvard Program for Evolutionary Dynamics summed up the larger picture this way:

“The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won’t improve our living conditions if we don’t move away from a labor/wage based economy. It may also speed up pollution and resource exhaustion, if we don’t manage to install meaningful regulations. Even in the long run, making AI safe for humanity may turn out to be the same as making our society safe for humanity.”

[Infographic: AI researchers’ survey responses on the risks of artificial intelligence]

Essentially, the introduction of AI may act as a catalyst that exposes and speeds up the imperfections already present in our society. Without a conscious and collaborative plan to move forward, we expose society to a range of risks, from bigger gaps in wealth distribution to negative environmental effects.

Leaps in AI are already being made in the area of workplace automation, and machine learning capabilities are quickly extending to energy and other enterprise applications, including mobile and automotive. The next industrial revolution may be the last one that humans usher in by their own direct doing, with AI as a future collaborator and – dare we say – a potential leader.

Some researchers believe it’s a matter of when, not if. In the words of Dr. Nils Nilsson, a professor emeritus at Stanford University, “Machines will be singing the song, ‘Anything you can do, I can do better; I can do anything better than you’.”

With respect to the drastic changes that increasingly autonomous systems will bring to the employment market, Dr. Helgi Helgason says, “it’s more of a certainty than a risk and we should already be factoring this into education policies.”

Talks at the World Economic Forum Annual Meeting in Switzerland this past January, where the topic of the economic disruption brought about by AI was clearly a main course, indicate that global leaders are starting to plan how to integrate these technologies and adapt our world economies accordingly – but this is a tall order with many cooks in the kitchen. 

Another commonly expressed risk over the next two decades is the general mismanagement of AI. It’s no secret that those in the business of AI have concerns, as evidenced by the $1 billion investment made by some of Silicon Valley’s top tech gurus to support OpenAI, a non-profit research group with a focus on exploring the positive human impact of AI technologies.

AI built to predict future crime was racist

The company Northpointe built an AI system designed to predict the likelihood that an alleged offender would commit another crime. The algorithm, called “Minority Report-esque” by Gawker (a reference to the dystopian short story and film based on the work of Philip K. Dick), was accused of racial bias: black offenders were more likely than those of other races to be labeled as at high risk of committing a future crime. Another media outlet, ProPublica, found that Northpointe’s software wasn’t an “effective predictor in general, regardless of race.”
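
As a rough illustration of the kind of audit behind such findings, here is a minimal sketch, using entirely made-up records and group labels rather than Northpointe’s model or ProPublica’s data, that compares false positive rates across groups: the share of people who did not reoffend but were still flagged as high risk.

```python
# Hypothetical bias audit: compare false positive rates of a risk-score
# classifier across groups. All records below are fabricated for illustration.
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False),   # flagged but did not reoffend (false positive)
    ("group_a", True,  False),   # false positive
    ("group_a", False, False),   # correctly left unflagged
    ("group_a", True,  True),    # correctly flagged
    ("group_b", False, False),   # correctly left unflagged
    ("group_b", False, False),   # correctly left unflagged
    ("group_b", True,  False),   # false positive
    ("group_b", True,  True),    # correctly flagged
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nevertheless flagged as high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders)

by_group = defaultdict(list)
for record in records:
    by_group[record[0]].append(record)

for group, rows in sorted(by_group.items()):
    print(group, round(false_positive_rate(rows), 2))   # group_a 0.67, group_b 0.33
```

A persistent gap between groups in this kind of metric, rather than overall accuracy, is the sort of disparity ProPublica reported.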

Non-player characters in a video game crafted weapons beyond their creators’ plans

In June, an AI-fueled video game called Elite: Dangerous exhibited something the creators never intended: the AI had the ability to create superweapons that were beyond the scope of the game’s design. According to one gaming website, “[p]layers would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces.” The weapons were later pulled from the game by its developers.

Robot injured a child

A so-called “crime fighting robot” made by the company Knightscope crashed into a child at a Silicon Valley mall in July, injuring a 16-month-old boy. The Los Angeles Times quoted the company as saying the incident was a “freakish accident.”

Fatality in Tesla Autopilot mode

As previously reported by TechRepublic, Joshua Brown was driving a Tesla engaged in Autopilot mode when his vehicle collided with a tractor-trailer on a Florida highway, in the first reported fatality involving the feature. Since the accident, Tesla has announced major upgrades to its Autopilot software, which Elon Musk claimed would have prevented that collision. There have been other fatalities linked to Autopilot, including one in China, although none can be directly tied to a failure of the AI system.

Microsoft’s chatbot Tay utters racist, sexist, homophobic slurs

In an attempt to form relationships with younger customers, Microsoft launched an AI-powered chatbot called “Tay.ai” on Twitter last spring. “Tay,” modeled on a teenage girl, morphed into, well, a “Hitler-loving, feminist-bashing troll” within just a day of her debut online. Microsoft yanked Tay off the social media platform and announced it planned to make “adjustments” to its algorithm.

AI-judged beauty contest is racist

In “The First International Beauty Contest Judged by Artificial Intelligence,” a robot panel judged faces based on “algorithms that can accurately evaluate the criteria linked to perception of human beauty and health,” according to the contest’s site. But because the organizers failed to supply the AI with a diverse training set, the contest winners were all white. As Yampolskiy put it, “Beauty is in the pattern recognizer.”
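
To see how a skewed training set can produce this kind of outcome, here is a toy sketch, with fabricated feature vectors and no connection to the contest’s actual system: a scorer that rates a face by how closely it resembles the faces it was trained on will penalize anyone who looks unlike the data it has seen.

```python
# Toy illustration (not the contest's system): a "beauty" scorer that rewards
# similarity to its training data. Because the training set is drawn almost
# entirely from one group, faces unlike that data score lower by construction.
# All feature vectors are fabricated.
import math

training_faces = [
    (0.90, 0.80),
    (0.85, 0.75),
    (0.92, 0.82),
    (0.88, 0.78),   # examples from one group dominate the data
    (0.20, 0.30),   # a single example from an under-represented group
]

def score(face):
    """Higher score = closer to the centroid of the training distribution."""
    cx = sum(f[0] for f in training_faces) / len(training_faces)
    cy = sum(f[1] for f in training_faces) / len(training_faces)
    return round(10 / (1 + math.dist(face, (cx, cy))), 2)   # arbitrary ~0-10 scale

print(score((0.89, 0.79)))   # resembles the over-represented group: ~8.5
print(score((0.25, 0.35)))   # resembles the under-represented group: ~6.2
```

The remedy the contest’s critics pointed to is the obvious one: train on data that actually represents the people being judged.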

Pokémon Go keeps game-players in white neighborhoods

After the release of the massively popular Pokémon Go in July, several users noted that there were fewer Pokémon locations in primarily black neighborhoods. According to Anu Tewary, chief data officer for Mint at Intuit, it’s because the creators of the algorithms failed to provide a diverse training set, and didn’t spend time in these neighborhoods.

Google’s AI, AlphaGo, loses game 4 of Go to Lee Sedol

In March 2016, Google’s AI, AlphaGo, was beaten in game four of a five-game series of Go by Lee Sedol, an 18-time world champion of the game. And though the AI program won the series, Sedol’s win proved that AI’s algorithms aren’t flawless yet.

“Lee Sedol found a weakness, it seems, in Monte Carlo tree search,” said Toby Walsh, professor of AI at the University of New South Wales. But while this can be considered a failure of AI, Yampolskiy also makes the point that the loss “could be considered by some to be within normal operations specs.”
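
For readers unfamiliar with the technique Walsh names, below is a generic, textbook-style sketch of the UCT selection rule at the core of Monte Carlo tree search. It illustrates the general method only; AlphaGo’s actual system pairs this kind of search with policy and value networks, and none of the numbers here come from the match.

```python
# Generic UCT (Upper Confidence bounds applied to Trees) selection step,
# the rule MCTS uses to decide which move to explore next.
import math

class Node:
    def __init__(self):
        self.visits = 0          # simulations that passed through this node
        self.total_value = 0.0   # summed simulation outcomes (e.g. wins)
        self.children = {}       # move -> Node

def uct_score(parent, child, exploration=1.4):
    """Balance average value (exploitation) against rarely tried moves (exploration)."""
    if child.visits == 0:
        return float("inf")      # always try an unvisited child first
    exploit = child.total_value / child.visits
    explore = exploration * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def select_move(node):
    """Pick the child to descend into during the selection phase of MCTS."""
    return max(node.children, key=lambda move: uct_score(node, node.children[move]))

# Tiny made-up tree: move "A" has a better win rate, "B" has been tried less.
root = Node()
root.visits = 10
root.children["A"], root.children["B"] = Node(), Node()
root.children["A"].visits, root.children["A"].total_value = 6, 4.0
root.children["B"].visits, root.children["B"].total_value = 4, 2.0
print(select_move(root))   # prints "B": the exploration term outweighs A's higher win rate here
```

The statistics that drive these choices come from simulated playouts, so an opponent who steers the game into positions the playouts misjudge, as Sedol apparently did in game four, can expose a blind spot.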

Chinese facial recognition study predicts convicts but shows bias

Two researchers at China’s Shanghai Jiao Tong University published a study entitled “Automated Inference on Criminality using Face Images.” According to the Mirror, they “fed the faces of 1,856 people (half of which were convicted violent criminals) into a computer and set about analysing them.” In the work, the researchers concluded that there are “some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.” Many in the field questioned the results and the report’s ethical underpinnings.
