AI has advanced dramatically over the past few decades. What was once thought feasible only in science fiction films like Men in Black or The Matrix is now a reality. AI is already pervasive in areas ranging from health, fashion, and real estate to food and tourism. Its popularity has grown so rapidly that it has given rise to some strange uses.
It’s not all peaches and cream, though. In one survey of firms worldwide that are already using AI, a quarter of the companies polled reported a failure rate of up to 50% for their AI initiatives.
1. Tesla car crash
Elon Musk’s Tesla ran into trouble in April 2021 after a Tesla Model S crashed north of Houston, killing two people. The car had missed a slight curve in the road and collided with a tree. According to early investigations and witness testimony, the driver’s seat was unoccupied at the time of the incident. As a result, it is believed that Tesla’s Autopilot or Full Self-Driving (FSD) system was active during the collision.
Two men dead after fiery crash in Tesla Model S.
“[Investigators] are 100-percent certain that no one was in the driver seat driving that vehicle at the time of impact,” Harris County Precinct 4 Constable Mark Herman said. “They are positive.” #KHOU11 https://t.co/q57qfIXT4f pic.twitter.com/eQMwpSMLt2
— Matt Dougherty (@MattKHOU) April 18, 2021
Tesla’s AI-powered Autopilot technology can manage steering, acceleration, and braking under certain conditions. According to Musk, the system is designed to learn from drivers’ behavior over time. Several safety advocates have criticized Tesla for not doing enough to prevent drivers from becoming overly reliant on Autopilot, or from using it in scenarios it was not built for.
2. Amazon AI showed bias against women
Amazon began developing machine learning tools in 2014 to screen resumes. The experimental AI hiring tool, however, had a serious flaw: it was biased against women. The program was trained to evaluate applications by reviewing resumes submitted to the company over a ten-year period. Because the majority of those resumes came from men, the algorithm learned to prefer male candidates. This meant that resumes containing phrases like “women’s” (as in “women’s chess club leader”) were devalued. Graduates of two all-women’s colleges were similarly ranked lower.
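The failure mode described above is easy to reproduce in miniature. The sketch below is a hypothetical toy, not Amazon’s actual system: a scorer fit on a skewed historical dataset picks up a negative weight for a token like “women’s” simply because it correlates with past rejections, even though it says nothing about a candidate’s skill.

```python
# Toy illustration of training-data bias (hypothetical data, not Amazon's system).
from collections import Counter
import math

# Skewed "historical" outcomes: the token "women's" happens to appear
# only in resumes that were rejected, purely by correlation.
hired = [
    "python engineer chess club",
    "java developer chess club",
    "python engineer team lead",
    "java developer team lead",
]
rejected = [
    "python engineer women's chess club",
    "java developer women's team lead",
]

def token_weights(pos_docs, neg_docs):
    """Per-token log-odds weight with add-one smoothing."""
    pos = Counter(tok for doc in pos_docs for tok in doc.split())
    neg = Counter(tok for doc in neg_docs for tok in doc.split())
    vocab = set(pos) | set(neg)
    return {t: math.log((pos[t] + 1) / (neg[t] + 1)) for t in vocab}

weights = token_weights(hired, rejected)

def score(resume):
    """Sum of learned token weights; unseen tokens contribute nothing."""
    return sum(weights.get(tok, 0.0) for tok in resume.split())

# Two identical resumes except for one irrelevant token: the learned
# weight for "women's" is negative and drags the second score down.
print(score("python engineer chess club"))
print(score("python engineer women's chess club"))
```

Nothing in the pipeline is explicitly told about gender; the bias is inherited entirely from the skewed outcome labels, which is why auditing training data matters as much as auditing the model.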
3. AI mistakes a head for a ball
An AI-powered camera meant to automatically track the ball at a soccer match ended up tracking the bald head of a linesman instead. The incident occurred at the Caledonian Stadium in Scotland during a match between Inverness Caledonian Thistle and Ayr United. The Inverness club had switched to an automated camera in place of human camera operators during the pandemic last October.
Everything is terrible. Here's a football match last weekend that was ruined after the AI cameraman kept mistaking the linesman's bald head for a football.https://t.co/BsoQFqEHu0 pic.twitter.com/GC9z9L8wHf
— James Felton (@JimMFelton) October 29, 2020
However, “the camera kept mistaking the linesman’s bald head on the sidelines for the ball, robbing spectators of the real action while focusing on the linesman instead,” according to reports.
4. Microsoft’s AI turns sexist and racist
Microsoft introduced Tay, an AI chatbot, in 2016. Tay conversed with Twitter users in a “casual and playful manner.” In less than 24 hours, however, Twitter users manipulated the bot into posting highly sexist and racist comments. Tay used artificial intelligence to learn from its exchanges with Twitter users; the more it conversed, the “smarter” it was supposed to grow. Soon the bot began repeating incendiary claims fed to it by users, such as “Hitler was right,” “feminism is cancer,” and “9/11 was an inside job.”
As the debacle unfolded, Microsoft was forced to pull the plug on the bot just a day after it launched. Microsoft’s Corporate Vice President of Research, Peter Lee, later apologized.
Stay tuned to Brandsynario for the latest news and updates.