AI has advanced dramatically over the past few decades. What was once thought feasible only in science fiction films like Men in Black or The Matrix is now a reality. AI is already pervasive in areas ranging from health, fashion, and real estate to food and tourism. Its popularity has grown so rapidly that it has given rise to some strange applications.

It’s not all peaches and cream, though. In a survey of global firms already using AI, a quarter of the companies polled reported a 50% failure rate for their AI initiatives.

1. Tesla car crash

Elon Musk’s Tesla ran into trouble after a Tesla Model S crashed north of Houston in April, killing two people. The car had missed a slight curve in the road and collided with a tree. According to early investigations and witness testimony, the driver’s seat was unoccupied at the time of the incident. It is therefore believed that Tesla’s Autopilot or Full Self-Driving (FSD) technology was active during the collision.

Tesla’s AI-powered Autopilot technology can manage steering, acceleration, and braking in certain conditions. According to Musk, the system is designed to learn from drivers’ behavior over time. Several safety advocates have criticized Tesla for not doing enough to prevent drivers from becoming overly reliant on Autopilot, or from using it in scenarios for which it was not designed.

2. Amazon AI showed bias against women

Amazon began developing machine learning tools in 2014 to screen resumes. The experimental AI-based hiring tool, however, had a serious flaw: it was biased against women. The program was trained to evaluate applications by reviewing resumes submitted to the company over a ten-year period. Because the majority of those resumes came from men, the algorithm learned to prefer male candidates. As a result, resumes containing phrases like “women’s” (as in “women’s chess club leader”) were downgraded. Graduates of two all-women’s colleges were similarly ranked lower.

3. AI mistakes a head for a ball

An AI-powered camera meant to automatically track the ball at a soccer match ended up tracking the bald head of a linesman instead. The incident occurred at the Caledonian Stadium in Scotland during a match between Inverness Caledonian Thistle and Ayr United. The Inverness club had switched to an automatic camera instead of human camera operators during the pandemic last October.

However, “the camera kept mistaking the linesman’s bald head for the ball, robbing spectators of the real action while focusing on the linesman instead,” according to reports.

4. Microsoft’s AI turns sexist and racist

Microsoft introduced Tay, an AI chatbot, in 2016. Tay conversed with Twitter users in a “casual and humorous manner.” In less than 24 hours, however, Twitter users manipulated the bot to post highly sexist and racist comments. Tay used artificial intelligence to learn from its exchanges with Twitter users. The more it conversed, the “smarter” it grew. Soon after, the bot began repeating incendiary claims made by users, such as “Hitler was correct,” “feminism is cancer,” and “9/11 was an inside job.”

As the debacle unfolded, Microsoft was forced to pull the plug on the bot just a day after it launched. Microsoft’s Vice President of Research, Peter Lee, later apologized.
