
According to the blog post, Bard is an AI-powered conversational service that draws on information from the web to generate fresh, high-quality responses. In short, it will provide in-depth, conversational, essay-style answers, much as ChatGPT does today. Google says a user might ask Bard to explain new discoveries from NASA’s James Webb Space Telescope to a nine-year-old, or to learn more about the best strikers in football right now and then get drills to build their skills.


What is the frenzy all about?

The internet behemoth appears to have been caught giving an incorrect answer: an advertisement for its latest AI bot showed it responding to a question inaccurately. Alphabet shares fell more than 7% on Wednesday, erasing $100 billion (£82 billion) from the company’s market value. In the promotion for the bot, known as Bard, which was shared on Twitter on Monday, Bard was asked what to tell a nine-year-old about discoveries from the James Webb Space Telescope.

It responded that the telescope was the first to take images of a planet outside the solar system, when in reality the European Southern Observatory’s Very Large Telescope did so in 2004 – a mistake scientists on Twitter were quick to point out. “Why didn’t you double-check this example before publishing it?” Chris Harrison, a fellow at Newcastle University, responded to the tweet. Investors were also unimpressed by the company’s presentation of its plans to incorporate artificial intelligence into its products.

Google has been under pressure since late last year, when Microsoft-backed OpenAI unveiled its ChatGPT software. It quickly became a viral sensation thanks to its ability to pass business school exams, compose song lyrics, and answer all manner of other questions. Microsoft said this week that a new version of its Bing search engine, which has trailed Google for years, will make use of a more advanced form of ChatGPT’s technology.

However, AI chatbot systems pose risks to organizations because of inherent biases in their algorithms, which can skew results, sexualize images, or even plagiarize, as users testing such services have discovered. Microsoft, for example, launched a Twitter chatbot in 2016 that swiftly began producing racist remarks before being shut down. Likewise, an AI used by the news site CNET was found to have produced factually incorrect or plagiarized articles.

Is investor support crumbling?

Though investors have backed the artificial intelligence drive, some have warned that rushing the technology out increases the risk of errors and biased or distorted results, as well as plagiarism problems.

According to a Google spokesperson, the incident underlined the need for a thorough testing process, which the company is launching this week with its Trusted Tester program. “We’ll combine external input with our own internal testing to ensure Bard’s replies meet a high standard for quality, safety, and groundedness in real-world information,” they explained.

Stay tuned to Brandsynario for the latest news and updates.
