According to the blog post, the AI-powered conversational service Bard draws on information from the web to generate fresh, high-quality responses. In short, it will provide in-depth, conversational, essay-style answers, much as ChatGPT does today. A user might, for example, ask Bard to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now and then get drills to build their skills.
What is the frenzy all about?
The internet behemoth appears to have given an incorrect answer: an advertisement for its latest AI bot showed it responding to a question inaccurately. Alphabet shares fell more than 7% on Wednesday, wiping $100 billion (£82 billion) off the company’s market value. In the promotion for the bot, known as Bard, shared on Twitter on Monday, the bot was asked what to tell a nine-year-old about discoveries from the James Webb Space Telescope.
It responded that the telescope was the first to take pictures of a planet outside our solar system, when in fact the European Southern Observatory’s Very Large Telescope did so in 2004 – a mistake promptly flagged by astronomers on Twitter. “Why didn’t you double-check this example before publishing it?” Newcastle University fellow Chris Harrison replied to the tweet. Investors were also unimpressed by the company’s presentation of its plans to incorporate artificial intelligence into its products.
Google has been under pressure since late last year, when Microsoft-backed OpenAI unveiled its ChatGPT software. It quickly became a viral sensation thanks to its ability to pass business school exams, compose song lyrics, and answer all manner of questions. Microsoft said this week that a new version of its Bing search engine, which has trailed Google for years, will use a more advanced form of ChatGPT’s technology.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
However, chatbot AI systems pose risks to organizations because of biases inherent in their algorithms, which can distort results, sexualize images, or even plagiarize, as users testing such services have discovered. Microsoft, for example, launched a Twitter chatbot in 2016 that swiftly began generating racist comments before being shut down. More recently, an AI used by the news site CNET was found to have produced factually incorrect and plagiarized articles.
Is investor support crumbling?
Though investors have backed the push into artificial intelligence, some have warned that rushing the technology out raises the risk of errors or skewed results, as well as plagiarism problems.
According to a Google spokesperson, the incident underscored the need for a rigorous testing process, which the company is launching this week with its Trusted Tester program. “We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information,” they explained.
Stay tuned to Brandsynario for the latest news and updates.