
Why Google Employees Are Questioning Bard’s Helpfulness

Updated by Geraint Price

In Brief

  • Google Bard Experience Lead, Cathy Pearl, questions the true usefulness of large language model (LLM) chatbots in a Discord server.
  • Community members on Discord claim that Google's Bard chatbot struggles to answer even basic questions about Google's own apps.
  • Dominik Rabiej, product manager for Bard, suggests that the output of LLM chatbots cannot be trusted without independent verification.

A screenshot from a Discord server dedicated to Google’s artificial intelligence (AI) chatbot, Bard, suggests that some Google employees question the helpfulness of large language model (LLM) chatbots.

AI chatbots became mainstream in 2023 due to OpenAI’s ChatGPT and Google’s Bard. However, are AI chatbots, in their current state, really helpful?

Google Bard Experience Lead Questions Whether LLMs Are Making a Difference

According to Bloomberg, a Discord server brings together heavy users of Google Bard and some employees of the search engine giant. In the server, they discuss the effectiveness and utility of the chatbot.

Bloomberg collected screenshots of the conversations from two of the Discord server’s members between July and October. In August, Cathy Pearl, the user experience lead for Google Bard, wrote:

“The biggest challenge I’m still thinking of: what are LLMs truly useful for, in terms of helpfulness? Like really making a difference. TBD!”

Read more: Most Popular Machine Learning Models in 2023

Screenshot of Google employees’ Discord chat. Source: Bloomberg

Community members believe Bard cannot answer even basic questions. An X (Twitter) user wrote:

“Anyone who has used Bard would probably agree. It can’t even answer basic questions about how Google’s own apps work, eg Analytics, reCAPTCHA etc. Go test it.”

Google Product Manager Believes That Output Cannot Be Trusted

There have also been discussions about the reliability of the answers the chatbots generate. Dominik Rabiej, a product manager for Bard, says that LLMs are not yet at a stage where users can trust their outputs without independently verifying them. He said:

“My rule of thumb is not to trust LLM output unless I can independently verify it”

LLMs are AI models trained on large datasets to generate human-like outputs. Chatbots such as ChatGPT and Bard are built on LLMs.

Since the output is not completely reliable, Rabiej suggests using Google Bard for brainstorming rather than relying on it as a source of information. Indeed, when a user first opens Bard, it shows a message stating, “Bard is an experiment.” The message further reads:

“Bard will not always get it right. Bard may give inaccurate or offensive responses. When in doubt, use the Google button to double-check Bard’s responses.”

Read more: ChatGPT vs. Google Bard: A Comparison of AI Chatbots

Google Bard’s message. Source: Official website


Disclaimer

In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content. Please note that our Terms and Conditions, Privacy Policy, and Disclaimers have been updated.

Harsh Notariya
Harsh Notariya is an Editorial Standards Lead at BeInCrypto, who also writes about various topics, including decentralized physical infrastructure networks (DePIN), tokenization, crypto airdrops, decentralized finance (DeFi), meme coins, and altcoins. Before joining BeInCrypto, he was a community consultant at Totality Corp, specializing in the metaverse and non-fungible tokens (NFTs). Additionally, Harsh was a blockchain content writer and researcher at Financial Funda, where he created...