Generative AI and Large Language Models (LLMs) have brought about some of the most significant advancements in artificial intelligence (AI), opening up a world of possibilities. However, as with any technology, there are risks associated with their use, and AI has the potential to perpetuate biases and misinformation if not applied carefully.
Microsoft and Google are the Two Largest Contenders in the AI Search Race
Tech giants Microsoft and Google are locked in an AI search arms race, with Microsoft integrating ChatGPT technology into Bing and Google unveiling its rival, Bard, both aimed at consumer search. Google has been experimenting with featured snippets to improve search results for some time now, but there have been instances where the AI model has confidently presented biased or factually incorrect information as the best answer.
In 2020, Google search results incorrectly displayed a featured snippet claiming that former US President Barack Obama was planning a coup. In February came the Google Bard fiasco over the James Webb Space Telescope, along with the now-infamous conversations users have had with the Bing AI chatbot. While corrective measures were taken quickly, these incidents highlight the potential of AI-generated answers to spread misinformation.
The Value of AI-Powered Search Chat
The financial stakes are high in this contest. Both Google and Microsoft have invested heavily in AI over the last several years, but AI has suddenly become a major factor for stock market investors.
On February 6, Google demonstrated Bard, its ChatGPT rival. In the demo, Bard mistakenly claimed that the James Webb Space Telescope took the "very first pictures" of an exoplanet outside our solar system. This was not correct: according to NASA, the first image of an exoplanet was taken by the European Southern Observatory's Very Large Telescope in 2004.
The consequences of this public stumble were immediate. In the two days of stock trading following the Bard factual error, shares of Google parent company Alphabet (GOOG) dropped over 7%, erasing more than $100 billion in market value.
OpenAI was recently valued at roughly $29 billion as it entered talks to sell existing shares to venture capital firms Thrive Capital and Founders Fund, as reported by The Wall Street Journal. This is double its estimated value in 2021.
The Problem with Generative AI and Chatbot Search
Providing answers directly, instead of making users sift endlessly through a list of results, can deliver an easier and improved search experience. But even though generative AI can provide answers confidently, those answers may not be factually correct, or may represent only one perspective from the underlying sources. Previously, users judged the authenticity of information by its source or citation. This paradigm shift means users must now be able to differentiate between an AI hallucination and a real answer in order to consume information appropriately.
This issue extends beyond factual questions; it applies to any query that could yield a biased response based on its source. For example, if a generative AI model is asked for the best political party, it may produce an answer skewed by the political leaning of the material it was trained on. Even if responses from these models are correct 90% of the time, it is challenging to measure the impact of the remaining 10% of false information presented with confidence, and the average user may not possess the skills to differentiate authentic from biased information.
Large language models are better suited to tasks whose output is easy for humans to review, such as code generation and debugging, and to tasks where factual accuracy matters less, such as creative writing. LLMs are also incredibly handy for tasks with large amounts of training data readily available, such as translation between languages and speech recognition. However, given the hefty cost of building and maintaining these models, tech giants have prioritized consumer search, due to the financial opportunity in ad revenue.
The Value of LLMs for Enterprise Business Search
The real value of Large Language Models lies in enterprise use cases. Enterprises have access to a wealth of information that these new technologies can mine for better efficiency, productivity, and customer service. LLMs can understand users' queries in context and, if used properly, can be put to greater use for information retrieval needs.
Siri and Alexa, although popular virtual assistants, have fallen short of being truly "intelligent." They are limited in their capabilities and serve primarily as command-driven devices, mostly used for automated actions such as playing a song or launching an app. Yet there are greater use cases for virtual assistants beyond executing basic commands.
With truly intelligent virtual assistants, consumers can expect more natural conversations and more reliable self-service automation. These assistants are transforming the way we access information, enhancing customer experiences and streamlining self-service tasks for greater convenience and efficiency.
AI has the potential to greatly improve our lives, but it must be used with the necessary controls in place. As the AI war among big tech companies continues, it is up to us as users to consume the information generated by these technologies responsibly, and to ensure that they are used for the right purposes.
You can build your own Intelligent Virtual Assistant using the Kore platform. Try it for yourself!