Over the past few months, AI tools like ChatGPT and DALL-E have seen a sudden rise in popularity, and it seems Google has taken note and hit the panic button.
The Mountain View-based organization fears that this could be the start of something that may ultimately hinder its business.
Recent reporting from The New York Times suggests some of Google’s executives view the sudden rise of publicly available generative AI tools like ChatGPT as a “code red” situation.
The report claims that Google CEO Sundar Pichai “upended” numerous Google meetings to prioritize the perceived threat posed to the company’s future.
While chatbots like ChatGPT aren’t exclusively intended for search, tinkerers have found ways to have the model draft lines of code and even answer specific questions.
Outside of text generation, Google is also plowing ahead with text-to-image and text-to-video systems capable of creating digital artworks similar to those produced by OpenAI’s popular DALL-E model.
According to the Times report, teams from Google’s Research, Trust and Safety, and other divisions have been reassigned to work on new prototypes and products in preparation for a conference in May.
One of these tools could be a cloud computing product that uses the technology underpinning its LaMDA chatbot to handle simple customer support tasks.
According to the report, some early prototypes of new AI tools, limited to around 500,000 users, may ship with lower trust and safety standards and include a warning to users that the model may produce false or offensive statements.
Some of these trials are reportedly already underway. According to the Times, Google currently uses LaMDA technology to highlight short blurbs in response to users’ searches.
Will GPT And Other Chatbots Like It Upend Google Search?
There’s plenty of reason to remain skeptical of claims that GPT or other chatbots like it will upend Google search anytime soon.
For starters, OpenAI’s model still often struggles to present factually accurate answers, a requirement critical to any reliable search function.
In certain situations, ChatGPT will even make up answers entirely or produce biased and offensive messages.
Even if those kinks are worked out, getting millions of internet users to quickly switch their search behavior away from expecting a list of hyperlinks may also prove more challenging than certain GPT enthusiasts imagine.