Google's AI Overviews are 'hallucinating,' spreading dangerous information – including a suggestion to add glue to pizza sauce

Google's AI Overviews, designed to provide quick answers to search queries, are reported to produce "hallucinations" of false information and to hurt publishers by drawing users away from traditional links.

The Big Tech giant – which landed in hot water last year after releasing a "woke" tool that generated images of female and black Vikings – has drawn criticism for offering false and dangerous advice in its AI Overviews, according to The Times of London.

Google's latest artificial intelligence tool, created to provide quick answers to search queries, is facing criticism. Google CEO Sundar Pichai is pictured. AFP via Getty Images

In one case, AI Overviews suggested adding glue to pizza sauce to help the cheese stick better, the outlet reported.

In another, it presented a made-up phrase – "you can't lick a badger twice" – as a legitimate idiom.

Hallucinations, as computer scientists call them, are compounded by the tool's tendency to reduce the visibility of reputable sources.

Instead of sending users directly to websites, it summarizes information from the search results and presents its own AI-generated answer alongside a handful of links.

Laurence O'Toole, founder of the analytics firm Authoritas, studied the impact of the tool and found that click-through rates to publisher websites drop by 40% to 60% when AI Overviews are shown.

"While these were generally queries that people don't commonly make, it highlighted some specific areas that we needed to improve," Liz Reid, Google's head of Search, told The Times in response to the glue suggestion.

Google AI Mode is an experimental feature that uses artificial intelligence and large language models to process Google Search queries. Getty Images

The Post has requested comment from Google.

AI Overviews were introduced last summer and are powered by Google's Gemini language model, a system similar to OpenAI's ChatGPT.

Despite the public concerns, Google CEO Sundar Pichai defended the tool in an interview with The Verge, saying it helps users discover a wider range of information sources.

"In the past year, it is clear to us that the breadth of where we are sending people to is growing … we are definitely sending traffic to a wider range of sources and publishers," he said.

Google also appears to downplay its own hallucination rate.

When a journalist asked Google how often its AI gets things wrong, it claimed hallucination rates of between 0.7% and 1.3%.

Google's AI Overviews were introduced last summer and are powered by the Gemini language model, a ChatGPT-like system. AP

However, data from the AI monitoring platform Hugging Face showed that the current rate for the latest Gemini model is 1.8%.

Google's AI models also appear to offer pre-programmed defenses of their own behavior.

In response to whether AI "steals" works of art, the tool said it does not "steal art in the traditional sense."

When asked whether people should be scared of AI, the tool ran through some common concerns before concluding that the "fear may be overblown."

Some experts worry that as generative systems become more complex, they are also becoming more prone to errors.

Concerns about hallucinations go beyond Google.

OpenAI recently admitted that its newest models, known as o3 and o4-mini, hallucinate more often than previous versions.

Internal testing showed that o3 made up information in 33% of cases, while o4-mini did so 48% of the time, particularly when answering questions about real people.
