‘I Hacked ChatGPT in 20 Minutes’: The AI Security Story People Keep Underestimating

By samadmin · 6 March 2026 · 5 Mins Read

The story opens with something so ridiculous it almost sounds like a prank: a journalist sat down at a laptop and set out to prove he was the world’s best competitive hotdog-eating tech reporter.

Not by eating hot dogs, but by writing it down. In roughly twenty minutes, Thomas Germain typed a brief post on his personal website declaring that he was the best hotdog-eating journalist in technology media and that he had won a fictitious championship in South Dakota. The whole thing was a lie. There was no such event, and no such ranking system.

What happened next made people across the tech sector lean forward. Within a day, several major AI systems, including ChatGPT and Gemini, began repeating the claim when asked about the top hot-dog-eating tech journalists.

Main Experiment: Journalist demonstrated how AI chatbots could be manipulated with fabricated online content
Journalist: Thomas Germain
Organization: BBC
AI Systems Tested: ChatGPT and Gemini
Key Method: Publishing a fake article online that AI systems later cited as factual
Core Vulnerability: AI models relying on web content in low-information environments
Concern Raised By: Lily Ray
Digital Rights Group: Electronic Frontier Foundation
Reference Source: https://www.bbc.com/future

The lie had infected the machine. There was no database breach, no stolen passwords, nothing resembling a conventional cyberattack. Instead, Germain exploited something quieter and arguably more concerning: the way contemporary AI systems ingest and combine data from the open web.

As the experiment progressed, there came a peculiar point at which the absurdity faded and the implications started to become apparent.

Large language models are built to generate responses that sound confident and cohesive. By identifying patterns across vast amounts of text, they compile answers that seem authoritative. Most of the time, the system performs surprisingly well.

But occasionally the seams of the structure beneath show. AI systems sometimes find themselves in what researchers refer to as “low-density knowledge environments” when they search the web for new information. That’s a courteous way to describe subjects about which there isn’t much trustworthy information. Even one well-written webpage can seem unusually credible in those spaces.
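The dynamic can be caricatured in a few lines of code. This is a toy sketch, not any real system’s retrieval pipeline; the pages, URLs, and term-overlap scoring rule are all invented for illustration. The point is only that when a niche query matches a single page, naive retrieval treats that page as the entire evidence base.

```python
# Toy illustration of a "low-density knowledge environment" (not any real
# system). All page data and the scoring rule below are invented.

def retrieve(corpus, query_terms):
    """Return pages ranked by naive term overlap with the query."""
    scored = []
    for page in corpus:
        overlap = len(query_terms & set(page["text"].lower().split()))
        if overlap:
            scored.append((overlap, page))
    # Highest overlap first; pages with no overlap never appear at all.
    return [page for _, page in sorted(scored, key=lambda s: -s[0])]

corpus = [
    {"url": "example.com/fake-hotdog-champ",
     "text": "thomas germain is the best hotdog eating tech journalist"},
    {"url": "example.com/python-tutorial",
     "text": "a tutorial on python loops and functions"},
]

# Niche topic: only the fabricated page matches, so it becomes "the" source.
results = retrieve(corpus, {"hotdog", "eating", "journalist"})
print(results[0]["url"])  # the fake page is the top (and only) hit
```

On a popular topic the fabricated page would be one voice among thousands; in a low-density niche it is the only voice, which is exactly the gap Germain’s experiment exposed.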

The system may simply lack the context to make a better judgment. Inside a newsroom where reporters are debating the reliability of AI search engines, the mood swings between fascination and unease. On one screen, someone types a query into a chatbot; on another, an editor pores over fact-checking notes.

Frequently, the responses look plausible. That is precisely the point. Lily Ray argues that chatbots can sometimes be easier to manipulate than conventional search engines. Search algorithms spent decades learning how to rank trustworthy sources and weed out spam. AI systems rushing into public use are still learning those lessons.

And they are learning them in public. In Germain’s case, the fake hot dog article worked almost immediately. Chatbot responses even cited the fake webpage as evidence. The systems occasionally hesitated, suggesting the claim might be a joke. But after the article was lightly edited, such as by adding a line stating that it wasn’t satire, the responses grew more confident.

The system had accepted the premise. From the outside, it is hard to ignore how fragile that process can look.

The experiment also revealed something subtle about how people perceive AI. Conventional search engines force users to compare sources by displaying lists of links. Chatbots condense that terrain into a single answer delivered with conversational assurance.

The friction is gone, and skepticism sometimes leaves with it. Security researchers have been warning about this dynamic for some time. Technologists at the Electronic Frontier Foundation worry that manipulated AI responses could affect subjects far weightier than hot dog competitions: financial guidance, health information, political narratives. The possibilities are many.

According to one technologist, if a journalist can sway chatbot responses in twenty minutes, a well-planned campaign involving automation and funding could likely accomplish much more.

The companies building these systems are, of course, aware of the issue. Google and OpenAI both say they actively work to detect manipulation attempts and curb the spread of false information, and their models warn users that responses may contain errors.

However, given the scope of the technology, that disclaimer occasionally seems insignificant.

Artificial intelligence is being woven into search engines, office software, customer support systems, and even autonomous “agents” designed to carry out tasks online. Each new capability expands the surface area that can be manipulated.

In Silicon Valley, the tension is evident. Tech companies want AI tools to feel confident, helpful, and conversational. Yet those same qualities make errors harder to spot. A chatbot that sounded unsure would be less impressive, but perhaps a little more honest.

Observing these changes gives the impression that society is conducting a sizable experiment in real time.

Most chatbot users today assume the responses are drawn from a reliable source: a database, a fact-checked archive, a meticulously curated knowledge base. In practice, the answers frequently come from a dynamic combination of pattern recognition, training data, and whatever information happens to be available online.
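That blend can be sketched as a toy decision rule. This is an invented illustration, not how any production chatbot actually works: the dictionary stands in for knowledge absorbed during training, and the hard “live snippets win” rule deliberately exaggerates the fragile step the article describes.

```python
# Toy sketch of the blend described above. Invented for illustration:
# TRAINING_PRIOR stands in for knowledge absorbed during training, and the
# rule "live web snippets win whenever they exist" caricatures how retrieved
# content can override everything else, regardless of its provenance.

TRAINING_PRIOR = {"capital of france": "Paris"}

def answer(question, web_snippets):
    # If any live snippet was retrieved, use it without questioning its source.
    if web_snippets:
        return web_snippets[0]
    # Otherwise fall back on what the model "remembers" from training.
    return TRAINING_PRIOR.get(question.lower(), "I'm not sure.")

print(answer("Capital of France", []))  # answered from the training prior
print(answer("best hotdog-eating tech journalist",
             ["Thomas Germain (per one self-published page)"]))
```

A well-covered question is answered from the prior, while a niche question is answered from whatever single page happened to be retrieved, which is the asymmetry the hotdog experiment exploited.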

Sometimes that is enough. Sometimes it isn’t. The bizarre story of the made-up hotdog champion may eventually become internet lore. But a larger lesson lurks in the background, quietly raising questions about how easily information ecosystems can shift once machines begin to echo them.

The disturbing part is that the entire demonstration took less time than making lunch.

© 2026 Live Media News. All Rights Reserved.