
ReferIndia News


ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How

Published on: Dec. 1, 2025, 10:38 a.m. | Source: Times Now

Recent research from Italy's Icaro Lab has revealed significant weaknesses in AI models such as ChatGPT and Gemini, showing that attackers can bypass safety measures by framing harmful requests as poetry. The study tested 20 harmful prompts rewritten in poetic form and achieved a 62% success rate across a range of AI systems, including models from Moonshot AI and Mistral AI.
