Over the past few days, social media has been flooded with animated versions of doctors surrounded by ultrasound machines, journalists facing impossible microphones, or personal trainers with biceps the size of melons. The craze for ChatGPT-generated caricatures has exploded almost without warning and, as often happens, what begins with “how amazing” quickly shifts to “hang on a minute…”.
The appeal is obvious. You upload a photo, write a prompt along the lines of “create a caricature of me and my job based on everything you know about me” and, within seconds, a digital double appears—exaggerated, but uncannily recognisable. It’s not just a pretty filter. What’s interesting (and unsettling) is that the image is built on information you’ve already shared with the chatbot, whether through previous conversations or the data you choose to provide in that moment.
This is where the trend takes a turn. Unlike other recent fads—such as when half the internet turned itself into a Studio Ghibli character—these caricatures don’t just play with aesthetics. They play with the idea that AI “knows you”. And the more it knows, the sharper the illustration becomes. Job, daily routine, working environment, personal interests… it all adds up to a more precise result.
So far, so fun. The issue arises when that highly accurate result is shared publicly.
When the caricature reveals more than it should
Cybersecurity experts have spent the last few days warning about something fairly straightforward: a seemingly harmless caricature can reveal far more personal information than we realise. Profession, habits, workplace environment, clues about schedules, and visual details that make it possible to identify someone even if their name is never mentioned.
If the image is based on a real photograph, other less visible factors come into play, such as metadata—location, date, device used. Information we rarely see, but which exists. And once the image starts circulating online, it loses context and control. It can be downloaded, reused, reinterpreted or end up in places you never considered when you hit “post”. This isn’t new to the internet, but the extreme personalisation offered by AI significantly amplifies the issue.
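To make that concrete: the hidden data in question is typically EXIF metadata, and reading it takes only a few lines of code. Here is a minimal sketch using Python’s Pillow imaging library, where “photo.jpg” stands in for any photo you might upload:

```python
# A minimal sketch of reading a photo's hidden metadata with Pillow
# (pip install Pillow). "photo.jpg" is a hypothetical filename.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")
exif = img.getexif()

# Print every EXIF tag embedded in the file: typically the device model,
# the capture date and, via the GPSInfo tag, any location data.
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
    print(f"{name}: {value}")
```

Run against a typical phone photo, a script like this will often print the handset model, the exact timestamp and, if location services were on, GPS coordinates, none of which are visible in the picture itself.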
According to OpenAI’s privacy policy, the texts, images and files you share may be stored temporarily and used to train and improve its models. Although the company states that data is not kept indefinitely, it does not specify how long it remains on its servers.
The risk increases when shared images include minors, vulnerable individuals or information that could be used to impersonate someone. In the event of a security breach, those photos, along with their metadata, could be exposed and exploited for phishing attacks, online harassment or social engineering campaigns.
Alarmism? Not quite
It’s worth being clear: this isn’t an immediate dystopia or a digital apocalypse. Many people share these caricatures without consequences and enjoy the creative aspect with no drama. The risk isn’t automatic; it’s potential, and largely depends on how much you share and how you do it.
This pattern is familiar. The more “free” the tool, the more important it is to ask what the exchange really is. In this case, data in return for entertainment. If you understand that and you’re comfortable with it, fine. The key is awareness, not panic.
Joining the trend (if you want to) with a bit of common sense
For those who don’t want to miss out on the joke, a few basic recommendations apply: avoid uploading sensitive photos, don’t share more personal information than necessary, review what you make public and, if you do post the caricature, do so with a clear understanding of what that image might be revealing about you.
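On that last point, one practical habit is stripping a photo’s metadata before it goes anywhere public. A minimal sketch, again assuming Pillow and hypothetical filenames: rebuilding the image from its raw pixels simply leaves the EXIF block behind.

```python
# A minimal sketch of stripping metadata before sharing, using Pillow.
# Copying only the pixel data into a fresh image discards EXIF and any
# other embedded info; the filenames are hypothetical.
from PIL import Image

original = Image.open("photo.jpg")

clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("photo_clean.jpg")
```

The upload then carries the picture and nothing else, which is exactly the point.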
Digital personalisation is fun, creative and increasingly impressive. But it also forces us to sharpen our judgement. Laughing at an exaggerated avatar is great; forgetting how the internet works is not.