In the thread, Daniel explained how he found a way of tricking ChatGPT into replying with whatever he wanted when analyzing a picture... and the trick is so simple and old that it's almost funny.
This is an example of the proverb "Man is the only animal that trips twice over the same stone," or proof that software engineers have no historical memory and keep repeating the same mistakes over and over again.
In the '90s and early 2000s, when the Internet as we know it was growing and SEO started to gain more and more importance, webmasters used to add "transparent" keywords at the bottom of a page so search engines would associate the page with those keywords and rank it higher. This was considered a bad practice and a shady way of tricking web crawlers (search engines later penalized this behavior).
Fast forward 15-20 years... and it's the keywords trick all over again! We are making the same mistake: adding an "almost transparent" sentence to an image will trick the omniscient AI into saying whatever you want. This wouldn't be a problem if it weren't for how many people seem to trust unquestioningly whatever AI/LLMs repeat.
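To make the idea concrete, here is a toy sketch of the "almost transparent" trick. This is not the actual payload from Daniel's thread; it uses plain Python lists instead of a real image library, and all names and pixel values are illustrative assumptions:

```python
# Toy illustration of hiding low-contrast text in an image.
# The "image" is a 2D grid of grayscale values (0 = black, 255 = white).

BACKGROUND = 255  # pure white background
HIDDEN = 250      # nearly white "ink": a ~2% contrast most humans won't notice

def embed_hidden_text(image, positions):
    """Set the pixels at `positions` to a value barely distinguishable
    from the background. A human sees a blank image, but a system that
    reads raw pixel values can still recover the message."""
    for row, col in positions:
        image[row][col] = HIDDEN
    return image

# A blank 4x8 "image"
img = [[BACKGROUND] * 8 for _ in range(4)]

# Pretend these positions spell out an instruction for the model
img = embed_hidden_text(img, [(1, 2), (1, 3), (2, 2), (2, 3)])

# The hidden pixels are trivial to detect programmatically:
hidden = [(r, c) for r in range(4) for c in range(8)
          if img[r][c] != BACKGROUND]
print(hidden)  # → [(1, 2), (1, 3), (2, 2), (2, 3)]
```

The point is precisely that the contrast is asymmetric: invisible to the human eye, obvious to the machine, exactly like white-on-white keywords were invisible to readers but obvious to crawlers.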
This type of trickery will surely be fixed soon, but it highlights an inherent problem of software development's "move fast, break things" culture. Moving fast often comes at the cost of ignoring sound and well-known industry standards and practices.
As software engineers, we need to do better. And as AI consumers, we need to be wary of the results we get. Many people and companies increasingly rely on systems that fall for the oldest trick in the book (literally).
It all reminded me of the song "History Repeating" by the Propellerheads featuring Miss Shirley Bassey:
Source of the image and the thread that this post references: https://twitter.com/d_feldman/status/1713019158474920321.