Earlier this year, a bunch of freaks on Twitter figured out that they could prompt Grok, the “non-woke” chatbot from Elon Musk’s xAI, to generate sexualized images of people on X, even without their consent. Three months later, X has done something about it…kinda. According to a report from Social Media Today, the social platform quietly added a new feature to its iOS app that allows people to opt to “block modifications by Grok” when uploading content.
Per the report, users should start seeing a toggle when they upload images or videos to include in a post. When that toggle is enabled, it will prevent Grok from modifying the material. It’s a useful feature in theory, given that the replies to every post on that god-forsaken platform now include some sad sack tagging Grok to do something for them, including tweaking people’s pictures.
The problem, though, according to The Verge, is that the feature has an incredibly narrow use case. Specifically, the publication notes that the toggle’s description says it will “prevent @Grok from modifying this content.” That means it only stops people from tagging Grok in a thread to modify images. It doesn’t stop people from editing the image in the Grok app, and they don’t even have to download the original image to do so. Per The Verge, X users can hold down on an image in the X app and select the option “Edit image with Grok,” and the image will open in the Grok app, where it can be freely modified. Users can also save the original image, re-upload it to an X post, and tag Grok to change it.
So basically, X has taken away the lowest-hanging fruit—and has done so in a limited way, seeing as the feature is only available on iOS and, so far, isn’t available to everyone and isn’t particularly easy to find. Currently, it’s hidden in the menu pulled up by tapping the paintbrush symbol in X when uploading content. Whatever the bare minimum is, it seems like X has just barely exceeded it.
It seems unlikely this “protection” will satisfy the dozens of regulatory authorities that launched investigations into X and Grok following the debacle that saw users on X wielding the generative AI tool to create nude and sexually explicit images of people—including minors—without consent. The company previously locked image editing features behind a paywall, which may have prevented some abuse, or may simply have monetized people’s perversions and acts of virtual sexual abuse. That, combined with this new, hidden feature that only kinda works, should tell everyone how serious the platform is about solving the problem.
