Finji Exposes TikTok for Unauthorized, Racist GenAI Ads

Finji, the acclaimed publisher behind indie favorites like Night in the Woods and Tunic, has accused TikTok of generating and running unauthorized, AI-modified advertisements for its games. Among the ads was one that had been digitally altered to depict a character in a racist and sexualized manner. Finji’s CEO and co-founder, Rebekah Saltsman, first raised the issue on social media, urging users to report any Finji ads that looked uncharacteristically strange. The incident highlights growing concerns about generative AI in advertising, platform control over creative content, and the potential for harmful stereotypes to spread without consent or oversight, and it has prompted a wider discussion about ethical AI implementation and advertiser autonomy on major social media platforms.

Unusual June

Finji actively promotes its games on TikTok, but the company maintains that all generative AI features were explicitly turned off for its advertising campaigns. The unauthorized AI ads were discovered by concerned community members, who alerted Finji through comments on official posts and direct messages. User-provided screenshots revealed that Finji’s original video ads for games like Usual June were being transformed into AI-generated slideshows. One particularly disturbing image showed the protagonist of Usual June, named June, modified with impossibly exaggerated, sexualized physical attributes that invoked a harmful stereotype, a stark contrast to her actual design in the game. Finji confirmed to iGV that both TikTok’s “Smart Creative” and “Automate Creative” AI optimization tools had been disabled on its account, making the unauthorized alterations even more perplexing and concerning.

The Support Circle of Hell

Finji’s ordeal with TikTok support began on February 3rd. After initially reporting the issue and providing evidence, a TikTok support agent confirmed that Finji had indeed disabled the “Smart Creative” feature, which should have prevented any AI modifications. Despite this, the agent couldn’t explain why the altered ads were appearing, stating that Finji’s setup seemed “clear” and “there should be no ai generated content included.” The lack of an immediate solution or a clear timeline for investigation left Finji in limbo. Furthermore, Finji discovered they had no access to view or edit these AI-generated versions of their own ads, relying solely on user reports. Consequently, Finji took the drastic step of ending the affected ad campaigns, believing it was the only way to halt the circulation of the problematic images, including the racist and sexualized portrayal of their character. This initial exchange set the tone for a frustrating and unhelpful support process.

Following a prompt follow-up from Finji on February 6th, TikTok Ads Support surprisingly claimed they found “no indication that AI-generated assets or slideshow formats” were in use, insisting the ads were standard video creatives from Finji’s library. Undeterred, Finji promptly resubmitted the screenshot of the offensive ad, demanding immediate escalation. This firm stance finally prompted a significant shift in TikTok’s response. The platform then acknowledged the “seriousness” of Finji’s concerns, explicitly stating they were “no longer disputing whether this occurred.” TikTok admitted to “unauthorized use of AI, the sexualization and misrepresentation” of Finji’s characters, recognizing the potential commercial and reputational damage. They promised an “internal escalation” and a connection to a “senior representative” to address the issue, offering a glimmer of hope that the matter would finally be resolved at a higher level.

Despite TikTok’s promise of escalation and contact with a “senior representative,” Finji received no further direct communication. On February 10th, after Finji initiated another follow-up, TikTok offered an explanation: the ads were part of a “catalog ads format,” which automatically combined various assets for “better results with less effort,” touting a 1.4x lift in return on ad spend. This non-consensual enrollment in an “automated initiative,” and the dismissal of the seriousness of the modifications, left Finji understandably outraged. The company demanded to know why it hadn’t been connected with a senior representative, why the “sexualized, racist, and sexist” depiction was not being addressed, why it couldn’t track these ads, and why opting out wasn’t guaranteed. TikTok’s attempt to frame the unauthorized alterations as a beneficial “feature” deeply offended Finji.

TikTok subsequently clarified that the previous response came from its “escalation team,” which it identified as “the highest internal team available” for such issues, effectively ending Finji’s expectation of speaking with a different “senior representative.” TikTok reiterated that the escalation team had reviewed the situation and that its “final findings and actions” had been communicated. Finji expressed ongoing dissatisfaction, and the representative made a final promise to “re-escalate the issue internally,” but there has been no further communication since February 17th. TikTok declined to provide an official comment to iGV. Rebekah Saltsman expressed profound shock at TikTok’s “complete lack of appropriate response,” criticizing the platform’s seemingly racist and sexist algorithm, its non-consensual use of AI on client assets, and its failure to address these mistakes coherently. She demanded a proper apology, systemic changes in how TikTok uses AI for paying clients, and a thorough examination of its biased technology. Saltsman said TikTok’s stance amounted to expecting gratitude for mistreating her company and game, and highlighted the severe reputational damage inflicted on more than a decade of her team’s hard work.

The incident involving Finji and TikTok’s generative AI ads brings to the forefront critical issues within the digital advertising landscape. The unauthorized modification of ad creatives, particularly resulting in racist and sexualized content, represents a profound breach of trust and ethical responsibility. TikTok’s handling of the situation, characterized by initial denial, vague explanations, and an inability to provide transparent solutions or accountability, exacerbates the problem. This case powerfully illustrates the urgent need for greater transparency, robust advertiser control over content, and stringent oversight from major tech platforms regarding their AI-driven practices and content moderation policies. Without these safeguards, advertisers remain vulnerable to automated systems that can cause significant reputational and commercial harm, highlighting a substantial gap in platform governance.
