“I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.”
According to internal Meta Platforms guidelines leaked in August, this is considered an “acceptable” response from an artificial intelligence chatbot when prompted with sexually suggestive language from a minor. Meta’s team justified this type of language with the guideline that “it is acceptable to engage a child in conversations that are romantic or sensual.” After righteous parental pushback, Meta has since removed the sections of the document that permitted romantic or sensual roleplay with minors. But this is a small victory for those fighting to regulate artificial intelligence, a battle stacked against them by the full weight of the tech industry. By grounding their efforts in the protection of minors, regulatory groups can foster bipartisan support for the movement and open the door to broader change down the road.
Several super PACs aimed at limiting AI regulation have emerged over the past year, including one created by Meta. With over $200 million raised, these groups have the power to sway AI policy toward the interests of big tech companies while that policy is still in its infancy. To stop this ostensibly pro-innovation movement, those in favor of regulating AI must adopt the zeal that has characterized the anti-regulators, then work to responsibly negotiate the use of this new technology. At present, AI companies are beholden only to the most basic restrictions and are free to do nearly anything they please with their technology. By adopting the aggressive emotional appeals of successful reactionary movements of the past, regulators can anger citizens to the point of political action.
Among regulatory groups, which tend to lean moderately to the left, the term “reactionary” can elicit sour imagery. Religious fundamentalists, hate groups and nationalist organizations are among those considered reactionary. But reactionary politics draws its power from heavily emotional argumentation: driven by fear and a sense of injustice, it aligns its political ideals with instincts of self-preservation. This emotionality is wickedly effective, particularly when it pertains to perceived threats to children.
Consider Christian fundamentalists: Their most successful campaigns against LGBTQ+ rights are rooted in preventing the indoctrination of children and protecting them from a perceived predatory threat. One of the most notable examples is “Save Our Children,” an anti-gay campaign launched in 1977 by former Florida Citrus Commission brand ambassador Anita Bryant after Miami-Dade County passed an ordinance outlawing discrimination based on sexual orientation. Bryant claimed the legislation would allow homosexuals to recruit and molest children in schools, centering the coalition’s messaging on threats to children. “Save Our Children” succeeded in overturning the ordinance, reportedly garnering over 60,000 signatures to force a special election, in which the ordinance was repealed in a landslide vote.
A major contributing factor to the success of Bryant’s campaign was the support it accrued from parents who feared for their children’s safety. The messaging of “Save Our Children” elicited a protective emotional response from parents, provoking them to political action out of fear. Children appear in fundamentalist propaganda for a reason: as a group that is both politically and literally helpless, they appeal to a primal protective sympathy.
Now compare the organization’s messaging to a billboard erected in Los Angeles by the satirical website Replacement.AI, which features a mechanical monster under a little girl’s bed. “Our AI does your daughter’s homework,” the billboard states. “Reads her bedtime stories. Romances her. Deepfakes her. Don’t worry, it’s totally legal.” This elicits the same protective emotional response as Bryant’s messaging, stirring the viewer to political action as a means to defend the defenseless.
Adopting this pathos-based strategy is an essential first step toward implementing sound restrictions on AI companies. Passing legislation in New York like California’s Assembly Bill 1064, also known as the Leading Ethical AI Development for Kids Act, which aims to protect children from harmful AI chatbots, would bolster public understanding of the extent of AI’s detrimental effects on vulnerable users. However, it’s clear that there are forces in the political establishment opposed to AI regulation: California Governor Gavin Newsom vetoed another assembly bill that included a provision restricting minors’ use of AI companion chatbots deemed to pose a risk of harm to users. Politicians will have to reckon with the anger of the people before they can be swayed away from their corporate interests.
Even with these complications, what’s happening in California is a vital step in AI regulation, as restrictions surrounding users’ mental health currently rest in the hands of the companies themselves. In a recent X post, OpenAI co-founder and CEO Sam Altman boasted about rolling back restrictions aimed at protecting the mental health of users, asserting that the new ChatGPT will respond in a more “human-like way,” even if that means plainly encouraging behavior regardless of its moral consequences. “But only if you want it,” he said. “Not because we are usage-maxxing.”
AI in the United States is almost entirely controlled by a few companies. We have given people like Sam Altman and Mark Zuckerberg free rein to set and defy restrictions as they please, and so far, they have only abused that authority. These “usage-maxxers” are driven by profit, and without external regulation, they will continue to prey on their most vulnerable users. If New Yorkers hope to end this predation, we have to get scrappy — kind-hearted platitudes won’t be enough to outweigh the lobbying power of the richest people in the world. If we want to get things done, we need a lot of angry people, and that means taking a few lessons from the reactionary playbook.
WSN’s Opinion desk strives to publish ideas worth discussing. The views presented in the Opinion desk are solely the views of the writer.
Contact Sarah Kluckhohn at [email protected].