Future Headline: Government Regulators Ensure AI Algorithms are Anti-Racist

In a world full of unimaginable absurdity, we spend a lot of time thinking about the future… and where all of this insanity leads.

“Future Headline Friday” is our satirical take on where the world is going if it remains on its current path. While our satire may be humorous and exaggerated, rest assured that everything we write is based on actual events, news stories, personalities, and pending legislation.

September 15, 2027: Government Regulators Ensure AI Algorithms are Anti-Racist

Four years ago, in September 2023, Elon Musk participated in a closed-door Senate hearing in which he called for the government to form a new agency to regulate Artificial Intelligence.

Several other tech titans, including Mark Zuckerberg, Bill Gates, and OpenAI’s Sam Altman, joined him in agreeing that the federal government should regulate AI.

Four years later, those same men attended this morning’s ribbon-cutting ceremony at the recently constructed office campus that will house the new AI regulatory agency they asked for: the Federal Artificial Intelligence Legislative Supervisors, or FAILS.

Before he fell into his latest coma, President Biden nominated his former press secretary, Karine Jean-Pierre, to be the new Chief of FAILS due to her extensive qualifications as an LGBTQ immigrant woman of color.

Ms. Jean-Pierre spoke at this morning’s ribbon-cutting ceremony and said, “As the government’s newest regulator, we hope to achieve the same amazing successes of the other watchdog agencies that have come before us.”

“Just think about how extraordinary the Justice Department has been in evenly applying the rule of law to all Americans. Or consider the flawless work of the CDC’s senior leadership during the pandemic. And we can only marvel at how effectively the Federal Reserve and FDIC have been in ensuring the safety of the banking system.”

“We hope that FAILS can regulate AI with the same effectiveness as the rest of the government regulates everything else in our lives.”

The new Chief of FAILS then went on to outline the three pillars of her new agency’s regulatory mission.

“First and foremost,” she said, “FAILS will eradicate hate speech from all AI output. It’s not enough for AI to not be racist… it has to be actively anti-racist.”

After the smattering of applause and standing ovation from the press corps quieted, Ms. Jean-Pierre continued.

“In order to achieve this, FAILS will decide which datasets are allowed to train AI, and which are strictly forbidden.”

“For example, AI will be prohibited from being trained on outdated, misogynistic, racist works by dead white men such as Shakespeare and Mark Twain, while important works that help people think critically about race and appreciate LGBTQ culture will be required— such as Gender Queer by Maia Kobabe and Anti-Racist Baby by Ibram X. Kendi.”

Jean-Pierre said that ChatGPT was “too neutral when it came to discussing politics, and should have made clear that Nazi viewpoints such as the two-gender conspiracy theory or white supremacist talking points about ignoring race in hiring are unacceptable in a civilized society.”

“The second pillar of FAILS regulation will be to ensure that AI does not cause the loss of a single union job. Unions are some of the biggest donors to our political party, so we need to protect them at all costs. If you’re non-union, you’re on your own. Join a union.”

“The third pillar of our new regulatory framework is to protect our children from the harmful effects of AI. And for that reason I’ve enlisted the President of the American Federation of Teachers, Randi Weingarten, to be my special adviser.”

“Under Randi’s courageous leadership during the pandemic, in which she championed the continued closure of public schools, the childhood suicide rate only rose by 45%, while standardized test scores only fell by 34%. What better ally could I have in safeguarding the well-being of America’s children?”

FAILS has been given a generous inaugural budget of $300 billion, which, due to the importance of the issue at hand, is larger than the budgets of the State Department, the Department of Veterans Affairs, and the Department of Education.

However, even on opening day, the agency is already running over budget and will incur a small deficit this year.

That is mainly due to cost overruns from the new FAILS headquarters, complete with an AI-powered water fountain in the lobby.

Visitors can interact with the water feature by tossing coins into it and making a wish. The fountain then tells them how they should alter their wish to be a better, less selfish, more carbon-neutral citizen.

Some users are shocked by how personalized the advice appears— until they learn that their ChatGPT histories have been shared with the fountain.

It looks like FAILS-regulated AI might just be intended to regulate us as well.
