ChatGPT blocked 250,000 requests for AI images of US election candidates
More than 250,000 requests to OpenAI platforms to make deepfakes of US election candidates were rejected, the company says.
ChatGPT refused more than 250,000 requests to generate images of US election candidates using its artificial intelligence (AI) platform.
OpenAI, the company behind the AI chatbot, said in a blog update on Friday that its image-generation platform DALL-E rejected requests to make images of president-elect Donald Trump, his pick for vice president JD Vance, current president Joe Biden, Democratic candidate Kamala Harris, and her vice-presidential pick, Tim Walz.
The refusals were due to “safety measures” that OpenAI put in place before election day, the blog post said.
“These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools being used for deceptive or harmful purposes,” the update read.
The teams at OpenAI say they have “not seen evidence” of any US election-related influence operations going viral by using their platforms, the blog continued.
The company said in August that it had stopped an Iranian influence campaign known as Storm-2035 from producing articles about US politics while posing as conservative and progressive news outlets.
Accounts linked to Storm-2035 were later banned from using OpenAI’s platforms.
Another update in October disclosed that OpenAI had disrupted more than “20 operations and deceptive networks” from across the globe that were using its platforms.
Of these networks, the US election-related operations were not able to generate “viral engagement,” the report found.