OpenAI is rolling out a series of initiatives to prevent its products from being used for misinformation ahead of a major year for elections globally.
On Monday, the artificial intelligence startup announced new tools that will attach attribution to the information about current events provided by its chatbot ChatGPT, and help users determine whether an image was created by its AI software. The changes come amid rising concern over the risks of so-called deepfakes (manipulated videos or other digital representations) and other AI-generated content that could mislead voters during campaigns.
"Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process," the company wrote in a blog post on Monday.