Apple Joins US Commerce Department's AI Safety Institute Consortium - MacRumors

Apple and other top tech companies have joined a new U.S. consortium to support the safe and responsible development and deployment of generative AI, the Commerce Department announced on Thursday (via Bloomberg).

Apple, along with OpenAI, Microsoft, Meta, Google, and Amazon, will join more than 200 members of the AI Safety Institute Consortium (AISIC) under the department, Commerce Secretary Gina Raimondo said.

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.

The group will work with the department's National Institute of Standards and Technology on priority actions outlined in President Biden's AI executive order, "including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content."

Other technology companies, as well as civil society groups, academics, and state and local government officials, will also be involved in establishing safety standards for AI regulation.

Generative AI has spurred excitement due to its potential to enhance creativity, improve efficiency, and advance technology. However, fears surrounding generative AI include ethical concerns like deepfakes, the potential impact on jobs, issues around information reliability, and challenges in ensuring privacy and effective regulation.

Apple is said to be spending millions of dollars a day on AI research as training large language models requires a lot of hardware. Apple is on track to spend more than $4 billion on AI servers in 2024, according to one report.

Apple is said to be developing its own generative AI model called "Ajax". Designed to rival the likes of OpenAI's GPT-3 and GPT-4, Ajax operates on 200 billion parameters, suggesting a high level of complexity and capability in language understanding and generation. Internally known as "Apple GPT," Ajax aims to unify machine learning development across Apple, suggesting a broader strategy to integrate AI more deeply into Apple's ecosystem.
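For a sense of scale, the reported 200-billion-parameter figure can be turned into a rough memory estimate. This is a back-of-the-envelope sketch only, assuming 2 bytes per parameter (fp16/bf16 weights); real serving adds KV-cache, activations, and framework overhead on top.

```python
# Back-of-the-envelope memory estimate for a 200-billion-parameter model.
# Assumes 2 bytes per parameter (fp16/bf16); actual serving costs add
# KV-cache, activations, and runtime overhead on top of raw weights.

def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Return raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

print(weight_memory_gb(200e9))      # 400.0 GB in fp16
print(weight_memory_gb(200e9, 1))   # 200.0 GB with 8-bit quantization
```

Even heavily quantized, a model of that size would far exceed on-device memory, which is consistent with reports of Apple's heavy spending on AI servers.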

Aspects of the model could be incorporated into iOS 18, such as an enhanced version of Siri with ChatGPT-like generative AI functionality. Both The Information and analyst Jeff Pu claim that Apple will have some kind of generative AI feature available on the ‌iPhone‌ and iPad later this year.


Top Rated Comments

30 months ago
If tech companies talk about "safety", I read "censorship".

It is okay if they block fake porn of famous people, but even ChatGPT already goes much further. It blocks everything that could be seen as controversial. Computers that are smarter than humans may be creepy for many people, but even creepier is the idea that tech companies or governments control those superintelligent computers.

Try asking ChatGPT what advantages climate change has. It will refuse to answer that question.
Score: 10 Votes
30 months ago
Ministry of Truth.
Score: 6 Votes
VulchR
30 months ago

...

We’re approaching the threshold of history- a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
We'll soon approach a point when so much of internet content will be generated by AI that AI 'hallucinations' might impair our ability to learn the truth about history or current affairs. We could have AI systems learning the hallucinations of other AI systems as they troll the internet for training data.
Score: 5 Votes
30 months ago

Watermarking isn’t enough. The prompts used on AI created or manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with the anti-screenshot tech turned on.

Loopholes should be closed as much as possible for laundering AI content to sell as real. That is the greatest danger to our society- to exist in a permanent unreal present where truth about the past and current events are in the hands of those with the power to manipulate the most believers.
Most of this is simply impossible to practically implement. I mean we can pass all the laws we want, but in practice it will not stop this. Stable Diffusion already watermarks. But it's in the code and you can unwatermark. You can train your AI not to add the watermark. We want to be safe and wise, but some things are not practically possible to stop. It's like the invention of the camera and telling people they are only allowed to photograph what the govt says. Only a totalitarian govt could do it.
Score: 4 Votes
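The commenter's point about software-level watermarks being removable can be seen in a toy sketch. This is not Stable Diffusion's actual scheme (which embeds an invisible frequency-domain watermark); it is a deliberately simple least-significant-bit example showing that a mark added in code can be erased just as easily in code.

```python
# Toy illustration only: a watermark carried in the least significant
# bits of pixel bytes is invisible to the eye, but anyone can destroy
# it by clearing those same bits, with no visible change to the image.

def embed_lsb(pixels: bytes, bits: list[int]) -> bytes:
    """Write one watermark bit into the LSB of each leading byte."""
    out = bytearray(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return bytes(out)

def strip_lsb(pixels: bytes) -> bytes:
    """Erase any LSB watermark by clearing every byte's low bit."""
    return bytes(p & 0xFE for p in pixels)

img = bytes([200, 201, 202, 203])          # four sample pixel bytes
marked = embed_lsb(img, [1, 0, 1, 1])      # watermark embedded
print([p & 1 for p in marked])             # [1, 0, 1, 1] - readable
clean = strip_lsb(marked)                  # watermark destroyed
print([p & 1 for p in clean])              # [0, 0, 0, 0] - gone
```

This is why robust provenance schemes focus on signed metadata and detection tooling rather than marks an attacker can trivially scrub.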
IllegitimateValor
30 months ago
Watermarking isn’t enough. The prompts used on AI created or manipulated images should be permanently encoded into the image. The images should always be allowed to be saved, yet come with the anti-screenshot tech turned on.

Loopholes should be closed as much as possible for laundering AI content to sell as real. That is the greatest danger to our society- to exist in a permanent unreal present where truth about the past and current events are in the hands of those with the power to manipulate the most believers.

I don’t trust even the companies involved to do right by us even with deep oversight.

We’re approaching the threshold of history- a veil beyond which prophets would not see clearly for the muddiness of the waters of truth, and time travelers will be unable to relay back what really happened for all the confusion.
Score: 4 Votes
30 months ago

I look forward to a day when the only way to get uncensored open-source AI models (like the kind you can get today on sites like HuggingFace or CivitAI) is to torrent them on shady sites because the government prevents people from getting them normally... for our own good, apparently. 🤡 /s
For the children.
Score: 3 Votes