Ollama Now Runs Faster on Macs Thanks to Apple's MLX Framework - MacRumors
Ollama, the popular app for running AI models locally on a computer, has released an update that takes advantage of Apple's own machine learning framework, MLX. The result is a hefty speed boost on Macs with Apple silicon.

According to Ollama, the new version processes prompts around 1.6 times faster (prefill speed) and nearly doubles the speed at which it generates responses (decode speed). Macs with M5-series chips are said to see the largest improvements, thanks to Apple's new GPU Neural Accelerators.
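Taken together, the two claimed speedups compound over a full request, since total latency is prompt processing plus token generation. A quick back-of-envelope sketch (the baseline throughput figures below are illustrative assumptions, not Ollama benchmarks):

```python
# Back-of-envelope: how a 1.6x prefill and ~2x decode speedup
# shrink total response time. Baseline numbers are assumptions.

def response_time(prompt_tokens, output_tokens, prefill_tps, decode_tps):
    """Total latency = time to process the prompt + time to generate output."""
    return prompt_tokens / prefill_tps + output_tokens / decode_tps

# Assumed baseline throughput for a mid-size model on Apple silicon
base_prefill, base_decode = 500.0, 30.0  # tokens/sec

old = response_time(2000, 400, base_prefill, base_decode)
new = response_time(2000, 400, base_prefill * 1.6, base_decode * 2.0)

print(f"before: {old:.1f}s  after: {new:.1f}s  ({old / new:.2f}x faster overall)")
```

For a long prompt with a short answer, the overall gain leans toward the prefill figure; for chat-style usage with long responses, it leans toward the decode figure.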

The update also includes smarter memory management, which should make AI-powered coding tools and chat assistants feel noticeably more responsive during extended use.

Ollama says the new performance boost should especially benefit macOS users who run personal assistants like OpenClaw or coding agents like Claude Code, OpenCode, or Codex.

The preview release is available to download as Ollama 0.19 – just make sure you have a Mac with more than 32GB of unified memory to run it. Support is currently limited to Alibaba's Qwen3.5, but Ollama says support for more AI models is planned.
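For readers who want to try it, the standard Ollama CLI workflow would look something like the following; note that the exact model tag for Qwen3.5 in Ollama's registry is an assumption here, so check `ollama.com/library` for the real name:

```shell
# Confirm you're on the 0.19 preview or later
ollama --version

# Pull and chat with the supported model ("qwen3.5" is an assumed tag)
ollama run qwen3.5
```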


Top Rated Comments

6 weeks ago

This is going to be some serious cash flow for Apple this year.
I think this could be a major business for Apple: it's way cheaper for a small business to buy a powerful Mac and run Qwen3.5 than to pay for an enterprise license for a frontier model, and you don't need to worry about privacy issues.
Score: 11 Votes
6 weeks ago
On device is definitely gonna be the future.

I can’t help but wonder if Apple looked ahead and foresaw this when developing the M series, or if they’ve lucked into it.
Score: 10 Votes
Justin Cymbal
6 weeks ago
M-Series chips at work😎
Score: 7 Votes
6 weeks ago
As someone who downloads and experiments with everything possible…

There is a lot of delusion in this thread. Local language models below 100 billion parameters are of limited use, and even 100 billion parameters is on the weak side. They're fun to play with for a while, but boredom and frustration set in quickly.

So what happens is people want the next model…and then the next one…and then the next one…falsely believing their 16GB or 32GB machine will one day run the holy grail of a small but powerful local language model.

But it doesn't happen. The models keep growing, and aside from raw memory capacity, the most important thing that makes them usable is memory bandwidth.

The top five language models in the world are all over a trillion parameters, and what makes them responsive is that they run on GPUs with over a terabyte per second of memory bandwidth.
Score: 6 Votes
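The bandwidth point in the comment above can be sketched with back-of-envelope arithmetic: during decode, each generated token must stream roughly all active weights from memory, so bandwidth caps tokens per second. The figures below are illustrative, not measurements:

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound model:
# each generated token reads (roughly) all active weights from memory.
# Illustrative figures only, not benchmarks.

def max_decode_tps(active_params_billion, bytes_per_param, bandwidth_gbs):
    """Upper bound on tokens/sec: bandwidth divided by bytes read per token."""
    bytes_per_token = active_params_billion * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

# e.g. a dense 70B model with 8-bit weights on ~800 GB/s unified memory
print(f"{max_decode_tps(70, 1, 800):.1f} tokens/sec ceiling")
```

This is why halving weight precision or moving to a higher-bandwidth machine speeds up generation roughly proportionally, while adding memory capacity alone does not.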
Kirkster Avatar
6 weeks ago
They are so far behind LM Studio. And only support for one model?
Score: 6 Votes