Apple's New Transcription APIs Blow Past Whisper in Speed Tests - MacRumors
Apple's new speech-to-text transcription APIs in iOS 26 and macOS Tahoe are dramatically faster than rival tools, including OpenAI's Whisper, according to beta testing conducted by MacStories' John Voorhees.

Call recording and transcription in iOS 18.1

Apple uses its own native speech frameworks to power live transcription features in apps like Notes and Voice Memos, as well as phone call transcription in iOS 18.1. To improve efficiency in iOS 26 and macOS Tahoe, Apple has introduced a new SpeechAnalyzer class and an accompanying SpeechTranscriber module that handle these transcription tasks.
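The article doesn't show the new API in use, but as a rough illustration, transcribing an audio file with these classes looks something like the Swift sketch below. This is a sketch only: the preset name and method signatures are assumptions based on Apple's Speech framework naming conventions and may differ from the shipping iOS 26 / macOS Tahoe API.

```swift
import AVFoundation
import Foundation
import Speech

// Sketch only: exact signatures for the iOS 26 / macOS Tahoe API may differ.
func transcribe(fileAt url: URL) async throws -> String {
    // SpeechTranscriber is the module that produces text; SpeechAnalyzer
    // drives one or more modules over an audio source, fully on-device.
    // The preset name here is an assumption, not confirmed by the article.
    let transcriber = SpeechTranscriber(locale: Locale(identifier: "en_US"),
                                        preset: .offlineTranscription)
    let analyzer = SpeechAnalyzer(modules: [transcriber])

    // Feed the audio file through the analyzer, then finish the session.
    let audioFile = try AVAudioFile(forReading: url)
    if let lastSample = try await analyzer.analyzeSequence(from: audioFile) {
        try await analyzer.finalizeAndFinish(through: lastSample)
    }

    // Collect the finalized results into a single transcript string.
    var transcript = ""
    for try await result in transcriber.results where result.isFinal {
        transcript += String(result.text.characters)
    }
    return transcript
}
```

Yap, the command-line tool mentioned below, reportedly wraps this same SpeechAnalyzer/SpeechTranscriber pipeline.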

According to Voorhees, the new models processed a 34-minute, 7GB video file in just 45 seconds using a command line tool called Yap (developed by Voorhees' son, Finn). That's a full 55% faster than MacWhisper's Large V3 Turbo model, which took 1 minute and 41 seconds for the same file.

Other Whisper-based tools were slower still, with VidCap taking 1:55 and MacWhisper's Large V2 model requiring 3:55 to complete the same transcription task. Voorhees also reported no noticeable difference in transcription quality across the models.
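As a sanity check on the reported figures, the percentage savings work out as follows (all times taken from the article; the relative reduction in wall-clock time is simply the time saved divided by the baseline):

```swift
// Reported transcription times for the same 34-minute file, in seconds.
let appleSpeech    = 45.0   // SpeechAnalyzer/SpeechTranscriber via Yap
let whisperV3Turbo = 101.0  // MacWhisper Large V3 Turbo (1:41)
let vidCap         = 115.0  // VidCap (1:55)
let whisperV2      = 235.0  // MacWhisper Large V2 (3:55)

// Percent reduction in wall-clock time relative to each Whisper-based tool.
func savings(over baseline: Double) -> Double {
    (baseline - appleSpeech) / baseline * 100
}

print(savings(over: whisperV3Turbo))  // ≈ 55% — matches the article's figure
print(savings(over: vidCap))          // ≈ 61%
print(savings(over: whisperV2))       // ≈ 81%
```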

The speed advantage comes from Apple's on-device processing approach, which avoids the network overhead that typically slows cloud-based transcription services.

While the time difference might seem modest for individual files, Voorhees notes that the performance gain increases exponentially when processing multiple videos or longer content. For anyone generating subtitles or transcribing lectures regularly, the efficiency boost could save them hours.

The Speech framework components are available across iPhone, iPad, Mac, and Vision Pro platforms in the current beta releases. Voorhees expects Apple's transcription technology to eventually replace Whisper as the go-to solution for Mac transcription apps.

Top Rated Comments

Big_D Avatar
10 months ago
Impressive, if it is accurate. What the story doesn't mention is how accurate each of those transcriptions was? Were they all identical? Did one or other have more mistakes? What is the accuracy percentage for each one, and how badly wrong were those mistakes?

I'm not trying to defend ChatGPT, just the speed is a single metric, which isn't very useful if the results are garbage. If the Apple one is faster and more accurate, that is incredible, faster and as accurate, impressive, faster but full of errors, not really that useful.

Hopefully it is the first one: it is faster and more accurate.
Score: 26 Votes (Like | Disagree)
10 months ago

Impressive, if it is accurate. What the story doesn't mention is how accurate each of those transcriptions was? Were they all identical? Did one or other have more mistakes? What is the accuracy percentage for each one, and how badly wrong were those mistakes?

I'm not trying to defend ChatGPT, just the speed is a single metric, which isn't very useful if the results are garbage. If the Apple one is faster and more accurate, that is incredible, faster and as accurate, impressive, faster but full of errors, not really that useful.

Hopefully it is the first one: it is faster and more accurate.
Nothing scientific, but in the MacStories post: "What stood out above all else was Yap’s speed. By harnessing SpeechAnalyzer and SpeechTranscriber on-device, the command line tool tore through the 7GB video file a full 55% faster than MacWhisper’s Large V3 Turbo model, with no noticeable difference in transcription quality."

It would be good to see more formal comparisons with data you suggested. Also, it would be good to know what computer John was using for the test.
Score: 17 Votes (Like | Disagree)
Big_D Avatar
10 months ago

Impressive, if it is accurate.
OK, I read the original article, they all had similar problems with the podcast name, AppStories, writing it as two words instead of CamelCasing it, which is acceptable, and they all had similar problems with people's names. But the Apple tools weren't any less accurate, despite being much faster.
Score: 15 Votes (Like | Disagree)
10 months ago
Not mentioning accuracy at all implies it's not. Lots of models are faster than O3, but they're not better.

This is just silly getting sillier. Write something meaningful.

Whisper works in real time. Anything faster is irrelevant for iOS.

And saying it's because network overhead? When you can run OpenAI's whisper locally?....... mhm.

This is a blatant advertisement just regurgitating Apple's marketing bullets.
Score: 7 Votes (Like | Disagree)
klasma Avatar
10 months ago
Speech-to-text is a good use case for on-device processing, but yes, accuracy is an important question, not to mention (multi-)language support.
Score: 5 Votes (Like | Disagree)
Basic75 Avatar
10 months ago

While the time difference might seem modest for individual files, Voorhees notes that the performance gain increases exponentially when processing multiple videos or longer content.
That's not how it works. Recommend maths lesson.
Score: 4 Votes (Like | Disagree)