Thunderbolt 5 Could Double Bandwidth to 80 Gbps, Intel Leak Suggests
An Intel executive appears to have unintentionally leaked details of Thunderbolt 5, the next-generation hardware interface protocol that has yet to be officially announced by Intel.
The details appeared Sunday in a since-deleted tweeted photo from Gregory Bryant, EVP and GM of Intel's Client Computing Group, who was documenting his visit to Intel's R&D labs in Israel.
As outlined by AnandTech, the photo from a Thunderbolt-related tour revealed a poster on a lab wall with the words "80G PHY Technology," suggesting TB5 connectivity will support up to 80 Gb/s throughput, or double the bandwidth of existing Thunderbolt 4 and USB 4 connections.
The poster also includes the sentence "USB 80G is targeted to support the existing USB-C ecosystem," implying that Intel intends to run the extra bandwidth through the same USB-C interface connector.
A more technical reference in the poster appears to point to a new PAM-3 (pulse amplitude modulation) implementation. Rather than the two-level non-return-to-zero (NRZ) signaling seen in existing connectivity protocols, PAM-3 uses three signal levels, typically encoding three bits across every two symbols; this lets TB5 carry more data per clock than NRZ while avoiding some of the signal-integrity drawbacks of the four-level PAM-4 scheme.
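As a rough illustration of why the modulation scheme matters, the sketch below compares how fast the line would have to run (in symbols per second) to hit 80 Gb/s under each scheme. The 1.5 bits/symbol figure for PAM-3 assumes the common "3 bits per 2 symbols" packing; Intel has not published TB5's actual line rates, so these numbers are illustrative only.

```python
# Bits carried per line symbol under each modulation scheme.
# NRZ: 2 levels -> 1 bit/symbol. PAM-4: 4 levels -> 2 bits/symbol.
# PAM-3: 3 levels -> log2(3) ~= 1.58 bits/symbol in theory; practical
# encoders typically pack 3 bits into every 2 symbols (1.5 bits/symbol).
BITS_PER_SYMBOL = {
    "NRZ": 1.0,
    "PAM-3 (3 bits / 2 symbols)": 1.5,
    "PAM-4": 2.0,
}

def symbol_rate_gbaud(target_gbps: float, scheme: str) -> float:
    """Symbol (baud) rate needed to reach a target bit rate."""
    return target_gbps / BITS_PER_SYMBOL[scheme]

for scheme in BITS_PER_SYMBOL:
    print(f"{scheme}: {symbol_rate_gbaud(80, scheme):.1f} GBd for 80 Gb/s")
```

The takeaway: PAM-3 lets the link hit 80 Gb/s at a lower symbol rate than NRZ would require, without needing PAM-4's tighter voltage margins.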
Intel launched Thunderbolt 4 last year, and several TB4 accessories have been available for some time, but Apple's latest Macs and iPad Pro models still only support Thunderbolt 3. While TB4 offers more power and utility and is backwards compatible, it delivers no bandwidth increase over Thunderbolt 3's maximum of 40 Gb/s, so the step up to Thunderbolt 5 with its doubled maximum throughput could be significant.
The practical upshot of these innovations could mean, for example, TB5 supporting higher refresh rates for 4K and 8K monitors while providing backward compatibility with older Thunderbolt and USB connections.
Whether Thunderbolt 5 will be officially launched – and supported by future Apple devices – is unclear at this time, but the unintentional leak at least provides a peek at where Intel might take the interface protocol in the future.
Top Rated Comments
Problem is, I think the "universal" port is an idea whose time has come and gone. USB 2-3 was great in its time - cheap to implement, cheap cables and good for everything from a keyboard to a regular external SSD, while video interfaces remained totally separate. Now, they're trying for a single port that can be used for everything from a keyboard/mouse (which still only need USB 1 speeds) through a fast SSD (even so, USB 3.0 still covers the majority of applications) through to super-high-speed SSD arrays and 8/16k displays... which are only used by a tiny fraction of customers (and which will therefore always cost a fortune). Meanwhile, the need for boring old USB 2/3 devices isn't going away - because those protocols are more than good enough for things like keyboards, mice, MIDI, backup drives and the typical home/small office network.
The problem with catering for such a wide range of cases via a single universal port is that CPU and GPU resources don't grow on trees - CPUs supply a finite number of PCIe lanes, or equivalent, GPUs support a limited number of DisplayPort streams, so while they may be able to drive half a dozen USB2/3 ports they don't have the resources to drive more than a couple of high-bandwidth universal ports. Sure, manufacturers can add switching arrangements to share resources between ports but that adds expense and complexity (and obscure rules as to what permutations of devices you can plug in to the "universal" ports) so what we get is fewer holes in which to plug stuff.
The height of the stupidity is combining video, charging and data - forcing three independent sets of resources to compete for the same precious universal port... because, otherwise, when you put your laptop on the desk you might have to plug in two or three cables rather than one. Oh the humanity! (I mean, this was more of a point back in the good old days when you were talking about half a dozen honking great D-connectors that had to be screwed in place, but popping in 2-3 modern connectors takes seconds) - and using a display as a dock made sense back in ~2010 and the days of the 1440p Thunderbolt display, which left plenty of spare bandwidth to drive other ports. Nowadays, with 4k commonplace (so, 4x the bandwidth), 6k and 8k bubbling under along with HDR, higher refresh rates etc. it's the displays that are really driving the bandwidth requirement - so trying to share the same channel with other, unrelated data is just dumb.
The size of laptops is already fixed by the keyboard, display and battery - so there should be no shortage of space for ports (use mini-connectors if you really must... that's a separate issue from universal ports). The only demand for "universal" connectors is on mobiles where there really is limited space - but the future of mobile is probably totally wireless (because a mobile with a cable connected isn't... mobile).
...and while we don't want to be the "640K is enough for everybody" guy, the reality is that a huge number of use cases are more than adequately covered by USB3 and 4k displays, and will be for several years. SSD speed and capacity doesn't seem to be doubling every 18 months any more, and the resolution of the Mk1 Eyeball isn't increasing, either.
If the tech community gets confused, the general public must be completely lost. To be honest I'd rather they use speed numbers instead of generation numbers, so at least the USB host is labeled 5Gbps or 10Gbps etc., and if the device matches, that is what you get.
So much simpler if the PC says USB 10Gbps and the device you want to use says USB 10Gbps - instantly you know that's the fastest you'll get. Or if the device you're plugging in only does 5Gbps into a 10Gbps port, you know you will only get half the speed.
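The rule the comment above is proposing boils down to one line of logic: a link negotiates down to the slower end. A minimal sketch (the function name and figures are illustrative, not from any USB spec):

```python
def effective_link_gbps(host_gbps: float, device_gbps: float) -> float:
    """A negotiated link runs at the speed of the slower endpoint."""
    return min(host_gbps, device_gbps)

# Matching host and device: full speed.
print(effective_link_gbps(10, 10))  # 10 Gbps port + 10 Gbps device
# Slower device caps the link, regardless of the port's rating.
print(effective_link_gbps(10, 5))   # 5 Gbps device in a 10 Gbps port
```

Labeling ports and devices by raw speed would let buyers apply this rule at a glance, instead of decoding "Gen 2x2"-style branding.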
USB 2.0 was announced as being 480mbit, but then the companies lobbied to allow USB 1.1 devices (12mbit) to be marketed as USB 2.0, so the USB consortium created "USB 2.0 Full Speed", which was identical to USB 1.1, and "USB 2.0 Hi-Speed", which was the new 480mbit standard.
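The renaming described above can be laid out as a simple lookup table, which makes the trap obvious: two different marketing names map to the same 12mbit rate.

```python
# Marketing names vs. raw signaling rates in Mbit/s, per the renaming
# described in the comment above. Note "USB 2.0 Full Speed" is just
# the old USB 1.1 rate under a newer-sounding label.
USB_SPEC_SPEEDS_MBIT = {
    "USB 1.1 Full Speed": 12,
    "USB 2.0 Full Speed": 12,    # identical to USB 1.1
    "USB 2.0 Hi-Speed": 480,     # the actual new 480mbit standard
}

for name, mbit in USB_SPEC_SPEEDS_MBIT.items():
    print(f"{name}: {mbit} Mbit/s")
```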
USB3 20gbit was launched and is entirely different from, and electrically incompatible with, the USB4 20gbit standard. USB4 20gbit controllers are not mandated to work with USB3 20gbit. USB4 also has a 10gbit mode that is not backwards compatible with USB 3.0, for some reason.
* It would be valid for a company to make a 10gbit USB4 device that doesn't work with USB3 10gbit.
* It is likely that a USB3 20gbit peripheral will not work at 20gbit on a USB4 port, even though USB4 ports are mandated to have a 20gbit mode (40gbit isn't mandatory).
M.2 PCIe 3 x 4 at full speed in an external TB4 enclosure. Never mind M.2 PCIe 4 x 4.
HDMI 2.1 Alt mode. (48Gbps)
DisplayPort 2.0 Alt mode. (80Gbps)
An Apple 6K HDR 10bit display requires a lot of bandwidth. Combining 2x DisplayPort 1.4 together has been done, but HDMI 2.1 or DP 2.0 Alt mode over USB-C is a much better solution.
Once you factor in overhead, a future TB5 @ 80Gbps can't even handle DP 2.0 at full speed.
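Some rough arithmetic backs up the bandwidth concern. The sketch below computes the uncompressed pixel-data rate for a couple of panels; the 6K resolution matches Apple's Pro Display XDR (6016x3384), and the figures ignore blanking intervals and link-encoding overhead (e.g. DP 2.0's 128b/132b), so real requirements sit somewhat higher.

```python
def display_gbps(h: int, v: int, hz: float, bits_per_px: int) -> float:
    """Uncompressed pixel-data rate in Gb/s (ignores blanking and
    link-encoding overhead, so real links need a bit more)."""
    return h * v * hz * bits_per_px / 1e9

# 6K panel at 60 Hz, 10-bit RGB (30 bits/pixel):
print(f"6K60 10-bit: {display_gbps(6016, 3384, 60, 30):.1f} Gb/s")
# 8K panel at 60 Hz, 10-bit RGB:
print(f"8K60 10-bit: {display_gbps(7680, 4320, 60, 30):.1f} Gb/s")
```

Even 8K60 at 10-bit fits in raw 80 Gb/s for pixel data alone, but once overhead, blanking, higher refresh rates, and any tunneled data share the link, the headroom shrinks fast - which is the commenter's point.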
TB6 anyone? 160Gbps! Will it be enough for M.2 PCIe 5 x 4 that's coming 2nd half of 2022?