Nvidia: Will The GTX 1080 Ti Hold The High End?


Nvidia’s GTX 1080 Ti is the new graphics champ according to reviews.

Unclear whether the 1080 Ti will fend off the challenge of AMD’s forthcoming Vega.

Nvidia scores a true Open Compute Project design win.

Rethink Technology business briefs for March 9, 2017.

Reviews are in for the Nvidia GTX 1080 Ti

Source: Guru3D

In my March 1 Tech Brief I talked a little about the presentation by Nvidia’s (NASDAQ:NVDA) CEO Jen-Hsun Huang at the Game Developers Conference. It was mainly a preview of the GTX 1080 Ti high-end video card. I pointed out that the main purpose of the card seemed to be to head off the imminent threat of Advanced Micro Devices’ (NASDAQ:AMD) new Vega GPU architecture.

How successful Nvidia will be remains to be seen, but it appears that Nvidia has done what it can in the near term to boost performance of the Pascal architecture while keeping the price reasonable. The 1080 Ti delivers approximately the performance of the Titan X for a Founders Edition list of $700 vs. the $1,200 of the Titan X.

Reviews came out today, and I looked at those from Tom’s Hardware, Ars Technica, AnandTech and Guru3D. My general impression is that Nvidia more or less delivered on its promised 35% performance improvement over the 1080 at 4K resolution, currently the most demanding gaming workload. At lower resolutions, the improvement was smaller, typically 15-20%.

How did Nvidia manage to equal the performance of the Titan X? More to the point, does this imply lower margins on the 1080 Ti? I thought that Tom’s Hardware explained this very well.

The 1080 Ti uses the same GP102 chip as the Titan X. Nvidia increases the yield of the GP102 by building in more processing cores (called CUDA cores by Nvidia) than it enables. That allows it to selectively disable a block of defective cores while still maintaining spec performance. Out of 3,840 cores on the GP102 die, only 3,584 are enabled.
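The arithmetic behind that core count can be sketched as follows. This assumes, as in Nvidia’s Pascal architecture, that CUDA cores are grouped into Streaming Multiprocessors (SMs) of 128 cores each, and that binning works by disabling whole SMs rather than individual cores:

```python
# Pascal groups CUDA cores into Streaming Multiprocessors (SMs)
# of 128 cores each; a defective core costs its whole SM.
CORES_PER_SM = 128
TOTAL_CORES = 3840                      # full GP102 die

total_sms = TOTAL_CORES // CORES_PER_SM  # 30 SMs on the die
disabled_sms = 2                         # disabled in the 1080 Ti bin

enabled_cores = (total_sms - disabled_sms) * CORES_PER_SM
print(enabled_cores)  # 3584, matching the 1080 Ti spec
```

Any die with defects confined to at most two SMs can thus still ship as a 1080 Ti, which is what raises effective yield.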

For the new Ti, Nvidia goes a step further and disables one of 12 memory controllers and some other circuitry, whereas the Titan X needs to use all of them. This further improves yield and lowers cost of the Ti. It’s the single disabled memory controller that results in the odd 11 GB of memory for the card vs. 12 GB for Titan X.

To compensate for the decrease in memory controller count, Nvidia ups the clock rate on the memory interface and takes advantage of some slightly faster memory from Micron (NASDAQ:MU).
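To see how the faster memory makes up for the missing controller, the peak-bandwidth arithmetic can be sketched. This assumes Pascal’s 32-bit-per-controller GDDR5X interface and the published data rates of 10 Gb/s for the Titan X and 11 Gb/s for the 1080 Ti:

```python
def peak_bandwidth_gbs(controllers, bits_per_controller, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width (bits) x data rate / 8."""
    bus_width_bits = controllers * bits_per_controller
    return bus_width_bits * data_rate_gbps / 8

# Titan X: 12 controllers x 32 bits = 384-bit bus at 10 Gb/s
titan_x = peak_bandwidth_gbs(12, 32, 10)
# GTX 1080 Ti: 11 controllers x 32 bits = 352-bit bus at 11 Gb/s
gtx_1080_ti = peak_bandwidth_gbs(11, 32, 11)

print(titan_x, gtx_1080_ti)  # 480.0 vs 484.0 GB/s
```

The 10% faster memory slightly more than offsets the ~8% narrower bus, so the Ti’s peak bandwidth actually edges out the Titan X despite the disabled controller.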

So the lower price doesn’t necessarily mean lower margins for Nvidia on the Ti. In addition to the higher yield, the normal learning curve at Nvidia’s foundry, TSMC (NYSE:TSM), reduces cost per wafer. The net effect is that Nvidia has probably been able to hold the line on gross margin.

Can the 1080 Ti Defend Against Vega?

It will be interesting to see how the Ti matches up against Vega in performance. Rumors have generally put Vega performance somewhere between the 1080 and the Titan X. The most specific information I’ve seen is a leaked benchmark of the Radeon RX 580, published by VideoCardz, in the widely used Ashes of the Singularity DX12 benchmark:

It’s actually not easy to find a set of results for AotS at 1080p, since most reviewers focused on higher resolution, but Ars Technica did assemble a set of results:

Test results can vary a lot between reviewers based on the settings used. The leaked VideoCardz result indicates a Standard preset, while Ars Technica stated that it uses “high or ultra” settings for its tests. Given the performance margin for the Ti in these results, it appears that Nvidia’s position at the top of the high-performance graphics market remains secure.

This probably leaves AMD having to price Vega well below the Ti. That could be difficult, given Vega’s use of High Bandwidth Memory 2 (HBM2). HBM2 offers very high performance, but it’s inherently more expensive since the memory is mounted within the GPU package.

Source: wccftech

Nvidia offers HBM2 only on the GP100 version of Pascal, which is available only in the Tesla P100 accelerator for datacenter use, listing at $3,600. If Nvidia can offer better performance without HBM2, that is probably a competitive advantage.

Nvidia’s Open Compute Project Design Win

Microsoft’s (NASDAQ:MSFT) Project Olympus, which I profiled yesterday, also offered some very good news for Nvidia. Although Project Olympus mainly deals with generic open standards for detacenters such as rack and server specifications, Microsoft had a very specific proposal to use the Tesla P100 accelerator in a specific rack implementation.

What Microsoft proposed is being called the HGX-1 Hyperscale GPU Accelerator and it consists of a rack of 8 P100s:

The P100s are connected by Nvidia’s high-speed NVLink interface rather than conventional PCIe. Unlike other Olympus announcements, the HGX-1 was designed around a very specific GPU architecture available only from Nvidia, and really does constitute a “design win.”

According to Kushagra Vaid, GM, Azure Hardware Infrastructure,

The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.

