
If you're interested in investing in NVDA, check out my new subreddit /r/NVDA_Stock. It's shamelessly inspired by /r/AMD_Stock. You can read and discuss this article here: http://ift.tt/2id3ixa

NVIDIA's GPU revenue is secure for the next year; there is still no competition.

Many bears are concerned about competition from AMD's Vega in both gaming and deep learning. I took the time to learn about Vega and to study the state of the competition seriously. My conclusion is that the competition is non-existent.

Gaming

Polaris failed to make an impact, as we saw last quarter with NVIDIA's record revenues and AMD's very weak revenues. Coming a full year after the launch of Pascal, Vega is expected to land around GP104 performance. That level is probably too low: GP104 was top tier in 2016 but will be merely mid-tier in 2017. The 1080 Ti will replace the 1080 as NVIDIA's gaming flagship, leaving Vega as mid-tier competition at best. NVIDIA should also launch its Pascal refresh or Volta cards by fall of this year. If that happens, GP104 falls to mid-low tier, not mid-tier, and Vega will only compete at the budget level. As a result, NVIDIA will once again face no competition in most of the gaming market.

Deep learning

In deep learning, competition is again non-existent. Many people have pointed to Google also offering AMD cards in its cloud as a sign of validation for AMD as a deep learning competitor. But this is untrue: it is a media misreading, or worse, some kind of scheme to fool investors.

http://ift.tt/2fTySiH

Read the Google Cloud announcement directly instead of reading an ignorant regurgitation from financial news outlets:

Google Cloud will offer AMD FirePro S9300 x2 that supports powerful, GPU-based remote workstations. We'll also offer NVIDIA® Tesla® P100 and K80 GPUs for deep learning, AI and HPC applications that require powerful computation and analysis. GPUs are offered in passthrough mode to provide bare metal performance. Up to 8 GPU dies can be attached per VM instance including custom machine types.

As we can see, only NVIDIA cards are being offered for deep learning. AMD cards are only used for remote workstations, an old and uninteresting use case. AMD has no competitive deep learning offering as of yet. The FirePro cards are offered at extreme discounts to their NVIDIA equivalents, at prices only 1/2 to 1/4 as high, yet Quadro and Tesla still dominate the market over FirePro. In any case, NVIDIA is not interested in engaging in a race to the bottom with AMD, and the customer base for these products is generally not price sensitive.

AMD also announced a Vega-based "Instinct" deep learning platform. Though 5 years late, AMD is hoping to start catching up with the launch of Vega. There are two questions to consider in judging whether Instinct can succeed in deep learning.

Question 1 is the hardware: is it competitive with the P100? Question 2 is the software: can NVIDIA's CUDA moat be breached?

THE HARDWARE

Reference article: http://ift.tt/2hlrhxF

Hardware-wise, NVIDIA has a feature advantage. NVIDIA's cards are optimized for "deep learning operations" while AMD's are not. But let's be clear here: "deep learning operations" is marketing for INT8. Most deep learning today is done in FP16. The P100 is "10x" faster than Maxwell partly thanks to improved FP16 throughput, and it is today far ahead of any competition.

For the future, AMD is advertising better FP16 support, bringing it up to parity with NVIDIA in that regard. But NVIDIA is moving ahead with support for INT8 operations, an even faster way of doing deep learning inference than FP16. INT8 is, however, useful only in some use cases.
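To see why FP16 is usually "enough" for deep learning while still being a real trade-off, here is a minimal NumPy sketch of its limited precision. FP16 has a 10-bit mantissa, so increments smaller than about half its machine epsilon (~0.00098) simply vanish; the specific values below are illustrative, not tied to any GPU.

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable number.
eps16 = np.finfo(np.float16).eps  # ~0.000977 (10-bit mantissa)
eps32 = np.finfo(np.float32).eps  # ~1.19e-07 (23-bit mantissa)

# A small update below half the FP16 epsilon is lost entirely:
lost = np.float16(1.0) + np.float16(1e-4)
print(lost == np.float16(1.0))  # True: the increment vanished in FP16

# The same update survives in FP32:
kept = np.float32(1.0) + np.float32(1e-4)
print(kept == np.float32(1.0))  # False: FP32 resolves it
```

Neural network training tolerates this kind of rounding far better than, say, scientific simulation does, which is why GPU vendors can halve the precision and double the throughput.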

Deep learning research has found that trained deep neural networks can be applied to inference using reduced precision arithmetic, with minimal impact on accuracy. These instructions allow rapid computation on packed low-precision vectors. Tesla P4 is capable of a peak 21.8 INT8 TOP/s (Tera-Operations per second).
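The 21.8 TOP/s figure in the quote above can be sanity-checked from public specs. This sketch assumes the commonly cited 2560 CUDA cores and ~1063 MHz boost clock for the Tesla P4 (both are my assumptions, not from this article), and that Pascal's dp4a instruction performs a 4-element INT8 dot product with accumulate, i.e. 8 integer operations per core per clock:

```python
# Assumed Tesla P4 specs; the exact boost clock in particular is an estimate.
cuda_cores = 2560
boost_clock_hz = 1063e6
ops_per_core_per_clock = 8  # dp4a: 4 multiplies + 4 adds each clock

peak_int8_tops = cuda_cores * boost_clock_hz * ops_per_core_per_clock / 1e12
print(round(peak_int8_tops, 1))  # 21.8, matching NVIDIA's quoted figure
```

The same cores-times-clock-times-ops arithmetic is how all of these marketing TOP/s and TFLOPS numbers are produced; they are peaks, not sustained throughput.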

Investors should be careful to note that this is still only NVIDIA marketing; INT8's usefulness in the real world has not yet been proven. Hardware for INT8 did not exist prior to Pascal, and it is only as Pascal spreads through the market that real INT8 applications will be written. However, if NVIDIA is correct (and I think they are), INT8 represents yet another generationally important hardware feature that NVIDIA holds over AMD.
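To make the INT8 idea concrete, here is a minimal NumPy sketch of post-training quantization, the technique the NVIDIA quote alludes to: trained FP32 weights are mapped onto 8-bit integers with a per-tensor scale, and the rounding error is bounded by half a quantization step. This is a simplified symmetric scheme of my own for illustration, not NVIDIA's or AMD's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)  # stand-in for trained weights

# Symmetric quantization: map [-max|w|, +max|w|] onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize and measure the error the 8-bit representation introduced.
dequantized = q.astype(np.float32) * scale
max_err = np.abs(weights - dequantized).max()
print(max_err <= scale / 2 + 1e-6)  # True: error bounded by half a step
```

The integer tensor `q` is what instructions like dp4a then crunch at 4x the FP32 rate; whether that half-step error is tolerable depends on the network, which is exactly why INT8 is an inference-only, some-use-cases feature.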

But let's ignore this for now and talk about competition in the more traditional FP16.

AMD is offering Polaris-, Fiji-, and Vega-based solutions.

Anandtech politely notes that Polaris and Fiji are generally worse than the NVIDIA options:

The MI6 and MI8 will be going up against NVIDIA’s P4 and P40 accelerators. AMD’s cards don’t directly line-up against the NVIDIA cards in power consumption or expected performance, so the competitive landscape is somewhat broad, but those are the cards AMD will need to dethrone in the inference landscape.

I'll be more realistic and say right away that this simply means AMD is completely non-competitive with the Polaris and Fiji products, especially considering the relative price inelasticity of deep learning customers and NVIDIA's lead in CUDA (which I'll get to in the software section).

More interesting is the MI25, the upcoming Vega-based product. It has new architectural improvements of uncertain value; we simply don't know how it will compete against the P100.

As AMD’s sole training card, the MI25 will be going up against NVIDIA’s flagship accelerator, the Tesla P100. And as opposed to the inference cards, this has the potential to be a much closer fight. AMD has parity on packed instructions, with performance that on paper would exceed the P100. AMD has yet to fully unveil what Vega can do – we have no idea what “NCU” stands for or what AMD’s “high bandwidth cache and controller” are all about – but on the surface there’s the potential for the kind of knock-down fight at the top that makes for an interesting spectacle. And for AMD the stakes are huge; even if they can’t necessarily win, being able to price the MI25 even remotely close to the P100 would give them huge margins. More practically speaking, it means they could afford to significantly undercut NVIDIA in this space to capture market share while still making a tidy profit.

Anandtech is full of optimism for the MI25. I view it much more critically. Even if the MI25 is competitive at the hardware level, it is probably too late. As in the gaming market, it comes a full year after Pascal, and Volta-based deep learning chips will probably be announced before its release. This year's GTC (GPU Technology Conference), the annual NVIDIA-hosted conference, should see the announcement of the Volta-based V100 for deep learning; Pascal was announced at last year's GTC.

Based on this release cadence, AMD looks at least a year behind in technology even ignoring INT8.

THE SOFTWARE: THE CUDA MOAT

A lot of investors have heard of CUDA and how important it is in deep learning. But its importance doesn't seem to be sufficiently stressed, given how many bears are still out there talking about competition. CUDA has already won; there is no war, the war is over. NVIDIA's proprietary platform is as dominant as Windows is over Linux. CUDA is easier to use and has a vastly bigger community, with more resources, tooling, love, and support from everyone. There is basically no alternative. The idea of OpenCL beating CUDA in 2017 is as farcical as the idea of Linux beating Windows on the desktop in 2017. So long as people love CUDA, they will stick with NVIDIA.

AMD is going to attempt to breach the CUDA moat with the Boltzmann Initiative, a project to port CUDA code, poorly, to AMD-compatible OpenCL code. To me this sounds almost a bit delusional... but I'll talk about it anyway. Has any software platform ever won by creating an emulator / porting layer for another platform's apps? Doesn't such a thing just send the signal that the winning platform has, in fact, won?

The Boltzmann effort is of dubious quality, technology-wise; I have read in many places that its output is garbage. But even humoring it, we have to ask why anyone would want to port from a popular, polished environment like CUDA to a crappy one like OpenCL in the first place. The only answer is that they don't want NVIDIA to get a monopoly on deep learning (too late, NVDA already has one) and want to support AMD in becoming a competitor.

The only problem with such logic is that computer scientists are not fanboys. People generally don't spend weeks of work time helping out a company as an act of charity. They want to get stuff done: make software, get their AIs to make recommendations, translate languages, create machine music, make medical diagnoses, whatever it is they're doing.

Is it possible for AMD to make ground against the CUDA moat? I can't dismiss it out of hand: if AMD makes an unrealistic, absurd amount of investment in the effort, I can see them making progress. It's just extremely unlikely, and everyone knows it.

In conclusion, there are no concerns about competition vis-a-vis AMD. But that doesn't mean there isn't a bear case for the NVDA stock. Stick around /r/NVDA_Stock and check out some other posts.



Submitted January 10, 2017 at 02:41PM by Charuru http://ift.tt/2jfwHIy
