
NVIDIA's retrospective genius was to notice that putting lots of processors on a chip means the physical distance between any given chunk of memory and a processor is much smaller. Getting data to and from a processor quickly more than offsets the slower per-core processing speeds, so NVIDIA chips tend to outperform Intel chips even for sequential tasks (given optimised code).

Is unsupervised machine learning going to be the be-all and end-all of tomorrow's programming? Is OpenACC (etc.) going to reach a stage where no one needs to learn CUDA and everyone can just expect a compiler to GPU-optimise everything? Is Intel going to make their own incredible GPUs? Or are GPUs overegged, and maybe we're always going to use CPUs for bottleneck tasks?

Which company would you feel more comfortable buying and holding for 30 years?



Submitted July 30, 2017 at 01:26AM by moomin100 http://ift.tt/2tT9khb
