Friday, August 31, 2012

NVDA, INTC: Fun New Tricks with Virtualization, Parallelism

In case you missed it, there have been some interesting technical assessments put out about Intel (INTC) and Nvidia (NVDA) in the last 24 to 48 hours.

Patrick Moorhead, who worked for years at Advanced Micro Devices (AMD) and runs Moor Insights & Strategy, has been consulting with Nvidia of late.

Yesterday he offered up his assessment of Nvidia’s announcement of “VGX,” a technology that is meant to “virtualize” the use of “graphics processing units,” or “GPUs.” The technology is made possible by Nvidia’s introduction recently of a new line of GPUs, called “Kepler.”

Chip makers such as Intel (INTC) have made changes to microprocessors to support virtualization at a chip level, but Nvidia says its VGX is the first effort for graphics chips, specifically.

Moorhead writes that the chip technology will make it more feasible to have virtual desktops, where compute activity happens largely on the server while the client device is getting rapid updates of its user interface. He thinks this will be a significant advance in “cloud computing”:

Currently, GPUs cannot be shared in the cloud by different users. This has led to massive scalability issues for cloud gaming and virtualizing high end applications for designers and power users. NVIDIA’s Kepler is the world’s first GPU that can be virtualized in hardware, or shared, by many users in the cloud. Service providers can then install a few high-end NVIDIA Kepler-based VGX cards into servers and serve multiple users and application instances. VMware’s Hypervisors and Citrix XenDesktop will both be supporting NVIDIA’s VGX architecture.

See the link above to read the entire white paper.

In another note, Tom Halfhill, an editor with Microprocessor Report, wrote this week that Intel has made an interesting new offer to those trying to program across the multiple CPU “cores” of today’s microprocessors.

A project named “River Trail,” developed in Intel Labs, uses the Javascript programming language — actually, extensions to Javascript that Intel wrote — which masks all of the hardware details from the programmer.

It is a “refreshingly different approach” to parallel programming, writes Halfhill, that “makes parallel programming easy enough for almost anyone.”

“The technology can accelerate any task that benefits from data parallelism,” writes Halfhill. “The more inherent parallelism in a program, the greater the speedup.”

Halfhill even gives an example of his own tinkering:

Intel’s API draft specification includes some example method calls, but MPR judged them a little too simplistic to be fully illustrative, so we wrote a few of our own. The following example uses the ParallelArray map method to create a multiply-add (madd) function that operates on every array element. It assumes the program has already created a 10-element ParallelArray named pa1 containing the numbers 1 through 10:

var pa2 = pa1.map(function madd(x){
    return x*2+x;
});


The function returns the sum and stores it as the corresponding element in the new array, pa2. After this method call, the pa2 array contains the following elements:

3, 6, 9, 12, 15, 18, 21, 24, 27, 30

For the skeptics out there, Halfhill writes, “our example isn’t mere pseudocode; it’s real executable code. River Trail actually does hide the nitty-gritty details of vector arithmetic, multithreading, and multiprocessing from Javascript programmers.”
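Running River Trail itself requires Intel’s browser extension, but the semantics of Halfhill’s example are easy to check in plain sequential Javascript. The sketch below substitutes the standard Array.prototype.map for ParallelArray’s map; it applies the same madd function to every element, just serially rather than in parallel:

```javascript
// Sequential Javascript equivalent of the River Trail example.
// River Trail's ParallelArray.map may apply the function to elements
// in parallel; the built-in Array.prototype.map does the same work serially.
var pa1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

function madd(x) {
    // Multiply by 2, then add x back: equivalent to 3x for each element.
    return x * 2 + x;
}

var pa2 = pa1.map(madd);

console.log(pa2.join(", "));
// 3, 6, 9, 12, 15, 18, 21, 24, 27, 30
```

Because map has no cross-element dependencies, each call to madd is independent, which is exactly the data parallelism River Trail exploits when it dispatches the work across cores.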

Halfhill’s article is available only to subscribers to Microprocessor Report.
