Title: Microsoft releases C++ AMP (Accelerated Massive Parallelism), going all in on GPGPU.
Posted by: Edison  Time: 2011-6-16 15:24

Microsoft brings GPU computing to C++ with C++ AMP
by Cyril Kowaliski — 12:22 AM on June 16, 2011
Look out, OpenCL. Microsoft has set its sights on the democratization of development for heterogeneous systems, and it's pulled out the big guns. At the AMD Fusion Developer Summit today, Microsoft's Herb Sutter announced an extension to the C++ language designed to let programmers tap into any DirectCompute-capable graphics hardware for general-purpose tasks. Microsoft calls the new extension C++ Accelerated Massive Parallelism, or C++ AMP for short, and the company aims to make it an open spec that can be implemented on non-Microsoft platforms with non-Microsoft compilers.
Sutter presented C++ AMP as a way to cut through what he called the "jungle of heterogeneity." The PowerPoint slide below illustrates the full extent of that jungle as Sutter sees it, with processors in increasing order of specialization on the Y axis and memory systems in increasing order of non-uniformity and disjointedness mapped to the X axis. It's definitely not pretty:
C++ currently gives developers free rein in the bottom left corner of that jungle, Sutter said, but C++ AMP expands the roaming area dramatically. Not only that, but Microsoft hopes to support other specialized processors with future releases of C++ AMP, thus extending its domain over time.
So, what does C++ AMP entail? Sutter bills it as "minimal," and indeed, the list of additions is a short one:
The additions on which Sutter dwelt the most are array_view and restrict(). The former was described in the PowerPoint presentation as a "portable view that works like an N-dimensional 'iterator range'" and billed as a way to deal with memory that may not be uniform. The restrict() function was easier to grasp for a non-coder like me. If I understand correctly, it simply allows developers to create functions that execute on DirectCompute devices exclusively—all you have to do is toss "restrict(direct3d)" in there, like so:
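To make that concrete, here is a hypothetical vector-add written against the C++ AMP API as it was presented at the summit. The names (`array_view`, `parallel_for_each`, `index`, the `concurrency` namespace) follow Microsoft's published examples, and `restrict(direct3d)` matches the syntax shown in the announcement (the shipping spec later renamed it `restrict(amp)`). This is an illustrative sketch, not a verified build: it requires Microsoft's C++ AMP toolchain and `<amp.h>`, and won't compile elsewhere.

```cpp
#include <amp.h>   // C++ AMP header (Visual C++ / AMP-capable toolchains only)
using namespace concurrency;

// Add two arrays element-wise on any DirectCompute-capable GPU.
void AddArrays(int n, const int* pA, const int* pB, int* pSum) {
    // array_view wraps existing host memory as an N-dimensional range
    // that the runtime can copy to and from the accelerator as needed.
    array_view<const int, 1> a(n, pA);
    array_view<const int, 1> b(n, pB);
    array_view<int, 1> sum(n, pSum);

    // restrict(direct3d) marks the lambda as runnable on the GPU;
    // only the DirectCompute-friendly subset of C++ is allowed inside it.
    parallel_for_each(sum.extent, [=](index<1> i) restrict(direct3d) {
        sum[i] = a[i] + b[i];
    });
    sum.synchronize(); // copy results back into pSum on the host
}
```

Note how little changes relative to a plain C++ loop: the loop body stays ordinary C++, and the `restrict()` annotation plus `array_view` are what let the runtime move the work and the data onto the GPU.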
(The code on the left is regular C++, while the code on the right is C++ AMP. That man in the bottom right corner is Microsoft's David Moth, who got into the nitty-gritty details of C++ AMP during a technical session after Sutter's keynote.)
Now, there will be restrictions on what can go inside, er, restrict()-ed functions, since DirectCompute-capable GPUs can only support a subset of the C++ language. Nevertheless, programs written with C++ AMP will be compiled as single executables capable of making use of DirectCompute-capable hardware if it's there. (I'm guessing developers will be able to include fallback code paths so systems without DirectCompute GPUs can just use the CPU to do the work.)
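One plausible shape for such a fallback, sketched here as a guess rather than anything the article specifies, would be to enumerate the available accelerators at runtime and fall back to a plain CPU loop when no real GPU is found. The `accelerator` class and `get_all()`/`is_emulated` members follow the C++ AMP design Microsoft described; the surrounding dispatch logic is hypothetical, and like the example above this needs the AMP toolchain to compile.

```cpp
#include <amp.h>
using namespace concurrency;

// Returns true if a real (non-emulated) DirectCompute device exists
// and the GPU path was taken; false means the caller should use the CPU.
bool try_gpu_add(int n, const int* a, const int* b, int* out) {
    for (const accelerator& acc : accelerator::get_all()) {
        if (!acc.is_emulated) {       // skip software/reference devices
            // ... dispatch the restrict()-ed kernel on acc ...
            return true;
        }
    }
    return false;
}

void add(int n, const int* a, const int* b, int* out) {
    if (!try_gpu_add(n, a, b, out))
        for (int i = 0; i < n; ++i)   // plain C++ fallback path
            out[i] = a[i] + b[i];
}
```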
Microsoft said it will include C++ AMP support in the next version of Visual Studio. Of course, since C++ AMP is an open spec, Visual Studio won't be the only way to write and compile C++ AMP code. In fact, Sutter said Microsoft and AMD are already working together on non-Windows compilers. (Lest you think Nvidia is being left out, a post on Nvidia's blog says the firm "continues to work closely with Microsoft to help make C++ AMP a success.")

Posted by: Edison  Time: 2011-6-16 15:28
NVIDIA is very happy:
Microsoft today made an announcement that will accelerate the adoption of GPU computing (that is, the use of GPUs as a companion processor to CPUs). The software maker is working on a new programming language extension, called C++ AMP, with a focus on accelerating applications with GPUs.
With Microsoft now embracing GPUs in their future higher-level language and OS roadmap, it makes the decision to go with GPU computing even easier for those programmers still on the fence.
Microsoft's intent with C++ AMP is to expose C++ language capabilities to millions of Windows developers with the goal of enabling them to take advantage of GPUs. It promises to give millions of C++ developers the option of using Microsoft Visual Studio-based development tools to accelerate applications using the parallel processing power of GPUs. CUDA C and CUDA C++ will continue to be the preferred platform for Linux apps or demanding HPC (high-performance computing) applications that need to maximize performance.
In the spring of 2007, there was just one language (CUDA C) supporting NVIDIA GPUs. Fast forward to today, and our customers now have a much wider selection of languages and APIs for GPU computing – CUDA C, CUDA C++, CUDA Fortran, OpenCL, DirectCompute, and in the future Microsoft C++ AMP. There are even Java and Python wrappers, as well as .NET integration, available that sit on top of CUDA C or CUDA C++.
If you are a Windows C++ developer looking at GPU computing for the first time, there is no need to wait. Visual C++ developers today use our high-performance CUDA C++ with the Thrust C++ template library to easily accelerate applications by parallelizing as little as 1 to 5 percent of their application code and mapping it to NVIDIA GPUs. CUDA C++ comes with a rich ecosystem of profilers, debuggers, and libraries like cuFFT, cuBLAS, LAPACK, cuSPARSE, cuRAND, etc. NVIDIA's Parallel Nsight™ for Visual Studio 2010 provides these Windows developers with a familiar development environment, combined with excellent GPU profiling and debugging tools.
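For readers unfamiliar with the Thrust library mentioned above, a minimal SAXPY (`y = a*x + y`) might look like the sketch below. The Thrust container and algorithm names (`device_vector`, `transform`) are real, but the example assumes the CUDA toolkit and `nvcc`, so it is illustrative rather than something that will build on an ordinary compiler.

```cpp
#include <thrust/device_vector.h>
#include <thrust/transform.h>

// Functor computing a*x + y; __host__ __device__ lets nvcc
// run it on both the CPU and the GPU.
struct saxpy {
    float a;
    __host__ __device__ float operator()(float x, float y) const {
        return a * x + y;
    }
};

int main() {
    // device_vector allocates and initializes storage on the GPU.
    thrust::device_vector<float> x(1 << 20, 1.0f);
    thrust::device_vector<float> y(1 << 20, 2.0f);

    // One call parallelizes the whole element-wise operation.
    saxpy op;
    op.a = 2.0f;
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), op);
    return 0;
}
```

This is the style NVIDIA is pointing at: the parallel kernel is the small functor, and the rest of the program stays ordinary STL-flavored C++.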
The takeaway from Microsoft's announcement today is that the GPU computing space has reached maturity, with the company that produces the world's most widely used commercial C++ developer tools, Microsoft, completely embracing GPU computing in its core tools. Rest assured, NVIDIA continues to work closely with Microsoft to help make C++ AMP a success, and we will continue to deliver the best GPU developer tools and training.
Stay tuned for more details.