
CUDA Command-Line Compiler Documentation

  • Resource size: 758
  • Upload date: 2021-08-05
  • Downloads: 0
  • Views: 23
  • Points required: 1
  • Tags: CUDA

Resource Description

The CUDA Toolkit targets a class of applications whose control part runs as a process on a general-purpose computer (Linux, Windows), and which use one or more NVIDIA GPUs as coprocessors for accelerating SIMD parallel jobs. Such jobs are "self-contained", in the sense that they can be executed and completed by a batch of GPU threads entirely without intervention by the "host" process, thereby gaining optimal benefit from the parallel graphics hardware. Dispatching GPU jobs by the host process is supported by the CUDA Toolkit in the form of remote procedure calling. The GPU code is implemented as a collection of functions in a language that is essentially C, but with some annotations for distinguishing them from the host code, plus annotations for distinguishing the different types of data memory that exist on the GPU. Such functions may have parameters, and they can be "called" using a syntax that is very similar to regular C function calling, but slightly extended for being able to specify the matrix of GPU threads that must execute the "called" function. During its lifetime, the host process may dispatch many parallel GPU tasks. See Figure 1.
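The kernel-and-launch mechanism described above can be illustrated with a short example. This is a minimal sketch, not taken from the document itself: the kernel name `scale`, the array size, and the launch configuration are illustrative choices, while the `__global__` annotation, the device-memory allocation, and the `<<<blocks, threads>>>` launch syntax are the standard CUDA constructs the description refers to.

```cuda
#include <cuda_runtime.h>

// Device code: an ordinary-looking C function, marked __global__ so the
// compiler treats it as GPU code that the host process can dispatch.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
    if (i < n)
        data[i] *= factor;  // each GPU thread handles one element
}

int main() {
    const int n = 1024;
    float *d_data = nullptr;

    // Allocate GPU ("device") memory, which is distinct from host memory.
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // "Call" the GPU function: the <<<blocks, threadsPerBlock>>> extension
    // specifies the matrix of GPU threads that executes the kernel. The
    // dispatched job then runs to completion without host intervention.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);

    cudaDeviceSynchronize();  // wait for the dispatched GPU job to finish
    cudaFree(d_data);
    return 0;
}
```

Compiling such a file with nvcc separates the device functions from the host code and generates the runtime calls that implement the remote-procedure-style dispatch described above.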