3.1 Preparing the Runtime Environment
Install pip:

# curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
# python get-pip.py
# pip --version
pip 18.1 from /usr/lib/python2.7/site-packages/pip (python 2.7)
# python --version
Python 2.7.5
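
If you want a quick check that this pip can actually reach PyPI and install packages, something like the following should work (the package chosen here is only an example):

# pip install --upgrade setuptools
# pip show setuptools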

Install GCC and G++:

# yum install gcc gcc-c++
# gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
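
As a quick sanity check of the toolchain (the file names and paths below are arbitrary), you can compile and run a minimal C program:

# echo 'int main(void){ return 0; }' > /tmp/test.c
# gcc /tmp/test.c -o /tmp/test && /tmp/test && echo "gcc toolchain OK"
gcc toolchain OK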

Install the development packages the Python build needs (they provide the headers for the zlib, ssl and sqlite3 modules):

# yum -y install zlib*
# yum install openssl-devel -y
# yum install sqlite* -y
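
To confirm the development headers actually landed (the exact package names below assume the wildcards above pulled in the -devel variants), query rpm:

# rpm -q zlib-devel openssl-devel sqlite-devel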

Upgrade the default CentOS Python 2.7.5 to 3.6.5.

Download the Python source tarball:

# wget -c https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tgz

Unpack the tarball:

# tar -zvxf Python-3.6.5.tgz

Enter the source directory and configure:

# cd Python-3.6.5/
# ./configure --with-ssl

Compile and install:

# make && make install

Check where the newly installed python3 files are located:

# ll /usr/local/bin/python*
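
Assuming the default install prefix of /usr/local was used, the new interpreter and its optional modules can be checked as follows; the imports verify that the ssl, zlib and sqlite3 support built against the -devel packages installed earlier:

# /usr/local/bin/python3 --version
# /usr/local/bin/python3 -c "import ssl, zlib, sqlite3; print(ssl.OPENSSL_VERSION)"
# /usr/local/bin/pip3 --version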


3.2 Installing CUDA
Upgrade the kernel:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum -y --enablerepo=elrepo-kernel install kernel-ml.x86_64 kernel-ml-devel.x86_64

List the kernel menu entries in their default boot order:

# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.20.0-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-862.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c4581dac5b734c11a1881c8eb10d6b09) 7 (Core)

Make the new kernel (entry 0) the default:

# vim /etc/default/grub
Change GRUB_DEFAULT=saved to GRUB_DEFAULT=0

Run grub2-mkconfig to regenerate the GRUB configuration, then reboot:

# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

Verify the kernel version after the reboot:

# uname -r
4.20.0-1.el7.elrepo.x86_64
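
If you prefer not to edit /etc/default/grub by hand, the same default-kernel change can normally be made non-interactively on CentOS 7 with grub2-set-default (index 0 is the new elrepo kernel listed first above):

# grub2-set-default 0
# grub2-mkconfig -o /boot/grub2/grub.cfg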

The CUDA Toolkit can be installed in two ways:

Package installation (RPM and Deb packages)
Runfile installation

Here the Runfile method is used.

Installer download: https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda_10.0.130_410.48_linux

Pick the installer that matches your operating system, copy the download link, and fetch it directly with wget -c:

# wget -c https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda_10.0.130_410.48_linux
# chmod +x cuda_10.0.130_410.48_linux
# ./cuda_10.0.130_410.48_linux

Do you accept the previously read EULA?
accept/decline/quit: accept

Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 410.48?
(y)es/(n)o/(q)uit: y

Install the CUDA 10.0 Toolkit?
(y)es/(n)o/(q)uit: y

Enter Toolkit Location
[ default is /usr/local/cuda-10.0 ]:

Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y

Install the CUDA 10.0 Samples?
(y)es/(n)o/(q)uit: y

Enter CUDA Samples Location
[ default is /root ]:
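
For scripted installs, the same runfile can also be run unattended; the flags below are the silent-mode options documented for the CUDA 10.x runfile installer, but confirm them with ./cuda_10.0.130_410.48_linux --help before relying on them:

# ./cuda_10.0.130_410.48_linux --silent --driver --toolkit --samples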

Configure the CUDA environment variables:

# vim /etc/profile
# CUDA
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# source /etc/profile

Check the compiler version:

# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
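
Besides nvcc, the driver installed alongside the toolkit can be checked with nvidia-smi, which ships with the 410.48 driver and should list the GPU:

# nvidia-smi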

Use the bundled deviceQuery sample to verify that CUDA works:

# cd /root/NVIDIA_CUDA-10.0_Samples/1_Utilities/deviceQuery
# make
"/usr/local/cuda-10.0"/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o deviceQuery.o -c deviceQuery.cpp
"/usr/local/cuda-10.0"/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o deviceQuery deviceQuery.o
mkdir -p ../../bin/x86_64/linux/release
cp deviceQuery ../../bin/x86_64/linux/release

# cd ../../bin/x86_64/linux/release/
# ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Quadro P2000"
CUDA Driver Version / Runtime Version 10.0 / 10.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 5059 MBytes (5304745984 bytes)
( 8) Multiprocessors, (128) CUDA Cores/MP: 1024 CUDA Cores
GPU Max Clock rate: 1481 MHz (1.48 GHz)
Memory Clock rate: 3504 Mhz
Memory Bus Width: 160-bit
L2 Cache Size: 1310720 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 11
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS

Result = PASS with no errors during the run means the test passed.
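
For a second confirmation beyond deviceQuery, the samples tree installed above also contains bandwidthTest, which exercises host/device memory transfers (the path assumes the default samples location of /root chosen during installation):

# cd /root/NVIDIA_CUDA-10.0_Samples/1_Utilities/bandwidthTest
# make
# ./bandwidthTest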
3.3 Installing cuDNN