This repository was archived by the owner on Dec 10, 2022. It is now read-only.

Commit 562d24e

Author: Holst, Henrik (committed)
Builds and runs on Ubuntu 16.04, fingers crossed.
1 parent 3d3df67 commit 562d24e

File tree: 7 files changed, +151/−42 lines


README.md

Lines changed: 122 additions & 24 deletions
@@ -3,10 +3,6 @@
 </div>
 -----------------
 
-| **`Linux CPU`** | **`Linux GPU PIP`** | **`Mac OS CPU`** | **`Android`** |
-|-------------------|----------------------|------------------|----------------|
-| [![Build Status](http://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-cpu)](http://ci.tensorflow.org/job/tensorflow-master-cpu) | [![Build Status](http://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-gpu_pip)](http://ci.tensorflow.org/job/tensorflow-master-gpu_pip) | [![Build Status](http://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-mac)](http://ci.tensorflow.org/job/tensorflow-master-mac) | [![Build Status](http://ci.tensorflow.org/buildStatus/icon?job=tensorflow-master-android)](http://ci.tensorflow.org/job/tensorflow-master-android) |
-
 **TensorFlow** is an open source software library for numerical computation using
 data flow graphs. Nodes in the graph represent mathematical operations, while
 the graph edges represent the multidimensional data arrays (tensors) that flow
@@ -18,38 +14,140 @@ organization for the purposes of conducting machine learning and deep neural
 networks research. The system is general enough to be applicable in a wide
 variety of other domains, as well.
 
-**If you'd like to contribute to tensorflow, be sure to review the [contribution
-guidelines](CONTRIBUTING.md).**
+## Building TensorFlow on Ubuntu 16.04 LTS
+
+Install the NVIDIA CUDA toolkit from the Universe repository:
+
+    $ sudo apt install nvidia-cuda-toolkit
+
+Install NVIDIA cuDNN 4.0:
 
-**We use [github issues](https://github.com/tensorflow/tensorflow/issues) for
-tracking requests and bugs, but please see
-[Community](tensorflow/g3doc/resources/index.md#community) for general questions
-and discussion.**
+    $ sudo tar -C /usr/local -xzf /mnt/dl/mirror/cudnn-7.0-linux-x64-v4.0-prod.tgz
+
+Check out TensorFlow 0.8.0, modified to work on Ubuntu 16.04 LTS
+and with older NVIDIA GPUs with compute capability 3.0:
+
+    $ git clone git@github.com:hholst/ea-tensorflow.git
+
+Create an Anaconda environment containing the required build tools:
+```
+$ conda create -n ea-tensorflow python=3.5 swig numpy
+Using Anaconda Cloud api site https://api.anaconda.org
+Fetching package metadata: ....
+Solving package specifications: .........
 
-## Installation
-*See [Download and Setup](tensorflow/g3doc/get_started/os_setup.md) for instructions on how to install our release binaries or how to build from source.*
+Package plan for installation in environment /home/hholst/anaconda3/envs/ea-tensorflow:
 
-People who are a little bit adventurous can also try our nightly binaries:
+The following NEW packages will be INSTALLED:
 
-* Linux CPU only: [Python 2](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.8.0-cp27-none-linux_x86_64.whl) ([build history](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/)) / [Python 3](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl) ([build history](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/))
-* Linux GPU: [Python 2](http://ci.tensorflow.org/view/Nightly/job/nigntly-matrix-linux-gpu/TF_BUILD_CONTAINER_TYPE=GPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-working/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.8.0-cp27-none-linux_x86_64.whl) ([build history](http://ci.tensorflow.org/view/Nightly/job/nigntly-matrix-linux-gpu/TF_BUILD_CONTAINER_TYPE=GPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=gpu-working/)) / [Python 3](http://ci.tensorflow.org/view/Nightly/job/nigntly-matrix-linux-gpu/TF_BUILD_CONTAINER_TYPE=GPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-working/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.8.0-cp34-cp34m-linux_x86_64.whl) ([build history](http://ci.tensorflow.org/view/Nightly/job/nigntly-matrix-linux-gpu/TF_BUILD_CONTAINER_TYPE=GPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=gpu-working/))
-* Mac CPU only: [Python 2](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.8.0-py2-none-any.whl) ([build history](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=mac-slave/)) / [Python 3](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-0.8.0-py3-none-any.whl) ([build history](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_CONTAINER_TYPE=CPU,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=mac-slave/))
-* [Android](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/lastSuccessfulBuild/artifact/bazel-out/local_linux/bin/tensorflow/examples/android/tensorflow_demo.apk) ([build history](http://ci.tensorflow.org/view/Nightly/job/nightly-matrix-android/TF_BUILD_CONTAINER_TYPE=ANDROID,TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=NO_PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=android-slave/))
+    mkl:        11.3.1-0
+    numpy:      1.11.0-py35_0
+    openssl:    1.0.2h-0
+    pcre:       8.31-0
+    pip:        8.1.1-py35_1
+    python:     3.5.1-0
+    readline:   6.2-2
+    setuptools: 20.7.0-py35_0
+    sqlite:     3.9.2-0
+    swig:       3.0.8-1
+    tk:         8.5.18-0
+    wheel:      0.29.0-py35_0
+    xz:         5.0.5-1
+    zlib:       1.2.8-0
+
+Proceed ([y]/n)?
+
+Linking packages ...
+[ COMPLETE ]|############################################################################| 100%
+#
+# To activate this environment, use:
+# $ source activate ea-tensorflow
+#
+# To deactivate this environment, use:
+# $ source deactivate
+#
+```
+
+You also need to activate the new Anaconda environment:
+
+    $ source activate ea-tensorflow
+
+*Optional:* You can re-run the `./configure` script.
+Note that we override the compute capability list
+to include support for CUDA compute capability 3.0:
+
+```
+$ ./configure
+Please specify the location of python. [Default is /home/hholst/anaconda3/envs/ea-tensorflow/bin/python]:
+Do you wish to build TensorFlow with GPU support? [Y/n]
+GPU support will be enabled for TensorFlow
+Please specify which gcc nvcc should use as the host compiler. [Default is /usr/bin/gcc]:
+Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]:
+Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr]:
+Please specify the Cudnn version you want to use. [Leave empty to use system default]:
+Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
+Please specify a list of comma-separated Cuda compute capabilities you want to build with.
+You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
+Please note that each additional compute capability significantly increases your build time and binary size.
+[Default is: "3.5,5.2"]: 3.0,3.5,5.2
+Setting up Cuda include
+Setting up Cuda lib
+Setting up Cuda bin
+find: File system loop detected; ‘/usr/bin/X11’ is part of the same file system loop as ‘/usr/bin’.
+find: File system loop detected; ‘/usr/bin/X11’ is part of the same file system loop as ‘/usr/bin’.
+Configuration finished
+```
+
+### Compiling
+
+Build TensorFlow:
+
+    $ bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
+
+Run a test:
+
+    $ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
+
+Build the pip package:
+
+    $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
+
+Install the pip package:
+
+    $ pip install /tmp/tensorflow_pkg/tensorflow-0.8.0-py3-none-any.whl
+
+### *Try your first TensorFlow program*
+
+NOTE: Make sure you're not standing inside the `ea-tensorflow` git repository
+when you start python. Doing so might cause problems with the CUDA runtime library.
 
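The NOTE above comes down to Python's module search order; here is a minimal, self-contained demonstration of the shadowing effect (the package name `shadowpkg` is made up for illustration and is not TensorFlow-specific):

```python
# Why starting python inside the source tree is risky: the working directory
# is searched before site-packages, so a local `tensorflow/` source directory
# shadows the installed, CUDA-enabled package. Demonstrated here with a
# dummy package created in a temp directory.
import os
import sys
import tempfile

workdir = tempfile.mkdtemp()
os.mkdir(os.path.join(workdir, "shadowpkg"))
open(os.path.join(workdir, "shadowpkg", "__init__.py"), "w").close()

sys.path.insert(0, workdir)  # CPython does this implicitly for the CWD
import shadowpkg

# The copy under workdir wins over anything installed elsewhere.
print(shadowpkg.__file__.startswith(workdir))
```

Starting python from any directory outside the checkout avoids the problem.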
-#### *Try your first TensorFlow program*
 ```python
 $ python
-
+Python 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:16:01)
+[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
+Type "help", "copyright", "credits" or "license" for more information.
 >>> import tensorflow as tf
->>> hello = tf.constant('Hello, TensorFlow!')
->>> sess = tf.Session()
->>> sess.run(hello)
-Hello, TensorFlow!
+I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
+I tensorflow/stream_executor/dso_loader.cc:99] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH:
+I tensorflow/stream_executor/cuda/cuda_dnn.cc:1562] Unable to load cuDNN DSO
+I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
+I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
+I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
+>>> sess = tf.InteractiveSession()
+I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
+name: GeForce GTX 980
+major: 5 minor: 2 memoryClockRate (GHz) 1.2155
+pciBusID 0000:03:00.0
+Total memory: 4.00GiB
+Free memory: 3.24GiB
+I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
+I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
+I tensorflow/core/common_runtime/gpu/gpu_device.cc:755] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:03:00.0)
 >>> a = tf.constant(10)
 >>> b = tf.constant(32)
 >>> sess.run(a+b)
 42
->>>
+>>>
 ```
 
 ## For more information

configure

Lines changed: 6 additions & 6 deletions
@@ -40,11 +40,11 @@ echo "$SWIG_PATH" > tensorflow/tools/swig/swig_path
 ## Set up Cuda-related environment settings
 
 while [ "$TF_NEED_CUDA" == "" ]; do
-  read -p "Do you wish to build TensorFlow with GPU support? [y/N] " INPUT
+  read -p "Do you wish to build TensorFlow with GPU support? [Y/n] " INPUT
   case $INPUT in
     [Yy]* ) echo "GPU support will be enabled for TensorFlow"; TF_NEED_CUDA=1;;
     [Nn]* ) echo "No GPU support will be enabled for TensorFlow"; TF_NEED_CUDA=0;;
-    "" ) echo "No GPU support will be enabled for TensorFlow"; TF_NEED_CUDA=0;;
+    "" ) echo "GPU support will be enabled for TensorFlow"; TF_NEED_CUDA=1;;
     * ) echo "Invalid selection: " $INPUT;;
   esac
 done
@@ -86,7 +86,7 @@ while true; do
 
   fromuser=""
   if [ -z "$CUDA_TOOLKIT_PATH" ]; then
-    default_cuda_path=/usr/local/cuda
+    default_cuda_path=/usr
     read -p "Please specify the location where CUDA $TF_CUDA_VERSION toolkit is installed. Refer to README.md for more details. [Default is $default_cuda_path]: " CUDA_TOOLKIT_PATH
     fromuser="1"
     if [ -z "$CUDA_TOOLKIT_PATH" ]; then
@@ -98,10 +98,10 @@ while true; do
   else
     TF_CUDA_EXT=".$TF_CUDA_VERSION"
   fi
-  if [ -e $CUDA_TOOLKIT_PATH/lib64/libcudart.so$TF_CUDA_EXT ]; then
+  if [ -e $CUDA_TOOLKIT_PATH/lib/x86_64-linux-gnu/libcudart.so$TF_CUDA_EXT ]; then
     break
   fi
-  echo "Invalid path to CUDA $TF_CUDA_VERSION toolkit. $CUDA_TOOLKIT_PATH/lib64/libcudart.so$TF_CUDA_EXT cannot be found"
+  echo "Invalid path to CUDA $TF_CUDA_VERSION toolkit. $CUDA_TOOLKIT_PATH/lib/x86_64-linux-gnu/libcudart.so$TF_CUDA_EXT cannot be found"
   if [ -z "$fromuser" ]; then
     exit 1
   fi
@@ -119,7 +119,7 @@ while true; do
 
   fromuser=""
   if [ -z "$CUDNN_INSTALL_PATH" ]; then
-    default_cudnn_path=${CUDA_TOOLKIT_PATH}
+    default_cudnn_path=/usr/local/cuda
     read -p "Please specify the location where cuDNN $TF_CUDNN_VERSION library is installed. Refer to README.md for more details. [Default is $default_cudnn_path]: " CUDNN_INSTALL_PATH
     fromuser="1"
     if [ -z "$CUDNN_INSTALL_PATH" ]; then
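The net effect of the first hunk is that pressing Enter now enables GPU support, matching the new `[Y/n]` prompt. A reduced sketch of that case logic (the helper function name is illustrative, not part of the configure script):

```shell
# Reduced sketch of the prompt logic above: an empty reply now defaults to
# GPU support. The function name is illustrative.
decide_gpu_support() {
  INPUT="$1"
  case $INPUT in
    [Yy]* ) echo 1;;
    [Nn]* ) echo 0;;
    ""    ) echo 1;;   # changed default: empty answer now means yes
    *     ) echo invalid;;
  esac
}

decide_gpu_support ""    # prints 1
decide_gpu_support "no"  # prints 0
```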

tensorflow/core/common_runtime/gpu/gpu_device.cc

Lines changed: 2 additions & 2 deletions
@@ -689,8 +689,8 @@ struct CudaVersion {
 
 // "configure" uses the specific name to substitute the following string.
 // If you change it, make sure you modify "configure" as well.
-std::vector<CudaVersion> supported_cuda_compute_capabilities = {
-    CudaVersion("3.5"), CudaVersion("5.2")};
+// Unofficial setting. DO NOT SUBMIT!!!
+std::vector<CudaVersion> supported_cuda_compute_capabilities = {
+    CudaVersion("3.0"), CudaVersion("3.5"), CudaVersion("5.2")};
 
 }  // namespace

tensorflow/tensorflow.bzl

Lines changed: 2 additions & 2 deletions
@@ -262,7 +262,7 @@ def tf_cuda_cc_tests(tests, deps, tags=[], size="medium"):
 def tf_gpu_kernel_library(srcs, copts=[], cuda_copts=[], deps=[], hdrs=[],
                           **kwargs):
   cuda_copts = ["-x", "cuda", "-DGOOGLE_CUDA=1",
-                "-nvcc_options=relaxed-constexpr", "-nvcc_options=ftz=true",
+                "-nvcc_options=expt-relaxed-constexpr", "-nvcc_options=ftz=true",
                 "--gcudacc_flag=-ftz=true"] + cuda_copts
   native.cc_library(
     srcs = srcs,
@@ -475,7 +475,7 @@ def tf_custom_op_library(name, srcs=[], gpu_srcs=[], deps=[]):
   if gpu_srcs:
     basename = name.split(".")[0]
     cuda_copts = ["-x", "cuda", "-DGOOGLE_CUDA=1",
-                  "-nvcc_options=relaxed-constexpr", "-nvcc_options=ftz=true",
+                  "-nvcc_options=expt-relaxed-constexpr", "-nvcc_options=ftz=true",
                   "--gcudacc_flag=-ftz=true"]
 
   native.cc_library(

third_party/gpus/crosstool/CROSSTOOL

Lines changed: 7 additions & 0 deletions
@@ -50,6 +50,9 @@ toolchain {
   # Use "-std=c++11" for nvcc. For consistency, force both the host compiler
   # and the device compiler to use "-std=c++11".
   cxx_flag: "-std=c++11"
+  # EA: gcc 5.3 tweaks.
+  cxx_flag: "-D_MWAITXINTRIN_H_INCLUDED"
+  cxx_flag: "-D_FORCE_INLINES=1"
   linker_flag: "-lstdc++"
   linker_flag: "-B/usr/bin/"
 
@@ -105,6 +108,10 @@ toolchain {
   compiler_flag: "-Wunused-but-set-parameter"
   # But disable some that are problematic.
   compiler_flag: "-Wno-free-nonheap-object"  # has false positives
+  # EA: Disable some pedantic warnings.
+  compiler_flag: "-Wno-unused-function"
+  compiler_flag: "-Wno-unused-local-typedefs"
+  compiler_flag: "-Wno-unused-variable"
 
   # Keep stack frames for debugging, even in opt mode.
   compiler_flag: "-fno-omit-frame-pointer"

third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc

Lines changed: 2 additions & 1 deletion
@@ -250,7 +250,8 @@ def InvokeNvcc(argv, log=False):
 
   # "configure" uses the specific format to substitute the following string.
   # If you change it, make sure you modify "configure" as well.
-  supported_cuda_compute_capabilities = [ "3.5", "5.2" ]
+  # Unofficial setting. DO NOT SUBMIT!!!
+  supported_cuda_compute_capabilities = ["3.0", "3.5", "5.2"]
   nvccopts = ''
   for capability in supported_cuda_compute_capabilities:
     capability = capability.replace('.', '')
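Downstream of this list, the wrapper expands each capability into nvcc code-generation options. A rough sketch of that expansion (assumption: the exact `-gencode` spelling follows nvcc's documented syntax and may differ from the wrapper's actual quoting):

```python
# Expand compute capabilities into per-architecture nvcc flags.
# The "-gencode" spelling here is an assumption based on nvcc's syntax,
# not copied from crosstool_wrapper_driver_is_not_gcc.
def gencode_flags(capabilities):
    flags = []
    for capability in capabilities:
        cap = capability.replace('.', '')  # "3.0" -> "30"
        flags.append('-gencode=arch=compute_%s,code=sm_%s' % (cap, cap))
    return flags

print(gencode_flags(['3.0', '3.5', '5.2']))
```

With the patched list, three `-gencode` entries are emitted instead of two, which is why the build time and binary size grow with each added capability.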

third_party/gpus/cuda/cuda_config.sh

Lines changed: 10 additions & 7 deletions
@@ -101,7 +101,10 @@ function CheckAndLinkToSrcTree {
   # the same. This could happen if invoked from the source tree by accident.
   if [ ! $(readlink -f $PWD) == $(readlink -f $OUTPUTDIR/third_party/gpus/cuda) ]; then
     mkdir -p $(dirname $OUTPUTDIR/third_party/gpus/cuda/$FILE)
-    ln -sf $PWD/$FILE $OUTPUTDIR/third_party/gpus/cuda/$FILE
+    # EA: don't overwrite locally changed files (i.e., host_config.h)
+    if [ ! -f $OUTPUTDIR/third_party/gpus/cuda/$FILE -o -L $OUTPUTDIR/third_party/gpus/cuda/$FILE ]; then
+      ln -sf $PWD/$FILE $OUTPUTDIR/third_party/gpus/cuda/$FILE
+    fi
   fi
 }
 
@@ -120,8 +123,8 @@ fi
 # Actually configure the source tree for TensorFlow's canonical view of Cuda
 # libraries.
 
-if test ! -e ${CUDA_TOOLKIT_PATH}/lib64/libcudart.so$TF_CUDA_VERSION; then
-  CudaError "cannot find ${CUDA_TOOLKIT_PATH}/lib64/libcudart.so$TF_CUDA_VERSION"
+if test ! -e ${CUDA_TOOLKIT_PATH}/lib/x86_64-linux-gnu/libcudart.so$TF_CUDA_VERSION; then
+  CudaError "cannot find ${CUDA_TOOLKIT_PATH}/lib/x86_64-linux-gnu/libcudart.so$TF_CUDA_VERSION"
 fi
 
 if test ! -d ${CUDNN_INSTALL_PATH}; then
@@ -175,12 +178,12 @@ function LinkAllFiles {
 mkdir -p $OUTPUTDIR/third_party/gpus/cuda
 echo "Setting up Cuda include"
 LinkAllFiles ${CUDA_TOOLKIT_PATH}/include $OUTPUTDIR/third_party/gpus/cuda/include || exit -1
-echo "Setting up Cuda lib64"
-LinkAllFiles ${CUDA_TOOLKIT_PATH}/lib64 $OUTPUTDIR/third_party/gpus/cuda/lib64 || exit -1
+echo "Setting up Cuda lib"
+LinkAllFiles ${CUDA_TOOLKIT_PATH}/lib/x86_64-linux-gnu $OUTPUTDIR/third_party/gpus/cuda/lib64 || exit -1
 echo "Setting up Cuda bin"
 LinkAllFiles ${CUDA_TOOLKIT_PATH}/bin $OUTPUTDIR/third_party/gpus/cuda/bin || exit -1
-echo "Setting up Cuda nvvm"
-LinkAllFiles ${CUDA_TOOLKIT_PATH}/nvvm $OUTPUTDIR/third_party/gpus/cuda/nvvm || exit -1
+#echo "Setting up Cuda nvvm"
+#LinkAllFiles ${CUDA_TOOLKIT_PATH}/nvvm $OUTPUTDIR/third_party/gpus/cuda/nvvm || exit -1
 
 # Set up symbolic link for cudnn
 ln -sf $CUDNN_HEADER_PATH/cudnn.h $OUTPUTDIR/third_party/gpus/cuda/include/cudnn.h || exit -1
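The guard added to `CheckAndLinkToSrcTree` can be exercised in isolation: the link is only (re)created when the destination is absent or is already a symlink, so a locally edited regular file survives reconfiguration. A minimal sketch (the helper name is mine, not from the script):

```shell
# Sketch of the symlink guard above: re-link only when the destination is
# missing or already a symlink, so locally edited regular files
# (e.g. host_config.h) are left alone. Helper name is illustrative.
link_unless_modified() {
  src="$1"; dst="$2"
  if [ ! -f "$dst" -o -L "$dst" ]; then
    ln -sf "$src" "$dst"
  fi
}

tmp=$(mktemp -d)
echo original > "$tmp/src"
echo "local edit" > "$tmp/dst"      # a regular, locally modified file
link_unless_modified "$tmp/src" "$tmp/dst"
cat "$tmp/dst"                      # still "local edit", not a symlink
```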
