TensorFlow, CUDA and cuDNN Compatibility

Purpose

TensorFlow is an open-source library that helps you build machine learning and deep learning models. It is widely used among researchers and organizations to build smart applications. I personally use TensorFlow and Keras (built on top of TensorFlow, it offers ease of development) to develop deep learning models. You can follow my research work here.

Every deep learning model you develop requires a GPU-enabled environment to perform well. And to run models on the GPU, we need the CUDA toolkit and the cuDNN library installed on our system.

Compatible Versions

As of today, there are many versions of TensorFlow, CUDA and cuDNN available, which can confuse developers and beginners trying to select the right compatible combination for their development environment.

The following table lists the compatible versions of CUDA and cuDNN for each TensorFlow release. This list was compiled with reference to the build configurations shared here.

TensorFlow GPU    Python       CUDA    cuDNN
2.4.0             3.6 - 3.8    11.0    8.0
2.3.0             3.5 - 3.8    10.1    7.6
2.2.0             3.5 - 3.8    10.1    7.6
2.1.0             3.5 - 3.7    10.1    7.6
2.0.0             3.5 - 3.7    10.0    7.4
1.15.0            3.5 - 3.7    10.0    7.4
1.14.0            3.5 - 3.7    10.0    7.4
1.13.0            3.5 - 3.7    10.0    7.4
1.12.0            3.5 - 3.6    9.0     7.2
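If you want to look these combinations up programmatically, the table above can be encoded as a small Python dictionary. This is just an illustrative sketch (the `COMPATIBILITY` dict and `required_versions` helper are my own names, not part of TensorFlow); the data is copied from the table above.

```python
# Compatible Python/CUDA/cuDNN versions per tensorflow-gpu release,
# transcribed from the compatibility table above (illustrative only).
COMPATIBILITY = {
    "2.4.0":  {"python": "3.6 - 3.8", "cuda": "11.0", "cudnn": "8.0"},
    "2.3.0":  {"python": "3.5 - 3.8", "cuda": "10.1", "cudnn": "7.6"},
    "2.2.0":  {"python": "3.5 - 3.8", "cuda": "10.1", "cudnn": "7.6"},
    "2.1.0":  {"python": "3.5 - 3.7", "cuda": "10.1", "cudnn": "7.6"},
    "2.0.0":  {"python": "3.5 - 3.7", "cuda": "10.0", "cudnn": "7.4"},
    "1.15.0": {"python": "3.5 - 3.7", "cuda": "10.0", "cudnn": "7.4"},
    "1.14.0": {"python": "3.5 - 3.7", "cuda": "10.0", "cudnn": "7.4"},
    "1.13.0": {"python": "3.5 - 3.7", "cuda": "10.0", "cudnn": "7.4"},
    "1.12.0": {"python": "3.5 - 3.6", "cuda": "9.0",  "cudnn": "7.2"},
}

def required_versions(tf_version):
    """Return the Python/CUDA/cuDNN versions matching a tensorflow-gpu release."""
    try:
        return COMPATIBILITY[tf_version]
    except KeyError:
        raise ValueError(f"No entry for tensorflow-gpu {tf_version}")

print(required_versions("2.3.0"))
# {'python': '3.5 - 3.8', 'cuda': '10.1', 'cudnn': '7.6'}
```

A lookup like this is handy when scripting environment setup for several projects at once.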

My Configuration

My development environment, with an NVIDIA RTX 2070 GPU, holds the following configurations side by side. Different tensorflow-gpu versions can be installed by creating separate Anaconda environments (I prefer miniconda, which ships with a minimal set of packages). And you can follow the normal installation process to install the different versions of CUDA and cuDNN alongside each other.

TensorFlow GPU    Python    CUDA    cuDNN
2.4.0             3.8.3     11.0    8.0.0.3
2.3.0             3.8.3     10.1    7.6.0.0
1.13.0            3.6.12    10.0    7.4.0.1
1.12.0            3.6.12    9.0     7.1.0.4

Note: You may also need to check your GPU's compatibility before selecting a CUDA version. You can check that here.

Check Installed Version

After you finish the installation, you can verify the tensorflow-gpu library as follows:

tensorflow-gpu 2.x.x:

import tensorflow as tf
tf.test.gpu_device_name()

tensorflow-gpu 1.x.x:

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

The commands above should list your available GPU devices.

PS: If you get an error like "unable to load cuDNN dynamic library", it means you have installed a version of cuDNN that is incompatible with your CUDA version.

To check the installed version of CUDA and cuDNN proceed as follows:

  • You can open "NVIDIA GPU Computing Toolkit\CUDA\vX.X\version.txt" to check the CUDA version.
  • You can open "NVIDIA GPU Computing Toolkit\CUDA\vX.X\include\cudnn.h" and search for "#define CUDNN_VERSION" to check the cuDNN version inside CUDA. (Note: In cuDNN 8.0, the version details moved to cudnn_version.h.)
