We recommend that you use the latest version of the Poplar SDK.
On some systems you must explicitly enable the Poplar SDK before you can use PyTorch or TensorFlow for the IPU, or the Poplar Graph Programming Framework. On other systems, the SDK is enabled as part of the login process.
Table 3.1 shows whether you have to explicitly enable the SDK and where to find it.
Table 3.1 Systems that need the Poplar SDK to be enabled and the SDK location, where [poplar_ver] is the software version number of the Poplar SDK and [build] is the build information.

Gcore Cloud: The SDK is enabled as part of the login process.
To enable the Poplar SDK:
For SDK versions 2.6 and later, there is a single enable script that determines whether you are using Bash or Zsh and runs the appropriate scripts to enable both Poplar and PopTorch/PopART. Source the single script as follows:
$ source [path_to_SDK]/enable
where [path_to_SDK] is the location of the Poplar SDK on your system.
For SDK versions earlier than 2.6, there are only Bash scripts available and you have to source the Poplar and PopART scripts separately.
You only have to source the PopART enable script if you are using PopTorch or PopART.
where [path_to_SDK] is the location of the Poplar SDK on your system. [os_ver] is the version of Ubuntu on your system, [poplar_ver] is the software version number of the Poplar SDK and [build] is the build information.
You must source the Poplar enable script for each new shell. To make this permanent, you can add the source command to your .bashrc (or .zshrc for SDK versions 2.6 and later).
If you attempt to run any Poplar software without having first sourced this
script, you will get an error from the C++ compiler similar to the following (the exact message will depend on your code):
fatal error: 'poplar/Engine.hpp' file not found
If you try to source the script after it has already been sourced, then you will get an error similar to:
ERROR: A Poplar SDK has already been enabled.
Path of enabled Poplar SDK: /opt/gc/poplar_sdk-ubuntu_20_04-3.2.0-7cd8ade3cd/poplar-ubuntu_20_04-3.2.0-7cd8ade3cd
If this is not wanted then please start a new shell.
You can verify that Poplar has been successfully set up by running:
$ popc --version
This will display the version of the installed software.
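If you want this check in a setup script, here is a minimal sketch; the fallback message is our own, not output from the SDK:

```shell
# Verify that the Poplar compiler is on the PATH; otherwise print a hint.
# (The hint text below is illustrative, not an SDK message.)
if command -v popc >/dev/null 2>&1; then
    popc --version
else
    echo "popc not found - source the Poplar SDK enable script first"
fi
```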
3.2. Create and enable a Python virtual environment
It is good practice to work in a different Python virtual environment for each framework or even for each application. This section describes how you create and activate a Python virtual environment.
You must activate the Python virtual environment before you can start using it.
The virtual environment must be created for the Python version you will be using. This cannot be changed after creation. Create a new Python virtual environment with:
$ virtualenv -p python3 [venv_name]
where [venv_name] is the location of the virtual environment.
Make sure that the version of Python that is installed is compatible with the version of the Poplar SDK that you are using. See Supported tools in the Poplar SDK release notes for information about the supported operating systems and versions of tools.
To start using a virtual environment, activate it with:
$ source [venv_name]/bin/activate
where [venv_name] is the location of the virtual environment.
Now all subsequent installations will be local to that virtual environment.
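As a minimal sketch of the whole sequence, using the standard-library venv module (equivalent to virtualenv -p python3 for this purpose) and an illustrative environment name, scratch_venv:

```shell
# Create a virtual environment, activate it, confirm that "python" now
# resolves inside it, then deactivate. "scratch_venv" is an example name.
python3 -m venv scratch_venv
source scratch_venv/bin/activate
python -c "import sys; print(sys.prefix)"   # prints a path ending in scratch_venv
deactivate
```

After activation, any pip install places packages inside scratch_venv only, keeping each framework's dependencies isolated.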
3.3. Install the TensorFlow 2 wheels and validate
In order to run applications in TensorFlow 2 on an IPU, you have to install Python wheel files for the Graphcore ports of TensorFlow 2 and Keras and also for TensorFlow 2 add-ons.
There are two TensorFlow 2 wheels included in the Poplar SDK, one for AMD processors and one for Intel processors. Check which processor is used on your system by running:
$ lscpu | grep name
The wheel file has a name of the form:
tensorflow-[ver]+[platform].whl
where [ver] is the version of the Graphcore port of TensorFlow 2 and [platform] defines the server details (processor and operating system) for the TensorFlow build. An example of the TensorFlow 2 wheel file for an AMD processor for Poplar SDK 3.0 is:
POPLAR_SDK_ENABLED is the location of the Poplar SDK defined when the SDK was enabled. The ? ensures that an error message is displayed if Poplar has not been enabled.
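The ? referred to here is the shell's standard ${VAR:?} parameter expansion, which aborts a command if the variable is unset. A self-contained demonstration, using an illustrative variable DEMO_SDK rather than the real POPLAR_SDK_ENABLED:

```shell
# ${VAR:?message} aborts the command with an error when VAR is unset or
# empty, which stops an install command from running against a missing SDK.
unset DEMO_SDK
( echo "${DEMO_SDK:?SDK not enabled}" ) 2>/dev/null || echo "command aborted: DEMO_SDK unset"

DEMO_SDK=/opt/poplar_sdk   # illustrative path, not a real install location
echo "${DEMO_SDK:?}/tensorflow_example.whl"
```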
To confirm that TensorFlow 2 has been installed, you can use:
$ pip list | grep tensorflow
For the example wheel file, the output will be:
tensorflow 2.6.3
You can also confirm that the correct tensorflow wheel has been installed by attempting to import tensorflow.python.ipu in Python, for example:
$ python3 -c "from tensorflow.python import ipu"
If you get an “illegal instruction” or similar error, then you may have
installed the wrong version of TensorFlow for your processor.
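To make this check scriptable, a hedged sketch (both status messages are our own):

```shell
# The exit status of the import distinguishes the Graphcore port (which
# provides tensorflow.python.ipu) from a stock wheel or no TensorFlow at all.
if python3 -c "from tensorflow.python import ipu" 2>/dev/null; then
    echo "IPU TensorFlow detected"
else
    echo "IPU TensorFlow not found (wrong wheel or not installed)"
fi
```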
In the TensorFlow 2.6 release, Keras was moved into a separate pip package. In the Poplar SDK 2.6 release, which includes the Graphcore distribution of TensorFlow 2.6, there is a Graphcore distribution of Keras which includes IPU-specific extensions.
The Keras wheel must be installed after the TensorFlow wheel, but before the TensorFlow Addons wheel.
The Keras wheel file has a name of the form:
keras-[tf-ver]*.whl
where [tf-ver] is the TensorFlow 2 version. An example of the Keras wheel file for TensorFlow 2.6 for the IPU for Poplar SDK 3.0 is:
POPLAR_SDK_ENABLED is the location of the Poplar SDK defined when the SDK was enabled. The ? ensures that an error message is displayed if Poplar has not been enabled.
You can confirm that the keras package has been installed by importing it in Python, for example:
$ python3 -c "import keras"
If you get an “illegal instruction” or similar error, then try to install the Keras wheel again.
IPU TensorFlow Addons is a collection of add-ons created for the Graphcore port of TensorFlow. These include layers and optimizers for Keras, as well as legacy TensorFlow layers. For more information, refer to the section on IPU TensorFlow Addons in the TensorFlow 2 user guide.
The IPU TensorFlow 2 Addons wheel file is only available in Poplar SDK 2.4 and later.
There are separate Addons wheel files for TensorFlow 1 and TensorFlow 2.
The wheel file has a name of the form:
ipu_tensorflow_addons-[ver]+X+X+X-X-X-X.whl
where [ver] is the version of the Graphcore port of TensorFlow 2. An example of the Addons wheel file for TensorFlow 2.6 for the IPU for Poplar SDK 3.0 is:
POPLAR_SDK_ENABLED is the location of the Poplar SDK defined when the SDK was enabled. The ? ensures that an error message is displayed if Poplar has not been enabled.
You can confirm that the Addons module has been installed correctly by importing it in Python. For example:
$ python3 -c "import ipu_tensorflow_addons"
where [base_dir] is a location of your choice. This will install the contents of the examples repository under ~/[base_dir]/examples. The tutorials are in ~/[base_dir]/examples/tutorials.
To simplify running the tutorials, we define the environment variable POPLAR_TUTORIALS_DIR, which points to the location of the cloned tutorials.
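For example, you could define it as follows, keeping the [base_dir] placeholder used above:

```shell
# POPLAR_TUTORIALS_DIR points at the tutorials inside the cloned examples
# repository. [base_dir] is a placeholder; substitute your actual directory.
export POPLAR_TUTORIALS_DIR=~/[base_dir]/examples/tutorials
echo "$POPLAR_TUTORIALS_DIR"
```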
If the code has run successfully, you should see an output similar to that in Listing 3.1.
Listing 3.1 Example of output for TensorFlow 2 application.
2022-01-10 12:20:09.746730: I tensorflow/compiler/plugin/poplar/driver/poplar_platform.cc:44] Poplar version: 2.3.0 (d9e4130346) Poplar package: 88f485e763
2022-01-10 12:20:11.195463: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-01-10 12:20:11.435997: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2022-01-10 12:20:11.436536: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2245780000 Hz
2022-01-10 12:20:12.922858: I tensorflow/compiler/plugin/poplar/driver/poplar_executor.cc:1714] Device /device:IPU:0 attached to IPU: 0
2022-01-10 12:20:13.609918: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/4
Compiling module a_inference_train_function_513__XlaMustCompile_true_config_proto___n_007_n_0...02_001_000__executor_type____.380:
[##################################################] 100% Compilation Finished [Elapsed: 00:00:15.4]
2022-01-10 12:20:29.517778: I tensorflow/compiler/jit/xla_compilation_cache.cc:347] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
2000/2000 [==============================] - 18s 9ms/step - loss: 0.9729
Epoch 2/4
2000/2000 [==============================] - 1s 533us/step - loss: 0.3478
Epoch 3/4
2000/2000 [==============================] - 1s 610us/step - loss: 0.2876
Epoch 4/4
2000/2000 [==============================] - 1s 595us/step - loss: 0.2545
You have run an application that demonstrates how to use the IPU to train a simple 2-layer, fully-connected model on the MNIST dataset using TensorFlow 2.