@@ -160,6 +160,8 @@ conda activate my_dev_environement
Just run from the root:

```
+ pip install tensorflow==2.2.0rc4
+ # you can use "pip install tensorflow-cpu==2.2.0rc4" too if you're not testing on GPU.
pip install -e ./
```

@@ -247,7 +249,7 @@ If you need a custom C++/Cuda op for your test, compile your ops with

```bash
python configure.py
- pip install tensorflow==2.1.0 -e ./ -r tools/install_deps/pytest.txt
+ pip install tensorflow==2.2.0rc4 -e ./ -r tools/install_deps/pytest.txt
bash tools/install_so_files.sh # Linux/macOS/WSL2
sh tools/install_so_files.sh # PowerShell
```
@@ -275,14 +277,14 @@ docker run --runtime=nvidia --rm -it -v ${PWD}:/addons -w /addons tensorflow/ten

Configure:
```
- python3 -m pip install tensorflow==2.1.0
+ python3 -m pip install tensorflow==2.2.0rc4
python3 ./configure.py # Links project with TensorFlow dependency
```

Install in editable mode
```
python3 -m pip install -e .
- python3 -m pip install pytest pytest-xdist
+ python3 -m pip install -r tools/install_deps/pytest.txt
```

Compile the custom ops

@@ -295,6 +297,10 @@ Run selected tests:
python3 -m pytest path/to/file/or/directory/to/test
```

+ Run the GPU-only tests with `pytest -m needs_gpu ./tensorflow_addons`.
+ Run the CPU-only tests with `pytest -m 'not needs_gpu' ./tensorflow_addons`.
+
+
#### Testing with Bazel

Testing with Bazel is still supported but not recommended unless you have prior experience
@@ -309,9 +315,9 @@ quickly, as Bazel has great support for caching and distributed testing.
To test with Bazel:

```
- python3 -m pip install tensorflow==2.1.0
+ python3 -m pip install tensorflow==2.2.0rc4
python3 configure.py
- python3 -m pip install pytest
+ python3 -m pip install -r tools/install_deps/pytest.txt
bazel test -c opt -k \
--test_timeout 300,450,1200,3600 \
--test_output=all \
@@ -409,22 +415,46 @@ on Tensors, `if` or `for` for example. Or with `TensorArray`. In short, when the
conversion to graph is not trivial. No need to use it on all
your tests. Having fast tests is important.

- #### cpu_and_gpu
+ #### Selecting the devices to run the test
+
+ By default, each test is wrapped behind the scenes with a
+ ```python
+ with tf.device("CPU:0"):
+     ...
+ ```

- Will run your test function twice, once with `with tf.device("/device:CPU:0")` and
- once with `with tf.device("/device:GPU:0")`. If a GPU is not present on the system,
- the second test is skipped. To use it:
+ This is automatic, but it's also possible to ask the test runner to run
+ the test twice, once on CPU and once on GPU, or only on GPU. Here is how to do it:

```python
- @pytest.mark.usefixtures("cpu_and_gpu")
+ import pytest
+
+ @pytest.mark.with_device(["cpu", "gpu"])
def test_something():
-     assert ... == ...
+     # The code here will run twice, once on GPU and once on CPU.
+     ...
+
+ @pytest.mark.with_device(["gpu"])
+ def test_something_else():
+     # This test will only run on GPU.
+     # The test runner wraps it with tf.device("GPU:0") behind the scenes.
+     ...
+
+ @pytest.mark.with_device(["cpu"])
+ def test_something_more():
+     # Don't do this, it's the default behavior.
+     ...
```

+ Note that if a GPU is not detected on the system, the test will be
+ skipped rather than marked as failed. Only the first GPU of the system is used,
+ even when running pytest in multiprocessing mode (`-n` argument).
+ Beware of out-of-memory CUDA errors if the number of pytest workers is too high.
+
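+ For example, assuming pytest-xdist is installed, you can cap the number of workers
+ with something like `python3 -m pytest -n 2 ./tensorflow_addons` (the value 2 is only
+ illustrative, pick it based on your GPU memory).
+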
##### When to use it?

- When you test custom CUDA code. We can expect existing TensorFlow ops to behave the same
- on CPU and GPU.
+ When you test custom CUDA code or float16 ops.
+ We can expect other existing TensorFlow ops to behave the same on CPU and GPU.
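+
+ As a minimal sketch (the op and tolerance below are only illustrative, not an
+ actual Addons op), such a test could look like:
+ ```python
+ import numpy as np
+ import pytest
+ import tensorflow as tf
+
+ @pytest.mark.with_device(["cpu", "gpu"])
+ def test_square_float16():
+     # Runs once on CPU:0 and once on GPU:0 (the GPU run is skipped if no GPU is detected).
+     x = tf.constant([1.0, 2.0], dtype=tf.float16)
+     result = tf.math.square(x)  # stand-in for the op under test
+     np.testing.assert_allclose(result.numpy(), [1.0, 4.0], rtol=1e-3)
+ ```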

#### data_format
