I am trying to get the LLVM IR generated by the XLA compiler in TensorFlow. I know that the generated IR is contained in the llvm_module object, which is converted to a string with the utility function llvm_ir::DumpModuleToString(*llvm_module) in the Compile() function in //tensorflow/compiler/xla/service/cpu/cpu_compiler.cc.
I have been trying to log it with VLOG(2) from tensorflow/core/platform/logging.h, but nothing is printed, even though VLOG(2) statements from other files do show up in my Python run:
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
2017-03-10 22:36:43.226843: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 8 visible devices
2017-03-10 22:36:43.227931: I tensorflow/compiler/xla/service/service.cc:183] XLA service 0x2821510 executing computations on platform Host. Devices:
2017-03-10 22:36:43.227951: I tensorflow/compiler/xla/service/service.cc:191] StreamExecutor device (0): <undefined>, <undefined>
b'Hello, TensorFlow!'
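The statement I added looks roughly like this (a sketch of my local change; the surrounding code in Compile() is omitted):

// Added logging statement (sketch): dump the whole LLVM module as text at verbose level 2.
VLOG(2) << "LLVM IR:\n" << llvm_ir::DumpModuleToString(*llvm_module);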
[FYI I can't leave comments, since I just joined and apparently don't have a reputation yet.]
First off, make sure to read https://www.tensorflow.org/performance/xla/jit, including the starred blue boxes. In particular, note that turning on XLA for your whole session only performs JIT compilation for GPU, not CPU, at the moment.
Now let's assume you've got everything set up correctly. The program in your example won't be compiled with XLA for two reasons: the session never turns XLA on, and it consists of a single string constant, so there is no computation for XLA to compile anyway.
In the comments you mentioned running mnist_softmax, presumably following the instructions at the link above. If you're indeed compiling and running on CPU, the only remaining issue is the use of VLOG(2): VLOG output is off by default and only appears if you explicitly turn verbose logging on (via command-line flags, or in open-source builds typically via the TF_CPP_MIN_VLOG_LEVEL environment variable).
So try replacing your VLOG(2) with LOG(INFO), and you should see the IR dump in your logs.
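Applied to the statement sketched in the question, the change would look roughly like this (the exact message text is up to you):

// Before: silent unless verbose logging is raised to level 2 or higher.
VLOG(2) << "LLVM IR:\n" << llvm_ir::DumpModuleToString(*llvm_module);

// After: always printed at INFO severity, no extra flags needed.
LOG(INFO) << "LLVM IR:\n" << llvm_ir::DumpModuleToString(*llvm_module);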