Welcome to ShenZhenJia Knowledge Sharing Community for programmer and developer-Open, Learning and Share
I am fairly new to TensorFlow and am currently looking into custom op development. I have already read the official tutorial, but I feel that a lot happens behind the scenes, and I do not always want to put my custom ops in the user_ops directory.

As such, I took up the word2vec example,

which uses a custom "Skipgram" op whose registration is defined here:
/word2vec_ops.cc
and whose kernel implementation is here:
/word2vec_kernels.cc

Looking at the BUILD file, I tried to build the individual targets:

1) bazel build -c opt tensorflow/models/embedding:word2vec_ops
This generates a bunch of object files, as expected.

2) bazel build -c opt tensorflow/models/embedding:word2vec_kernels
Same for this.

3) bazel build -c opt tensorflow/models/embedding:gen_word2vec

This last build uses a custom rule, namely tf_op_gen_wrapper_py: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorflow.bzl#L197-L231

It is interesting to note that this target depends only on the op registration, not on the kernel itself.

After all of the above, if I build the py_binary itself using

bazel build -c opt tensorflow/models/embedding:word2vec

it works fine, but I fail to see where and how the kernel C++ code is linked.

Additionally, I would also like to understand the tf_op_gen_wrapper_py rule and the whole compilation/linking procedure that goes on behind the scenes for op registration.

Thanks.

1 Answer

When adding a new kind of operation to TensorFlow, there are two main steps:

  1. Registering the "op", which involves defining an interface for the operation, and

  2. Registering one or more "kernels", which involves defining implementation(s) for the operation, perhaps with specialized implementations for different data types, or device types (like CPU or GPU).

Both steps involve writing C++ code. Registering an op uses the REGISTER_OP() macro, and registering a kernel uses the REGISTER_KERNEL_BUILDER() macro. These macros create static initializers that run when the module containing them is loaded. There are two main mechanisms for op and kernel registration:
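To make the static-initializer mechanism concrete, here is a toy Python analogy (not TensorFlow code; the registry names below are invented for this sketch). Each C++ macro expands to a static object whose constructor adds an entry to a global registry as a side effect of the module being loaded; in Python, module-level statements play the same role.

```python
# Toy analogy for REGISTER_OP() / REGISTER_KERNEL_BUILDER(): each macro
# creates a static object whose constructor inserts an entry into a
# global registry when the library containing it is loaded.
# All names here (OP_REGISTRY, OpRegistrar, ...) are invented for this sketch.

OP_REGISTRY = {}       # op name -> interface definition
KERNEL_REGISTRY = {}   # (op name, device) -> implementation

class OpRegistrar:
    """Mimics the static initializer created by REGISTER_OP()."""
    def __init__(self, name, inputs, outputs):
        OP_REGISTRY[name] = {"inputs": inputs, "outputs": outputs}

class KernelRegistrar:
    """Mimics the static initializer created by REGISTER_KERNEL_BUILDER()."""
    def __init__(self, name, device, fn):
        KERNEL_REGISTRY[(name, device)] = fn

# "Module load time": these run as a side effect of loading this module,
# just as the C++ static initializers run when the library is linked in
# statically or dlopen()'d at runtime.
_skipgram_op = OpRegistrar("Skipgram", inputs=["filename"],
                           outputs=["vocab_word", "examples", "labels"])
_skipgram_cpu = KernelRegistrar("Skipgram", "CPU",
                                lambda filename: f"scanning {filename}")
```

Note that the op registry and the kernel registry are separate maps: an op can be registered with no kernel at all (the graph can still be constructed), and looking up a kernel only happens when the op actually executes on a device.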

  1. Static linking into the core TensorFlow library, and static initialization.

  2. Dynamic linking at runtime, using the tf.load_op_library() function.

In the case of "Skipgram", we use option 1 (static linking). The ops are linked into the core TensorFlow library here, and the kernels are linked in here. (Note that this is not ideal: the word2vec ops were created before we had tf.load_op_library(), and so there was no mechanism for linking them dynamically.) Hence the ops and kernels are registered when you first load TensorFlow (in import tensorflow as tf). If they were created today, they would be dynamically loaded, such that they would only be registered if they were needed. (The SyntaxNet code has an example of dynamic loading.)
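Under the hood, tf.load_op_library() amounts to a dlopen() of the shared object; the act of loading runs the library's static initializers, which is what fires the REGISTER_OP()/REGISTER_KERNEL_BUILDER() registrations. As a minimal sketch of that same dynamic-loading mechanism (using the C math library as a stand-in for a custom-op .so, since no TensorFlow op library is assumed to be present):

```python
import ctypes
import ctypes.util

# tf.load_op_library("my_ops.so") boils down to dlopen()-ing a shared
# object and then reading back what its static initializers registered.
# Here we dlopen the C math library the same way, as a stand-in.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the signature of one symbol and call it.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # 1.0
```

With a real op library, the returned module would additionally expose generated Python wrappers for each op that the loaded .so registered.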

The tf_op_gen_wrapper_py() rule in Bazel takes a list of op-library dependencies and generates Python wrappers for those ops. The reason that this rule depends only on the op registration is that the Python wrappers are determined entirely by the interface of the op, which is defined in the op registration. Notably, the Python interface has no idea whether there are specialized kernels for a particular type or device. The wrapper generator links the op registrations into a simple C++ binary that generates Python code for each of the registered ops. Note that, if you use tf.load_op_library(), you do not need to invoke the wrapper generator yourself, because tf.load_op_library() will generate the necessary code at runtime.
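Conceptually, the wrapper generator is a small code generator: it walks the registered op interfaces and emits one Python function per op. The following simplified sketch illustrates the idea; the names (op_defs, generate_wrappers, apply_op) are invented here and are not TensorFlow's actual internals.

```python
# Hypothetical sketch of what a wrapper generator does: read op
# *registrations* (name + interface only -- no kernels needed) and emit
# Python source defining one stub function per op.

op_defs = [
    {"name": "Skipgram", "inputs": ["filename", "batch_size"]},
]

def snake_case(name):
    """CamelCase op name -> snake_case Python function name."""
    out = []
    for i, ch in enumerate(name):
        if ch.isupper() and i > 0:
            out.append("_")
        out.append(ch.lower())
    return "".join(out)

def generate_wrappers(op_defs):
    """Emit Python source text for each registered op interface."""
    lines = []
    for op in op_defs:
        args = ", ".join(op["inputs"])
        lines.append(f"def {snake_case(op['name'])}({args}, name=None):")
        lines.append(f"    # would ask the C++ runtime to add a"
                     f" '{op['name']}' node to the graph")
        lines.append(f"    return apply_op({op['name']!r}, {args}, name=name)")
        lines.append("")
    return "\n".join(lines)

print(generate_wrappers(op_defs))
```

This also makes it clear why the rule needs only the op registrations as dependencies: everything the generated wrapper contains (function name, argument list) comes from the op's interface, never from any kernel.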

