
Access internal tensors and add a new node to a tflite model?

I am fairly new to TensorFlow and TensorFlow Lite. I have followed the tutorials on how to quantize a model and convert it to fixed-point calculations using TOCO. Now I have a tflite file which is supposed to perform only fixed-point operations. I have two questions:

  1. How do I test this in Python? How do I access all the operations and results in the tflite file?
  2. Is there a way to add a new node or operation to this tflite file? If so, how?

I would be really grateful if anyone could guide me.

Thanks and Regards,
Abhinav George

Abhinav George asked Dec 06 '25

1 Answer

Is there a way to add a new node or operation to this tflite file? If so, how?

Unfortunately, no, and that is actually by design. TF-Lite was built to be extremely lightweight yet effective: it uses memory-mapped files, FlatBuffers, a static execution plan, and so on to minimize its memory footprint. The cost is that you lose the flexibility of full TensorFlow.

TF-Lite is a framework for deployment. That said, at an earlier Google I/O the TF team mentioned the possibility of on-device training, so some flexibility may become available in the future, but it is not there now.


How do I test this in Python? How do I access all the operations and results in the tflite file?

You cannot access arbitrary internal operations, only the model's inputs and outputs. The reason is simple: intermediate tensors are not preserved, because the memory regions backing them are reused by other operations (which is why the memory footprint is so low).
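To see why a reused memory region makes intermediate results unrecoverable, here is a toy sketch of a static memory plan (illustrative only, not actual TF-Lite code): two intermediate tensors whose lifetimes do not overlap are assigned the same offset in one shared arena, so reading the first tensor's slice after the second op runs returns the second tensor's data.

```python
import numpy as np

# Toy "static memory plan": one shared arena, and two intermediate
# tensors assigned the SAME offset because their lifetimes don't overlap.
arena = np.zeros(8, dtype=np.float32)

def run_op(offset, values):
    """Write an op's result into its assigned slice of the arena."""
    arena[offset:offset + len(values)] = values

run_op(0, [1.0, 2.0, 3.0, 4.0])   # tensor A is written at offset 0
a_snapshot = arena[0:4].copy()    # A's data, as it existed at that moment
run_op(0, [9.0, 8.0, 7.0, 6.0])   # tensor B reuses the same offset

# Reading "tensor A" through the arena now yields B's data instead.
print(arena[0:4])
```

This is the same reason `get_tensor` on an intermediate tensor is not meaningful after `invoke()`: the bytes have been overwritten by a later operation.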

If you just want to see the outputs, you can use the Python API as follows (the code is self-explanatory):

import pprint
# Note: this contrib path is from TF 1.x; newer TensorFlow releases
# expose the same class as tf.lite.Interpreter.
from tensorflow.contrib.lite.python import interpreter as interpreter_wrapper

# Load the model and allocate the static memory plan
interpreter = interpreter_wrapper.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

pp = pprint.PrettyPrinter(indent=2)

# print out the input details
input_details = interpreter.get_input_details()
print('input_details:')
pp.pprint(input_details)

# print out the output details
output_details = interpreter.get_output_details()
print('output_details:')
pp.pprint(output_details)

# set input (img is a `numpy array` matching the input shape and dtype)
interpreter.set_tensor(input_details[0]['index'], img)

# forward pass
interpreter.invoke()

# get output of the network
output = interpreter.get_tensor(output_details[0]['index'])
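Since the question is about a quantized model, note that the raw output tensor holds fixed-point values; each entry of `output_details` carries a `'quantization'` tuple of (scale, zero_point) that maps them back to real numbers via real = scale * (q - zero_point). A minimal numpy sketch (the scale and zero-point values below are made up for illustration):

```python
import numpy as np

def dequantize(q_values, scale, zero_point):
    """Map fixed-point values back to floats: real = scale * (q - zero_point)."""
    return scale * (q_values.astype(np.float32) - zero_point)

# Made-up quantization parameters, standing in for
# output_details[0]['quantization']
scale, zero_point = 0.0039, 128

q = np.array([128, 255, 0], dtype=np.uint8)  # raw quantized output values
real = dequantize(q, scale, zero_point)
print(real)
```

With a real model you would pass `interpreter.get_tensor(output_details[0]['index'])` and the tuple from `output_details[0]['quantization']` instead of these placeholder values.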

What if I call interpreter.get_tensor for non-input and non-output tensors?

You will not get the actual data that was contained in that tensor after the corresponding operation executed. As mentioned above, the memory sections backing tensors are shared with other tensors for maximum efficiency.

FalconUA answered Dec 08 '25
