I'm trying to deploy a face detection service using MTCNN in TensorFlow + Flask + uWSGI. I based my deployment on this Docker image and added this custom uwsgi.ini:
[uwsgi]
module = main
callable = app
enable-threads = true
cheaper = 2
processes = 16
threads = 16
http-timeout = 60
but when I try to run face detection against the Docker image I just built, I always get a 504 Gateway Time-out. When I dug deeper, I noticed that the code runs fine until it reaches this session.run line:
for op_name in data_dict:
    with tf.variable_scope(op_name, reuse=True):
        for param_name, data in iteritems(data_dict[op_name]):
            try:
                var = tf.get_variable(param_name)
                session.run(var.assign(data))
            except ValueError:
                if not ignore_missing:
                    raise
At first, I thought it was a problem with threading under the uWSGI workers, so I increased the number of processes and threads, but without any success.
When I run the same code with the Flask debug server, it runs just fine and processes the image in less than a second. So it is not a problem with the code but with the configuration, or with the combination of these tools.
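For reference, the app is structured roughly like the sketch below (simplified, not my exact code; detect_face refers to the facenet-style MTCNN module whose load method is quoted above, and the image decoding is just an illustration): the graph and session are built once when main.py is imported, and the Flask route only runs detection.

# main.py -- simplified sketch of the service, not the exact code
import io
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

import detect_face  # facenet-style MTCNN implementation

app = Flask(__name__)

# Graph and session are created at import time, i.e. in whatever
# process imports main.py (under uWSGI this happens before the
# workers start serving requests).
graph = tf.Graph()
with graph.as_default():
    sess = tf.Session(graph=graph)
    with sess.as_default():
        pnet, rnet, onet = detect_face.create_mtcnn(sess, None)

def decode_image(raw_bytes):
    # Decode the request body into an RGB numpy array
    return np.asarray(Image.open(io.BytesIO(raw_bytes)).convert("RGB"))

@app.route("/detect", methods=["POST"])
def detect():
    img = decode_image(request.data)
    boxes, _ = detect_face.detect_face(img, 20, pnet, rnet, onet,
                                       [0.6, 0.7, 0.7], 0.709)
    return jsonify(boxes=boxes.tolist())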
You also need to set cheaper = 0. This is my uWSGI config and it works:
[uwsgi]
module = main
callable = app
master = false
processes = 1
cheaper = 0
Use master = false and processes = 1 in your uWSGI config. There is a known issue where TensorFlow hangs when uWSGI forks the application into multiple worker processes: the session and its thread pools are created before the fork, and the forked workers inherit TensorFlow state whose threads no longer exist, so calls like session.run never return.
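If you do need more than one worker, a rough sketch of a workaround (not required for the single-process config above, and the detect_face names are assumptions about your code) is to build the session lazily inside each worker, so the TensorFlow state is only created after uWSGI has forked:

# Lazy per-worker initialization -- sketch only
import tensorflow as tf
import detect_face  # facenet-style MTCNN implementation

_model = None

def get_model():
    # Build the graph and session on first use inside the current
    # worker process, i.e. after the fork, so nothing TensorFlow-related
    # is inherited from the uWSGI master.
    global _model
    if _model is None:
        sess = tf.Session()
        pnet, rnet, onet = detect_face.create_mtcnn(sess, None)
        _model = (sess, pnet, rnet, onet)
    return _model

Then call get_model() inside the Flask route instead of creating the session at import time.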