I am trying to evaluate various languages for building a small, high-throughput application server. The work per request is small: receive a request, read data from a separate server running a cache application (memcached, Redis), and send back a 5-10 line XML or JSON response. Throughput needs to be high: at least ~1000 requests per second in production. My current Nginx + PHP + memcached setup takes 5+ ms to send back all the required data, so some network I/O is blocking.
I was looking at Python's BaseHTTPServer class. I am not a Python guru, but I need to know how it works behind the scenes. The documentation page
http://docs.python.org/library/socketserver.html
says: "To build asynchronous handlers, use the ThreadingMixIn and ForkingMixIn classes."
Is it really asynchronous, or does it start one thread per client? If it is a thread-per-client model, are these OS-level threads? And if I stick with thread-per-client, will Python's GC keep up if I give it plenty of RAM on an 8-core Amazon instance?
ForkingMixIn, as you can see in the source code, does a real fork(). ThreadingMixIn uses Python threads, so you have to deal with the GIL: even though the underlying threads are real OS threads, there is no concurrent execution of Python bytecode across them. I wouldn't recommend it for a high-throughput server.
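To make the thread-per-client behavior concrete, here is a minimal sketch of mixing ThreadingMixIn into an HTTP server (Python 3 module names; in Python 2 these classes live in BaseHTTPServer and SocketServer). Each incoming connection gets its own OS-level thread, which you can see from the thread name in the response:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn


class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    # One OS-level thread is spawned per connection by ThreadingMixIn.
    daemon_threads = True  # don't block interpreter exit on open connections


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request runs in its own thread; the thread name differs
        # from request to request.
        body = ('{"thread": "%s"}' % threading.current_thread().name).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet


# To run standalone:
# ThreadedHTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

The threads are real OS threads, so blocking I/O in one handler doesn't stall the others, but the GIL still serializes the Python-level work between them.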
So in short: no, they are not asynchronous by your definition. If you want "real" asynchronous (single core/process/thread) operation, you should look into Twisted or Tornado, or maybe Gunicorn. The latter probably wouldn't meet your definition of asynchronous either.
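For contrast, here is a rough sketch of the single-threaded event-loop model that Twisted and Tornado implement: one thread multiplexes many sockets with non-blocking I/O, so there is no thread-per-client and no GIL contention. This uses the stdlib `selectors` module rather than either framework's actual API, and it echoes a fixed payload instead of parsing HTTP:

```python
import selectors
import socket

sel = selectors.DefaultSelector()


def accept(server_sock):
    # Called when the listening socket is readable: a client is waiting.
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)


def handle(conn):
    # Called when a client socket has data; a real server would parse
    # the HTTP request here instead of sending a canned reply.
    data = conn.recv(4096)
    if data:
        conn.sendall(b'{"ok": true}')
    sel.unregister(conn)
    conn.close()


def serve(host="127.0.0.1", port=0):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(128)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    return server


def run_once(timeout=1.0):
    # One iteration of the event loop: dispatch each ready socket's callback.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

All connections are serviced by whichever thread calls `run_once` in a loop; no handler is allowed to block, which is exactly the discipline Twisted and Tornado enforce.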
I would suggest using Tornado behind nginx. There is a post in Google Groups about how to set it up. Because Tornado's internal server doesn't implement the full HTTP standard, you can put a "real" server in front of it as a proxy.
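A minimal nginx proxy stanza for this setup might look like the following; the port and header choices are placeholders of my own, not from the post mentioned above:

```
# Hypothetical minimal nginx config: nginx terminates HTTP and
# forwards requests to a Tornado process on a local port.
upstream tornado_backend {
    server 127.0.0.1:8888;
}

server {
    listen 80;

    location / {
        proxy_pass http://tornado_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With multiple Tornado processes (one per core), you would list each port as a separate `server` line in the `upstream` block and let nginx balance between them.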