I am building a RESTful API and am wondering how computationally expensive it is for the server if every request is made over SSL. It's probably hard to quantify, but a comparison to non-SSL requests would be useful (e.g. one SSL request is as expensive as 30 non-SSL requests).
Am I right in thinking that for an SSL connection to be established, both parties need to generate public and private keys, exchange them, and then start communicating? When using a RESTful API, does this process happen on every request, or is there some sort of caching that reuses a key for a given host for a given period of time (and if so, how long before it expires)?
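For context, this is a minimal sketch of what I mean by reusing a connection, assuming Python's requests library and a hypothetical https://api.example.com endpoint:

```python
import requests

# With a Session, the underlying TLS connection is kept alive and pooled,
# so the full handshake does not have to be repeated for every request.
session = requests.Session()

for page in range(10):
    # Only the first request pays the full TLS handshake cost; later requests
    # reuse the pooled connection. Servers can also resume TLS sessions via
    # session IDs/tickets, which avoids the expensive key exchange even when
    # a new connection is opened.
    resp = session.get("https://api.example.com/items", params={"page": page})
    resp.raise_for_status()
```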
One last question: the reason I am asking is that I am making an app that uses Facebook Connect, and there are access tokens involved which grant access to someone's Facebook account. Given that, why does Facebook allow these access tokens to be transmitted over non-encrypted connections? Surely they should guard access tokens as strongly as username/password combinations, and therefore enforce an SSL connection... yet they don't.
EDIT: Facebook does in fact enforce an HTTPS connection whenever the access_token is being transmitted.
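So in practice a Graph API call looks roughly like the sketch below, with the token only ever sent over an https:// URL (the token value here is a placeholder, not a real one):

```python
import requests

# Placeholder token for illustration only.
ACCESS_TOKEN = "EAAB...placeholder"

# The access_token travels inside the encrypted TLS channel because the
# request goes to an https:// endpoint.
resp = requests.get(
    "https://graph.facebook.com/me",
    params={"access_token": ACCESS_TOKEN},
)
print(resp.json())
```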
From http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html:
On our [Google's, ed.] production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
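If you want a rough feel for the client-side difference yourself, a quick sketch like the following can help. It measures wall-clock latency rather than server CPU, the URLs are placeholders, and the numbers depend heavily on keep-alive, TLS session resumption, and network latency, so treat it as illustrative only:

```python
import time
import requests

def avg_request_time(url, n=20):
    # Reuse one Session so connections (and TLS sessions) are reused,
    # mirroring how a real API client would behave.
    session = requests.Session()
    start = time.perf_counter()
    for _ in range(n):
        session.get(url)
    return (time.perf_counter() - start) / n

print("plain HTTP:", avg_request_time("http://example.com"))
print("HTTPS:     ", avg_request_time("https://example.com"))
```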