

Efficient serving: UWSGI vs NGINX

Fri 04 August 2017

UWSGI is great, and I use it a lot for serving Python applications. The manual, however, actually recommends against using it to serve static content, suggesting you use your web server (such as NGINX or Apache) instead. Of course, being told not to do something, I immediately wondered why, and set off to discover just what differences exist.

The setup

We'll serve a fairly empty Python application; it'll essentially just be a testbed for some static files, which come in two groups:

  • Small images, between 30 and 100 KB
  • Large media files, between 20 and 150 MB
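To populate the two groups, a quick generator script does the job. Here's a sketch of the sort of thing I mean - the directory, file names and exact sizes are my own choice for illustration, not from the original setup:

```python
import os

def make_test_files(directory, sizes_bytes):
    """Fill `directory` with random binary files of the given sizes."""
    os.makedirs(directory, exist_ok=True)
    paths = []
    for i, size in enumerate(sizes_bytes):
        path = os.path.join(directory, "file_%03d.bin" % i)
        with open(path, "wb") as handle:
            handle.write(os.urandom(size))
        paths.append(path)
    return paths

# Small group: a few files between 30 and 100 KB.
# (For the large group, you'd pass sizes in the 20-150 MB range instead.)
small = make_test_files("static/f", [30_000, 65_000, 100_000])
```

Random bytes are deliberately incompressible, so any gzip tricks along the way can't flatter the results.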

In order to simulate the constrained resources of a VPS provider, without actually getting in trouble for possibly DoS-like traffic, I'm going to run Debian in a VirtualBox VM allocated 256 MB of memory and 8 GB of disk space. Just to make it even more constrained, this particular machine is powered by an Atom x7-Z8700 (1.6 GHz), and I'll be running siege from my MacBook Pro (Core i7), so hopefully it's not a fair fight and we can put the web stack under duress.

Here's the siege config for the first run:

concurrent = 200
time = 1M

And second run:

concurrent = 200
time = 5M
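siege picks its targets from a URL list passed with siege -f urls.txt. A small script can generate that list from the files in a group - the host IP and file names below are placeholders, not the actual test host:

```python
# Generate a urls.txt for siege (run as: siege -f urls.txt).
# The host address and file names are illustrative placeholders.
host = "http://192.168.56.101"
names = ["file_%03d.bin" % i for i in range(3)]
lines = ["%s/f/%s" % (host, n) for n in names]
with open("urls.txt", "w") as out:
    out.write("\n".join(lines) + "\n")
```

With concurrent = 200, siege then hammers those URLs with 200 simulated users for the configured duration.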

Python: The Python side is, for the most part, simple:

from bottle import route, default_app, static_file

@route('/f/<path>', name='static')
def static_test(path):
    return static_file(path, root='static/f')

# Create app instance
app = application = default_app()

As you can see, it just serves a static file using Bottle's static_file helper.
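For completeness, the app is served over the Unix socket that the NGINX config below expects; a minimal uWSGI ini along these lines would do it (the paths, module name and process count are assumptions, not the original file):

```ini
[uwsgi]
; socket path matches the uwsgi_pass in the NGINX config
socket = /tmp/uwsgi_t1.sock
chdir = /sites/t1
; "app" here is the module containing the bottle code above
module = app:application
master = true
processes = 2
```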

Here's our NGINX config (set as the default site):

server {
    listen 80;

    # Set the root for NGINX's content
    root /sites/t1/static;

    # Ask NGINX to try serving the URI as-is first - and pass it on to UWSGI second
    try_files $uri @uwsgi;

    location @uwsgi {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi_t1.sock;
    }
}

This setup is designed to serve a file straight from disk (NGINX) first when it exists, and to query UWSGI after that. This is what try_files does: it tries the file, then falls back to the next location.

Disabling this and just serving everything out of UWSGI will be our null hypothesis. To test it, simply comment out the line try_files $uri @uwsgi; with a # - NGINX won't try the files, it'll just serve everything via the UWSGI location.

Extra fun (/etc/nginx/nginx.conf, in the 'http' section)

In order to see if NGINX can perform any faster, we'll also add Google Pagespeed:

pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;


For more fun, we'll also see if allowing NGINX to keep its file handles open longer speeds up serving static content:

open_file_cache          max=2000 inactive=20s;
open_file_cache_valid    60s;
open_file_cache_min_uses 5;
open_file_cache_errors   off;

In theory, this could be more efficient when serving lots of static files, as the file handles can remain open rather than being closed and re-opened frequently.
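The idea behind open_file_cache can be sketched in Python: keep recently used file handles in a bounded LRU cache instead of re-opening on every request. This is a toy model of the concept, not how NGINX implements it:

```python
from functools import lru_cache

@lru_cache(maxsize=2000)  # mirrors open_file_cache max=2000
def cached_open(path):
    # NGINX caches the open descriptor plus metadata (size, mtime, errors);
    # here we just keep the file object around between "requests".
    return open(path, "rb")

# Repeated requests for the same path reuse the handle:
with open("hello.txt", "w") as f:
    f.write("hi")

a = cached_open("hello.txt")
b = cached_open("hello.txt")
assert a is b  # the second call never hits open() again
```

The inactive/valid/min_uses knobs then control how long a cached entry survives and how popular a file must be before it's worth caching at all.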


Run 1 - Concurrency of 200 - Run over 60 seconds - 30-100 KB images

Let's see how it runs with a bunch of small files being hit repeatedly with siege.

Item/Setup                UWSGI     NGINX*    +Pagespeed  +Keep-open
Transactions              9476      9580      4876        9291
Availability              98.35 %   98.56 %   99.98 %     98.48 %
Seconds elapsed           59.61     59.56     59.65       59.44
Data transferred (MB)     556.49    562.59    286.57      545.71
Response time (seconds)   1.24      1.23      2.36        1.26
Transactions/second       158.97    160.85    81.74       156.31
Throughput (MB/sec)       9.34      9.45      4.80        9.18
Concurrency               196.55    197.20    193.20      196.97
Successful transactions   9476      9580      4876        9291
Failed transactions       159       140       1           143
Longest transaction (s)   11.54     16.54     45.36       9.76
Shortest transaction (s)  0.09      0.12      0.16        0.11

*Nginx is version 1.10 from dotdeb installed as nginx-extras
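As a quick sanity check on siege's arithmetic, the throughput figure is just data transferred divided by elapsed time. For example, for the plain UWSGI column:

```python
# Figures from the UWSGI column of Run 1
data_mb = 556.49          # Data transferred (MB)
elapsed = 59.61           # Seconds elapsed
throughput = data_mb / elapsed
print(round(throughput, 2))   # 9.34 - matches the reported MB/sec
```

The same relation holds for the other columns (e.g. 562.59 / 59.56 gives NGINX's 9.45 MB/sec).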

Run 2 - Concurrency of 200 - Run over 5 minutes - 20-150 MB media files

Let's see how it runs with a bunch of large files being hit repeatedly with siege.

Item/Setup                UWSGI     NGINX*    +Pagespeed  +Keep-open
Transactions              97        186       183         176
Availability              82.91 %   100.00 %  100.00 %    100.00 %
Seconds elapsed           299.40    299.25    299.57      299.60
Data transferred (MB)     569.30    2782.37   2737.49     2632.78
Response time (seconds)   165.16    268.65    268.30      266.83
Transactions/second       0.32      0.62      0.61        0.59
Throughput (MB/sec)       1.90      9.30      9.14        8.79
Concurrency               53.51     166.98    163.90      156.75
Successful transactions   97        186       183         176
Failed transactions       20        0         0           0
Longest transaction (s)   299.11    298.52    299.44      298.72
Shortest transaction (s)  0.00      209.37    214.78      220.48


Lots of numbers!

Just looking at that, you should be able to see that in the first case UWSGI serving the content by itself performed very closely to the other configurations - nothing of statistical significance there. It's interesting to note, however, that enabling Pagespeed (which uses a cache) actually caused a significant slowdown and a much lower throughput. I will save my thoughts until the end.

In the second case, we can see that UWSGI actually serves fewer transactions overall, and therefore (as it's a sampling distribution run over a long time) has a lower throughput. It was also the only configuration to have any failed transactions - and there were considerably fewer transactions overall with these larger files. Pagespeed, we note, is on par with the rest here. (You might want to ignore the response time, as failed transactions would cancel earlier, distorting the average.)


Below I graph what I consider to be the important (and critically different) numbers from the tables.


That was interesting.

Here's my thoughts:

For small files, UWSGI performs just as well as NGINX (such small files carry little overhead in memory and elsewhere). Pagespeed, however, clearly slows things down here - the time taken to look up the cache is presumably quite large compared to the cost of serving such a small file again and again.

With larger files, using Pagespeed doesn't make much difference: I would guess that taking 20 ms (for example) to look up a cache versus 10 seconds to download the file is a drop in the ocean. At this point, though, UWSGI does seem to start choking, failing considerably more requests and therefore serving a lower amount of data.

While I did cover two extremes here, there is a middle case - serving typical iPhone photos at about 5 MB each, say. Perhaps that'll be enough to make UWSGI perform worse, perhaps it won't.

UWSGI can give you much finer control over static files - such as looking up access rights in a database and logging who downloads what - but if you want to serve static content efficiently without the need for that, NGINX will perform at least as well, and most probably better in the majority of cases.
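As an illustration of that kind of control, the Bottle handler above could consult an access list and log each download before handing the file over - the sort of per-request logic NGINX alone can't easily do for static files. A minimal sketch, where the ACL contents and user lookup are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("downloads")

# Hypothetical access list: which users may fetch which files.
ACL = {"alice": {"report.pdf"}, "bob": {"report.pdf", "video.mp4"}}

def may_download(user, filename):
    """Check the ACL and log the attempt before serving the file."""
    allowed = filename in ACL.get(user, set())
    log.info("user=%s file=%s allowed=%s", user, filename, allowed)
    return allowed

# In the bottle route, you'd call may_download() before static_file()
# and return an HTTP 403 when it comes back False.
```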
