
Mongrel, Apache, and Rails

Posted on October 28, 2007

When I first started running Rails applications on my web server, I chose to use FastCGI. Specifically, I used the mod_fcgid module, which had some features I wanted. It also had the unfortunate by-product of corrupting Apache's memory. Bad news.

I've since removed FastCGI entirely and moved to a setup where Apache proxies to a mongrel_cluster. And I've started deploying with Capistrano.


I had a certain amount of concern about moving to a deployment system I knew very little about. Just like a new backup system, it feels like handing the keys to my data over to something not written by me. And, while it is fairly simple to set up, Capistrano is somewhat complicated internally.

I already push out my operating system upgrades in an automated way. I compile NetBSD on one machine here at home, and push the binaries out to all the machines I have which run NetBSD. This means about 7 machines rsync from the build box with one command. This can be scary, but I’ve been doing it for 5 years now, and it just works. How can a web site be scary compared to kernels and system binaries?

The answer is: it's not. If something breaks, it is fairly easy to reconfigure things manually. So, I've relaxed a bit. My concerns are still there, and I'm keeping a careful watch on how Capistrano runs each time I deploy. I have yet to do a real deployment, after all! So far, I've not done a single migration, and have not had to roll back. And I'm pushing to a single machine, which runs the database as well as the site.

I suspect that, as I become comfortable with this new method of updating my web sites, I'll start thinking of it as rsync++. It really is that simple.
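For the curious, a Capistrano recipe for a setup like this doesn't need much. The following is a minimal sketch, not my actual config — the repository URL and paths are made-up examples:

```ruby
# config/deploy.rb -- minimal Capistrano sketch (example names, not my real setup)
set :application, "flame-blog"
set :repository,  "svn://svn.example.org/flame-blog/trunk"   # hypothetical repo
set :deploy_to,   "/www/blog/#{application}"

# One box does everything here: app, web, and database.
role :app, "blog.flame.org"
role :web, "blog.flame.org"
role :db,  "blog.flame.org", :primary => true
```

Each `cap deploy` checks the code out into a timestamped release directory and flips a `current` symlink to it, which is why the Apache DocumentRoot below points at `.../current/public`.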


Mongrel is a very amazing little widget. Sure, it's slower than Apache, but that's OK. Mongrel is still far, far faster than restarting Rails for each web hit, and far more reliable than mod_fcgid.

In my configuration, I run each site on ports 10000, 10010, 10020, etc., with up to 3 servers per site. This means application #1 is on 10000 through 10002, with room to grow should I need to run more. If I find myself running more than 10 servers for a site, it needs a new machine (or more machines) anyway. And if that happens, I hope I'll have a budget.
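A mongrel_cluster config matching that scheme would look something like this — a sketch with example paths, not necessarily what I run:

```yaml
# config/mongrel_cluster.yml -- application #1, ports 10000 through 10002
cwd: /www/app1/current
environment: production
port: "10000"     # base port; mongrel_cluster counts up from here
servers: 3        # so this covers 10000-10002
pid_file: tmp/pids/mongrel.pid
```

Then `mongrel_rails cluster::start` and `cluster::restart` manage all three processes at once.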

Apache load balancing

This is a new feature in Apache 2.1, and apparently is very reliable with Apache 2.2. This is currently my favorite way to run a web site.

My configuration, which happens to be for this site:

<proxy balancer://blog>
  BalancerMember http://localhost:10010
  BalancerMember http://localhost:10011
  BalancerMember http://localhost:10012
</proxy>
<VirtualHost blog.flame.org:80>
  DocumentRoot /www/blog/flame-blog/current/public
  <directory "/www/blog/flame-blog/current/public">
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </directory>

  ProxyRequests off
  <proxy *>
    order deny,allow
    allow from all
  </proxy>

  RewriteEngine on

  # Check for maintenance file. Let Apache serve it if it exists.
  RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
  RewriteRule . /system/maintenance.html [L]

  # Rewrite index to check for static
  RewriteRule ^/$ balancer://blog%{REQUEST_URI} [L,P,QSA]

  # Let Apache serve static files (send everything that is *not*
  # a static file (!-f) via mod_proxy)
  RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
  RewriteRule .* balancer://blog%{REQUEST_URI} [L,P,QSA]
</VirtualHost>

It is important, at least on my host, to use localhost in the balancer destinations. This is because mongrel suddenly started running on the IPv6 loopback (::1) rather than the usual IPv4 loopback (127.0.0.1). I don't know why this happened, but the localhost trick makes Apache try both addresses and use whichever one works.
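The other way around the problem — which I haven't tried here, so treat it as a sketch — is to pin everything to IPv4 explicitly on the Apache side:

```apache
<proxy balancer://blog>
  # Pin the members to the IPv4 loopback instead of relying on
  # localhost resolving to an address the mongrels listen on.
  BalancerMember http://127.0.0.1:10010
  BalancerMember http://127.0.0.1:10011
  BalancerMember http://127.0.0.1:10012
</proxy>
```

Or, equivalently, force the mongrels themselves onto IPv4 by starting them with an explicit bind address (`mongrel_rails` takes `-a 127.0.0.1`, and mongrel_cluster has an `address:` setting).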

This configuration makes Apache serve static content, and sends all other requests off to one of the Mongrel processes.