Varnish Cache is a web application accelerator. It stands in front of a web server and can cache any type of data. It stores data in memory and can speed up your application by a factor of 300-1000x depending on your architecture.
It's not my first post about Varnish Cache (see Boost WordPress performance), but this time I'm going to show a generic example that will work with any type of PHP application.
You can install Varnish Cache via good old "apt-get", although I prefer to compile it from source. The reason is that I usually use it with the memcached module, which requires the Varnish Cache source code.
Varnish requires libpcre.
$ sudo apt-get install libpcre3-dev
Download, build and install the software.
$ wget http://repo.varnish-cache.org/source/varnish-3.0.3.tar.gz
$ tar zxfv varnish-3.0.3.tar.gz
$ cd varnish-3.0.3
$ ./configure
$ make
$ sudo make install
If you didn't pass any --prefix= option, the software should be installed under /usr/local.
$ whereis varnishd
varnishd: /usr/local/sbin/varnishd
The config file should be in /usr/local/etc/varnish/default.vcl, but its exact location is not that important.
Now it's time to create a very simple PHP script and save it as index.php.
<html>
<body>
<h1>Hello World</h1>
Cache from: <?php echo date( DATE_RFC822 ); ?><br/>
<esi:include src="/time.php"/>
<esi:remove>ESI is not working!</esi:remove>
</body>
</html>
You might be wondering what the ESI tag is. It stands for Edge Side Includes and it's a very cool feature.
A web page usually consists of multiple blocks. Some of them, like the layout, almost never change, while others might be fully dynamic. Varnish Cache allows you to break a page down into such blocks and cache each of them with a different expiry time. Depending on your needs, you can set up Varnish to pull those blocks from different web servers (for example, you can have a dedicated host for real-time data).
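If you wanted to pull a block from a different web server, a minimal sketch could look like this (the backend name and address are made up; the example in this post keeps everything on one host):

backend realtime {
    .host = "10.0.0.2";   # placeholder address of the dedicated host
    .port = "80";
}

sub vcl_recv {
    if( req.url ~ "^/time.php" ) {
        # fetch this fragment from the dedicated backend
        set req.backend = realtime;
    }
}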
Going back to our example, Varnish will replace the <esi:include src="/time.php"/> tag with the content returned by "/time.php". Everything inside the <esi:remove> tag will be removed from the page.
Let's create the time.php script.
Cache from: <?php echo date( DATE_RFC822 ); ?>
It couldn’t be simpler.
Right now you should have 2 pages:
http://127.0.0.1/
http://127.0.0.1/time.php
Now it's time to create the Varnish configuration file.
$ vim /usr/local/etc/varnish/default.vcl
backend default {
    .host = "127.0.0.1";
    .port = "80";
}

sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }
    return (lookup);
}

sub vcl_pipe {
    return (pipe);
}

sub vcl_pass {
    return (pass);
}

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}

sub vcl_hit {
    return (deliver);
}

sub vcl_miss {
    return (fetch);
}

sub vcl_fetch {
    if (req.url == "/") {
        set beresp.do_esi = true; /* Do ESI processing */
        set beresp.ttl = 120s;    /* Sets a 2 minute TTL on the page itself */
    } elseif (req.url == "/time.php") {
        set beresp.ttl = 5s;      /* Sets a 5 second TTL on the ESI fragment */
    }
    if (beresp.ttl <= 0s ||
        beresp.http.Set-Cookie ||
        beresp.http.Vary == "*") {
        /* Mark as "Hit-For-Pass" for the next 2 minutes */
        set beresp.ttl = 120 s;
        return (hit_for_pass);
    }
    return (deliver);
}

sub vcl_deliver {
    return (deliver);
}

sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {"
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
  <head>
    <title>"} + obj.status + " " + obj.response + {"</title>
  </head>
  <body>
    <h1>Error "} + obj.status + " " + obj.response + {"</h1>
    <p>"} + obj.response + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

sub vcl_init {
    return (ok);
}

sub vcl_fini {
    return (ok);
}
If you are new to Varnish Cache this might look a little bit overwhelming, but I assure you there is no magic here. It is basically the default configuration, which is well explained in the manual. What's interesting from our example's point of view is inside vcl_fetch.
sub vcl_fetch {
    if (req.url == "/") {
        set beresp.do_esi = true; /* Do ESI processing */
        set beresp.ttl = 120s;    /* Sets a 2 minute TTL on the page itself */
    } elseif (req.url == "/time.php") {
        set beresp.ttl = 5s;      /* Sets a 5 second TTL on the ESI fragment */
    }
For the "/" request we turn ESI processing on and cache the content from this location for 120 seconds. Content returned from "/time.php" will be stored for only 5 seconds.
Let's run Varnish and give it a go.
$ sudo varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,128M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -d
Platform: Linux,3.5.0-30-generic,x86_64,-smalloc,-smalloc,-hcritbit
200 244
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,3.5.0-30-generic,x86_64,-smalloc,-smalloc,-hcritbit

Type 'help' for command list.
Type 'quit' to close CLI session.
Type 'start' to launch worker process.
One thing to notice is the "-d" flag at the end of the command above: it runs Varnish Cache in debug mode, so you have to type "start" to launch the worker process. The other flags point Varnish at the VCL file (-f), keep the cache in 128 MB of memory (-s malloc,128M), expose the management CLI on 127.0.0.1:2000 (-T) and make Varnish listen for HTTP traffic on port 8080 (-a).
start
child (5101) Started
200 0

Child (5101) said Child starts
Now open a new tab in your web browser and visit http://127.0.0.1:8080/.
You should see something like this:
Hello World
Cache from: Sat, 06 Jul 13 22:20:47 +0100
Cache from: Sat, 06 Jul 13 22:20:47 +0100
An interesting thing happens when you refresh the page. The first two lines should stay the same for 2 minutes, while the last one should change every 5 seconds. Isn't that great?
That's not everything, though. There are cases when you have to invalidate the cache without waiting for it to expire.
Varnish 3.x allows you to ban cached data (see https://www.varnish-cache.org/docs/3.0/tutorial/purging.html). Modify the default.vcl file:
sub vcl_recv {
    if( req.url ~ "^/clearcache" ) {
        # for example /clearcache?uri=foo/bar
        if( req.url ~ "uri=" ) {
            ban( "req.url ~ ^/" + regsub( req.url, ".*uri=", "") );
        }
        error 200 "Ban added";
    }
Obviously, in a production environment you need an additional condition so that "/clearcache" can only be called from certain IP addresses.
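A minimal sketch of that check using a VCL ACL could look like this (the acl name and the internal network range are placeholders, adjust them to your own setup):

acl purgers {
    "127.0.0.1";           # the web server itself
    "192.168.0.0"/24;      # example internal network (placeholder)
}

sub vcl_recv {
    if( req.url ~ "^/clearcache" ) {
        if( ! client.ip ~ purgers ) {
            # reject ban requests from untrusted clients
            error 405 "Not allowed";
        }
        # for example /clearcache?uri=foo/bar
        if( req.url ~ "uri=" ) {
            ban( "req.url ~ ^/" + regsub( req.url, ".*uri=", "") );
        }
        error 200 "Ban added";
    }
}

The error statement returns immediately, so an untrusted client gets a 405 response and never reaches the ban.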
Stop the Varnish server (Ctrl+C) and start it again (don't forget to type "start").
$ sudo varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,128M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -d
Now if you go to http://127.0.0.1:8080/clearcache?uri= the cache for "/" (and anything else matching ^/) will be invalidated. You can see all active bans in your server console by typing ban.list.
ban.list
200 52
Present bans:
1373146379.588119     1 req.url ~ ^/
Varnish will add a ban only if there is cached content for that rule.
The last thing is to call the clearcache URL from PHP. After all, we don't want to refresh that page manually.
Let's create another script and call it clearcache.php.
<?php
class CURL
{
    public static function getUrl( $url, $post = array(), $options = array() )
    {
        $ch = curl_init( $url );

        // default cURL options; RETURNTRANSFER is needed to capture the response,
        // HEADER keeps the "200 Ban added" status line in the output
        $defaults = array(
            CURLOPT_RETURNTRANSFER => 1,
            CURLOPT_HEADER         => 1,
            CURLOPT_TIMEOUT        => 5,
        );

        foreach( ( $options + $defaults ) as $opt => $val ) {
            curl_setopt( $ch, $opt, $val );
        }

        if( ! empty( $post ) ) {
            curl_setopt( $ch, CURLOPT_POST, 1 );
            curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( $post ) );
        }

        $output = curl_exec( $ch );
        if( $output === false ) {
            throw new Exception( curl_error( $ch ) );
        }
        $info = curl_getinfo( $ch );
        curl_close( $ch );

        return $output;
    }
}

$ret = CURL::getUrl( 'http://127.0.0.1:8080/clearcache?uri=' );
if( preg_match( '/200 Ban added/', $ret ) ) {
    echo 'cache cleared';
} else {
    echo 'error<br/>';
    echo $ret;
}
Now you can visit http://127.0.0.1/clearcache.php to give it a go.
If you need to troubleshoot your VCL script, put
import std;
at the top of the file and log data with
std.log( );
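For example (the message text below is arbitrary; in Varnish 3 multiple vcl_recv definitions are simply concatenated):

import std;

sub vcl_recv {
    # push the requested URL to the shared memory log
    std.log( "vcl_recv url: " + req.url );
}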
Debug data will be pushed to the Varnish Cache log; to read it, run:
$ varnishlog | grep Log
Thank you for getting to the end of this post. Varnish Cache is a great piece of software and it's worth knowing. It's a little bit techie and programming VCL could be easier, but it will make your application fly.