Developers care about the performance of their software. The problem is that they don’t usually have enough time for proper optimisation. I would like to propose three relatively simple steps which won’t take much time and can improve response time by at least 60%.
If you cannot measure it, you cannot improve it.
There are many great tools you can use for benchmarking and finding bottlenecks, but the tools are not everything. If you want your results to be representative you need to follow good practice. Doing good science is a big subject far beyond the scope of this post, so I will just recommend a few simple tools which have worked for me.
Apache Benchmark
This is my favourite tool. It’s very easy to use and you can find it everywhere. I usually go for a concurrency between 20 and 100 and a minimum of 500 requests. For example:
$ ab -n500 -c20 "http://example.com/"
Look at the average requests per second; higher is better.
Apache jMeter
A very robust testing tool. If you want to know more about it, read my previous post.
XDebug
A well-known PHP extension for debugging and profiling. Profiling takes a lot of resources, so it’s better to enable it on demand (trigger it by appending XDEBUG_PROFILE=1 to a URL).
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=1
The profiler will save cachegrind files to the /tmp/ directory. You can use Webgrind to analyse them. It’s a very useful tool and it doesn’t require any configuration.
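If you want the profiles written somewhere other than /tmp, the target directory is configurable as well (a standard Xdebug 2 setting):

```ini
; Write cachegrind.out.* files to this directory
xdebug.profiler_output_dir = /tmp
```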
1. PHP accelerator
It has been reiterated many times and I’m not very innovative here, but this is the starting point. You can use APC or Zend Optimizer+. Bear in mind that the default configuration is not the fastest one. Both extensions check file timestamps to make sure the cache is up to date. Code doesn’t change spontaneously in a production environment, so you can disable that feature. It will eliminate the slow fstat() calls, but you will have to restart all web servers after every release.
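For example, with APC the timestamp check is controlled by a single ini setting (OPcache, which ships as Zend Optimizer+, has an equivalent in opcache.validate_timestamps):

```ini
; Trust the opcode cache: never stat() source files on a request.
; Remember: PHP must be restarted after every deploy.
apc.stat = 0
```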
2. Autoload with Composer
Composer is designed to manage dependencies, but it can also help with performance. Even if code is cached by an accelerator, your framework can still call functions like is_readable() and loop through the include_path. I had this particular problem with Zend Framework 1.12.1. You can quickly fix it by editing the composer.json file:
"autoload": { "classmap": ["application/", "library/"] }
classmap is an array, so you can keep adding new paths. Run Composer to regenerate the autoloader:
$ php composer.phar dump-autoload
The class map is going to be cached by APC, so autoloading should get much faster.
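Putting it together, a minimal composer.json might look like this (the paths are the ones from the snippet above; adjust them to your project layout):

```json
{
    "autoload": {
        "classmap": ["application/", "library/"]
    }
}
```

Your bootstrap then only needs a single require of the generated vendor/autoload.php file instead of a custom autoloader.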
3. Serialize heavy objects
Instantiating an object with complex relationships might take a significant amount of time. A good example is Zend Framework’s router. The router is a collection of routes, and every route is an instance of “Zend_Controller_Router_Route”. Instantiating a route means executing the route’s constructor, for every instance. Certain routes might be storing (chaining) other routes, which adds to the execution time. Not to mention that the configuration is usually stored in an XML or INI file which needs to be opened and parsed in the first place.
If the component is hermetic and doesn’t affect the environment, you can skip the construction process and cache the serialized object. Our goal is performance, so cache it in RAM. When the instance is needed, acquire it from the cache and unserialize it. Obviously the unserialize() function doesn’t come without a cost. There are opinions that it can sometimes be slower than the normal approach, and it might also vary across different PHP versions. You can easily measure it with the microtime() function:
$start = microtime(true);
$router = unserialize($cachedRouter); // the operation being measured
echo microtime(true) - $start; // seconds spent unserializing
In the router example I got a 10% improvement just with that one change.
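A minimal sketch of the pattern, assuming APC is available (the cache key, the getRouter() wrapper and the buildRouterFromConfig() helper are hypothetical names for this post, not Zend Framework API):

// Return the router from APC if possible; otherwise build and cache it.
function getRouter()
{
    $cached = apc_fetch('app_router');
    if ($cached !== false) {
        // Cache hit: skip config parsing and route construction entirely.
        return unserialize($cached);
    }

    $router = buildRouterFromConfig(); // expensive: parses INI/XML, chains routes
    apc_store('app_router', serialize($router));

    return $router;
}

Remember to flush the cache (or change the key) whenever the route configuration changes, otherwise you will keep serving stale routes.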
It’s not difficult to achieve a major performance improvement on unoptimised code. The key to success is to measure everything and apply the 80-20 rule (80% of the problems are in 20% of the code). The XDebug profiler will instantly highlight where the problems are. Remember to profile against your production config.