Marko Kruljac for Bornfight

PHP session quirks

Hello there, fellow developer!

Did you know that PHP sessions are blocking on a single server instance, but vulnerable to race-condition bugs on multi-server architectures?

Here are the important things you should know about how sessions work in PHP.

The first thing you should know is how sessions are stored.
The default session save handler is called “files”, and it simply saves all the session data in a file. The file is named after the value of the PHPSESSID cookie, which is how the server knows whether your session exists, where its data lives, and how to retrieve it.
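Concretely, with default settings the file lives in the directory pointed to by session.save_path and carries a sess_ prefix (the ID and path below are just examples):

```
# Cookie sent by the browser:
Cookie: PHPSESSID=abc123def456

# Matching file on the server (the default save path varies by distro):
/var/lib/php/sessions/sess_abc123def456
```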

The second thing you should know is that the “files” session handler is blocking by design, and there is no way to disable this constraint for this handler. Every time the server opens your session file, it locks the file (using flock), which prevents any other process from opening it until the lock is released – which happens automatically once the PHP script/request has finished. This is actually a great technique for preventing race conditions. Consider the following snippet of code.


if ($_SESSION['received_payment'] === false) {
    sendMoney();
    $_SESSION['received_payment'] = true;
}

Running this code in parallel and without locks could result in sendMoney() being called multiple times! This is a race condition, and it is solved by locks. Remember, while PHP is single-threaded, you can achieve concurrency by running multiple processes in parallel; Apache or Nginx does this for you. The same trick is used by pm2 to parallelise Node processes.

So there is no problem, right? Wrooong 🙂

The problem is that this pattern scales poorly with regard to the total time required to process all the requests received in parallel. The requests themselves arrive in parallel, but due to the locking they are executed in sequence. This means that if you have 10 parallel AJAX calls, each taking 500ms to process, you will wait a total of 5 seconds until all of them have resolved. Even worse: if the first call takes 4 seconds and the remaining 9 take 100ms each, you will still wait almost 5 seconds in total – and a full 4 seconds before seeing any results!
There is a great demo you can fiddle with.

I also made my own experiment: here is what happens with “slow” sessions, and here is what happens when sessions are closed as soon as they are opened.
There are also some other things to take into consideration, like the browser connection limit, and your web server concurrency settings - but these are beyond the scope of this post.

So how to mitigate this issue?

There are two viable solutions.

The first solution is to close the session as soon as you are finished reading session data. Sessions are most often used just to determine whether the user is logged in or a guest. After that point the session is (in most cases) no longer needed, and if you close it early, you allow the next request to be processed concurrently.
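A minimal sketch of the close-early pattern (the buildExpensiveReport() helper is hypothetical):

```php
<?php
session_start();

// Read everything you need from the session up front.
$isLoggedIn = isset($_SESSION['user_id']);

// Release the session lock immediately; further requests from the
// same user no longer have to wait for this script to finish.
session_write_close();

// The slow work happens after the lock is released.
$report = buildExpensiveReport(); // hypothetical slow operation

echo json_encode(['loggedIn' => $isLoggedIn, 'report' => $report]);
```

Note that after session_write_close() any writes to $_SESSION are no longer persisted, so read everything you need first.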

The second solution is to use the read-only session flag when you will only be doing “read” operations on the session. Again, a good example is checking whether the user is a guest or logged in. Here you are only reading from the session, not writing anything – this has the nice property that there is no possibility of a race condition (since data is not being changed), and therefore no need for locks!
This approach has its caveats. Read-only sessions are only supported from PHP 7, which is an issue for frameworks that wish to support PHP 5 (looking at you, Yii2). Another issue is that major frameworks like Zend and Symfony have been slow to support this.
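In plain PHP (without a framework), the flag in question is the read_and_close option of session_start(); a minimal sketch:

```php
<?php
// Opens the session, populates $_SESSION, and releases the file lock
// immediately - no other request has to wait on this one.
session_start(['read_and_close' => true]);

$greeting = isset($_SESSION['user_id']) ? 'Welcome back!' : 'Hello, guest!';

// Reads are fine; writes to $_SESSION would NOT be persisted here.
echo $greeting;
```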

So your best bet is to just close-early and avoid sessions as much as possible 🙂

Remember, this only applies to AJAX calls from the same user (the same PHPSESSID), and only if the session is being used (session_start() called anywhere in the script lifecycle)!

Ok, but what about multi-server architecture? Well, now you can no longer use “files” as your session handler, since a session could exist on one server instance, but not on another.

The way to approach this issue is to use some shared memory store to manage your sessions, with Redis and Memcached being the strongest candidates for the job.
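For example, with the phpredis extension installed, switching the handler is just a configuration change (host and port are placeholders):

```ini
; php.ini - store sessions in Redis instead of local files
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```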

The Redis session handler does not support locks at all, and Memcached has started supporting them with varying degrees of success (there are known bugs).

This means that you cannot get that sweet, sweet race-condition safety you get with the “files” session handler. The trivial snippet with the “received_payment” session flag becomes very difficult to implement correctly.

The solution in this case is, unfortunately, to change your code logic and use either a database for locking or some dedicated locking mechanism – and, again, to avoid sessions as much as possible.
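As a sketch, a simple lock can be built on Redis with SET NX (the key names and timeout are arbitrary, and this is not a complete lock implementation – see patterns like Redlock for the hard cases):

```php
<?php
// Minimal Redis-based lock sketch, using the phpredis extension.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$lockKey = 'lock:payment:' . $userId; // $userId assumed to be known

// SET key value NX EX 10: succeeds only if the key does not already
// exist, and expires after 10s so a crashed worker cannot hold it forever.
if ($redis->set($lockKey, '1', ['nx', 'ex' => 10])) {
    try {
        if (!$redis->get('received_payment:' . $userId)) {
            sendMoney(); // the hypothetical payout from the earlier example
            $redis->set('received_payment:' . $userId, '1');
        }
    } finally {
        $redis->del($lockKey); // always release the lock
    }
} else {
    // Another process holds the lock; retry later or fail fast.
}
```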

How do you approach session management? How do sessions work in a Node backend environment? Please share your thoughts and experiences in the comments below! :)

And happy developing!


Top comments (5)

Chris Russell

This race condition can be very dramatic if you heavily use the session as a data store. The important thing to remember is that the entire php session is loaded -- effectively as an array -- at the start of the php execution and all changes are written at the end of the php script.

If concurrent scripts modify any portion of the session, all but one of them will (very likely) lose their data. The slowest one will commit the session array at the end of its execution, and its version of the array will not contain any data set by the other scripts.

When working with a shared data store, you cannot eliminate data races unless you use a locking mechanism or some other synchronization technique. You can, however, shrink the scope and/or duration of the race, and that is "good enough" for many use cases.

One strategy is to generate a unique Redis key at the start of the session and use it to create a Redis map. Store that key in your php session.

When the session is loaded, read in the data key. When you need to read or write data, interact with elements of the Redis map instead of the php session array.

When the session expires (or is destroyed), the data key is "lost" making the session data unreachable. Setting a TTL on the Redis map will make sure it is eventually pruned; using a TTL slightly longer than your PHP session is usually a good choice so long as you renew the TTL every time the session is loaded.

With this strategy, the race scope is reduced to individual elements of the Redis map instead of the entire session array. The length of the race is also reduced from the entire script execution to the time it takes to interact with that key. Not gone, but much better!

Of course, this strategy doesn't require Redis; it could be adapted for any data store with a little effort.

As a bonus, you don't need to deserialize ALL of your session data at the start of every script execution -- only the key that points to the data and specific data points, as they are needed.
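A sketch of that strategy (key names and TTLs are illustrative, using the phpredis extension):

```php
<?php
session_start();

// Generate the per-session data key once and keep it in the session.
if (!isset($_SESSION['data_key'])) {
    $_SESSION['data_key'] = 'session_data:' . bin2hex(random_bytes(16));
}
session_write_close(); // release the session file lock right away

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$key = $_SESSION['data_key'];

// Read/write individual fields of a Redis hash instead of $_SESSION,
// so concurrent scripts only race on the fields they actually touch.
$redis->hSet($key, 'cart_count', 3);
$count = $redis->hGet($key, 'cart_count');

// Keep the hash alive slightly longer than the PHP session lifetime.
$redis->expire($key, 1800);
```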

Happy coding, fellow ElePHPants!

Davor Tvorić

I didn't know the default session save handler uses actual files, that's really interesting!
Thanks for the article and the examples!

Vincent Milum Jr

Yeah, it is simple, but also super crazy. I've seen server file systems run out of inodes due to it!

Mike Healy

Was that due to having many simultaneous (or recent) users; or a problem with the garbage collection not deleting old session files?

Davor Tvorić

Wow, that's a whole section of problems I hope I don't run into. :D