
How to deal with ActiveRecord::StatementInvalid PG::QueryCanceled?

Ben Halpern ・ 1 min read

We sometimes get the error ERROR: canceling statement due to statement timeout : SELECT, and "unplugging and plugging back in" does the trick in terms of a fresh start, but I'd like to know more about dealing with this error in general.

It comes alongside errors like PG::ConnectionBad: FATAL: remaining connection slots are reserved for non-replication superuser connections

I know this is a matter of resource constraints and how Postgres deals with it, but other than the hard reset I don't have a good grasp of dealing with the problem.

I have also adjusted the overall pool size in dealing with this, but I'm still not confident in some of the underlying mechanics at play.
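For context, by "pool size" I mean the ActiveRecord connection pool in config/database.yml, something like this (values here are illustrative, not our actual production settings):

```yaml
# config/database.yml -- illustrative values only
production:
  adapter: postgresql
  pool: 10   # connections *per process*
  # Total connections ≈ (processes × threads-per-process × pool-usage)
  # across web dynos, Sidekiq, etc. That total has to stay under
  # Postgres's max_connections, or you hit the
  # "remaining connection slots are reserved..." error.
```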

What exactly is going on here and how should I approach the problem?


Discussion


What Heroku plan is dev.to on? It looks like it has something to do with Heroku's Postgres balancer.

Have you tried investigating with SELECT * FROM pg_stat_activity when that happens?

Here are the details of the table: postgresql.org/docs/10/static/moni...

Even though I guess that if you run out of connections there's no time to investigate...
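For example, something along these lines shows what's currently running and how many slots are in use (column names per Postgres 9.6+; adjust for your version):

```sql
-- Long-running statements currently executing
SELECT pid, state, wait_event_type,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;

-- How many connection slots are in use, grouped by state
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;
```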

 

Install pghero somewhere, for example in a Docker container, and then keep watching live queries and how much time they take. You could probably find the bottleneck in your queries or in blocking table statements.

docker run -d -ti -e PGHERO_USERNAME=<http-basic-auth-username> -e PGHERO_PASSWORD=<http-basic-auth-password> -e DATABASE_URL=postgres://<username>:<password>@<host>:5432/report -p 5005:8080 ankane/pghero

 

I have no experience with anything ruby-related, let alone ActiveRecord, but a quick Google search for "postgres statement timeout" brought me to a page listing connection settings, including statement_timeout. So maybe you can find a way to increase this setting from its default value.
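For what it's worth, statement_timeout can be set at several levels; a sketch (the role and database names below are placeholders):

```sql
-- Per session: takes effect immediately, for this connection only
SET statement_timeout = '30s';

-- Per role or per database: applies to new connections
ALTER ROLE app_user SET statement_timeout = '30s';
ALTER DATABASE app_production SET statement_timeout = '30s';
```

In Rails specifically, I believe the postgresql adapter also lets you set it per connection via a variables: hash in database.yml, which keeps the setting in version control.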

It might also be worth inspecting your query to see if you can optimise it a bit.

 

I think that's the right idea, but on that note, decreasing the timeout value is probably more likely to be helpful in the macro sense. This seems to be a symptom of resource over-utilization, where quitting early in order to avoid locking things up is probably the right course of action.

Anyway, that's where my head goes.

 

This might be a stupid question, but are your database queries efficient? It's possible that a bad execution plan is being generated.
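One way to check, assuming you can reproduce the slow statement: run it under EXPLAIN ANALYZE and look for sequential scans or big mismatches between estimated and actual row counts (the table and column below are made up for illustration):

```sql
-- Note: ANALYZE actually executes the query, so wrap statements
-- with side effects in a transaction you can roll back.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM articles
WHERE published_at > now() - interval '7 days';
```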