Potential Consequences of Using Postgres as a Job Queue

15 points by eduard


Sirikon

I break into a cold sweat whenever I read "MultiXact".

For anyone interested, MultiXacts also prevent some opportunistic optimizations, like flagging a tuple as "dead" so the engine doesn't have to check xmax every time to see whether the transaction(s) have ended and are all older than the oldest one alive, in which case the tuple can be ignored safely without touching more memory pages. Without this optimization the MultiXact issue becomes even worse, because the MultiXact SLRU is accessed far more often.
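For readers who haven't run into this: a MultiXact appears whenever more than one transaction holds a lock on the same row at once. A minimal sketch of how to provoke one (the `jobs` table is made up for illustration; inspecting it needs the `pageinspect` extension and superuser rights):

```sql
-- Hypothetical table; any row will do.
CREATE TABLE jobs (id int PRIMARY KEY, status text);
INSERT INTO jobs VALUES (1, 'pending');

-- Session A:
BEGIN;
SELECT * FROM jobs WHERE id = 1 FOR SHARE;

-- Session B, while A's transaction is still open:
BEGIN;
SELECT * FROM jobs WHERE id = 1 FOR SHARE;
-- The row's xmax now holds a MultiXact ID rather than a plain
-- transaction ID, since two lockers have to be tracked at once.

-- Optional inspection (pageinspect extension, superuser):
-- SELECT t_xmax, t_infomask
-- FROM heap_page_items(get_raw_page('jobs', 0));
-- The HEAP_XMAX_IS_MULTI infomask bit indicates a MultiXact xmax.
```

The same thing happens implicitly with foreign-key checks, which take FOR KEY SHARE locks, so you can accumulate MultiXacts without ever writing FOR SHARE yourself.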

Shameless plug -> https://medium.com/fever-engineering/why-concurrent-updates-and-inserts-can-severely-impact-postgresql-performance-73b14bad5ee9

weberc2

I’ve often used optimistic locking: locking sets a timestamp field, and a row becomes available again once the field’s value is older than $TTL. Workers lock by running a “select rows where the timestamp is older than TTL; set the timestamp of the first result to NOW()” CTE in a transaction. I have no idea how this scales, but I’m curious whether others have tried something similar at scale.
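For the curious, the lease pattern described above might look roughly like this as a single writable CTE (table and column names are made up; the SKIP LOCKED clause is my addition, not part of the original description, to keep two workers from racing on the same row):

```sql
-- A job is "available" when locked_at is NULL or older than the TTL.
WITH next_job AS (
  SELECT id
  FROM   jobs
  WHERE  locked_at IS NULL
     OR  locked_at < now() - interval '5 minutes'  -- the $TTL
  ORDER  BY id
  LIMIT  1
  FOR UPDATE SKIP LOCKED  -- skip rows other workers hold locks on
)
UPDATE jobs
SET    locked_at = now()
FROM   next_job
WHERE  jobs.id = next_job.id
RETURNING jobs.*;
```

Because the select-and-update happens in one statement, it's atomic on its own; wrapping it in an explicit transaction, as described above, also works.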

typesanitizer

Question for other folks more familiar with the landscape: Are there other research or industrial relational (ACID) databases which have much better performance when using tables as queues?