Look ma, I wrote a new JIT compiler for PostgreSQL

Sometimes, I don’t know why I do things. This is one of those times. A few months ago, Python 3.13 got its JIT engine, built with a new JIT compiler construction methodology (copy-and-patch, cf. the research paper). After reading the paper, I was sold and I just had to try it with PostgreSQL. And what a fun ride it’s been so far. This blog post will not cover everything, and I prefer other communication methods, but I would like to introduce pg-copyjit, the latest and shiniest way to destroy and segfault speed up your PostgreSQL server.

Before going any further, a mandatory warning: all the code produced here is experimental. Please. I want to hear reports from you, like “oh, it’s fun”, “oh, I got this performance boost”, “hey, maybe this could be done”, but not “hey, your extension cost me hours of downtime on my business-critical application”. Anyway, in its current state this is for professional hackers only; I hope you know better than to trust experimental code with a production server.

In the beginning, there was no JIT, and then came the LLVM JIT compiler

In a PostgreSQL release a long time ago, in a galaxy far far away, Andres Freund introduced the PostgreSQL world to the magic of JIT compilation, using LLVM. They married and there was much rejoicing. Alas, darkness there was in the bright castle, for LLVM is a very, very demanding husband.

LLVM is a great compilation framework. Its optimizer produces very good, efficient code, and Andres went further than anybody else would have thought and tried in order to squeeze the last microsecond of performance out of his JIT compiler. It is wonderful work, and I don’t know how to express my love for the madness that this kind of dedication to performance is. But LLVM has a big downside: it’s not built for JIT compilation. At least not in the way PostgreSQL uses it: the LLVM optimizer is very expensive, but not using it may be worse than no compilation at all. And in order to compile only the good stuff, the queries that can enjoy the performance boost, the typical query cost estimation is used.

And that’s the PostgreSQL downside making the whole thing almost impossible: costs in PostgreSQL are not designed to mean anything. They are meant to be compared to each other, but say nothing about the real execution time. A query costing 100 may run in 1 second, while another costing 1000 may run in 100 milliseconds. It’s not a bug, it’s a design decision. That’s why a lot of people (including me) end up turning off the JIT compiler: most if not all queries on my production system will not gain enough from the performance boost to compensate for the LLVM optimizer cost. If a query runs 10 ms faster but needed 50 ms to be optimized, it’s a pure loss.

There is one way to make the LLVM JIT compiler more usable, but I fear it’s going to take years to be implemented: being able to cache and reuse compiled queries. I will not dig further into that topic in this post, but trust me, it’s not going to be a small feat to achieve.

And in 2021, copy-and-patch was described…

So, what can we do? We need fast enough code, generated as fast as possible. “Fast enough” means at least a bit faster than the current interpreter… But writing a compiler is painful, and writing several code generators (for different ISAs, for instance) is even worse…

This is where the innovation of copy-and-patch comes into play and saves the day.

With copy-and-patch, you write stencils in C. These stencils are functions with holes, and they are compiled by your typical clang compiler (gcc support pending, too complicated to explain here). Then, when you want to compile something, you stitch stencils together, fill in the holes, and jump straight into your brand new “compiled” function.

And this is it. This is the magic of copy-and-patch. You only copy the stencils in a new memory area, patch the holes, and voilà.
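To make the idea concrete, here is a toy illustration in Python (a sketch, not the actual pg-copyjit code): treat a stencil as machine code bytes with known hole offsets; copying and patching is then trivial byte manipulation.

```python
import struct

def copy_and_patch(stencil: bytes, holes: list[int], value: int) -> bytearray:
    """Copy a stencil and fill each 8-byte hole with `value`."""
    code = bytearray(stencil)  # the "copy" step
    for offset in holes:
        struct.pack_into("<Q", code, offset, value)  # the "patch" step
    return code

# A real x86-64 sequence with an 8-byte hole at offset 2:
# movabs rax, <imm64>; jmp rax
stencil = b"\x48\xb8" + b"\x00" * 8 + b"\xff\xe0"
patched = copy_and_patch(stencil, [2], 0x7F0012345678)
```

A real JIT would then place the patched bytes in an executable memory area (mmap with PROT_EXEC) and jump into them; the sketch stops before that step.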

Of course, you can go further. You can figure out what computation can be done at compilation time, you can split loops in several stencils to unroll them, you can merge several stencils together to optimize them in one go (creating kind of meta-stencils…)

This paper caught the eye of the Faster CPython team, who implemented the technique in CPython 3.13, and this is when more people (including me) discovered it.

Bringing copy-and-patch to PostgreSQL

So, what does it take to build a new JIT engine in PostgreSQL? Fortunately, not that much, otherwise I would likely not be blogging about this.

When JIT compilation was introduced, it was suggested on hackers to make LLVM a plugin, allowing future extensions to bring other JIT compilers. Back then, I was quite skeptical of this idea (but never expressed that opinion, I did not want to be wrong later), and it turned out I proved myself wrong… The interface is really simple: your .so only needs to provide a single _PG_jit_provider_init function, and in this function initialize three callbacks, named compile_expr, release_context and reset_after_error. The main one is obviously compile_expr. You get one ExprState* parameter, a pointer to an expression, made of opcodes. Then it’s “only” a matter of compiling the opcodes together in any way you want, marking this built code as executable, and changing the evalfunc to point to this code instead of the PostgreSQL interpreter. This is easy, and you get an automatic fallback to the PostgreSQL interpreter if you encounter any opcode you’ve not implemented yet.
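For illustration, the skeleton of such a provider looks roughly like this (a sketch based on PostgreSQL’s jit.h; the callback bodies are placeholders and the function names are mine, this is not the actual pg-copyjit source):

```c
#include "postgres.h"
#include "fmgr.h"
#include "jit/jit.h"

PG_MODULE_MAGIC;

/* Returning false tells the executor to use its interpreter instead. */
static bool
copyjit_compile_expr(ExprState *state)
{
    return false;
}

static void
copyjit_release_context(JitContext *context)
{
}

static void
copyjit_reset_after_error(void)
{
}

void
_PG_jit_provider_init(JitProviderCallbacks *cb)
{
    cb->compile_expr = copyjit_compile_expr;
    cb->release_context = copyjit_release_context;
    cb->reset_after_error = copyjit_reset_after_error;
}
```

Point the jit_provider setting at the resulting .so and PostgreSQL will call compile_expr for every expression it considers worth compiling.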

The copy-and-patch algorithm (implemented with only a few small optimizations so far) is so easy I can explain it here. For each opcode, the compiler looks into the stencil collection. If the opcode has a stencil, the stencil is appended to the “built” code and each of its holes is patched with the required value. Otherwise, the compilation stops and the PostgreSQL interpreter kicks in.
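In heavily simplified, hypothetical Python, the whole algorithm fits in a few lines. Here `stencils` maps an opcode to its code bytes and hole offsets, and every hole receives the address of the current step structure:

```python
import struct

def compile_expr_steps(steps, stencils):
    """Append one stencil per opcode; return None to fall back to the interpreter.

    `steps` is a sequence of (opcode, step_address) pairs; `stencils` maps
    an opcode to (code bytes, list of hole offsets).
    """
    built = bytearray()
    for opcode, step_address in steps:
        if opcode not in stencils:
            return None  # unknown opcode: let the interpreter run the query
        code, holes = stencils[opcode]
        base = len(built)
        built += code  # copy the stencil
        for offset in holes:  # patch its holes
            struct.pack_into("<Q", built, base + offset, step_address)
    return built
```

The real compiler works on ExprEvalStep structures and machine code, but the control flow is the same: append, patch, or bail out.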

For instance, let’s consider this basic unoptimized stencil, for the opcode CONST.

Datum stencil_EEOP_CONST (struct ExprState *expression, struct ExprContext *econtext, bool *isNull)
{
    *op.resnull = op.d.constval.isnull;
    *op.resvalue = op.d.constval.value;
    NEXT_OP();
}


op is declared as extern ExprEvalStep op; (and NEXT_OP is a bit harder to explain, I won’t dig into it here). When building this into a single .o file, the compiler will leave a hole in the assembly code where the address of op will have to be inserted (using a relocation). When the stencil collection is built, this information is kept and used by the JIT compiler to insert the address of the current opcode structure and get working code.

The build process for the stencils is quite fun, not complicated, but fun. The first step is to build the stencils into a single .o file, and then extract the assembly code and relocations from this .o file into C structures that the JIT compiler will link against.

And that’s about all there is.

At first, I was extracting the assembly code manually. That way, I managed to get the three opcodes needed for SELECT 42; to work. And there was much joy. After this first proof of concept (and, I guess, some disturbed looks a few days ago at PgDay.Paris when people saw me happy with being able to run SELECT 42, which may have sounded weird), I wrote a DirtyPython (unofficial variant) script to automate the assembly code extraction, and in a few hours I implemented function calls, single table queries, more complicated data types, and introduced a few optimizations…

Current state

It works on my computer with PostgreSQL 16. It should be fine with older releases. It only supports AMD64, because that’s what I have and I cannot target everything at once. Later I will add ARM64, and I would love to have some time to add support for some interesting targets like POWER64 or S390x (these may require some compiler patches, sadly, and access to such computers, nudge nudge wink wink)…

Performance-wise, well, keeping in mind that I’ve spent almost no time optimizing it, the results are great. Code generation is done in a few hundred microseconds, making it usable even for short queries, where LLVM is simply out of the game. On a simple SELECT 42; query, running with no JIT takes 0.3 ms; with copyjit it requires 0.6 ms; LLVM with no optimizations goes to 1.6 ms; and optimizing LLVM requires 6.6 ms. Sure, LLVM can create really fast code, but the whole idea here is to quickly generate fast enough code, so comparing both tools doesn’t make much sense.

But still, you are all waiting for a benchmark, so here we go: benchmarking two queries on a simple non-indexed 90k-row table. This benchmark was done on a laptop and my trust in such a benchmark setup is moderate at best; a proper benchmark will be done later on a desktop computer without any kind of thermal envelope shenanigans. And I have not optimized my compiler: it’s still quite stupid, and there are a lot of things that can and must be done.

Query                                       Min/max (ms)    Median (ms) / stdev
select * from b; — no JIT                   10.340/14.046   10.652/0.515
select * from b; — JIT                      10.326/14.613   10.614/0.780
select i, j from b where i < 10; — no JIT    3.348/4.070     3.733/0.073
select i, j from b where i < 10; — JIT       3.210/4.701     3.519/0.107
Stupid benchmark on a laptop running non-optimized code, don’t trust these…

As you can see, even in its current unfinished state, as soon as there is CPU work to do (here, the where clause), performance relative to the interpreter gets better. It’s only logical, and what is important here is that even though the JIT is an extra, slightly time-consuming step, it takes so little time that even these queries can go a few percent faster.

Note that even though I’ve implemented only a small handful of opcodes, I can run any query on my server: the JIT engine will just complain loudly about unsupported opcodes and let the interpreter run the query…

For the more curious, the code is dumped on GitHub. I say dumped because I focus only on the code, not on the clarity of my git history nor on wrapping it in a nice paper with flying colors and pretty flowers; that’s what you do when the code is done, and this one isn’t yet… If you want to build it, the build-stencils.sh file must be run manually first. But again, I do not document it yet because I simply cannot provide any support for the code in its current state.


This is a proof of concept. I’ve not worked on making it easy to build, or on making it possible to package… The build scripts are Debian- and PostgreSQL 16-specific. And, well, to be honest, at this point I don’t care much and it does not trouble me: my focus is on implementing more opcodes and searching for optimizations.

I really hope I will reach a point where I can safely package this and deploy it on my production servers. This way, I’ll keep using the LLVM JIT on the server that can use it (a GIS server where queries are worth the optimization) and use this JIT on my web-application databases, where short query time is a must have, and the LLVM optimizations end up being counter-productive.

I am also dead serious on porting this to other architectures. I love the old days of Alpha, Itanium, Sparc, M68k and other different architectures. I am not going to use this kind of system, but I miss the diversity, and I really don’t want to be a part of the monoculture issue here.


First, huge thanks to my current day-job employer, Entr’ouvert. We are a small French SaaS company, free-software focused, and my colleagues simply let me toy with this between tickets and other DBA or sysadmin tasks.

I would like to thank my DBA friends for supporting me and motivating me into doing this (won’t give their names, they know who they are). BTW: use PoWA, great tool, tell your friends…

Also, quick question: they suggest I should go to PGConf.dev to show this, but it’s too late for the schedule, and since I live in France I did not intend to go there. If you think it’s important or worth it, please, please say so (comments below, or my email is p@this.domain), otherwise see you at future European PG events 🙂

This “month” in Calligra #1


In the past years, Calligra has not been very active. For a few months now, we have been trying to improve the situation and come back. We need great office suites. We need components we can reuse in our applications. We need a Plasma Mobile document viewer. This is what Calligra can be, this is what Calligra will be.

In order to show what is happening in the project, we will try to write monthly activity reports. Since this is the first one, it will cover the changes of the whole year.

The whole suite received a huge code modernization. We were still using old constructs (especially old-style connect) that are slower or less safe than the new ones. Thanks to clazy and patience, a lot of these are gone now. We also started upgrading our minimum requirements in order to anticipate the future Qt 6 migration, with further upgrades in the pipeline. As a side effect (and also some additional work), the compiler is complaining much less than before.
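As a reminder of what that migration looks like (a generic Qt example, not actual Calligra code): the old string-based connect is only checked at runtime, while the new pointer-to-member form is checked by the compiler.

```cpp
// Old style: signal and slot are matched as strings at runtime;
// a typo is only detected when the connect silently fails.
connect(slider, SIGNAL(valueChanged(int)), spinBox, SLOT(setValue(int)));

// New style: checked at compile time, and slightly faster to connect.
connect(slider, &QSlider::valueChanged, spinBox, &QSpinBox::setValue);
```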

We also have some more specific changes in the various suite components, listed below.

File formats:

  • fix missing background in docx files (#239200)
  • support encodings other than UTF-8/16 in Word file metadata (#406014)


  • fix duplicated actions (#426801)


  • fix underline/strikethrough being painted without accounting for DPI
  • crash fixes when being headless
  • fix a potential crash on exit
  • improve the statistics logic and split it between widget and computation
  • huge performance improvement when opening and saving medium to huge documents (tests performed with Open Document specification, over 800 pages)


  • fix a potential crash on exit

Okular plugin:

  • fix a massive memory leak (that likely made documents of 10+ pages unusable)


  • improve design
  • depend on Qt 5.15 to simplify the QML code

We will try to release a 3.3 version soon so you can enjoy these changes.

Wal-e and the gotcha, how I nearly lost 50% of my backups…

We all know, often from bad experience, the classic “an untested backup is not a backup”. This small story will illustrate that saying, highlight a big wal-e gotcha, and remind you to test your backups.

TL;DR: wal-e has a “problem” with multiple clusters.

Reminder: what is a PostgreSQL backup?

This question seems silly, but it is indeed very important and worth repeating the answers.

A backup is a known good copy of a given dataset.

Streaming replication can save you from a server burning (if your replica is not in the same rack), a datacenter room burning (if your replica is not in the same room, lucky you) or a datacenter burning (if your replica is in a different datacenter, you’re richer and luckier than most). But it won’t save you from yourself or your dear human companions, who can and will make mistakes.

You could also do a full pg_dump every day and archive it. It will allow you to restore to a good state, but you could end up losing a lot of good data added since your last backup. (But keep doing logical backups, they have many benefits.)

Instead, archiving the WAL, along with regular base backups, will give you the full power of PITR and a safe, “locked” backup of your whole system that you should be able to rely on. Someone drops a table? You can go back in time to just before the table was dropped…

Since this can be complicated to maintain, especially with tricky storage systems like object stores (Amazon S3 for instance), there are a lot of tools to automate this setup. Here, I will focus on wal-e.
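The glue between PostgreSQL and wal-e is typically just a couple of lines in postgresql.conf (a sketch; the envdir path is an example, adjust it to your layout):

```
# postgresql.conf: continuous WAL archiving through wal-e
wal_level = replica            # 'archive' or 'hot_standby' on older releases
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
```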

It was a sunny Friday

In my current company, we manage quite a few PostgreSQL clusters. Most systems (Debian, of course) have two clusters, with different PostgreSQL versions. Everything is backed up, of course, using wal-e to S3 (for reasons out of my jurisdiction), plus logical backups.

Since I had a busy week, I decided to spend my Friday on trivial tasks. We had recently migrated one application to PostgreSQL 9.6, before migrating every application to it, and I had not double-checked that every backup was set up. So, before migrating more applications, a check was due. And indeed, a non-critical test environment was not backed up. Not a critical mistake, but a mistake nevertheless.

So I went on the server and launched the wal-e setup (remember, if you have more than one machine, automate things) for the 9.6 cluster, then checked that the WAL was being sent… All was fine. But WALs without a base backup are useless, so I decided to immediately push a base backup.

Patience is not always a virtue

And I launched the wal-e backup-push command. This command requires a few ugly environment variables for the S3 configuration, and the PGDATA folder of the cluster. wal-e has the huge problem of not being chatty. Not at all. So for at least 15 seconds it said nothing, and I saw no new object on S3. So… why not check what it was doing?

I did a small ps faux on the server. And I saw that wal-e had launched an SQL command using psql, a COPY, and was waiting for it… And I saw a COPY operation on my 9.4 cluster, and none on the 9.6 cluster.

“Silly me, I made a typo!”

Ctrl+C, and… wait… No, I did not ? What is wrong here ?

The first sweat drop happened.

I checked my PGDATA folder. /var/lib/postgresql/9.6/main could not be the path of my 9.4 cluster, no way.

So I looked at the psql command launched by wal-e, and I did not see any flag to specify the port of my 9.6 cluster.

Did I forget a configuration file ?

No. If you look at the whole wal-e documentation, it never specifies the port to use to connect to the cluster. Never. And we never specified it, since that information is right there in PGDATA.

Wait, does this mean…

Now the red panic button was armed and ready. I quickly opened the Amazon S3 console and went to our production backups, looking for the backups of our new 9.6 clusters, running on non-standard ports. And the extended_version.txt file scared me:

PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu...

And then I pushed on the panic button…

Our restore tests did not yet include these shiny new 9.6 clusters, so we got absolutely no warning about this. So remember: test your backups, starting right after you create the cluster, when everything should be up and running…

Fixing the issue

I don’t know how you set up your wal-e backups, but we use envdir and a per-cluster folder containing the wal-e settings, which are then pushed into the environment of the wal-e commands. The really easy fix was simply to write the cluster port in a new PGPORT file in each cluster folder, drop the old useless backups (and WALs), and push new backups. And as soon as possible, because this implied that production did not have all the backups it needed. (Don’t rely on only one backup system.)
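To make that concrete, such a layout can look like this (directory names and port are examples, not our actual paths):

```
/etc/wal-e.d/9.6-main/env/
    AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY
    WALE_S3_PREFIX      # one distinct S3 prefix per cluster!
    PGPORT              # contains 5433, the port of the 9.6 cluster

# then, for each cluster:
# envdir /etc/wal-e.d/9.6-main/env wal-e backup-push /var/lib/postgresql/9.6/main
```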

After a few hours, the panic button was disarmed, and it was time for a check of wal-e itself.

Wal-e, what are you doing, buddy?

I went to the wal-e documentation (its GitHub page) and checked again. No reference to the PostgreSQL port anywhere.

I downloaded the latest source code and grepped… No reference there either! The psql launching code was simply not able to specify a port to connect to… This is a really, really big gotcha. It could have read the PostgreSQL port from postmaster.pid in PGDATA, but it did not.

So I wrote a patch and sent it upstream.


I hope it will be merged soon, or that the documentation will be modified to show this trap waiting for its next victim.

Until then, I wrote this blog post to warn as many people as possible. Test your backups for every cluster, and if you have several clusters on the same machine with wal-e running, make sure you configured it properly.

Using Row Level Security with a Django application


Row Level Security is one of the lesser-known great additions to PostgreSQL 9.5. The documentation about it is of PostgreSQL documentation quality, of course, but there is a lack of online examples and usages that do not rely on distinct PostgreSQL users for multi-tenant websites. I recently built a proof of concept using RLS to secure access to a specific table in a multi-site Django application, and I will explain here how to do it. It is a very simple trick, but it can improve your security a lot, especially in bigger applications where auditing the whole source code can become tedious.

What is Row Level Security?

Row Level Security, aka RLS, is a great PostgreSQL feature that allows you, for each (user, table, action) combination, to specify additional rules restricting access to rows. The PostgreSQL documentation, and most online documentation, demonstrates the feature with SQL users. A table with an “owner” column would be automatically filtered on the criterion owner = current_user. It’s very efficient (the criterion is not applied after the query but pushed into it, and thus can use indexes if needed), and for a very low maintenance cost you get security pushed down to the lowest level of your stack.

With RLS, suddenly customer A can no longer steal data from customer B, even if there is a SQL injection in your server: that would also require a security issue or misconfiguration on the PostgreSQL side. Of course, you should keep checks in your code, better safe than sorry, but it’s the extra security belt that could save you from the next “data leak” headlines 🙂

Limitation of the common RLS demonstrations and workaround

Usually, RLS demos apply restrictions per SQL user. It is by far the easiest and safest way to do it, because it is very hard to alter the data used to apply the rules. But it’s not going to work very well for most, if not all, web applications. Web applications seldom use a different SQL user per authenticated web user. Sometimes one SQL user is used per “site” (we will see that a bit later), but even that is not the most common deployment strategy. Web applications, with their very short connection lifetimes, rely on poolers like pgbouncer to keep connections to the SQL server alive and save a lot of time on each HTTP request. Doing that for tens or hundreds of different SQL users would be complicated and likely expensive.

But there is absolutely nothing in PostgreSQL preventing you from using any function in a policy. We could even have a policy applying restrictions based on the time of day or the day of the week (a “no work can be done outside work hours” policy). The real “complexity” is forwarding the needed information from your application to the policy. And PostgreSQL has a solution for that too (new motto of the day?): SET SESSION…
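The mechanism is easy to try by hand: any setting name with a dotted prefix can be set per session and read back later with current_setting (the variable name here is just an example):

```sql
-- The application forwards its context at the start of the session…
SET SESSION "myapp.current_site" = '42';
-- …and any later query or policy in the same session can read it back:
SELECT current_setting('myapp.current_site')::integer;
```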

So let’s build a small Django application and combine all of that to get our security belt against code mistakes.

Our first policy: business_hours

First, let’s see how to create a simple RLS policy, based on my previous business hours idea.

-- First step, create a table...
test=# CREATE TABLE work_queue(id serial primary key, task text);
test=# GRANT SELECT ON work_queue TO employee;
test=# INSERT INTO work_queue(task) VALUES ('long and complicated task');

We are superuser here (notice the # in the prompt) and we put a new task in our work_queue. And our employee can read the queue. But he could do that even outside business hours, and we don’t want that. So let’s write a policy and enable it:

test=# CREATE POLICY business_hours ON work_queue FOR ALL TO employee USING (extract('hour' from now()) between 8 and 20);

Policies are quite readable: on table work_queue, for all operations (SELECT, INSERT, UPDATE, DELETE), for user employee, filter rows using the time of day. Note that policies, of course, don’t apply to superusers…
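One gotcha worth spelling out: creating a policy is not enough. Row level security must also be explicitly enabled on the table, otherwise the policy is simply not applied:

```sql
test=# ALTER TABLE work_queue ENABLE ROW LEVEL SECURITY;
```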

And now, if our employee selects from the table:

test=> SELECT NOW(); SELECT * FROM work_queue;
              now
-------------------------------
 2017-03-21 23:15:28.560327+01
(1 row)

 id | task
----+------
(0 rows)

And if he comes back tomorrow:

test=> SELECT NOW(); SELECT * FROM work_queue;
              now
-------------------------------
 2017-03-22 10:20:53.382093+01
(1 row)

 id |           task
----+---------------------------
  1 | long and complicated task
(1 row)

We wrote our first security policy, congrats! Now, let’s start working with Django.

Django and RLS

Understanding the need

In the Django world, a single server, a single process, can handle several “sites”. Let’s say you, tinycorp, are selling SaaS software to your customers megacorp1 and megacorp2. They both want their logo to appear on the login page. So you can’t use a simple app.tinycorp.com domain, otherwise you would have no way to customize the look. You will instead use one vhost per customer, megacorp1.tinycorp.com and megacorp2.tinycorp.com. That’s what Django calls a site.

So here, we will work on preventing a leaky page exploited by an attacker against megacorp2 from leaking data belonging to megacorp1 too.

What to inject, and how?

Django stores in memory, for each request, the current site. This is done in the first steps after a browser connects to the server. It is a simple integer referencing the django_site table. So we have to issue the following SQL call for each request:

SET SESSION django.site TO $id$;

How can we do that without altering every view in the application? It’s easy, most Python/web developers will know the answer: we must write a middleware.

In the Python world, a middleware is a function (or a class defining what is needed to behave like a function) that is called for each request. The request and the “next” middleware are given as parameters, and the response must be returned. For instance, you could have an AuthMiddleware, a CSRFProtectionMiddleware, a WafMiddleware and then your application. This is a very nice mechanism, efficient and simple.

Since this post is getting quite long, I will just post a working Django middleware here. The code should be obvious to most Python developers.

from django.contrib.sites.shortcuts import get_current_site
from django.db import connection


class RlsMiddleware(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        current_site = get_current_site(request)
        with connection.cursor() as cursor:
            # Forward the current site id to PostgreSQL for the RLS policies
            cursor.execute('SET SESSION "django.site" = %s;' % current_site.id)
        response = self.get_response(request)
        with connection.cursor() as cursor:
            # Reset to an invalid site id once the request is done
            cursor.execute('SET SESSION "django.site" = -1;')
        return response

And now, we only have to enable RLS on the tables and restrict the Django SQL user.

CREATE POLICY "check_django_site" ON my_table
                FOR ALL
                TO "django-user"
                USING ((site_id = (current_setting('django.site'))::integer));

And… that’s all, folks! Nothing else is needed; you only have to apply such a policy to each table.
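And the same gotcha as with our first policy applies here: each table also needs row level security to be switched on, and if "django-user" happens to own the table, forced as well, since table owners bypass RLS by default.

```sql
ALTER TABLE my_table ENABLE ROW LEVEL SECURITY;
ALTER TABLE my_table FORCE ROW LEVEL SECURITY;  -- only needed if "django-user" owns the table
```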

Drawbacks and further improvements…

The main drawback of this solution, compared to restrictions based on the SQL user, is that you trust your SQL user to honestly request the proper site. It is far weaker security than distinct SQL users, but like any security measure, you must do a threat analysis. The threat here is poorly coded views forgetting a check, and some partial SQL injections. Full SQL injections, allowing any SQL query, could still hurt you badly, since they could change the django.site variable. But considering the low cost of that extra security… why not?

Another drawback is that you must have a separate SQL user for your Django console commands, celery tasks, or whatever you do in your Django application without user interaction. But it’s almost always better to do that anyway, so it’s more like pushing a good practice down your throat 🙂

If you want, you can dig deeper. Much, much deeper. You could set a django.user variable containing the current user id, and push most access rules down into the database. Again, you are protecting yourself against some programming or logic errors; a full SQL injection could work around this easily…

Thank you all for reading this long explanation, and I hope it will be of use to someone… Especially if I have an account on your website: I don’t want to be in a pwned list, please!

Modern C++ stored procedure wrapper

In an application following an intelligent database design, calls to stored procedures happen very often and thus must be done with as little boilerplate as possible.
Usually, frameworks abstracting calls to the database are just ORMs, completely ignoring stored procedures, making the database stupid and moving all the logic into the application.

A year ago, I read on Planet PostgreSQL (http://tech.zalando.com/posts/zalando-stored-procedure-wrapper-part-i.html) about a simple system built in Java using the Java annotation and reflection system.
A stored procedure can be called using a few lines of interface:

interface BasicExample {
    long computeProduct(@SProcParam int a, @SProcParam int b);
}

Recently, I started planning the development, in my spare time, of a C++/Qt5 application using a PostgreSQL database, and I realized I had no easy way to call stored procedures. Doing a proper database for the application would thus be a huge pain from a C++ point of view, with database calls messily spread through the application… Since my C++ skills needed an update (C++11 and C++14 have been out in the wild for a few years, and I never had an opportunity to use the new features they bring), I figured this would be the best time to do it.

C++ does not have (yet… C++17, I have faith in you) the attributes and introspection used in the stored procedure wrapper of Zalando. Instead, C++ has a great compile-time processing system: templates. Templates are not just meant for implementing generics; they are a Turing-complete meta-programming language. You can really do a lot of things using them. A lot. For instance, a tuple type working just like a Python tuple, to store a few values of different types side by side. Or a compile-time mathematical function. C++11 and C++14 brought variadic templates, auto and a few other tools that seemed very powerful and could yield great solutions for my problem.

After a few hours of hacking, I had my first interesting results:

SqlBindingMapper<QDateTime> get_now("now");
qDebug() << get_now();

With a few lines to implement the database connection (using QtSql so far, because I plan to write Qt5 applications with it), these two lines are enough to call NOW() in the database and map the result to a QDateTime, the Qt date-time representation object.

Of course, returning a single value from an argument-less function is not really interesting. Let’s sum two numbers.

SqlBindingMapper<int, int, int> summer("sum");
qDebug() << summer(1, 2);

And this will display 3.

So that’s one returned record with one field. What about calling generate_series?

SqlBindingMapper<QList<int>, int, int> generateSeries("generate_series");
for (auto i: generateSeries(1, 10))
    qDebug() << i;

Now, what about the following composite type:

CREATE TYPE card AS (value integer, suit text);
CREATE FUNCTION test_card (a card ) RETURNS integer LANGUAGE SQL AS 
$function$ SELECT $1.value; $function$;

Calling that function from C++ only requires you to use std::tuple:

SqlBindingMapper<int, std::tuple<int, QString>> testCard("test_card");
int value = testCard(std::make_tuple(1, "test"));
qDebug() << value;

Qt QObject introspection is also supported, and during FOSDEM I hacked in support for arrays (OK, Qt vectors, but STL vectors are just as easy to support):

SqlBindingMapper<int, QVector<int>, int> array_length("array_length");
QVector<int> data;
data << 1 << 2;
qDebug() << "Our dims are :" << array_length(data, 1);


How does all this work behind the scenes? SqlBindingMapper is a template class that takes a variadic number of parameters, the first one being the return type. It then implements operator(), returning the specified return type and taking the specified parameters. A query is then built (at runtime so far, but this could evolve) with placeholders and the appropriate casting, still using templates. The placeholders are then filled, and when the result comes back from the database, a SqlQueryResultMapper<T> instance maps the rows to the required objects.

So far, the code is a crude hack, done for recreational purposes and my own education. If there is any interest in a proper explanation of the code, or if people want to use it in real, serious projects, I would of course be happy to help or write more documentation.

Right now, the code packaging sucks. It's only three headers you have to copy from the example on GitHub: https://github.com/pinaraf/StoredProq/ (pg_types.h, queryresult.h and sqlmapper.h)

I hope you had fun reading this!

Review – “PostgreSQL Server Programming”

Two months ago, I reviewed the Packt Publishing book “Instant PostgreSQL Starter”. And two weeks ago, Packt Publishing asked me to review their latest PostgreSQL book, “PostgreSQL Server Programming”.

I was looking forward to reading that book. Like a lot of PostgreSQL users, I'm quite fond of using as many server-side features and functions as possible. And at work, when facing people long used to extremely simple queries (even a join seems weird to some of them), it's quite difficult to teach them that no, a SQL server is not a stupid spreadsheet. I was thus expecting this book to help me teach them other ways.

A simple warning: this book is not for beginners. It is aimed at moderate to advanced PostgreSQL users, people with some SQL knowledge already. It's also not a SQL reference covering every corner of the language. There are no CTEs in this book, for instance.

Now that we know what this book is not about… Let’s have a look at the table of contents.

The book is structured in 10 chapters.

PL/pgSQL programming basics

The first four chapters teach you the basics of server-side programming (declaring variables in PL/pgSQL, returning more than a basic type…)

It begins with a simple introduction to the principles behind server-side programming: the way one can view PostgreSQL as a framework, examples where server-side code shines, some teasing about triggers, and the key reasons for server-side programming… Then an entire chapter covers the reasons why the author chose PostgreSQL in the first place and the features it brings to you from a programmer's point of view… These 40 pages are welcome for the sheer quantity of information they bring.

Then the fun begins with «Your First PL/pgSQL Function». We learn the basics of writing PostgreSQL functions: passing parameters, returning values, loops, conditional structures, returning a set… Whatever you need to write your functions, you'll find it here.

And the last «pure» PL/pgSQL chapter teaches you some tricks about returning structured data, arrays, using IN/OUT/INOUT parameters, cursors, and even a few words about the XML data type.

Triggers, debugging…

The two following chapters cover two must-haves for server-side programming.

Triggers are probably the key feature of server-side programming, the one feature you really cannot replicate any other way. Not much to say here. The chapter covers the classical trigger use cases: auditing, disallowing operations, modifying an update before it happens.

And the second chapter covers debugging, because, like any real programmer, you will need it. It explains the classical debugging through «prints», and a real debugger, integrated with pgAdmin. Quite a nice tool I did not know about.

Other languages

The book has an entire chapter about unrestricted languages, especially PL/Python. It is really nice because it uncovers a whole new set of features, PL/Python being an unrestricted language, while remaining simple to use and understand. A good choice for this book.

And a second chapter goes deeper into PostgreSQL with functions written in C. It explains, of course, why you would decide to write such functions (performance, some features…), and what you need to know to not get lost amid the PostgreSQL code.

Final chapters

The last two chapters cover two more advanced cases. The first one teaches you how to use PL/Proxy to improve your scalability by splitting big tables across servers, using a simple chat system as an example. This is the natural evolution: after moving more logic into the server, once one server can no longer handle the data load, you have to split up, and PL/Proxy is an easy way to distribute the load across servers…

And the final chapter covers writing extensions. Extensions are by far the best way to distribute your PostgreSQL add-ons. Thanks to the rest of this book you can learn how to write new PostgreSQL features; this chapter tells you how to distribute them.


To make this quick: this book met my expectations. It's a handy book to have around when using PL/pgSQL and PL/Python, and it can help convince people to use server-side programming. It's not a complete reference and does not cover advanced queries (that would require an entire book, maybe more than one), but it's more than enough to get started, and it will stay on your desk for a while when you write functions…

My first PostgreSQL patch/hack

After almost two years of lobbying at work to promote PostgreSQL instead of MySQL, after two years of tears over poor DB typing, lack of advanced SQL features, and traps in what seemed like basic needs (select * from myview where a=5 triggers a full scan on the view result if myview is not «simple»)… we are finally starting to deploy some PostgreSQL databases.

I wrote documentation and arguments for my dear sysadmin friends. Among my «it's simpler» arguments, I promoted application_name as a way to quickly know who is hurting your SQL server that badly, along with the oh-so-simple ps faux to know who is doing what on your server… That's the simplest monitoring possible, but when you're in a hurry at 3AM, it can help… Sadly, the application_name is not displayed in the ps output. I was looking for a tiny thing to patch in PostgreSQL in order to discover its code a bit; this seemed like a good opportunity.

And let me also share how I did the patch, to highlight how easy patching PostgreSQL is 🙂 (the patch was written together with this blog entry)


0) System setup

First things first, you need a minimal development environment for PostgreSQL. Your tools: your favorite compiler (GCC), make, git, $EDITOR, ack-grep, and a bit of patience if your CPU is slow (PostgreSQL compiles quite quickly).

~$ cd ~/projects
~/projects$ git clone git://git.postgresql.org/git/postgresql.git
~/projects$ cd postgresql
~/projects/postgresql$ git checkout -b application-name-in-proctitle

1) Exploring the code

We want to modify how the proctitle is constructed. Under BSD systems, it's done through the setproctitle call, which is easier to find than the ARGV games under Linux. So… let's search for it.

~/projects/postgresql$ ack-grep --type cc setproctitle
42: * use the function setproctitle(const char *, ...)
265: * apparently setproctitle() already adds a `progname:' prefix to the ps
329: setproctitle("%s", ps_buffer);

422:/* Define to 1 if you have the `setproctitle’ function. */

Well, only one call… this is gonna be simple; let's look at that function. I stripped out comments and ifdefs.

set_ps_display(const char *activity, bool force)
/* A few exit cases.... */
/* .................... */

/* Update ps_buffer to contain both fixed part and activity */
strlcpy(ps_buffer + ps_buffer_fixed_size, activity,
        ps_buffer_size - ps_buffer_fixed_size);

ps_buffer_cur_len = strlen(ps_buffer);

setproctitle("%s", ps_buffer);

Well, this is quite simple: the activity is appended to a buffer containing the fixed portion of the proc title. To prevent useless memory allocations, the same buffer is reused on each call.
Since our application_name can change during a connection, we will need it in this function so we can dynamically insert it before the activity when available.
But where is application_name available?

~/projects/postgresql$ ack-grep --type cc application_name

This gives us more results, but there is an obvious one:
228:extern char *application_name;

It's a simple per-process global variable…
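Before diving into the C code, the plan can be summed up in a toy model (plain C++, not the actual ps_status.c code, and the helper name is made up): the title is the fixed part, then the dynamic application_name when set, then the current activity.

```cpp
#include <string>

// Toy model of the intended proc title layout (not PostgreSQL code):
// fixed part (set once at backend start) + optional application_name
// (can change during the session) + current activity.
std::string build_proc_title(const std::string &fixed_part,
                             const std::string &application_name,
                             const std::string &activity) {
    std::string title = fixed_part;
    if (!application_name.empty())
        title += application_name + " ";  // the new dynamic part
    title += activity;                    // e.g. "SELECT" or "idle"
    return title;
}
```

For instance, build_proc_title("postgres: snoopy postgres [local] ", "psql", "SELECT") produces exactly the ps output we are aiming for. In the real patch this translates into a second strlcpy into ps_buffer plus some length bookkeeping.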

2) Patching…

So our patch is very straightforward now, even for C beginners.

~/projects/postgresql$ $EDITOR src/backend/utils/misc/ps_status.c

The main part we will change is around here:
strlcpy(ps_buffer + ps_buffer_fixed_size, application_name,
ps_buffer_size - ps_buffer_fixed_size);
ps_buffer_dynamic_len = strlen(application_name);

~/projects/postgresql$ git diff

[ too long for a blog, http://pastebin.com/vF9S5CMB ]

It’s not the cleanest patch, but it’s a start.

3) Testing

Compiling and testing PostgreSQL is simple, and can be done along another running instance.

~/projects/postgresql$ ./configure --prefix $HOME/bin/psql-master --enable-debug
~/projects/postgresql$ make -j 6
~/projects/postgresql$ make -j 3 install

Et voilà…

Now that your PostgreSQL has been installed in ~/bin/psql-master, let’s create an instance and start it.
~/projects/postgresql$ cd ~/bin/psql-master/bin
~/bin/psql-master/bin$ ./initdb ../data

If you have a PostgreSQL running on port 5432, you’ll have to change the port in the ../data/postgresql.conf file. I changed it to 5435.
And now…

~/bin/psql-master/bin$ ./postgres -D ../data
LOG: database system was shut down at 2013-06-17 23:51:29 CEST
LOG: database system is ready to accept connections
LOG: autovacuum launcher started

Let's test it, in another console:

~/bin/psql-master/bin$ ./psql -p 5435
psql (9.4devel)
Type "help" for help.
postgres=# select pg_sleep(10);

And while the sleep is running, we can do a ps :

snoopy 14791 0.0 0.0 164128 4056 ? Ss 23:53 0:00 | \_ postgres: snoopy postgres [local] psql SELECT

And if I change the application name…

snoopy 14791 0.0 0.0 164128 4320 ? Ss 23:53 0:00 | \_ postgres: snoopy postgres [local] my_buggy_script.pl SELECT

Our first patch is working!

But it's not ready to be submitted yet… People may be parsing the current ps output with awk/cut to extract data. Better not to break their scripts: let's add a configuration option.

4) Back to code…

First, you must commit your current patch. Git is your friend, use it 🙂

~/projects/postgresql$ git add src/backend/utils/misc/ps_status.c
~/projects/postgresql$ git commit -m "Display the application_name in proctitle before the current activity"
[application-name-in-proctitle 28a6b94] Display the application_name in proctitle before the current activity
1 file changed, 17 insertions(+), 2 deletions(-)

Now, we can look at adding a new configuration option.
We know that a configuration option exists to disable update_process_title entirely. We can just look at how it works, and copy it.

Here is the code in set_ps_display :

/* update_process_title=off disables updates, unless force = true */
if (!force && !update_process_title)

And where is that variable used/defined?

~/projects/postgresql$ ack-grep --type cc update_process_title
1039: {"update_process_title", PGC_SUSET, STATS_COLLECTOR,
1043: &update_process_title,

35:bool update_process_title = true;
295: /* update_process_title=off disables updates, unless force = true */
296: if (!force && !update_process_title)

a few if (update_process_title)

15:extern bool update_process_title;

What is that guc thingy? The file's heading comment says:

* guc.c
* Support for grand unified configuration scheme, including SET
* command, configuration file, and command line options.

This is the file we are looking for.

Well, let’s just copy the way the update_process_title setting is created…

This is really straightforward, nothing interesting here. I named the setting application_in_process_title…

~/projects/postgresql$ git diff --stat src
src/backend/utils/misc/guc.c | 10 ++++++++++
src/backend/utils/misc/ps_status.c | 4 ++--
src/include/utils/ps_status.h | 2 ++
3 files changed, 14 insertions(+), 2 deletions(-)

The full patch is available here: http://pastebin.com/eg0NZGGr

And we have our configuration setting, allowing us to switch between having application_name in the process title and the current behaviour.

5) Conclusion

In less than one hour, with no prior experience of the PostgreSQL code, it's really easy to hack up a simple patch like this one.
After suffering nightmares in the MySQL code, it's a real pleasure to hack on PostgreSQL: the code is simple, commented, logical…

Now maybe I can start the review process for that patch 🙂

Review – “Instant PostgreSQL Starter”

Thanks to Shaun M. Thomas, I was offered a digital copy of the “Instant PostgreSQL Backup” book from Packt Publishing, and was provided with the “Instant PostgreSQL Starter” book to review. Considering my current work situation, doing a lot of PostgreSQL advocacy and basic teaching, I was interested in reviewing this one…

Like the Instant collection motto says, it's short and fast. I kind of disagree with the “focused” part for this one, but it's perfectly fine considering the aim of the book.

Years ago, when I was a kid, I discovered databases with a tiny MySQL-oriented book. It taught you the basics: how to install, basic SQL queries, some rudimentary PHP integration. This book reads a bit like its PostgreSQL-based counterpart. It's a quick trip through installation, basic manipulation, and the (controversial) “Top 9 features you need to know about”. And that's exactly the kind of book we need.

So, what's inside? I'd say exactly what you need to get kick-started with PostgreSQL.

The installation part is straightforward: download, click, done. Then you can launch pgAdmin, create a user and a database, and you're set. Next time someone tells you PostgreSQL ain't easy to install, show them this book.

The second part is a fast SQL discovery, covering a few PostgreSQL niceties. It's damn simple: Create, Read, Update, Delete. You won't learn about indexes, functions, or advanced queries here. For someone discovering SQL, it's just what needs to be known to get started…

The last part, “Top 9 features you need to know about”, is a bit harder to describe. PostgreSQL is an RDBMS with batteries included; choosing nine features must have been really hard for the author, and nobody can be blamed for not picking this or that feature you happen to like: too much choice… The author spends some time on pgcrypto, the RETURNING clause with serial, hstore, XML, even recursive queries… This is, from my point of view, the troublesome part of the book: mentioning all these features means introducing complicated SQL queries. I would never teach someone recursive queries before teaching them joins; it's like going from elementary school to university in forty pages. But the upside is that an open-minded and curious reader will get a great teaser and nice tracks to follow to increase their knowledge of PostgreSQL. Mentioning hstore is really cool; that's one of those PostgreSQL features one has to know…


To sum up my opinion of this book: it's a nice book for beginners, especially considering the current NoSQL movement and people forgetting about SQL and databases. It's a bit sad we don't have more books like this one about PostgreSQL. I really hope Packt Publishing will try to build a complete collection, from introduction (this book) to really advanced needs (PostgreSQL High Performance comes to mind), through advanced SQL queries, administration tips, and so on… They have a book about PostgreSQL server programming planned for next month; I'm really looking forward to it.

my-post-engine : a postgres-engine for MySQL


For a while, I've been trying to push some people toward PostgreSQL. It covers their use cases better than MySQL, should provide greater performance, and should help me keep the hair on my head.

But most of the time they are reluctant unless they can really try it. Sadly, when you have huge databases and huge pieces of code depending on MySQL behaviour, you're in a dead end.

So I tried to find a way for this MySQL-dependent code to speak to a PostgreSQL database seamlessly, in order to ease possible migrations…

What is it ?

After many failed attempts at translating MySQL queries to PostgreSQL syntax on the fly, I decided that only MySQL could understand MySQL queries, and hence I took the problem from a different angle.

MySQL has a notion of storage engine. It allows you to decide how your tables are stored.

The best-known storage engines are MyISAM (the “famous”, transaction-less, lock-heavy engine) and InnoDB (with transaction support, but say farewell to full-text and spatial indexing)…
Other included but lesser-known engines are Blackhole (aka /dev/null) and Federated, where your MySQL server exposes tables coming from other MySQL servers.

And that’s when the crazy idea came : why couldn’t I write a PostgreSQL engine for MySQL ?

How does it work ?

Well, the storage API of MySQL is quite simple: read a row, search in an index, delete a row, modify a row, insert a row… It's really easy to implement with a few simple SQL queries. But… it cannot be done with standard-compliant SQL queries alone. You need to be able to identify each row of a table with a guaranteed unique identifier. Most tables have a primary key, perfect for that. But you don't always have that luck. And when you don't, you have to start relying on other tricks, specific to your database engine. MySQL supports LIMIT on DELETE/UPDATE, even without any ORDER BY clause; PostgreSQL doesn't. Instead, we can rely on hidden, system-specific columns like the ctid in PostgreSQL… So your engine cannot be truly generic.
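To illustrate the ctid trick, here is a sketch of the statement generation (plain C++, hypothetical helper names, not the actual engine code): the engine fetches ctid alongside each row, then uses it as the row identifier in single-row statements. One caveat worth noting: a row's ctid changes when the row is updated, so it is only reliable within the current scan.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Sketch: per-row statements for a table with no primary key, using
// PostgreSQL's hidden ctid column as the row identifier.
std::string delete_row_query(const std::string &table) {
    // The engine scans with "SELECT ctid, * FROM table" and remembers
    // the ctid of the current row, so deleting it is a one-row statement.
    return "DELETE FROM " + table + " WHERE ctid = $1";
}

std::string update_row_query(const std::string &table,
                             const std::vector<std::string> &columns) {
    // One placeholder per updated column, plus one for the row's ctid.
    std::string q = "UPDATE " + table + " SET ";
    for (std::size_t i = 0; i < columns.size(); ++i) {
        if (i > 0)
            q += ", ";
        q += columns[i] + " = $" + std::to_string(i + 1);
    }
    q += " WHERE ctid = $" + std::to_string(columns.size() + 1);
    return q;
}
```

For example, update_row_query("todo_pg", {"state"}) yields "UPDATE todo_pg SET state = $1 WHERE ctid = $2" — one prepared statement per row touched by the MySQL-side UPDATE.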

Anyway… well, using mixed tricks, the end result is quite convincing.

First step, create a simple table with the postgres engine.

CREATE TABLE `todo_pg` ( 
  `id` int(11) NOT NULL AUTO_INCREMENT, 
  `domain` varchar(255) NOT NULL, 
  `date_todo` datetime NOT NULL,  
  `state` varchar(127) NOT NULL,  
  PRIMARY KEY (`id`)
) ENGINE=POSTGRES DEFAULT CHARSET=utf8 CONNECTION='host=localhost port=5432 dbname=todo password=mypass user=mysql'

Now, you can start playing with it.

mysql> select * from todo_pg;
Empty set (0.00 sec)

mysql> insert into todo_pg (domain, date_todo, state) values ('postgresql.org', now(), 'toVisit');
Query OK, 1 row affected (1.14 sec)

mysql> select * from todo_pg;
| id | domain         | date_todo           | state   |
|  4 | postgresql.org | 2013-04-01 01:05:52 | toVisit |
1 row in set (0.00 sec)

mysql> insert into todo_pg (domain, date_todo, state) values ('mysql.org', now(), 'toVisit');
Query OK, 1 row affected (0.08 sec)

mysql> select last_insert_id();
| last_insert_id() |
|                5 |
1 row in set (0.00 sec)

mysql> select * from todo_pg;
| id | domain         | date_todo           | state   |
|  4 | postgresql.org | 2013-04-01 01:05:52 | toVisit |
|  5 | mysql.org      | 2013-04-01 01:06:04 | toVisit |
2 rows in set (0.00 sec)

mysql> update todo_pg set state='done' where domain like '%.org';
Query OK, 2 rows affected (0.19 sec)
Rows matched: 2  Changed: 2  Warnings: 0

mysql> select * from todo_pg;
| id | domain         | date_todo           | state |
|  4 | postgresql.org | 2013-04-01 01:05:52 | done  |
|  5 | mysql.org      | 2013-04-01 01:06:04 | done  |
2 rows in set (0.00 sec)

mysql> truncate todo_pg;
Query OK, 0 rows affected (0.23 sec)

mysql> select * from todo_pg;
Empty set (0.05 sec)

It ain’t no joke, that really works.

Current status

Well, the current implementation is experimental, lacks broad testing, has possible performance issues, and installation is a real pain, because MySQL supports plugins but provides neither build instructions nor the .h files you need to build them… Writing plugins for MySQL is a pity, really… Installation sucks; only distributions could provide it along with their MySQL packages :/

The code is available on Gitorious: https://gitorious.org/my-post-engine

Don't hesitate: have fun testing it. Play with it, crash it, fork it… I only hope it will be useful to someone 🙂