That's so true! Years ago I switched almost all projects to PostgreSQL. I never looked back. I use MySQL only for very legacy applications.
MySQL has so many bugs and inconsistencies; Unicode support is only one of many issues. (Unsafe GROUP BY, silently truncating concatenated text fields when the result grows too big, silently converting invalid dates to '0000-00-00', and so on ...)
In the past, MySQL had some performance advantages over PostgreSQL, but only for MyISAM, i.e. without real transaction safety. Nowadays, if you want to trade ACID for speed, you wouldn't use MySQL but some "NoSQL" database instead. So even from a performance point of view, I see no point in using MySQL for anything but legacy stuff.
I switched to sqlite. I've done my time on SQL servers. In my view, they simply have too much complexity for many scenarios. Basically, one big database server equals one big downtime when issues occur. Sharded data architecture on the other hand can design for failure and maintain graceful degraded performance, not to mention optimized resource allocation and the option of adopting vastly different security and backup policies across different parts of the datastore (eg. encryption, snapshots, etc.). Yes, you lose database-internal consistency guarantees. No, that doesn't mean you have to lose consistency.
I did that in 2006. A big win was maintenance, since downloading/uploading a copy of the database was so simple. Eventually I had performance problems when doing read-write-read-write stuff, since writing locks the entire db file.
You could argue I shouldn't be doing that, and you may be right. But I just switched back to MySQL, and the problem went away.
SQLite fixed that issue in 2010 with write-ahead logging (version 3.7). Writes go to a separate file, so there is no impact on readers of the main file. It is a little more complex than that; the details are described at https://www.sqlite.org/wal.html
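A minimal sketch of turning WAL on from Python's stdlib sqlite3 module (the mode is a property of the database file, so it persists across connections; the file path here is just a throwaway for the demo):

```python
import os
import sqlite3
import tempfile

# WAL requires a real file; in-memory databases don't support it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Switch from the default rollback journal to write-ahead logging.
# The pragma returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # -> wal

# From here on, writers append to demo.db-wal while readers keep
# using the main file, so a write no longer blocks concurrent reads.
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()
```

So the read-write-read-write pattern described above would no longer serialize on the whole database file, at least for readers.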
One nifty feature of PostgreSQL is that if you GROUP BY a,b,c,d, you need to aggregate everything else in your SELECT that isn't a/b/c/d, or the statement won't prepare. In MySQL, if you don't aggregate a selected column, you'll just get a value from one arbitrary row, which is probably not what you want.
This is mostly true, but in recent Postgres versions (since 9.1 I believe), if you GROUP BY the primary key, you can still include other columns in the SELECT statement. For instance:
SELECT a.name, COUNT(b.id)
FROM a
LEFT OUTER JOIN b ON b.a_id = a.id
GROUP BY a.id
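There's no Postgres handy in a snippet, but the permissive behavior being criticized can be reproduced with SQLite, which (like classic MySQL without ONLY_FULL_GROUP_BY) accepts an unaggregated, ungrouped column and returns a value from an arbitrary row per group. The table and column names here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        (1, 'alice', 10),
        (2, 'alice', 30),
        (3, 'bob',   20);
""")

# 'amount' is neither grouped nor aggregated. PostgreSQL would reject
# this at prepare time; SQLite (like old MySQL) silently picks the
# amount from one arbitrary row of each group.
rows = conn.execute(
    "SELECT customer, amount FROM orders GROUP BY customer"
).fetchall()
print(rows)  # one arbitrary amount per customer, e.g. alice's 10 or 30
```

Whether alice shows up with 10 or 30 is an implementation detail, which is exactly why Postgres refuses to prepare the statement in the first place.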