I don't doubt that a competent person could have improved the configuration substantially.
However, I didn't get to choose the server; it already existed. Furthermore, I was dealing with a restored backup of a production webserver, so it was configured appropriately for OLTP even though I was using it for complex reporting. There was simply no way that anyone was going to risk the possibility of production problems to make my life on the back end any easier. And there was no way that I could in good conscience recommend that they do so.
And I did find my temp table workaround, which effectively let me force any query plan I needed.
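For anyone who hasn't used the trick: the idea is to materialize the expensive intermediate result yourself, so the planner can't choose a bad join order for the full query. A rough sketch (all table and column names here are invented for illustration, not from the actual system):

```sql
-- Materialize the expensive intermediate result once.
CREATE TEMPORARY TABLE recent_orders AS
SELECT o.customer_id, o.total
FROM orders o
WHERE o.created_at >= now() - interval '1 day';

-- Give the planner real statistics on the temp table.
ANALYZE recent_orders;

-- The reporting query now joins against the small, pre-filtered
-- temp table instead of re-scanning the big one.
SELECT c.region, sum(r.total)
FROM recent_orders r
JOIN customers c ON c.id = r.customer_id
GROUP BY c.region;
```

By breaking the query at the temp table boundary, you decide what gets computed first, and the planner only has to get the remaining, much simpler query right.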
I've definitely been in these types of situations, having to make things work in a hostile environment or to meet a deadline. I'm just recommending an option for avoiding them, for those who are able to seek out the best practice before starting.
I might also speculate that if you were writing analytics queries that ran scans across several large, heavily contended production database tables, dumping the data into a temporary table first might well have been the best option even with a highly tuned server. Despite the fantasy we're often told about MVCC, enormous amounts of resources are still wasted doing large analytics-type scans on live, hot tables. This isn't something that could be solved with a query plan hint, and it's unlikely even a smarter query planner could have saved you.
If I had a magic wand, my environment of choice in this situation would be a separate, dedicated read-only slave running on ZFS, which would let me create filesystem snapshots and bring up a PostgreSQL instance on a snapshot.
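In rough strokes, that workflow would be something like the following (pool names and paths are invented, and in practice you'd need to sort out permissions and ports):

```shell
# Snapshot the replica's data directory, clone it writable,
# and start a throwaway PostgreSQL instance on the clone.
zfs snapshot tank/pgdata@report
zfs clone tank/pgdata@report tank/pgreport

# The clone looks like a crashed server, so PostgreSQL will run
# crash recovery on startup; use a different port to avoid clashes.
pg_ctl start -D /tank/pgreport -o "-p 5433"
```

You get a writable, point-in-time copy of the database for nearly free, and you can tune and abuse it however you like without touching production.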
I assure you that contention was not an issue on a restored backup. At that point PostgreSQL did not yet support read-only slaves. But for reporting, day-old data was acceptable, so that wasn't a problem.
It still doesn't solve the problem that you really want to tune differently for OLTP and data warehousing, and the server was tuned for OLTP.
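To make that concrete: several of the classic postgresql.conf knobs pull in opposite directions for the two workloads. The values below are purely illustrative, not recommendations:

```ini
# OLTP: many small concurrent transactions
work_mem = 4MB               # hundreds of concurrent sorts must all fit in RAM
random_page_cost = 2.0       # favour index lookups on a warm cache
checkpoint_segments = 8      # frequent, cheap checkpoints

# Data warehouse: a handful of huge queries
work_mem = 256MB             # keep big hash joins and sorts off disk
random_page_cost = 4.0       # let the planner prefer sequential scans
checkpoint_segments = 64     # bulk loads shouldn't thrash checkpoints
```

With one server you have to pick one column, and whichever workload loses gets terrible plans or terrible throughput.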
I'll definitely agree that attempting to run OLTP and analytics queries on a single PostgreSQL server is not going to turn out well. I'm not sure if there's a way around it though, even in theory (e.g. MapReduce isn't very good for OLTP).