MySQL Developer Contests PostgreSQL Benchmarks

Michael “Monty” Widenius, the lead MySQL developer, contests the benchmarks compiled by Great Bridge, and reported on by several prominent websites like ApacheToday and Slashdot.

Note: The following has been edited for English. Edited copy posted 10:10am MST August 16, 2000.

Our apologies to Monty for posting the unedited copy. – DevShed

I would really like to comment on the latest PostgreSQL benchmark test, if there were anything to comment on. Since they haven’t released any information about the test, or even the results, one can only assume they have done a test that doesn’t have anything to do with the real world :(

All databases have strong and weak areas, and it’s very simple to design a test that targets the things other databases are weak at. MySQL 3.22, for example, is not very good at mixed inserts + selects on the same table, and that is also the only thing we have ever seen PostgreSQL users test in the previous comparisons where PostgreSQL comes out ahead. PostgreSQL of course also has a lot of weak areas: slow connections, slow inserts, slow create/drop of tables, long concurrent transactions where you get a conflict at the end (this is where page/row locking is better), and the need to run VACUUM from time to time (especially when you do a lot of deletes/updates). We will update our benchmark comparison web page with PostgreSQL 7.0.2 shortly to give another picture of this story. The only way to know which database is best for your application is to write a simulation of the application and test both databases.
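The advice above, to benchmark with a simulation of your own workload rather than trust someone else’s numbers, can be sketched as a small timing harness. This is a hypothetical illustration using Python’s DB-API; the in-memory sqlite3 connections stand in for whichever database drivers you would actually compare, and the insert + select workload is a placeholder for your real application’s queries.

```python
import sqlite3
import time

def benchmark(conn, n_rows=1000):
    """Time a simple mixed insert + select workload on one connection.

    Returns (insert_seconds, select_seconds). The workload is a
    hypothetical stand-in for your real application's queries.
    """
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

    # Time the inserts, including the commit.
    start = time.perf_counter()
    for i in range(n_rows):
        cur.execute("INSERT INTO t (id, payload) VALUES (?, ?)",
                    (i, "x" * 100))
    conn.commit()
    insert_secs = time.perf_counter() - start

    # Time point lookups against the freshly loaded table.
    start = time.perf_counter()
    for i in range(0, n_rows, 10):
        cur.execute("SELECT payload FROM t WHERE id = ?", (i,))
        cur.fetchone()
    select_secs = time.perf_counter() - start
    return insert_secs, select_secs

# Run the same workload against each candidate database; here both
# connections are in-memory SQLite purely for illustration.
for name in ("candidate A", "candidate B"):
    conn = sqlite3.connect(":memory:")
    ins, sel = benchmark(conn)
    print(f"{name}: inserts {ins:.4f}s, selects {sel:.4f}s")
    conn.close()
```

The point is that both systems run the exact same statements under the exact same conditions, and that the workload mirrors what your application actually does.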

We here at MySQL have always tried to design very fair tests that no one can misinterpret or misrepresent. Anyone can download our benchmarks and run them to verify our results. One could argue that we don’t measure everything, but we add new tests all the time, and we also accept tests from database users who want to test specific things against many databases. In the long run, this will make our tests very fair and easier to use as a decision tool when choosing a database.

It’s a shame that Great Bridge funds a test that is done solely to confuse users instead of telling the truth: PostgreSQL is good in some areas and bad in others, just like every other database. No database can solve all problems (at least not without a lot of hints for the database and a lot of dedicated code), and no database can win every benchmark. Anyone who starts claiming otherwise is on very thin ice!

I hope that Great Bridge doesn’t end up in a lawsuit from the TPC organization for misusing their test. This may backfire on all open source databases: I have never seen any commercial database vendor publish such a bad press release, and the commercial companies may think this is how things are done in the open source world :(

How the databases are set up in a test is also crucial for performance. The article doesn’t mention anything about this, or even which ODBC driver they used with the different databases. The defaults for MySQL are for a database under moderate load that should take very few resources. MySQL also has two ODBC drivers: one slow, with debugging, and one fast. It would be very nice to know how they actually used MySQL. To get any performance from Oracle, one also has to tune it a lot, and the ODBC driver for Oracle has very bad performance; this is commonly known. No one runs a critical system with Oracle over ODBC. (I use Oracle as an example here because their press release implies they are testing against Oracle; if not, they are not testing against the proprietary database leaders.)

Another interesting thing is that they don’t mention which PostgreSQL version they are using. It’s very unlikely that they actually tested PostgreSQL 7.0, as it has at least one fatal bug in the index handling that made it useless for benchmarks (at least when we did a test run on it). If they tested another version, a tuned PostgreSQL with non-standard patches and a non-standard setup, they have broken even more rules. Judging by the published benchmark numbers, it looks as if they ran PostgreSQL without disk syncing, which would be fine had they said so in the article and done the same with the other databases.
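For context, “disk syncing” here refers to PostgreSQL forcing each committed transaction to stable storage with fsync. In the 7.0 era this could be disabled with the backend’s -F flag; in later versions it is the fsync parameter in postgresql.conf. A sketch of what such a no-sync configuration looks like (the comment text is mine, not from the article):

```ini
# postgresql.conf (later PostgreSQL versions; in the 7.0 era the
# equivalent was starting the backend with the -F flag)
#
# Turning fsync off skips forcing writes to disk at commit. It can make
# write-heavy benchmarks dramatically faster, but a crash can lose or
# corrupt committed data, so results obtained this way are not
# comparable to databases running with syncing enabled.
fsync = off
```

This is exactly why an undisclosed setting like this makes cross-database numbers meaningless.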

I agree that MySQL 3.22 is weak when you do a lot of inserts + updates + selects on one table, but it is, on the other hand, equally strong when you don’t. In MySQL 3.23 you can intermix inserts + selects, but not updates. This will be fixed with BDB tables, but a release with those is still a few weeks in the future. Threads also give MySQL better scalability than the processes PostgreSQL uses, so we are very confident about the future.

I also don’t agree with the argument that they shouldn’t test MySQL 3.23 because it is still marked ‘beta’. We at MySQL have a completely different release schedule for our versions. We don’t mark anything as a release until there have been no significant bug reports against the gamma version for a month. Compared to our release schedule, PostgreSQL 7.0 would be called alpha. (I don’t mean anything bad by this; many users run our alpha versions in production environments with good results.)

The net result is that the posted benchmark is about as dishonest as a benchmark can be; the important thing is that the people who read the press release understand that.

Regards, Monty