The Open Source Database Benchmark

Frequently Asked Questions


What is OSDB?

OSDB, the "Open Source Database Benchmark," grew out of a minor project at Compaq Computer Corporation to evaluate the I/O throughput and general processing power of GNU Linux/Alpha.

Starting with nothing but text-based descriptions of database benchmarks (and being unwilling to spend the $$$ for SPECxxx or TPC-y test suites), we chose to build a test suite based on AS3AP, the ANSI SQL Scalable and Portable Benchmark, as documented in Chapter 5 of The Benchmark Handbook, edited by Jim Gray. AS3AP itself was created by D. Bitton and C. Turbyfill.

Is OSDB really AS3AP?

Though OSDB implements AS3AP in large part, it differs in a number of important respects:

  1. Metrics

     AS3AP identifies but a single reporting metric: the size of the largest database that can be used to complete the AS3AP suite in less than 12 hours.

     OSDB reports lots of numbers! You can use the overall results, or you can limit your evaluation to those tests which best reflect your needs.

  2. Missing functionality

     AS3AP requires a rather complete SQL implementation in order to run. OSDB accommodates incomplete SQL implementations, and even non-SQL implementations.

  3. Clarifications

     The AS3AP specification (which was labelled "second draft") contains a small number of ambiguities, inconsistencies, and errors.

     Because OSDB is specified in the C language, the ambiguities of English can be avoided.

  4. Arbitrary changes

     AS3AP includes a test to do journal recovery. OSDB omits this type of test because one hopes that database recovery will be a rare event, and thus of minor importance in considering the likely day-to-day performance of a system.

     Other changes, including test grouping and ordering, likewise reflect an attempt to isolate one-time events, such as initial database creation, from day-to-day operations.


Benchmark results

How do OSDB results compare to the results of other benchmarks?

There is no comparison! One might be tempted to predict, with greater or lesser success, the results of OSDB based on the results of some other benchmark, or vice versa, but such predictions would be pointless. Unless, of course, your job is cherry-picking benchmarks :-)

Why do you refuse to publish your results?

The actual performance of a specific database system on a specific benchmark is but one factor in a useful system evaluation.

Other factors include:

  • How well was the system tuned?
  • What was the load on the system at the time that the benchmark was run?
  • What will the load be on the system when the intended application is finally running?
  • Which features are important to you?
For some applications, a flat ASCII text file (or even a user's mouse click!) is the best way to gather and record the data; in other cases a fully-redundant, distributed, transaction-based SQL powerhouse is the only way to go.

You should form a committee to certify other people's results.

See the question above. It would be useful to discuss results in an open forum (to the extent that one can do so without violating one's vendor's license), but with the goal of understanding the results, and what might be changed to improve them.
OSDB is proud to be hosted on SourceForge.
(As always, trademarks are owned by their owners...)