#Variables that are not set evaluate to NULL. The data type of a user-defined variable is the data type of the value assigned to it; that means it is not necessary (or possible) to declare variable names before using them. There are no restrictions on the assigned values; large objects (LOBs) are supported as well.
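For illustration, a minimal JDBC sketch; the database URL jdbc:h2:~/test, the user name sa and the variable names are placeholders, not taken from this documentation:

    import java.sql.*;

    public class UserVariableExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
            Statement stat = conn.createStatement();
            // the variable takes the data type of the assigned value (here: INT)
            stat.execute("SET @TOTAL = 100");
            // a variable that was never set evaluates to NULL
            ResultSet rs = stat.executeQuery("SELECT @TOTAL, @UNDEFINED");
            rs.next();
            System.out.println(rs.getInt(1) + " / " + rs.getString(2)); // prints: 100 / null
            conn.close();
        }
    }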
@~performance_1004_h2
#Performance Comparison
@~performance_1005_p
#In most cases H2 is a lot faster than all other (open source and commercial) database engines. Please note this is mostly a single-connection benchmark run on one computer.
@~performance_1006_h3
#Embedded
@~performance_1007_th
#Test Case
@~performance_1012_td
#Simple: Init
@~performance_1102_h3
#Client-Server
@~performance_1103_th
#Test Case
@~performance_1110_td
#Simple: Init
@~performance_1236_h3
#Benchmark Results and Comments
@~performance_1237_h4
H2
@~performance_1238_p
#Version 1.0 (2007-09-15) was used for the test. For simpler operations, the performance of H2 is about the same as for HSQLDB. For more complex queries, the query optimizer is very important. However, H2 is not fast in every case; certain kinds of queries may still be slow. One situation where H2 is slow is large result sets, because they are buffered to disk if more than a certain number of records are returned. The advantage of buffering is that there is no limit on the result set size. The open/close time is almost fixed, because of the file locking protocol: the engine waits 20 ms after opening a database to ensure the database files are not opened by another process.
@~performance_1239_h4
HSQLDB
@~performance_1240_p
#Version 1.8.0.8 was used for the test. Cached tables are used in this test (hsqldb.default_table_type=cached), and the write delay is 1 second (SET WRITE_DELAY 1). HSQLDB is fast when using simple operations. HSQLDB is very slow in the last test (BenchC: Transactions), probably because it has a bad query optimizer. One type of query where HSQLDB is slow is a two-table join.
@~performance_1242_h4
Derby
@~performance_1243_p
#Version 10.3.1.4 was used for the test. Derby is clearly the slowest embedded database in this test. This seems to be a structural problem, because all operations are really slow. It will not be easy for the developers of Derby to improve the performance to a reasonable level. A few problems have been identified: leaving autocommit on is a problem for Derby; if it is switched off during the whole test, the results are about 20% better for Derby.
@~performance_1244_h4
PostgreSQL
@~performance_1245_p
#Version 8.1.4 was used for the test. The following options were changed in postgresql.conf: fsync = off, commit_delay = 1000. PostgreSQL is run in server mode. It looks like the base performance is slower than that of MySQL; the reason could be the network layer. The memory usage number is incorrect, because only the memory usage of the JDBC driver is measured.
@~performance_1246_h4
MySQL
@~performance_1247_p
#Version 5.0.22 was used for the test. MySQL was run with the InnoDB backend. The setting innodb_flush_log_at_trx_commit (found in the my.ini file) was set to 0. Otherwise (and by default), MySQL is really slow (around 140 statements per second in this test) because it tries to flush the data to disk for each commit. For small transactions (when autocommit is on) this is really slow. But many use cases use small or relatively small transactions. Unfortunately, this setting is not listed in the configuration wizard, and it is always overwritten when using the wizard. You need to change this setting manually in the my.ini file and then restart the service. The memory usage number is incorrect, because only the memory usage of the JDBC driver is measured.
@~performance_1248_h4
#Firebird
@~performance_1249_p
#Firebird 1.5 (default installation) was tested, but the results are not published currently. It is possible to run the performance test with the Firebird database, and any information on how to configure Firebird for higher performance is welcome.
@~performance_1250_h4
#Why Oracle / MS SQL Server / DB2 are Not Listed
@~performance_1251_p
#The license of these databases does not allow publishing benchmark results. This doesn't mean that they are fast. They are in fact quite slow, and need a lot of memory. But you will need to test this yourself. SQLite was not tested because the JDBC driver doesn't support transactions.
@~performance_1252_h3
#About this Benchmark
@~performance_1253_h4
#Number of Connections
@~performance_1254_p
#This is mostly a single-connection benchmark. BenchB uses multiple connections; the other tests use one connection.
@~performance_1255_h4
#Real-World Tests
@~performance_1256_p
#Good benchmarks emulate real-world use cases. This benchmark includes the following test cases: a simple test case with one table and many small updates / deletes; BenchA, which is similar to the TPC-A test, but single connection / single threaded (see also: www.tpc.org); BenchB, which is similar to the TPC-B test, using multiple connections (one thread per connection); and BenchC, which is similar to the TPC-C test, but single connection / single threaded.
@~performance_1257_h4
#Comparing Embedded with Server Databases
@~performance_1258_p
#This is mainly a benchmark for embedded databases (where the application runs in the same virtual machine as the database engine). However, MySQL and PostgreSQL are not Java databases and cannot be embedded into a Java application. For the Java databases, both embedded and server modes are tested.
@~performance_1259_h4
#Test Platform
@~performance_1260_p
#This test is run on Windows XP with the virus scanner switched off. The VM used is Sun JDK 1.5.
@~performance_1261_h4
#Multiple Runs
@~performance_1262_p
#When a Java benchmark is run for the first time, the code is not yet fully compiled and therefore runs slower than in later runs. A benchmark should always run the same test multiple times and ignore the first run(s). This benchmark runs three times; only the last run counts.
@~performance_1263_h4
#Memory Usage
@~performance_1264_p
#It is not enough to measure the time taken; the memory usage is important as well. Performance can be improved in databases by using a bigger in-memory cache, but there is only a limited amount of memory available on the system. HSQLDB tables are kept fully in memory by default; this benchmark uses 'disk based' tables for all databases. Unfortunately, it is not so easy to calculate the memory usage of PostgreSQL and MySQL, because they run in a different process than the test. This benchmark currently does not print the memory usage of those databases.
@~performance_1265_h4
#Delayed Operations
@~performance_1266_p
#Some databases delay some operations (for example flushing the buffers) until after the benchmark is run. This benchmark waits between the tests of each database, and each database runs in a different process (sequentially).
@~performance_1267_h4
#Transaction Commit / Durability
@~performance_1268_p
#Durability means a transaction committed to the database will not be lost. Some databases (for example MySQL) try to enforce this by default by calling fsync() to flush the buffers, but most hard drives don't actually flush all data. Calling fsync() slows down transaction commit a lot, but doesn't always make data durable. When comparing the results, it is important to keep this effect in mind. Many databases suggest 'batching' operations when possible. This benchmark switches off autocommit when loading the data, and calls commit after each 1000 inserts. However, many applications need 'short' transactions at runtime (a commit after each update). This benchmark commits after each update / delete in the simple benchmark, and after each business transaction in the other benchmarks. For databases that support delayed commits, a delay of one second is used.
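As a sketch of the loading pattern described above (autocommit off, commit after each 1000 inserts); the table, columns and row count are invented for illustration, and conn is assumed to be an open java.sql.Connection:

    void loadData(java.sql.Connection conn) throws java.sql.SQLException {
        conn.setAutoCommit(false);
        java.sql.PreparedStatement prep = conn.prepareStatement(
                "INSERT INTO TEST(ID, NAME) VALUES(?, ?)");
        for (int i = 0; i < 10000; i++) {
            prep.setInt(1, i);
            prep.setString(2, "row " + i);
            prep.execute();
            if ((i + 1) % 1000 == 0) {
                conn.commit(); // commit after each 1000 inserts while loading
            }
        }
        conn.commit();
        prep.close();
        conn.setAutoCommit(true);
    }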
@~performance_1269_h4
#Using Prepared Statements
@~performance_1270_p
#Wherever possible, the test cases use prepared statements.
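A prepared statement is parsed once and then executed many times with different parameter values, for example (the table and column names are invented for illustration; conn is an open java.sql.Connection):

    java.sql.PreparedStatement prep = conn.prepareStatement(
            "SELECT NAME FROM TEST WHERE ID = ?");
    for (int id = 0; id < 100; id++) {
        prep.setInt(1, id); // only the parameter value changes; the SQL is parsed once
        java.sql.ResultSet rs = prep.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
    }
    prep.close();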
@~performance_1271_h4
#Currently Not Tested: Startup Time
@~performance_1272_p
#The startup time of a database engine is also important for embedded use. This time is not measured currently. Also not tested is the time needed to create a database or open an existing database. Here, one (wrapper) connection is opened at the start, and for each step a new connection is opened and then closed. That means the Open/Close time listed is for opening a connection when the database is already in use.
@~performance_1273_h3
#PolePosition Benchmark
@~performance_1274_p
#PolePosition is an open source benchmark. The algorithms are all quite simple. It was developed / sponsored by db4o.
@~performance_1275_th
#Test Case
@~performance_1280_td
#Melbourne write
@~performance_1380_h2
#Application Profiling
@~performance_1381_h3
#Analyze First
@~performance_1382_p
#Before trying to optimize the performance, it is important to know where the time is actually spent. The same is true for memory problems. Premature or 'blind' optimization should be avoided, as it is not an efficient way to solve the problem. There are various ways to analyze an application. In some situations it is possible to compare two implementations and use System.currentTimeMillis() to find out which one is faster. But this does not work for complex applications with many modules, or for memory problems. A very good tool to measure both memory and CPU usage is the <a href="http://www.yourkit.com">YourKit Java Profiler</a>. This tool is also used to optimize the performance and memory footprint of this database engine.
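A minimal sketch of the System.currentTimeMillis() approach mentioned above; implementationA() and implementationB() are placeholders for the two implementations being compared:

    long start = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++) {
        implementationA(); // placeholder for the first implementation
    }
    long timeA = System.currentTimeMillis() - start;

    start = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++) {
        implementationB(); // placeholder for the second implementation
    }
    long timeB = System.currentTimeMillis() - start;

    System.out.println("A: " + timeA + " ms, B: " + timeB + " ms");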
@~performance_1383_h2
#Database Performance Tuning
@~performance_1384_h3
#Virus Scanners
@~performance_1385_p
#Some virus scanners scan files every time they are accessed. It is very important for performance that database files are not scanned for viruses. The database engine never interprets the data stored in the files as programs; that means even if somebody stored a virus in a database file, it would be harmless (as long as the virus does not run, it cannot spread). Some virus scanners allow excluding files by their file extension. Make sure files ending with .db are not scanned.
@~performance_1386_h3
Using the Trace Options
@~performance_1387_p
#If the main performance hot spots are in the database engine, in many cases the performance can be optimized by creating additional indexes, or changing the schema. Sometimes the application does not directly generate the SQL statements, for example if an O/R mapping tool is used. To view the SQL statements and JDBC API calls, you can use the trace options. For more information, see <a href="features.html#trace_options">Using the Trace Options</a>.
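For example, the trace level can be raised in the database URL when connecting; this is a sketch (the URL jdbc:h2:~/test and the credentials are placeholders), see the linked page for the full list of trace options:

    // TRACE_LEVEL_FILE=3 enables debug level tracing to the database trace file
    java.sql.Connection conn = java.sql.DriverManager.getConnection(
            "jdbc:h2:~/test;TRACE_LEVEL_FILE=3", "sa", "");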
@~performance_1388_h3
#Index Usage
@~performance_1389_p
#This database uses indexes to improve the performance of SELECT, UPDATE and DELETE statements. If a column is used in the WHERE clause of a query, and an index exists on this column, then the index can be used. Multi-column indexes are used if all columns or the leading columns of the index are used. Both equality lookups and range scans are supported. Indexes are not used to order result sets: the results are sorted in memory if required. Indexes are created automatically for primary key and unique constraints. Indexes are also created for foreign key constraints, if required. For other columns, indexes need to be created manually using the CREATE INDEX statement.
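For example (the table, column and index names are invented for illustration; stat is an open java.sql.Statement):

    // create an index manually; it can then be used for conditions on LASTNAME
    stat.execute("CREATE INDEX IDX_ADDRESS_LASTNAME ON ADDRESS(LASTNAME)");
    // this query can now use the index instead of scanning the whole table
    java.sql.ResultSet rs = stat.executeQuery(
            "SELECT * FROM ADDRESS WHERE LASTNAME = 'Miller'");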
@~performance_1390_h3
#Optimizer
@~performance_1391_p
#This database uses a cost based optimizer. For simple queries and queries of medium complexity (less than 7 tables in the join), the expected cost (running time) of all possible plans is calculated, and the plan with the lowest cost is used. For more complex queries, the algorithm first tries all possible combinations for the first few tables, and the remaining tables are added using a greedy algorithm (this works well for most joins). Afterwards a genetic algorithm is used to test at most 2000 distinct plans. Only left-deep plans are evaluated.
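The plan chosen by the optimizer can be inspected with the EXPLAIN statement; a sketch (the query is for illustration only; stat is an open java.sql.Statement):

    // prints the selected query plan, including the indexes the optimizer chose
    java.sql.ResultSet rs = stat.executeQuery(
            "EXPLAIN SELECT * FROM TEST T1, TEST T2 WHERE T1.NAME='A' AND T2.ID=T1.ID");
    rs.next();
    System.out.println(rs.getString(1));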
@~performance_1392_h3
#Expression Optimization
@~performance_1393_p
#After the statement is parsed, all expressions are simplified automatically if possible. Operations are evaluated only once if all parameters are constant. Functions are also optimized, but only if the function is constant (always returns the same result for the same parameter values). If the WHERE clause is always false, then the table is not accessed at all.
@~performance_1394_h3
#COUNT(*) Optimization
@~performance_1395_p
#If the query only counts all rows of a table, then the data is not accessed. However, this is only possible if no WHERE clause is used, that means it only works for queries of the form SELECT COUNT(*) FROM table.
#When executing a query, at most one index per joined table can be used. If the same table is joined multiple times, only one index is used for each join. Example: for the query SELECT * FROM TEST T1, TEST T2 WHERE T1.NAME='A' AND T2.ID=T1.ID, two indexes can be used, in this case the index on NAME for T1 and the index on ID for T2.
If the port is already used by another application, you may want to start the H2 Console on a different port. This can be done by changing the port in the .h2.server.properties file. This file is stored in the user directory (for Windows, usually "Documents and Settings/&lt;username&gt;"). The relevant entry is webPort.
When the server is started, a configuration file called .h2.server.properties is created in the local home directory. For a Windows installation, this file will be in the directory C:\Documents and Settings\[username]. This file contains the settings of the application.
Table names and column names can be inserted into the script by clicking them in the tree. If you click a table while the query is empty, 'SELECT * FROM ...' is added as well. While entering a query, the tables that are used are automatically expanded in the tree. For example, if you type 'SELECT * FROM TEST T WHERE T.', the table TEST is automatically expanded in the tree.
#To connect to a database, a Java application first needs to load the database driver, and then get a connection. A simple way to do that is using the following code:
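The code referred to above usually looks like this (the database URL jdbc:h2:~/test, the user name sa and the empty password are placeholders for your own settings):

    import java.sql.*;

    public class Test {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
            // add application code here
            conn.close();
        }
    }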
#To use the Lucene full text search, you need the Lucene library in the classpath. How this is done depends on the application; if you use the H2 Console, you can add the Lucene jar file to the environment variables H2DRIVERS or CLASSPATH. To initialize the Lucene full text search in a database, call:
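A sketch of the initialization; it assumes the helper class org.h2.fulltext.FullTextLucene and the alias name FTL_INIT (check the full text search documentation for the exact statements), and conn is an open java.sql.Connection:

    java.sql.Statement stat = conn.createStatement();
    // register the init function of the Lucene full text search and call it
    stat.execute("CREATE ALIAS IF NOT EXISTS FTL_INIT FOR \"org.h2.fulltext.FullTextLucene.init\"");
    stat.execute("CALL FTL_INIT()");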
build_1034_p=The conversion between UTF-8 and Java encoding (using the \\u syntax), as well as the HTML entities (&\#..;) is automated by running the tool PropertiesToUTF8. The web site translation is automated as well, using <code>ant docs</code> .