@advanced_1000_h1 Advanced @advanced_1001_a Result Sets @advanced_1002_a Large Objects @advanced_1003_a Linked Tables @advanced_1004_a Recursive Queries @advanced_1005_a Updatable Views @advanced_1006_a Transaction Isolation @advanced_1007_a Multi-Version Concurrency Control (MVCC) @advanced_1008_a Clustering / High Availability @advanced_1009_a Two Phase Commit @advanced_1010_a Compatibility @advanced_1011_a Standards Compliance @advanced_1012_a Run as Windows Service @advanced_1013_a ODBC Driver @advanced_1014_a Using H2 in Microsoft .NET @advanced_1015_a ACID @advanced_1016_a Durability Problems @advanced_1017_a Using the Recover Tool @advanced_1018_a File Locking Protocols @advanced_1019_a File Locking Method 'Serialized' @advanced_1020_a Using Passwords @advanced_1021_a Password Hash @advanced_1022_a Protection against SQL Injection @advanced_1023_a Protection against Remote Access @advanced_1024_a Restricting Class Loading and Usage @advanced_1025_a Security Protocols @advanced_1026_a SSL/TLS Connections @advanced_1027_a Universally Unique Identifiers (UUID) @advanced_1028_a Settings Read from System Properties @advanced_1029_a Setting the Server Bind Address @advanced_1030_a Pluggable File System @advanced_1031_a Split File System @advanced_1032_a Database Upgrade @advanced_1033_a Limits and Limitations @advanced_1034_a Glossary and Links @advanced_1035_h2 Result Sets @advanced_1036_h3 Statements that Return a Result Set @advanced_1037_p The following statements return a result set: SELECT, EXPLAIN, CALL, SCRIPT, SHOW, HELP. All other statements return an update count. @advanced_1038_h3 Limiting the Number of Rows @advanced_1039_p Before the result is returned to the application, all rows are read by the database. Server-side cursors are currently not supported. If the application only needs the first few rows, the result set size should be limited to improve performance. 
This can be done using LIMIT in a query (example: SELECT * FROM TEST LIMIT 100), or by using Statement.setMaxRows(max). @advanced_1040_h3 Large Result Sets and External Sorting @advanced_1041_p For large result sets, the result is buffered to disk. The threshold can be defined using the statement SET MAX_MEMORY_ROWS. If ORDER BY is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together. @advanced_1042_h2 Large Objects @advanced_1043_h3 Storing and Reading Large Objects @advanced_1044_p If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory; streams are used instead. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. When using the client/server mode, large BLOB and CLOB data is stored in a temporary file on the client side. @advanced_1045_h3 When to use CLOB/BLOB @advanced_1046_p This database stores large LOB (CLOB and BLOB) objects as separate files. Small LOB objects are stored in-place; the threshold can be set using MAX_LENGTH_INPLACE_LOB. However, there is still an overhead when using CLOB/BLOB. Because of this, BLOB and CLOB should never be used for columns with a maximum size below about 200 bytes. The best threshold depends on the use case; reading in-place objects is faster than reading from separate files, but slows down the performance of operations that don't involve this column. @advanced_1047_h3 Large Object Compression @advanced_1048_p CLOB and BLOB values can be compressed by using SET COMPRESS_LOB. The LZF algorithm is faster but needs more disk space. 
By default, compression is disabled, which usually speeds up write operations. If you store many large compressible values such as XML, HTML, text, and uncompressed binary files, then compressing can save a lot of disk space (sometimes more than 50%), and read operations may even be faster. @advanced_1049_h2 Linked Tables @advanced_1050_p This database supports linked tables, which means tables that don't exist in the current database but are just links to another database. To create such a link, use the CREATE LINKED TABLE statement: @advanced_1051_p You can then access the table in the usual way. Whenever the linked table is accessed, the database issues specific queries over JDBC. Using the example above, if you issue the query SELECT * FROM LINK WHERE ID=1, then the following query is run against the PostgreSQL database: SELECT * FROM TEST WHERE ID=?. The same happens for insert and update statements. Only simple statements are executed against the target database, that is, no joins. Prepared statements are used where possible. @advanced_1052_p To view the statements that are executed against the target table, set the trace level to 3. @advanced_1053_p If multiple linked tables point to the same database (using the same database URL), the connection is shared. To disable this, set the system property h2.shareLinkedConnections=false. @advanced_1054_p The statement CREATE LINKED TABLE supports an optional schema name parameter. @advanced_1055_p The following are not supported because they may result in a deadlock: creating a linked table to the same database, and creating a linked table to another database using the server mode if the other database is open in the same server (use the embedded mode instead). @advanced_1056_p Data types that are not supported in H2 are also not supported for linked tables, for example unsigned data types if the value is outside the range of the signed type. In such cases, the columns need to be cast to a supported type. 
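The CREATE LINKED TABLE statement referenced above can be sketched as follows. This is a minimal example; the driver class, JDBC URL, user name, and password are placeholders for an assumed PostgreSQL database containing a table TEST:

```sql
-- Create a link named LINK to the table TEST in another database.
-- Parameters: driver class, JDBC URL, user, password, remote table name.
CREATE LINKED TABLE LINK('org.postgresql.Driver',
    'jdbc:postgresql:test', 'sa', 'sa', 'TEST');

-- The linked table can then be queried like a local table:
SELECT * FROM LINK WHERE ID = 1;
```

As described above, the query against LINK is translated into a prepared statement (SELECT * FROM TEST WHERE ID=?) executed against the target database.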
@advanced_1057_h2 Updatable Views @advanced_1058_p By default, views are not updatable. To make a view updatable, use an "instead of" trigger as follows: @advanced_1059_p Update the base table(s) within the trigger as required. For details, see the sample application org.h2.samples.UpdatableView. @advanced_1060_h2 Transaction Isolation @advanced_1061_p Transaction isolation is provided for all data manipulation language (DML) statements. Most data definition language (DDL) statements commit the current transaction. See the Grammar for details. @advanced_1062_p This database supports the following transaction isolation levels: @advanced_1063_b Read Committed @advanced_1064_li This is the default level. Read locks are released immediately after executing the statement, but write locks are kept until the transaction commits. Higher concurrency is possible when using this level. @advanced_1065_li To enable, execute the SQL statement SET LOCK_MODE 3 @advanced_1066_li or append ;LOCK_MODE=3 to the database URL: jdbc:h2:~/test;LOCK_MODE=3 @advanced_1067_b Serializable @advanced_1068_li Both read locks and write locks are kept until the transaction commits. To enable, execute the SQL statement SET LOCK_MODE 1 @advanced_1069_li or append ;LOCK_MODE=1 to the database URL: jdbc:h2:~/test;LOCK_MODE=1 @advanced_1070_b Read Uncommitted @advanced_1071_li This level means that transaction isolation is disabled. @advanced_1072_li To enable, execute the SQL statement SET LOCK_MODE 0 @advanced_1073_li or append ;LOCK_MODE=0 to the database URL: jdbc:h2:~/test;LOCK_MODE=0 @advanced_1074_p When using the isolation level 'serializable', dirty reads, non-repeatable reads, and phantom reads are prohibited. @advanced_1075_b Dirty Reads @advanced_1076_li Means a connection can read uncommitted changes made by another connection. 
@advanced_1077_li Possible with: read uncommitted @advanced_1078_b Non-Repeatable Reads @advanced_1079_li A connection reads a row, another connection changes the row and commits, and the first connection re-reads the same row and gets the new result. @advanced_1080_li Possible with: read uncommitted, read committed @advanced_1081_b Phantom Reads @advanced_1082_li A connection reads a set of rows using a condition, another connection inserts a row that falls within this condition and commits, then the first connection re-reads using the same condition and gets the new row. @advanced_1083_li Possible with: read uncommitted, read committed @advanced_1084_h3 Table Level Locking @advanced_1085_p The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used by default. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. Other connections may also hold a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, no other connection may hold any lock on the object. After the connection commits, all locks are released. This database keeps all locks in memory. When a lock is released and multiple connections are waiting for it, one of them is picked at random. @advanced_1086_h3 Lock Timeout @advanced_1087_p If a connection cannot get a lock on an object, the connection waits for some amount of time (the lock timeout). During this time, the connection holding the lock will hopefully commit, making it possible to get the lock. 
If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection. @advanced_1088_h2 Multi-Version Concurrency Control (MVCC) @advanced_1089_p The MVCC feature allows higher concurrency than using (table level or row level) locks. When using MVCC in this database, delete, insert and update operations only issue a shared lock on the table. An exclusive lock is still used when adding or removing columns, when dropping the table, and when using SELECT ... FOR UPDATE. Connections only 'see' committed data, and their own changes. That means, if connection A updates a row but doesn't commit this change yet, connection B will see the old value. Only when the change is committed is the new value visible to other connections (read committed). If multiple connections concurrently try to update the same row, the database waits until it can apply the change, but at most until the lock timeout expires. @advanced_1090_p To use the MVCC feature, append ;MVCC=TRUE to the database URL: @advanced_1091_p MVCC is disabled by default. The MVCC feature is not fully tested yet. The limitations of the MVCC mode are: it cannot be used at the same time as MULTI_THREADED=TRUE; the complete undo log (the list of uncommitted changes) must fit in memory when using multi-version concurrency. The setting MAX_MEMORY_UNDO has no effect. It is not possible to enable or disable this setting while the database is already open. The setting must be specified in the first connection (the one that opens the database). @advanced_1092_p If MVCC is enabled, changing the lock mode (LOCK_MODE) has no effect. @advanced_1093_h2 Clustering / High Availability @advanced_1094_p This database supports a simple clustering / high availability mechanism. 
The architecture is: two database servers run on two different computers, and each computer holds a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed only on one server until the other server is back up. @advanced_1095_p Clustering can only be used in the server mode (the embedded mode does not support clustering). The cluster can be re-created using the CreateCluster tool without stopping the remaining server. Applications that are still connected are automatically disconnected; however, if ;AUTO_RECONNECT=TRUE is appended to the database URL, they will reconnect automatically. @advanced_1096_p To initialize the cluster, use the following steps: @advanced_1097_li Create a database @advanced_1098_li Use the CreateCluster tool to copy the database to another location and initialize the clustering. Afterwards, you have two databases containing the same data. @advanced_1099_li Start two servers (one for each copy of the database) @advanced_1100_li You are now ready to connect to the databases with the client application(s) @advanced_1101_h3 Using the CreateCluster Tool @advanced_1102_p To understand how clustering works, please try out the following example. In this example, the two databases reside on the same computer, but usually, the databases will be on different servers. @advanced_1103_li Create two directories: server1, server2. Each directory will simulate a directory on a computer. @advanced_1104_li Start a TCP server pointing to the first directory. You can do this using the command line: @advanced_1105_li Start a second TCP server pointing to the second directory. This will simulate a server running on a second (redundant) computer. You can do this using the command line: @advanced_1106_li Use the CreateCluster tool to initialize clustering. 
This will automatically create a new, empty database if it does not exist. Run the tool on the command line: @advanced_1107_li You can now connect to the databases using an application or the H2 Console using the JDBC URL jdbc:h2:tcp://localhost:9101,localhost:9102/~/test @advanced_1108_li If you stop a server (by killing the process), you will notice that the other machine continues to work, and therefore the database is still accessible. @advanced_1109_li To restore the cluster, you first need to delete the database that failed, then restart the server that was stopped, and re-run the CreateCluster tool. @advanced_1110_h3 Detect Which Cluster Instances are Running @advanced_1111_p To find out which cluster nodes are currently running, execute the following SQL statement: @advanced_1112_p If the result is '' (two single quotes), then the cluster mode is disabled. Otherwise, the list of servers is returned, enclosed in single quotes. Example: 'server1:9191,server2:9191'. @advanced_1113_h3 Clustering Algorithm and Limitations @advanced_1114_p Read-only queries are only executed against the first cluster node, but all other statements are executed against all nodes. To avoid problems with transactions, no load balancing is currently performed. The following functions may yield different results on different cluster nodes and must be executed with care: RANDOM_UUID(), SECURE_RAND(), SESSION_ID(), MEMORY_FREE(), MEMORY_USED(), CSVREAD(), CSVWRITE(), RAND() [when not using a seed]. These functions should not be used directly in modifying statements (for example INSERT, UPDATE, MERGE). However, they can be used in read-only statements, and the result can then be used for modifying statements. Using auto-increment and identity columns is currently not supported. Instead, sequence values need to be manually requested and then used to insert data (using two statements). 
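The two-statement sequence workaround described above can be sketched as follows. This is an illustrative example; the table and sequence names are placeholders, and the exact NEXTVAL syntax is assumed to match the version of H2 in use:

```sql
CREATE SEQUENCE SEQ;
-- First statement: request the next sequence value
-- (executed against all cluster nodes, so each node returns the same value).
CALL SEQ.NEXTVAL;
-- Second statement: use the returned value (for example 1) in the insert,
-- instead of relying on an auto-increment or identity column.
INSERT INTO TEST(ID, NAME) VALUES(1, 'Hello');
```

This keeps the modifying statement deterministic on all nodes, which is why the documentation recommends it over identity columns in cluster mode.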
@advanced_1115_p When using the cluster modes, result sets are read fully into memory by the client, so that there is no problem if the server that executed the query dies. Result sets must fit in memory on the client side. @advanced_1116_p The SQL statement SET AUTOCOMMIT FALSE is not supported in the cluster mode. To disable autocommit, the method Connection.setAutoCommit(false) needs to be called. @advanced_1117_p It is possible that a transaction from one connection overtakes a transaction from a different connection. Depending on the operations, this might result in different results, for example when conditionally incrementing a value in a row. @advanced_1118_h2 Two Phase Commit @advanced_1119_p The two phase commit protocol is supported. Two-phase commit works as follows: @advanced_1120_li Autocommit needs to be switched off @advanced_1121_li A transaction is started, for example by inserting a row @advanced_1122_li The transaction is marked 'prepared' by executing the SQL statement PREPARE COMMIT transactionName @advanced_1123_li The transaction can now be committed or rolled back @advanced_1124_li If a problem occurs before the transaction is successfully committed or rolled back (for example because of a network problem), the transaction is in the state 'in-doubt' @advanced_1125_li When re-connecting to the database, the in-doubt transactions can be listed with SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT @advanced_1126_li Each transaction in this list must now be committed or rolled back by executing COMMIT TRANSACTION transactionName or ROLLBACK TRANSACTION transactionName @advanced_1127_li The database needs to be closed and re-opened to apply the changes @advanced_1128_h2 Compatibility @advanced_1129_p This database is (up to a certain point) compatible with other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible. 
@advanced_1130_h3 Transaction Commit when Autocommit is On @advanced_1131_p At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case when the result set is closed. @advanced_1132_h3 Keywords / Reserved Words @advanced_1133_p There is a list of keywords that can't be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). The list is currently: @advanced_1134_code CROSS, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DISTINCT, EXCEPT, EXISTS, FALSE, FOR, FROM, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, LIMIT, MINUS, NATURAL, NOT, NULL, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, UNIQUE, WHERE @advanced_1135_p Certain words in this list are keywords because they are functions that can be used without '()' for compatibility, for example CURRENT_TIMESTAMP. @advanced_1136_h2 Standards Compliance @advanced_1137_p This database tries to be as standards-compliant as possible. For the SQL language, ANSI/ISO is the main standard. There are several versions that refer to the release date: SQL-92, SQL:1999, and SQL:2003. Unfortunately, the standard documentation is not freely available. Another problem is that important features are not standardized. Whenever this is the case, this database tries to be compatible with other databases. @advanced_1138_h3 Supported Character Sets, Character Encoding, and Unicode @advanced_1139_p H2 internally uses Unicode, and supports all character encoding systems and character sets supported by the virtual machine you use. 
@advanced_1140_h2 Run as Windows Service @advanced_1141_p Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from Tanuki Software, Inc. is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory h2/service. @advanced_1142_p The service wrapper bundled with H2 is a 32-bit version. On a 64-bit version of Windows (x64), you need to use a 64-bit version of the wrapper, for example the one from Simon Krenger. @advanced_1143_p When running the database as a service, absolute paths should be used. Using ~ in the database URL is problematic in this case, because it means the home directory of the current user. The service might run as a different or the wrong user, so the database files might end up in an unexpected place. @advanced_1144_h3 Install the Service @advanced_1145_p The service needs to be registered as a Windows Service first. To do that, double click on 1_install_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear. @advanced_1146_h3 Start the Service @advanced_1147_p You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on 2_start_service.bat. Please note that the batch file does not print an error message if the service is not installed. @advanced_1148_h3 Connect to the H2 Console @advanced_1149_p After installing and starting the service, you can connect to the H2 Console application using a browser. Double click on 3_start_browser.bat to do that. The default port (8082) is hard coded in the batch file. @advanced_1150_h3 Stop the Service @advanced_1151_p To stop the service, double click on 4_stop_service.bat. 
Please note that the batch file does not print an error message if the service is not installed or started. @advanced_1152_h3 Uninstall the Service @advanced_1153_p To uninstall the service, double click on 5_uninstall_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear. @advanced_1154_h3 Additional JDBC drivers @advanced_1155_p To use other databases (for example MySQL), the locations of the JDBC drivers for those databases need to be added to the environment variables H2DRIVERS or CLASSPATH before installing the service. Multiple drivers can be set; each entry needs to be separated with a ; (Windows) or : (other operating systems). Spaces in the path names are supported. The settings must not be quoted. @advanced_1156_h2 ODBC Driver @advanced_1157_p This database does not come with its own ODBC driver at this time, but it supports the PostgreSQL network protocol. Therefore, the PostgreSQL ODBC driver can be used. Support for the PostgreSQL network protocol is quite new and should be viewed as experimental. It should not be used for production applications. @advanced_1158_p To use the PostgreSQL ODBC driver on 64-bit versions of Windows, first run c:/windows/syswow64/odbcad32.exe. At this point you set up your DSN just like you would on any other system. See also: Re: ODBC Driver on Windows 64 bit @advanced_1159_h3 ODBC Installation @advanced_1160_p First, the ODBC driver must be installed. Any recent PostgreSQL ODBC driver should work, however version 8.2 (psqlodbc-08_02*) or newer is recommended. The Windows version of the PostgreSQL ODBC driver is available at http://www.postgresql.org/ftp/odbc/versions/msi. @advanced_1161_h3 Starting the Server @advanced_1162_p After installing the ODBC driver, start the H2 Server using the command line: @advanced_1163_p The PG Server (PG for PostgreSQL protocol) is started as well. 
By default, databases are stored in the current working directory where the server is started. Use -baseDir to save databases in another directory, for example the user home directory: @advanced_1164_p The PG server can be started and stopped from within a Java application as follows: @advanced_1165_p By default, only connections from localhost are allowed. To allow remote connections, use -pgAllowOthers when starting the server. @advanced_1166_h3 ODBC Configuration @advanced_1167_p After installing the driver, a new Data Source must be added. In Windows, run odbcad32.exe to open the Data Source Administrator. Then click on 'Add...' and select the PostgreSQL Unicode driver. Then click 'Finish'. You will be able to change the connection properties. The property column represents the property key in the odbc.ini file (which may be different from the GUI). @advanced_1168_th Property @advanced_1169_th Example @advanced_1170_th Remarks @advanced_1171_td Data Source @advanced_1172_td H2 Test @advanced_1173_td The name of the ODBC Data Source @advanced_1174_td Database @advanced_1175_td ~/test;ifexists=true @advanced_1176_td The database name. This can include connection settings. By default, the database is stored in the current working directory where the Server is started, except when the -baseDir setting is used. The name must be at least 3 characters. @advanced_1177_td Servername @advanced_1178_td localhost @advanced_1179_td The server name or IP address. @advanced_1180_td By default, only connections from localhost are allowed. @advanced_1181_td Username @advanced_1182_td sa @advanced_1183_td The database user name. @advanced_1184_td SSL @advanced_1185_td false (disabled) @advanced_1186_td At this time, SSL is not supported. @advanced_1187_td Port @advanced_1188_td 5435 @advanced_1189_td The port where the PG Server is listening. @advanced_1190_td Password @advanced_1191_td sa @advanced_1192_td The database password. 
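The command lines referenced in the 'Starting the Server' section above are not included in this text. A sketch, assuming h2*.jar is in the current directory and using the org.h2.tools.Server tool with the -pg, -baseDir and -pgAllowOthers options described above:

```
java -cp h2*.jar org.h2.tools.Server -pg
java -cp h2*.jar org.h2.tools.Server -pg -baseDir ~
java -cp h2*.jar org.h2.tools.Server -pg -pgAllowOthers
```

The first form starts the PG server with databases in the current working directory; the second stores them in the user home directory; the third additionally allows remote connections.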
@advanced_1193_p To improve performance, please enable 'server side prepare' under Options / Datasource / Page 2 / Server side prepare. @advanced_1194_p Afterwards, you may use this data source. @advanced_1195_h3 PG Protocol Support Limitations @advanced_1196_p At this time, only a subset of the PostgreSQL network protocol is implemented. Also, there may be compatibility problems on the SQL level, with the catalog, or with text encoding. Problems are fixed as they are found. Currently, statements can not be canceled when using the PG protocol. Also, H2 does not provide index metadata over ODBC. @advanced_1197_p The PostgreSQL ODBC driver setup requires a database password; that means it is not possible to connect to H2 databases without a password. This is a limitation of the ODBC driver. @advanced_1198_h3 Security Considerations @advanced_1199_p Currently, the PG Server does not support challenge-response authentication or password encryption. This may be a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable to the attacker. Also, it is currently not possible to use encrypted SSL connections. Therefore the ODBC driver should not be used where security is important. @advanced_1200_p The first connection that opens a database using the PostgreSQL server needs to be an administrator user. Subsequent connections don't need to be opened by an administrator. @advanced_1201_h3 Using Microsoft Access @advanced_1202_p When using Microsoft Access to edit data in a linked H2 table, you may need to enable the following option: Tools - Options - Edit/Find - ODBC fields. @advanced_1203_h2 Using H2 in Microsoft .NET @advanced_1204_p The database can be used from Microsoft .NET even without using Java, by using IKVM.NET. You can access an H2 database on .NET using the JDBC API, or using the ADO.NET interface. 
@advanced_1205_h3 Using the ADO.NET API on .NET @advanced_1206_p An implementation of the ADO.NET interface is available in the open source project H2Sharp. @advanced_1207_h3 Using the JDBC API on .NET @advanced_1208_li Install the .NET Framework from Microsoft. Mono has not yet been tested. @advanced_1209_li Install IKVM.NET. @advanced_1210_li Copy the h2*.jar file to ikvm/bin @advanced_1211_li Run the H2 Console using: ikvm -jar h2*.jar @advanced_1212_li Convert the H2 Console to an .exe file using: ikvmc -target:winexe h2*.jar. You may ignore the warnings. @advanced_1213_li Create a .dll file using (change the version accordingly): ikvmc.exe -target:library -version:1.0.69.0 h2*.jar @advanced_1214_p If you want your C# application to use H2, you need to add the h2.dll and the IKVM.OpenJDK.ClassLibrary.dll to your C# solution. Here is some sample code: @advanced_1215_h2 ACID @advanced_1216_p In the database world, ACID stands for: @advanced_1217_li Atomicity: transactions must be atomic, meaning either all tasks are performed or none. @advanced_1218_li Consistency: all operations must comply with the defined constraints. @advanced_1219_li Isolation: transactions must be isolated from each other. @advanced_1220_li Durability: committed transactions will not be lost. @advanced_1221_h3 Atomicity @advanced_1222_p Transactions in this database are always atomic. @advanced_1223_h3 Consistency @advanced_1224_p By default, this database is always in a consistent state. Referential integrity rules are enforced except when explicitly disabled. @advanced_1225_h3 Isolation @advanced_1226_p For H2, as with most other database systems, the default isolation level is 'read committed'. This provides better performance, but also means that transactions are not completely isolated. H2 supports the transaction isolation levels 'serializable', 'read committed', and 'read uncommitted'. 
@advanced_1227_h3 Durability @advanced_1228_p This database does not guarantee that all committed transactions survive a power failure. Tests show that all databases sometimes lose transactions on power failure (for details, see below). Where losing transactions is not acceptable, a laptop or UPS (uninterruptible power supply) should be used. If durability is required for all possible cases of hardware failure, clustering should be used, such as the H2 clustering mode. @advanced_1229_h2 Durability Problems @advanced_1230_p Complete durability means all committed transactions survive a power failure. Some databases claim they can guarantee durability, but such claims are wrong. A durability test was run against H2, HSQLDB, PostgreSQL, and Derby. All of those databases sometimes lose committed transactions. The test is included in the H2 download, see org.h2.test.poweroff.Test. @advanced_1231_h3 Ways to (Not) Achieve Durability @advanced_1232_p Making sure that committed transactions are not lost is more complicated than it first seems. To guarantee complete durability, a database must ensure that the log record is on the hard drive before the commit call returns. To do that, databases use different methods. One is to use the 'synchronous write' file access mode. In Java, RandomAccessFile supports the modes rws and rwd: @advanced_1233_code rwd @advanced_1234_li : every update to the file's content is written synchronously to the underlying storage device. @advanced_1235_code rws @advanced_1236_li : in addition to rwd, every update to the metadata is written synchronously. @advanced_1237_p A test (org.h2.test.poweroff.TestWrite) with one of those modes achieves around 50 thousand write operations per second. Even when the operating system write buffer is disabled, the write rate is around 50 thousand operations per second. This feature does not force changes to disk because it does not flush all buffers. The test updates the same byte in the file again and again. 
If the hard drive were able to write at this rate, then the disk would need to make at least 50 thousand revolutions per second, or 3 million RPM (revolutions per minute). There are no such hard drives. The hard drive used for the test is about 7200 RPM, or about 120 revolutions per second. There is an overhead, so the maximum write rate must be lower than that. @advanced_1238_p Calling fsync flushes the buffers. There are two ways to do that in Java: @advanced_1239_code FileDescriptor.sync() @advanced_1240_li . The documentation says that this forces all system buffers to synchronize with the underlying device. This method is supposed to return after all in-memory modified copies of buffers associated with this file descriptor have been written to the physical medium. @advanced_1241_code FileChannel.force() @advanced_1242_li . This method is supposed to force any updates to this channel's file to be written to the storage device that contains it. @advanced_1243_p By default, MySQL calls fsync for each commit. When using one of those methods, only around 60 write operations per second can be achieved, which is consistent with the RPM rate of the hard drive used. Unfortunately, even when calling FileDescriptor.sync() or FileChannel.force(), data is not always persisted to the hard drive, because most hard drives do not obey fsync(): see Your Hard Drive Lies to You. In Mac OS X, fsync does not flush hard drive buffers. See Bad fsync?. So the situation is confusing, and tests prove there is a problem. @advanced_1244_p Trying to flush hard drive buffers is hard, and if you do, performance is very poor. First you need to make sure that the hard drive actually flushes all buffers. Tests show that this cannot be done in a reliable way. Then the maximum number of transactions is around 60 per second. Because of those reasons, the default behavior of H2 is to delay writing committed transactions. 
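The synchronous write mode and the two fsync mechanisms discussed above can be tried with a short, self-contained example (a sketch using a temporary file; as the text explains, even these calls may not reach the disk platter on drives that ignore fsync):

```java
import java.io.File;
import java.io.RandomAccessFile;

public class SyncExample {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("sync", ".tmp");
        f.deleteOnExit();
        // Mode "rwd": every update to the file's content is written
        // synchronously to the underlying storage device.
        try (RandomAccessFile file = new RandomAccessFile(f, "rwd")) {
            file.write(42);
            // FileDescriptor.sync(): forces all system buffers for this
            // file descriptor to synchronize with the underlying device.
            file.getFD().sync();
            // FileChannel.force(true): forces content and metadata updates
            // of this channel's file to be written to the storage device.
            file.getChannel().force(true);
        }
        System.out.println("bytes written: " + f.length());
    }
}
```

A commit implemented this way is limited to roughly one write per disk revolution, which is why the text measures only around 60 operations per second with fsync, versus around 50 thousand without it.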
@advanced_1245_p In H2, after a power failure, a bit more than one second of committed transactions may be lost. To change the behavior, use SET WRITE_DELAY and CHECKPOINT SYNC. Most other databases support commit delay as well. In the performance comparison, commit delay was used for all databases that support it. @advanced_1246_h3 Running the Durability Test @advanced_1247_p To test the durability / non-durability of this and other databases, you can use the test application in the package org.h2.test.poweroff. Two computers with a network connection are required to run this test. One computer just listens, while the test application is run (and power is cut) on the other computer. The computer with the listener application opens a TCP/IP port and listens for an incoming connection. The second computer first connects to the listener, and then creates the databases and starts inserting records. The connection is set to 'autocommit', which means after each inserted record a commit is performed automatically. Afterwards, the test computer notifies the listener that this record was inserted successfully. The listener computer displays the last inserted record number every 10 seconds. Now, switch off the power manually, then restart the computer, and run the application again. You will find out that in most cases, none of the databases contains all the records that the listener computer knows about. For details, please consult the source code of the listener and test application. @advanced_1248_h2 Using the Recover Tool @advanced_1249_p The Recover tool can be used to extract the contents of a database file, even if the database is corrupted. It also extracts the content of the transaction log and large objects (CLOB or BLOB). To run the tool, type on the command line: @advanced_1250_p For each database in the current directory, a text file will be created.
This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file can be executed using the RunScript tool or a RUNSCRIPT FROM SQL statement. The script includes at least one CREATE USER statement. If you run the script against a database that was created with the same user, or if there are conflicting users, running the script will fail. Consider running the script against a database that was created with a user name that is not in the script. @advanced_1251_p The Recover tool creates a SQL script from a database file. It also processes the transaction log. @advanced_1252_p To verify the database can recover at any time, append ;RECOVER_TEST=64 to the database URL in your test environment. This will simulate an application crash after every 64 writes to the database file. A log file named databaseName.h2.db.log is created that lists the operations. The recovery is tested using an in-memory file system, which means it may require a larger heap setting. @advanced_1253_h2 File Locking Protocols @advanced_1254_p Multiple concurrent connections to the same database are supported, however a database file can only be open for reading and writing (in embedded mode) by one process at the same time. Otherwise, the processes would overwrite each other's data and corrupt the database file. To protect against this problem, whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database stops normally, this lock file is deleted. @advanced_1255_p In special cases (if the process did not terminate normally, for example because there was a power failure), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files.
There are two methods (algorithms) implemented to provide both security (that is, the same database files cannot be opened by two processes at the same time) and simplicity (that is, the lock file does not need to be deleted manually by the user). The two methods are the 'file' method and the 'socket' method. @advanced_1256_p The file locking protocols (except the file locking method 'FS') have the following limitation: if a shared file system is used, and the machine that owns the lock is sent to sleep (standby or hibernate), another machine may take over. If the machine that originally held the lock wakes up, the database may become corrupt. If this situation can occur, the application must ensure the database is closed when the application is put to sleep. @advanced_1257_h3 File Locking Method 'File' @advanced_1258_p The default method for database file locking is the 'File Method'. The algorithm is: @advanced_1259_li If the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition when one process deletes the lock file just after another one creates it, and a third process creates the file again. It does not occur if there are only two writers. @advanced_1260_li If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (by default once every second) if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not get through undetected even if the system is very busy. However, the watchdog thread uses very few resources (CPU time), because it waits most of the time.
Also, the watchdog only reads from the hard disk and does not write to it. @advanced_1261_li If the lock file exists and was recently modified, the process waits for some time (up to two seconds). If it is still being changed, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case and the file is locked. @advanced_1262_p This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. That said, using that many concurrent threads / processes is not the common use case. Generally, an application should throw an error to the user if it cannot open a database, and not try again in a (fast) loop. @advanced_1263_h3 File Locking Method 'Socket' @advanced_1264_p There is a second locking mechanism implemented, but disabled by default. To use it, append ;FILE_LOCK=SOCKET to the database URL. The algorithm is: @advanced_1265_li If the lock file does not exist, it is created. Then a server socket is opened on a defined port, and kept open. The port and IP address of the process that opened the database is written into the lock file. @advanced_1266_li If the lock file exists, and the lock method is 'file', then the software switches to the 'file' method.
@advanced_1267_li If the lock file exists, and the lock method is 'socket', then the process checks if the port is in use. If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example due to a power failure, or abnormal termination of the virtual machine), then the port was released. The new process deletes the lock file and starts again. @advanced_1268_p This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is that if the file is stored on a network share, two processes (running on different computers) could still open the same database files if they do not have a direct TCP/IP connection. @advanced_1269_h3 File Locking Method 'FS' @advanced_1270_p This database file locking mechanism uses a native file system lock on the database file. No *.lock.db file is created in this case, and no background thread is started. This mechanism may not work on all systems as expected. Some systems allow the same file to be locked multiple times within the same virtual machine, and on some systems native file locking is not supported or files are not unlocked after a power failure. @advanced_1271_p To enable this feature, append ;FILE_LOCK=FS to the database URL. @advanced_1272_p This feature is relatively new. When using it for production, please ensure your system does in fact lock files as expected. @advanced_1273_h2 File Locking Method 'Serialized' @advanced_1274_p This locking mode allows opening multiple connections to the same database. The connections may be opened from multiple processes and from different computers. When writing to the database, access is automatically synchronized internally. Write operations are slower than when using the server mode, and concurrency is relatively poor. The advantage of this mode is that there is no need to start a server.
@advanced_1275_p To enable this feature, append ;FILE_LOCK=SERIALIZED to the database URL. @advanced_1276_p This feature is relatively new. When using it for production, please ensure your use case is well tested (if possible with automated test cases). @advanced_1277_p One known limitation when using this mode is: queries that write to the database will fail with the exception "The database is read only" if they are run using Statement.executeQuery(). As a workaround, use Statement.execute(). @advanced_1278_h2 Using Passwords @advanced_1279_h3 Using Secure Passwords @advanced_1280_p Remember that weak passwords can be broken regardless of the encryption and security protocols. Don't use passwords that can be found in a dictionary. Appending numbers does not make passwords secure. A way to create good passwords that can be remembered is: take the first letters of a sentence, use upper and lower case characters, and creatively include special characters (but it's more important to use a long password than to use special characters). Example: @advanced_1281_code i'sE2rtPiUKtT @advanced_1282_p from the sentence it's easy to remember this password if you know the trick. @advanced_1283_h3 Passwords: Using Char Arrays instead of Strings @advanced_1284_p Java strings are immutable objects and cannot be safely 'destroyed' by the application. After creating a string, it will remain in the main memory of the computer at least until it is garbage collected. The garbage collection cannot be controlled by the application, and even if it is garbage collected the data may still remain in memory. It might also be possible that the part of memory containing the password is swapped to disk (if not enough main memory is available), which is a problem if the attacker has access to the swap file of the operating system. @advanced_1285_p It is a good idea to use char arrays instead of strings for passwords.
Char arrays can be cleared (filled with zeros) after use, and therefore the password will not be stored in the swap file. @advanced_1286_p This database supports using char arrays instead of strings to pass user and file passwords. The following code can be used to do that: @advanced_1287_p This example requires Java 1.6. When using Swing, use javax.swing.JPasswordField. @advanced_1288_h3 Passing the User Name and/or Password in the URL @advanced_1289_p Instead of passing the user name as a separate parameter, as in Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "123"); the user name (and/or password) can be supplied in the URL itself: Connection conn = DriverManager.getConnection("jdbc:h2:~/test;USER=sa;PASSWORD=123"); The settings in the URL override the settings passed as a separate parameter. @advanced_1290_h2 Password Hash @advanced_1291_p Sometimes the database password needs to be stored in a configuration file (for example in the web.xml file). In addition to connecting with the plain text password, this database supports connecting with the password hash. This means that only the hash of the password (and not the plain text password) needs to be stored in the configuration file. This only protects others from reading or re-constructing the plain text password (even if they have access to the configuration file); it does not protect others from accessing the database using the password hash. @advanced_1292_p To connect using the password hash instead of the plain text password, append ;PASSWORD_HASH=TRUE to the database URL, and replace the password with the password hash. To calculate the password hash from a plain text password, run the following command within the H2 Console tool: @password_hash <upperCaseUserName> <password>. As an example, if the user name is sa and the password is test, run the command @password_hash SA test. Then use the resulting password hash as you would use the plain text password.
When using an encrypted database, the user password and the file password need to be hashed separately. To calculate the hash of the file password, run: @password_hash file <filePassword>. @advanced_1293_h2 Protection against SQL Injection @advanced_1294_h3 What is SQL Injection @advanced_1295_p This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as: @advanced_1296_p If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password: ' OR ''='. In this case the statement becomes: @advanced_1297_p This condition is always true, no matter what password is stored in the database. For more information about SQL Injection, see Glossary and Links. @advanced_1298_h3 Disabling Literals @advanced_1299_p SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a prepared statement: @advanced_1300_p This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement: @advanced_1301_p Afterwards, SQL statements with text and number literals are not allowed any more. That means, SQL statements of the form WHERE NAME='abc' or WHERE CustomerId=10 will fail. It is still possible to use prepared statements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where number literals are allowed: SET ALLOW_LITERALS NUMBERS.
To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator. @advanced_1302_h3 Using Constants @advanced_1303_p Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the CREATE CONSTANT command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas: @advanced_1304_p Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, and the source code is easier to understand and change. @advanced_1305_h3 Using the ZERO() Function @advanced_1306_p It is not required to create a constant for the number 0 as there is already a built-in function ZERO(): @advanced_1307_h2 Protection against Remote Access @advanced_1308_p By default this database does not allow connections from other machines when starting the H2 Console, the TCP server, or the PG server. Remote access can be enabled using the command line options -webAllowOthers, -tcpAllowOthers, -pgAllowOthers. If you enable remote access, please also consider using the options -baseDir, -ifExists, so that remote users cannot create new databases or access existing databases with weak passwords. When using the option -baseDir, only databases within that directory may be accessed. Ensure the existing accessible databases are protected using strong passwords. @advanced_1309_h2 Restricting Class Loading and Usage @advanced_1310_p By default there is no restriction on loading classes and executing Java code for admins.
That means an admin may call system functions such as System.setProperty by executing: @advanced_1311_p To restrict users (including admins) from loading classes and executing code, the list of allowed classes can be set in the system property h2.allowedClasses in the form of a comma separated list of classes or patterns (items ending with *). By default all classes are allowed. Example: @advanced_1312_p This mechanism is used for all user classes, including database event listeners, trigger classes, user-defined functions, user-defined aggregate functions, and JDBC driver classes (with the exception of the H2 driver) when using the H2 Console. @advanced_1313_h2 Security Protocols @advanced_1314_p The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts who already know the underlying security primitives. @advanced_1315_h3 User Password Encryption @advanced_1316_p When a user tries to connect to a database, the combination of user name, @, and password is hashed using SHA-256, and this hash value is transmitted to the database. This step does not protect against an attacker who re-uses the value if he is able to listen to the (unencrypted) transmission between the client and the server. But the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected to some extent. See also 'RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication' for more information. @advanced_1317_p When a new database or user is created, a new random salt value is generated. The size of the salt is 64 bits. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords.
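The hashing scheme described above can be sketched with the JDK's MessageDigest class. This is a minimal illustration only: the class name, the UTF-8 encoding, and the exact concatenation order are assumptions, not H2's actual byte layout.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class PasswordHashSketch {

    // Hash the combination of user name, '@', and password with
    // SHA-256, as described above. (UTF-8 encoding is an assumption
    // for illustration; H2's real byte encoding may differ.)
    static byte[] userPasswordHash(String user, String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return md.digest((user + "@" + password).getBytes("UTF-8"));
    }

    // Combine the hash with a random 64-bit salt and hash again;
    // a value of this form is what gets stored in the database.
    static byte[] storedHash(byte[] userPasswordHash, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(userPasswordHash);
        md.update(salt);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[8]; // 64 bits, as described above
        new SecureRandom().nextBytes(salt);
        byte[] hash = storedHash(userPasswordHash("SA", "test"), salt);
        System.out.println("stored hash length: " + hash.length); // 32 bytes
    }
}
```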
@advanced_1318_p The combination of the user-password hash value (see above) and the salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only accessible remotely, then the iteration count is not required at all. @advanced_1319_h3 File Encryption @advanced_1320_p The database files can be encrypted using two different algorithms: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is a bit faster than AES in some environments) and to have an alternative algorithm if AES is suddenly broken. Please note that the XTEA implementation used in this database only uses 32 rounds and not 64 rounds as recommended by its inventor (as of 2010, the best known attack is on 27 rounds). @advanced_1321_p When a user tries to connect to an encrypted database, the combination of file@ and the file password is hashed using SHA-256. This hash value is transmitted to the server. @advanced_1322_p When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bits. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords.
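The iterated key derivation described above can be sketched as follows. This is an illustration of the technique, not H2's exact implementation: whether the salt is re-mixed into every round is an assumption here (simple chaining is used), and the class name is hypothetical.

```java
import java.security.MessageDigest;

public class KeyDerivationSketch {

    // Hash the combination of file password hash and salt, then keep
    // re-hashing so that SHA-256 is applied 1024 times in total.
    // Each iteration makes brute-forcing common passwords slower.
    static byte[] deriveKey(byte[] filePasswordHash, byte[] salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(filePasswordHash);
        md.update(salt);
        byte[] h = md.digest();
        for (int i = 1; i < 1024; i++) {
            h = md.digest(h); // chain the previous hash into the next round
        }
        return h; // used as key material for the block cipher
    }

    public static void main(String[] args) throws Exception {
        byte[] key = deriveKey(new byte[32], new byte[8]); // 64-bit salt
        System.out.println("derived key length: " + key.length); // 32 bytes
    }
}
```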
@advanced_1323_p The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks. @advanced_1324_p Before saving a block of data (each block is 8 bytes long), the following operations are executed: first, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm. @advanced_1325_p When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR. @advanced_1326_p Therefore, the block cipher mode of operation is CBC (cipher-block chaining), but each chain is only one block long. The advantage over the ECB (electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped cipher text bits are not propagated to flipped plaintext bits in the next block. @advanced_1327_p Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to files while the database is in use. When he has write access, he can, for example, replace pieces of files with pieces of older versions and manipulate the data in this way. @advanced_1328_p File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer when using AES (embedded mode).
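The per-block scheme described above (encrypt the block number with the secret IV key, XOR the result with the plain text, then encrypt) can be sketched with javax.crypto. This is a sketch under stated assumptions: it uses AES with 16-byte blocks for illustration (the text above describes 8-byte blocks for XTEA), and the key material and class name are hypothetical, not H2's actual code.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.util.Arrays;

public class BlockCipherSketch {

    // Raw single-block AES: "ECB" applied to exactly one block, no padding.
    static byte[] aes(byte[] key16, byte[] block, int mode) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(mode, new SecretKeySpec(key16, "AES"));
        return c.doFinal(block);
    }

    // Encode the block number into a cipher-block-sized buffer.
    static byte[] blockNumberBytes(long blockNumber) {
        byte[] counter = new byte[16];
        for (int i = 0; i < 8; i++) {
            counter[i] = (byte) (blockNumber >>> (8 * i));
        }
        return counter;
    }

    // Encrypt one block: IV = encrypt(blockNumber, ivKey);
    // cipherText = encrypt(plain XOR IV, key). One-block CBC, as above.
    static byte[] encryptBlock(byte[] key, byte[] ivKey, long blockNumber,
            byte[] plain) throws Exception {
        byte[] iv = aes(ivKey, blockNumberBytes(blockNumber), Cipher.ENCRYPT_MODE);
        byte[] x = plain.clone();
        for (int i = 0; i < 16; i++) {
            x[i] ^= iv[i];
        }
        return aes(key, x, Cipher.ENCRYPT_MODE);
    }

    // Decryption reverses the steps: decrypt first, then XOR with the IV.
    static byte[] decryptBlock(byte[] key, byte[] ivKey, long blockNumber,
            byte[] cipherText) throws Exception {
        byte[] iv = aes(ivKey, blockNumberBytes(blockNumber), Cipher.ENCRYPT_MODE);
        byte[] x = aes(key, cipherText, Cipher.DECRYPT_MODE);
        for (int i = 0; i < 16; i++) {
            x[i] ^= iv[i];
        }
        return x;
    }

    public static void main(String[] args) throws Exception {
        // The IV key is derived by hashing the key with SHA-256
        // (truncated to 16 bytes here, since AES-128 needs a 16-byte key).
        byte[] key = Arrays.copyOf("demo key".getBytes("UTF-8"), 16);
        byte[] ivKey = Arrays.copyOf(
                MessageDigest.getInstance("SHA-256").digest(key), 16);
        byte[] plain = Arrays.copyOf("hello".getBytes("UTF-8"), 16);
        byte[] enc = encryptBlock(key, ivKey, 42L, plain);
        byte[] dec = decryptBlock(key, ivKey, 42L, enc);
        System.out.println("round trip ok: " + Arrays.equals(plain, dec));
    }
}
```

Because the IV depends on the block number, identical plain text blocks stored at different positions encrypt to different cipher text, which is the watermark protection the text describes.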
@advanced_1329_h3 Wrong Password / User Name Delay @advanced_1330_p To protect against remote brute force password attacks, the delay after each unsuccessful login doubles. Use the system properties h2.delayWrongPasswordMin and h2.delayWrongPasswordMax to change the minimum (the default is 250 milliseconds) or maximum delay (the default is 4000 milliseconds, or 4 seconds). The delay only applies when the wrong password is used. Normally there is no delay for a user that knows the correct password, with one exception: after using the wrong password, there is a randomly distributed delay of up to the same length as for a wrong password. This is to protect against parallel brute force attacks, so that an attacker needs to wait for the whole delay. Delays are synchronized. This is also required to protect against parallel attacks. @advanced_1331_p There is only one exception message for both a wrong user name and a wrong password, to make it harder to get the list of user names. It is not possible from the stack trace to see if the user name was wrong or the password. @advanced_1332_h3 HTTPS Connections @advanced_1333_p The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-signed certificate to support an easy starting point, but custom certificates are supported as well. @advanced_1334_h2 SSL/TLS Connections @advanced_1335_p Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket, SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is SSL_DH_anon_WITH_RC4_128_MD5. @advanced_1336_p To use your own keystore, set the system properties javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword before starting the H2 server and client. See also Customizing the Default Key and Trust Stores, Store Types, and Store Passwords for more information. @advanced_1337_p To disable anonymous SSL, set the system property h2.enableAnonymousSSL to false.
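The wrong-password delay described in this section amounts to capped exponential backoff. A sketch of the arithmetic (the constants are the defaults given above; the class and method names are hypothetical, not H2's API):

```java
public class LoginDelaySketch {

    static final int MIN_DELAY = 250;  // h2.delayWrongPasswordMin default, in ms
    static final int MAX_DELAY = 4000; // h2.delayWrongPasswordMax default, in ms

    // Delay after the given number of consecutive failed logins:
    // it starts at the minimum and doubles each time, capped at the maximum.
    static int delayAfterFailures(int failedAttempts) {
        long delay = MIN_DELAY;
        for (int i = 1; i < failedAttempts; i++) {
            delay = Math.min(delay * 2, MAX_DELAY);
        }
        return (int) delay;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 6; i++) {
            System.out.println(i + " failure(s): " + delayAfterFailures(i) + " ms");
        }
        // prints 250, 500, 1000, 2000, 4000, 4000 ms
    }
}
```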
@advanced_1338_h2 Universally Unique Identifiers (UUID) @advanced_1339_p This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two UUIDs having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values: @advanced_1340_p Some values are: @advanced_1341_th Number of UUIDs @advanced_1342_th Probability of Duplicates @advanced_1343_td 2^36=68'719'476'736 @advanced_1344_td 0.000'000'000'000'000'4 @advanced_1345_td 2^41=2'199'023'255'552 @advanced_1346_td 0.000'000'000'000'4 @advanced_1347_td 2^46=70'368'744'177'664 @advanced_1348_td 0.000'000'000'4 @advanced_1349_p To help non-mathematicians understand what those numbers mean, here is a comparison: one's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.000'000'000'06. @advanced_1350_h2 Recursive Queries @advanced_1351_p H2 has experimental support for recursive queries using so-called "common table expressions" (CTE). Examples: @advanced_1352_p Limitations: Recursive queries need to be of the type UNION ALL, and the recursion needs to be on the second part of the query. No tables or views with the name of the table expression may exist. Different table expression names need to be used when using multiple distinct table expressions within the same transaction and for the same session. All columns of the table expression are of type VARCHAR, and may need to be cast to the required data type. Views with recursive queries are not supported.
Subqueries and INSERT INTO ... FROM with recursive queries are not supported. Parameters are only supported within the last SELECT statement (a workaround is to use session variables like @start within the table expression). The syntax is: @advanced_1353_h2 Settings Read from System Properties @advanced_1354_p Some settings of the database can be set on the command line using -DpropertyName=value. It is usually not required to change those settings manually. The settings are case sensitive. Example: @advanced_1355_p The current value of the settings can be read in the table INFORMATION_SCHEMA.SETTINGS. @advanced_1356_p For a complete list of settings, see SysProperties. @advanced_1357_h2 Setting the Server Bind Address @advanced_1358_p Usually server sockets accept connections on any/all local addresses. This may be a problem on multi-homed hosts. To bind only to one address, use the system property h2.bindAddress. This setting is used for both regular server sockets and for SSL server sockets. IPv4 and IPv6 address formats are supported. @advanced_1359_h2 Pluggable File System @advanced_1360_p This database supports a pluggable file system API. The file system implementation is selected using a file name prefix. The following file systems are included: @advanced_1361_code zip: @advanced_1362_li read-only zip-file based file system. Format: zip:/zipFileName!/fileName. @advanced_1363_code split: @advanced_1364_li file system that splits files into 1 GB files (stackable with other file systems). @advanced_1365_code nio: @advanced_1366_li file system that uses FileChannel instead of RandomAccessFile (faster in some operating systems). @advanced_1367_code nioMapped: @advanced_1368_li file system that uses memory mapped files (faster in some operating systems). Please note that there is currently a file size limitation of 2 GB when using this file system with a 32-bit JVM. To work around this limitation, combine it with the split file system: split:nioMapped:test.
@advanced_1369_code memFS: @advanced_1370_li in-memory file system (slower than mem; experimental; mainly used for testing the database engine itself). @advanced_1371_code memLZF: @advanced_1372_li compressing in-memory file system (slower than memFS but uses less memory; experimental; mainly used for testing the database engine itself). @advanced_1373_p As an example, to use the nio file system, use the following database URL: jdbc:h2:nio:~/test. @advanced_1374_p To register a new file system, extend the classes org.h2.store.fs.FilePath, FileBase, and call the method FilePath.register before using it. @advanced_1375_p For input streams (but not for random access files), URLs may be used in addition to the registered file systems. Example: jar:file:///c:/temp/example.zip!/org/example/nested.csv. To read a stream from the classpath, use the prefix classpath:, as in classpath:/org/h2/samples/newsfeed.sql. @advanced_1376_h2 Split File System @advanced_1377_p The file system prefix split: is used to split logical files into multiple physical files, for example so that a database can get larger than the maximum file system size of the operating system. If the logical file is larger than the maximum file size, then the file is split as follows: @advanced_1378_code <fileName> @advanced_1379_li (first block, is always created) @advanced_1380_code <fileName>.1.part @advanced_1381_li (second block) @advanced_1382_p More physical files (*.2.part, *.3.part) are automatically created / deleted if needed. The maximum physical file size of a block is 2^30 bytes, which is also called 1 GiB or 1 GB. However this can be changed if required, by specifying the block size in the file name. The file name format is: split:<x>:<fileName> where the file size per block is 2^x. For 1 MiB block sizes, use x = 20 (because 2^20 is 1 MiB). The following file name means the logical file is split into 1 MiB blocks: split:20:test.h2.db.
An example database URL for this case is jdbc:h2:split:20:~/test. @advanced_1383_h2 Database Upgrade @advanced_1384_p In version 1.2, H2 introduced a new file store implementation which is incompatible with the one used in versions < 1.2. To automatically convert databases to the new file store, it is necessary to include an additional jar file. The file can be found at http://h2database.com/h2mig_pagestore_addon.jar . If this file is in the classpath, every connection to an older database will result in a conversion process. @advanced_1385_p The conversion itself is done internally via 'script to' and 'runscript from'. After the conversion process, the files will be renamed from @advanced_1386_code dbName.data.db @advanced_1387_li to dbName.data.db.backup @advanced_1388_code dbName.index.db @advanced_1389_li to dbName.index.db.backup @advanced_1390_p by default. Also, the temporary script will be written to the database directory instead of a temporary directory. Both defaults can be customized via @advanced_1391_code org.h2.upgrade.DbUpgrade.setDeleteOldDb(boolean) @advanced_1392_code org.h2.upgrade.DbUpgrade.setScriptInTmpDir(boolean) @advanced_1393_p prior to opening a database connection. @advanced_1394_p Since version 1.2.140 it is possible to let the old H2 classes (v 1.2.128) connect to the database. The automatic upgrade .jar file must be present, and the URL must start with jdbc:h2v1_1: (the JDBC driver class is org.h2.upgrade.v1_1.Driver). To automatically connect using the old version when a database with the old format exists (without upgrading), and use the new version otherwise, append ;NO_UPGRADE=TRUE to the database URL. Please note the old driver did not process the system property "h2.baseDir" correctly, so using this setting is not supported when upgrading.
@advanced_1395_h2 Limits and Limitations @advanced_1396_p This database has the following known limitations: @advanced_1397_li Database file size limit: 4 TB (using the default page size of 2 KB) or higher (when using a larger page size). This limit includes CLOB and BLOB data. @advanced_1398_li The maximum file size for FAT or FAT32 file systems is 4 GB. That means when using FAT or FAT32, the limit is 4 GB for the data. This is a limitation of the file system. The database provides a workaround for this problem: use the file name prefix split:. In that case, files are split into files of 1 GB by default. An example database URL is: jdbc:h2:split:~/test. @advanced_1399_li The maximum number of rows per table is 2^64. @advanced_1400_li Main memory requirements: The larger the database, the more main memory is required. With the version 1.1 storage mechanism, the minimum main memory required for a 12 GB database was around 240 MB. With the current page store, the minimum main memory required is much lower, around 1 MB for each 8 GB of database file size. @advanced_1401_li Limit on the complexity of SQL statements. Statements of the following form will result in a stack overflow exception: @advanced_1402_li There is no limit for the following entities, except the memory and storage capacity: maximum identifier length (table name, column name, and so on); maximum number of tables, columns, indexes, triggers, and other database objects; maximum statement length, number of parameters per statement, tables per statement, expressions in order by, group by, having, and so on; maximum rows per query; maximum columns per table, columns per index, indexes per table, lob columns per table, and so on; maximum row length, index row length, select row length; maximum length of a varchar column, decimal column, literal in a statement. @advanced_1403_li Querying from the metadata tables is slow if there are many tables (thousands). 
@advanced_1404_li For limitations on data types, see the documentation of the respective Java data type or the data type documentation of this database. @advanced_1405_h2 Glossary and Links @advanced_1406_th Term @advanced_1407_th Description @advanced_1408_td AES-128 @advanced_1409_td A block encryption algorithm. See also: Wikipedia: AES @advanced_1410_td Birthday Paradox @advanced_1411_td Describes the higher-than-expected probability that two persons in a room have the same birthday. This also applies to randomly generated UUIDs. See also: Wikipedia: Birthday Paradox @advanced_1412_td Digest @advanced_1413_td Protocol to protect a password (but not to protect data). See also: RFC 2617: HTTP Digest Access Authentication @advanced_1414_td GCJ @advanced_1415_td Compiler for Java. See also: GNU Compiler for Java, and NativeJ (commercial) @advanced_1416_td HTTPS @advanced_1417_td A protocol to provide security to HTTP connections. See also: RFC 2818: HTTP Over TLS @advanced_1418_td Modes of Operation @advanced_1419_a Wikipedia: Block cipher modes of operation @advanced_1420_td Salt @advanced_1421_td Random number to increase the security of passwords. See also: Wikipedia: Key derivation function @advanced_1422_td SHA-256 @advanced_1423_td A cryptographic one-way hash function. See also: Wikipedia: SHA hash functions @advanced_1424_td SQL Injection @advanced_1425_td A security vulnerability where an application builds SQL statements or expressions by embedding unchecked user input. See also: Wikipedia: SQL Injection @advanced_1426_td Watermark Attack @advanced_1427_td Security problem of certain encryption programs where the existence of certain data can be proven without decrypting. For more information, search the Internet for 'watermark attack cryptoloop' @advanced_1428_td SSL/TLS @advanced_1429_td Secure Sockets Layer / Transport Layer Security. See also: Java Secure Socket Extension (JSSE) @advanced_1430_td XTEA @advanced_1431_td A block encryption algorithm. 
See also: Wikipedia: XTEA @build_1000_h1 Build @build_1001_a Portability @build_1002_a Environment @build_1003_a Building the Software @build_1004_a Build Targets @build_1005_a Using Maven 2 @build_1006_a Using Eclipse @build_1007_a Translating @build_1008_a Providing Patches @build_1009_a Reporting Problems or Requests @build_1010_a Automated Build @build_1011_a Generating Railroad Diagrams @build_1012_h2 Portability @build_1013_p This database is written in Java and therefore works on many platforms. It can also be compiled to a native executable using GCJ. @build_1014_h2 Environment @build_1015_p To run this database, a Java Runtime Environment (JRE) version 1.6 or higher is required. @build_1016_p To create the database executables, the following software stack was used. However, installing this software is not required in order to use this database. @build_1017_li Mac OS X and Windows @build_1018_a Sun JDK Version 1.6 and 1.7 @build_1019_a Eclipse @build_1020_li Eclipse Plugins: Subclipse, Eclipse Checkstyle Plug-in, EclEmma Java Code Coverage @build_1021_a Emma Java Code Coverage @build_1022_a Mozilla Firefox @build_1023_a OpenOffice @build_1024_a NSIS @build_1025_li (Nullsoft Scriptable Install System) @build_1026_a Maven @build_1027_h2 Building the Software @build_1028_p You need to install a JDK, for example the Sun JDK version 1.6 or 1.7. Ensure that the Java binary directory is included in the PATH environment variable, and that the environment variable JAVA_HOME points to your Java installation. On the command line, go to the directory h2 and execute the following command: @build_1029_p For Linux and OS X, use ./build.sh instead of build. @build_1030_p You will get a list of targets. If you want to build the jar file, execute (Windows): @build_1031_p To run the build tool in shell mode, use the command line option - as in ./build.sh -. @build_1032_h3 Switching the Source Code @build_1033_p The source code uses Java 1.6 features. 
To switch the source code to the installed version of Java, run: @build_1034_h2 Build Targets @build_1035_p The build system can generate smaller jar files as well. The following targets are currently supported: @build_1036_code jarClient @build_1037_li creates the file h2client.jar. This only contains the JDBC client. @build_1038_code jarSmall @build_1039_li creates the file h2small.jar. This only contains the embedded database. Debug information is disabled. @build_1040_code jarJaqu @build_1041_li creates the file h2jaqu.jar. This only contains the JaQu (Java Query) implementation. All other jar files do not include JaQu. @build_1042_code javadocImpl @build_1043_li creates the Javadocs of the implementation. @build_1044_p To create the file h2client.jar, go to the directory h2 and execute the following command: @build_1045_h3 Using Lucene 2 / 3 @build_1046_p Both Apache Lucene 2 and Lucene 3 are supported. Currently Apache Lucene version 2.x is used by default for H2 version 1.2.x, and Lucene version 3.x is used by default for H2 version 1.3.x. To use a different version of Lucene when compiling, it needs to be specified as follows: @build_1047_h2 Using Maven 2 @build_1048_h3 Using a Central Repository @build_1049_p You can include the database in your Maven 2 project as a dependency. Example: @build_1050_p New versions of this database are first uploaded to http://hsql.sourceforge.net/m2-repo/ and then automatically synchronized with the main Maven repository; however after a new release it may take a few hours before they are available there. @build_1051_h3 Maven Plugin to Start and Stop the TCP Server @build_1052_p A Maven plugin to start and stop the H2 TCP server is available from Laird Nelson at GitHub. 
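The Maven dependency mentioned under "Using a Central Repository" above typically looks like the following fragment. The coordinates (groupId com.h2database, artifactId h2) are the well-known ones; the version shown is simply one of the releases listed in this document's change log, so substitute the release you actually need:

```xml
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.3.171</version>
</dependency>
```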
To start the H2 server, use: @build_1053_p To stop the H2 server, use: @build_1054_h3 Using Snapshot Version @build_1055_p To build an h2-*-SNAPSHOT.jar file and upload it to the local Maven 2 repository, execute the following command: @build_1056_p Afterwards, you can include the database in your Maven 2 project as a dependency: @build_1057_h2 Using Eclipse @build_1058_p To create an Eclipse project for H2, use the following steps: @build_1059_li Install Subversion and Eclipse. @build_1060_li Get the H2 source code from the Subversion repository: @build_1061_code svn checkout http://h2database.googlecode.com/svn/trunk h2database-read-only @build_1062_li Download all dependencies (Windows): @build_1063_code build.bat download @build_1064_li In Eclipse, create a new Java project from existing source code: File, New, Project, Java Project, Create project from existing source. @build_1065_li Select the h2 folder, click Next and Finish. @build_1066_li To resolve com.sun.javadoc import statements, you may need to manually add the file <java.home>/../lib/tools.jar to the build path. @build_1067_h2 Translating @build_1068_p The translation of this software is split into the following parts: @build_1069_li H2 Console: src/main/org/h2/server/web/res/_text_*.prop @build_1070_li Error messages: src/main/org/h2/res/_messages_*.prop @build_1071_p To translate the H2 Console, start it and select Preferences / Translate. After you are done, send the translated *.prop file to the Google Group. The web site is currently translated using Google. @build_1072_h2 Providing Patches @build_1073_p If you would like to provide patches, please consider the following guidelines to simplify merging them: @build_1074_li Only use Java 6 features (do not use Java 7) (see Environment). @build_1075_li Follow the coding style used in the project, and use Checkstyle (see above) to verify. For example, do not use tabs (use spaces instead). The checkstyle configuration is in src/installer/checkstyle.xml. 
@build_1076_li A template of the Eclipse settings is in src/installer/eclipse.settings/*. If you want to use them, you need to copy them to the .settings directory. The formatting options (eclipseCodeStyle) are also included. @build_1077_li Please provide test cases and integrate them into the test suite. For Java level tests, see src/test/org/h2/test/TestAll.java. For SQL level tests, see src/test/org/h2/test/test.in.txt or testSimple.in.txt. @build_1078_li The test cases should cover at least 90% of the changed and new code; use a code coverage tool to verify that (see above), or use the build target coverage. @build_1079_li Verify that you did not break other features: run the test cases by executing build test. @build_1080_li Provide end user documentation if required (src/docsrc/html/*). @build_1081_li Document grammar changes in src/docsrc/help/help.csv. @build_1082_li Provide a change log entry (src/docsrc/html/changelog.html). @build_1083_li Verify the spelling using build spellcheck. If required, add the new words to src/tools/org/h2/build/doc/dictionary.txt. @build_1084_li Run src/installer/buildRelease to find and fix formatting errors. @build_1085_li Verify the formatting using build docs and build javadoc. @build_1086_li Submit patches as .patch files (compressed if big). To create a patch using Eclipse, use Team / Create Patch. @build_1087_p For legal reasons, patches need to be public in the form of an email to the group, or in the form of an issue report or attachment. Significant contributions need to include the following statement: 
@build_1089_h2 Reporting Problems or Requests @build_1090_p Please consider the following checklist if you have a question, want to report a problem, or if you have a feature request: @build_1091_li For bug reports, please provide a short, self-contained, correct (compilable) example of the problem. @build_1092_li Feature requests are always welcome, even if the feature is already on the roadmap. Your mail will help prioritize feature requests. If you urgently need a feature, consider providing a patch. @build_1093_li Before posting problems, check the FAQ and do a Google search. @build_1094_li When you get an unexpected exception, please try the Error Analyzer tool. If this doesn't help, please report the problem, including the complete error message and stack trace, and the root cause stack trace(s). @build_1095_li When sending source code, please use a public web clipboard such as Pastebin, Cl1p, or Mystic Paste to avoid formatting problems. Please keep test cases as simple and short as possible, but so that the problem can still be reproduced. As a template, use: HelloWorld.java. Methods that simply call other methods should be avoided, as well as unnecessary exception handling. Please use the JDBC API and no external tools or libraries. The test should include all required initialization code, and should be started with the main method. @build_1096_li For large attachments, use a public temporary storage such as Rapidshare. @build_1097_li Google Group versus issue tracking: Use the Google Group for questions or if you are not sure it's a bug. If you are sure it's a bug, you can create an issue, but you don't need to (sending an email to the group is enough). Please note that only a few people monitor the issue tracking system. 
@build_1098_li For out-of-memory problems, please analyze the problem yourself first, for example using the command line option -XX:+HeapDumpOnOutOfMemoryError (to create a heap dump file on out of memory) and a memory analysis tool such as the Eclipse Memory Analyzer (MAT). @build_1099_li It may take a few days to get an answer. Please do not double post. @build_1100_h2 Automated Build @build_1101_p This build process is automated and runs regularly. The build process includes running the tests and code coverage, using the command line ./build.sh clean jar coverage -Dh2.ftpPassword=... uploadBuild. The last results are available here: @build_1102_a Test Output @build_1103_a Code Coverage Summary @build_1104_a Code Coverage Details (download, 1.3 MB) @build_1105_a Build Newsfeed @build_1106_a Latest Jar File (download, 1 MB) @build_1107_h2 Generating Railroad Diagrams @build_1108_p The railroad diagrams of the SQL grammar are HTML, formatted as nested tables. The diagrams are generated as follows: @build_1109_li The BNF parser (org.h2.bnf.Bnf) reads and parses the BNF from the file help.csv. @build_1110_li The page parser (org.h2.server.web.PageParser) reads the template HTML file and fills in the diagrams. @build_1111_li The rail images (one straight, four junctions, two turns) are generated using a simple Java application. @build_1112_p To generate railroad diagrams for other grammars, see the package org.h2.jcr. This package is used to generate the SQL-2 railroad diagrams for the JCR 2.0 specification. @changelog_1000_h1 Change Log @changelog_1001_h2 Next Version (unreleased) @changelog_1002_li Queries with both LIMIT and OFFSET could throw an IllegalArgumentException. @changelog_1003_li MVStore: larger stores (multiple GB) are now much faster. 
@changelog_1004_li When using local temporary tables without dropping them manually before closing the session, killing the process could result in a database that couldn't be opened (except when using the recover tool). @changelog_1005_li Support TRUNC(timestamp) for improved Oracle compatibility. @changelog_1006_li Add support for CREATE TABLE TEST (ID BIGSERIAL) for PostgreSQL compatibility. Patch from Jesse Long. @changelog_1007_li Add new collation command SET BINARY_COLLATION UNSIGNED, which helps when testing BINARY columns in MySQL mode. @changelog_1008_li Fix issue #453, ABBA race conditions in TABLE LINK connection sharing. @changelog_1009_li Fix Issue 449: Postgres Serial data type should not automatically be marked as primary key @changelog_1010_li Fix Issue 406: support "SELECT h2version()" @changelog_1011_li Fix Issue 389: When there is a multi-column primary key, H2 does not seem to always pick the right index @changelog_1012_li Fix Issue 305: Implement SELECT ... FOR FETCH ONLY @changelog_1013_li Issue 274: Sybase/MSSQLServer compatibility - Add GETDATE and CHARINDEX system functions @changelog_1014_li Issue 274: Sybase/MSSQLServer compatibility - swap parameters of CONVERT function. @changelog_1015_li Issue 274: Sybase/MSSQLServer compatibility - support index clause e.g. "select * from test (index table1_index)" @changelog_1016_h2 Version 1.3.171 (2013-03-17) @changelog_1017_li Security: the TCP server did not correctly restrict access rights of clients in some cases. This was especially a problem when using the flag "tcpAllowOthers". @changelog_1018_li H2 Console: the session timeout can now be configured using the system property "h2.consoleTimeout". @changelog_1019_li Issue 431: Improved compatibility with MySQL: support for "ENGINE=InnoDB charset=UTF8" when creating a table. 
@changelog_1020_li Issue 249: Improved compatibility with MySQL in the MySQL mode: the DatabaseMetaData methods stores*Case*Identifiers now return the same as MySQL. @changelog_1021_li Issue 434: H2 Console didn't work in the Chrome browser due to a wrong viewport argument. @changelog_1022_li There was a possibility that the .lock.db file was not deleted when the database was closed, which could slow down opening the database. @changelog_1023_li The SQL script generated by the "script" command contained inconsistent newlines on Windows. @changelog_1024_li When using trace level 4 (SLF4J) in the server mode, a directory "trace.db" and an empty file were created on the client side. These are no longer created. @changelog_1025_li Optimize IN(...) queries: there was a bug in version 1.3.170 if the type of the left hand side didn't match the type of the right hand side. Fixed. @changelog_1026_li Optimize IN(...) queries: there was a bug in version 1.3.170 for comparison of the type "X IN(NULL, NULL)". Fixed. @changelog_1027_li Timestamps with timezone that were passed as a string were not always converted correctly. For example "2012-11-06T23:00:00.000Z" was converted to "2012-11-06" instead of to "2012-11-07" in the timezone CET. Thanks a lot to Steve Hruda for reporting the problem! @changelog_1028_li New table engine "org.h2.mvstore.db.MVTableEngine" that internally uses the MVStore to persist data. To try it out, append ";DEFAULT_TABLE_ENGINE=org.h2.mvstore.db.MVTableEngine" to the database URL. This is still very experimental, and many features are not supported yet. The data is stored in a file with the suffix ".mv.db". @changelog_1029_li New connection setting "DEFAULT_TABLE_ENGINE" to use a specific table engine if none is set explicitly. This is to simplify testing the MVStore table engine. @changelog_1030_li MVStore: encrypted stores are now supported. Only standardized algorithms are used: PBKDF2, SHA-256, XTS-AES, AES-128. 
@changelog_1031_li MVStore: improved API thanks to Simo Tripodi. @changelog_1032_li MVStore: maps can now be renamed. @changelog_1033_li MVStore: store the file header also at the end of each chunk, which results in a further reduced number of write operations. @changelog_1034_li MVStore: a map implementation that supports concurrent operations. @changelog_1035_li MVStore: unified exception handling; the version is included in the messages. @changelog_1036_li MVStore: old data is now retained for 45 seconds by default. @changelog_1037_li MVStore: compress is now disabled by default, and can be enabled on request. @changelog_1038_li Support ALTER TABLE ADD ... AFTER. Patch from Andrew Gaul (argaul at gmail.com). Fixes issue 401. @changelog_1039_li Improved OSGi support. H2 now registers itself as a DataSourceFactory service. Fixes issue 365. @changelog_1040_li Add a DISK_SPACE_USED system function. Fixes issue 270. @changelog_1041_li Fix a compile-time ambiguity when compiling with JDK7, thanks to a patch from Lukas Eder. @changelog_1042_li Support dropping an index for Lucene full-text indexes. @changelog_1043_li Optimized performance for SELECT ... ORDER BY X LIMIT Y OFFSET Z queries for in-memory databases using partial sort (by Sergi Vladykin). @changelog_1044_li Experimental off-heap memory storage engine "nioMemFS:" and "nioMemLZF:", suggestion from Mark Addleman. @changelog_1045_li Issue 438: JdbcDatabaseMetaData.getSchemas() is no longer supported as of 1.3.169. @changelog_1046_li MySQL compatibility: support for ALTER TABLE tableName MODIFY [COLUMN] columnName columnDef. Patch from Ville Koskela. @changelog_1047_li Issue 404: SHOW COLUMNS FROM tableName does not work with ALLOW_LITERALS=NUMBERS. @changelog_1048_li Throw an explicit error to make it clear we don't support the TRIGGER combination of SELECT and FOR EACH ROW. @changelog_1049_li Issue 439: Utils.sortTopN does not handle single-element arrays. 
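The partial-sort optimization mentioned in the change log entries above (top-N selection for ORDER BY ... LIMIT instead of sorting the full result) can be sketched with a bounded heap. This is only an illustration of the general idea, not H2's implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of top-N selection: keep only the n smallest values in a bounded
// max-heap instead of sorting all rows, which is O(rows * log n).
public class TopN {
    static List<Integer> smallest(int n, List<Integer> rows) {
        // max-heap of size n: the root is the largest of the current best n
        PriorityQueue<Integer> heap = new PriorityQueue<>(Comparator.reverseOrder());
        for (int row : rows) {
            heap.offer(row);
            if (heap.size() > n) {
                heap.poll(); // drop the largest, keeping the n smallest
            }
        }
        List<Integer> result = new ArrayList<>(heap);
        Collections.sort(result); // final ordering of just n elements
        return result;
    }

    public static void main(String[] args) {
        System.out.println(smallest(3, Arrays.asList(5, 1, 4, 2, 8, 0)));
        // [0, 1, 2]
    }
}
```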
@changelog_1050_h2 Version 1.3.170 (2012-11-30) @changelog_1051_li Issue 407: The TriggerAdapter didn't work with CLOB and BLOB columns. @changelog_1052_li PostgreSQL compatibility: support for data types BIGSERIAL and SERIAL as an alias for AUTO_INCREMENT. @changelog_1053_li Issue 417: H2 Console: the web session timeout didn't work, resulting in a memory leak. This was only a problem if the H2 Console was run for a long time and many sessions were opened. @changelog_1054_li Issue 412: Running the Server tool with just the option "-browser" will now log a warning. @changelog_1055_li Issue 411: CloseWatcher registration was not concurrency-safe. @changelog_1056_li MySQL compatibility: support for CONCAT_WS. Thanks a lot to litailang for the patch! @changelog_1057_li PostgreSQL compatibility: support for EXTRACT(WEEK FROM dateColumn). Thanks to Prashant Bhat for the patch! @changelog_1058_li Fix for a bug where we would sometimes use the wrong unique constraint to validate foreign key constraints. @changelog_1059_li Support BOM at the beginning of files for the RUNSCRIPT command @changelog_1060_li Fix in calling SET @X = IDENTITY() where it would return NULL incorrectly @changelog_1061_li Fix ABBA deadlock between adding a constraint and the H2-Log-Writer thread. @changelog_1062_li Optimize IN(...) queries where the values are constant and of the same type. @changelog_1063_li Restore tool: the parameter "quiet" was not used and is now removed. @changelog_1064_li Fix ConcurrentModificationException when creating tables and executing SHOW TABLES in parallel. Reported by Viktor Voytovych. @changelog_1065_li Serialization is now pluggable using the system property "h2.javaObjectSerializer". Thanks to Sergi Vladykin for the patch! @changelog_1066_h2 Version 1.3.169 (2012-09-09) @changelog_1067_li The default jar file is now compiled for Java 6. 
@changelog_1068_li The new jar file will probably not end up in the central Maven repository in the next few weeks because Sonatype has disabled automatic synchronization from SourceForge (which they call 'legacy sync' now). It will probably take some time until this is sorted out. The H2 jar files are deployed to http://h2database.com/m2-repo/com/h2database/h2/maven-metadata.xml and http://hsql.sourceforge.net/m2-repo/com/h2database/h2/maven-metadata.xml as usual. @changelog_1069_li A part of the documentation and the H2 Console has been changed to support the Apple retina display. @changelog_1070_li The CreateCluster tool could not be used if the source database contained a CLOB or BLOB. The root cause was that the TCP server did not synchronize on the session, which caused a problem when using the exclusive mode. @changelog_1071_li Statement.getQueryTimeout(): only the first call to this method will query the database. If the query timeout was changed in another way than calling setQueryTimeout, this method will always return the last value. This was changed because Hibernate calls getQueryTimeout() a lot. @changelog_1072_li Issue 416: PreparedStatement.setNString throws AbstractMethodError. All implemented JDBC 4 methods that don't break compatibility with Java 5 are now included in the default jar file. @changelog_1073_li Issue 414: for some functions, the parameters were evaluated twice (for example "char(nextval(..))" ran "nextval(..)" twice). @changelog_1074_li The ResultSetMetaData methods getSchemaName and getTableName could return null instead of "" (an empty string) as specified in the JDBC API. @changelog_1075_li Added compatibility for "SET NAMES" query in MySQL compatibility mode. @changelog_1076_h2 Version 1.3.168 (2012-07-13) @changelog_1077_li The message "Transaction log could not be truncated" was sometimes written to the .trace.db file even if there was no problem truncating the transaction log. 
@changelog_1078_li New system property "h2.serializeJavaObject" (default: true) that allows disabling serialization of Java objects, so that the objects' compareTo and toString methods can be used. @changelog_1079_li Dylan has translated the H2 Console tool to Korean. Thanks a lot! @changelog_1080_li Executing the statement CREATE INDEX IF NOT EXISTS when the index already exists no longer fails for a read-only database. @changelog_1081_li MVCC: concurrently updating a row could result in the row appearing deleted in the second connection, if there are multiple unique indexes (or a primary key and at least one unique index). Thanks a lot to Teruo for the patch! @changelog_1082_li Fulltext search: in-memory Lucene indexes are now supported. @changelog_1083_li Fulltext search: UUID primary keys are now supported. @changelog_1084_li Apache Tomcat 7.x will no longer log a warning when unloading the web application, if using a connection pool. @changelog_1085_li H2 Console: support the Midori browser (for Debian / Raspberry Pi). @changelog_1086_li When opening a remote session, don't open a temporary file if the trace level is set to zero. @changelog_1087_li Use HMAC for authenticating remote LOB id's, removing the need for maintaining a cache, and removing the limit on the number of LOBs per result set. @changelog_1088_li H2 Console: HTML and XML documents can now be edited in an updatable result set. There is (limited) support for editing multi-line documents. 
@changelog_1094_li Terrence Huang has completed the translation of the H2 Console tool to Chinese. Thanks a lot! @changelog_1095_li Server mode: the number of CLOB / BLOB values that were cached on the server is now the maximum of: 5 times the SERVER_RESULT_SET_FETCH_SIZE (which is 100 by default), and SysProperties.SERVER_CACHED_OBJECTS. @changelog_1096_li In the trace file, the query execution time was incorrect in some cases, especially for the statement SET TRACE_LEVEL_FILE 2. @changelog_1097_li The feature LOG_SIZE_LIMIT that was introduced in version 1.3.165 did not always work correctly (especially with regard to multithreading) and has been removed. The message "Transaction log could not be truncated" is still written to the .trace.db file if required. @changelog_1098_li When reading from a resource using the prefix "classpath:", the ContextClassLoader is now used if the resource can't be read otherwise. @changelog_1099_li DatabaseEventListener now calls setProgress whenever a statement starts and ends. @changelog_1100_li DatabaseEventListener now calls setProgress periodically while a statement is running. @changelog_1101_li The table INFORMATION_SCHEMA.FUNCTION_ALIASES now includes a column TYPE_NAME. @changelog_1102_li Issue 378: when using views, the wrong values were bound to a parameter in some cases. @changelog_1103_li Terrence Huang has translated the error messages to Chinese. Thanks a lot! @changelog_1104_li TRUNC was added as an alias for TRUNCATE. @changelog_1105_li Small optimization for accessing result values by column name. @changelog_1106_li Fix for a bug in Statement.getMoreResults(int). @changelog_1107_li The SCRIPT statement now supports filtering by schema and table. Thanks a lot to Jacob Qvortrup for providing the patch! 
@changelog_1108_h2 Version 1.3.166 (2012-04-08) @changelog_1109_li Indexes on columns that are larger than half the page size (wide indexes) could sometimes get corrupt, resulting in an ArrayIndexOutOfBoundsException in PageBtree.getRow or "Row not found" in PageBtreeLeaf. Also, such indexes used too much disk space. @changelog_1110_li Server mode: when retrieving more than 64 rows each containing a CLOB or BLOB, the error message "The object is already closed" was thrown. @changelog_1111_li ConvertTraceFile: the time in the trace file is now parsed as a long. @changelog_1112_li Invalid connection settings are now detected. @changelog_1113_li Issue 387: WHERE condition getting pushed into sub-query with LIMIT. @changelog_1114_h2 Version 1.3.165 (2012-03-18) @changelog_1115_li Better string representation for decimal values (for example 0.00000000 instead of 0E-26). @changelog_1116_li Prepared statements could only be re-used if the same data types were used the second time they were executed. @changelog_1117_li In error messages about referential constraint violation, the values are now included. @changelog_1118_li SCRIPT and RUNSCRIPT: the password can now be set using a prepared statement. Previously, it was required to be a literal in the SQL statement. @changelog_1119_li MySQL compatibility: SUBSTR with a negative start index now works like MySQL. @changelog_1120_li When enabling autocommit, the transaction is now committed (as required by the JDBC API). @changelog_1121_li The shell script h2.sh did not work with spaces in the path. It also works now with quoted spaces in the argument list. Thanks a lot to Shimizu Fumiyuki for the patch! @changelog_1122_li If the transaction log could not be truncated because of an uncommitted transaction, now "Transaction log could not be truncated" is written to the .trace.db file. Before, the database file was growing and it was hard to find out what the root cause was. 
To prevent the database file from growing, a new feature to automatically roll back the oldest transaction is now available. To enable it, append ;LOG_SIZE_LIMIT=32 to the database URL (in that case, the oldest session is rolled back if the transaction log is 32 MB). @changelog_1123_li ALTER TABLE ADD can now add more than one column at a time. @changelog_1124_li Issue 380: ALTER TABLE ADD FOREIGN KEY with an explicit index didn't verify the index can be used, which would lead to a NullPointerException later on. @changelog_1125_li Issue 384: the wrong kind of exception (NullPointerException) was thrown in a UNION query with an incorrect ORDER BY expression. @changelog_1126_li Issue 362: support LIMIT in UPDATE statements. @changelog_1127_li Browser: if no default browser is set, Google Chrome is now used if available. If not available, then Konqueror, Netscape, or Opera is used if available (as before). @changelog_1128_li CSV tool: new feature to disable writing the column header (option writeColumnHeader). @changelog_1129_li CSV tool: new feature to preserve the case sensitivity of column names (option caseSensitiveColumnNames). @changelog_1130_li PostgreSQL compatibility: LOG(x) is base 10 in the PostgreSQL mode. @changelog_1131_h2 Version 1.3.164 (2012-02-03) @changelog_1132_li New built-in function ARRAY_CONTAINS. @changelog_1133_li Some DatabaseMetaData methods didn't work when using ALLOW_LITERALS NONE. @changelog_1134_li Trying to convert a VARCHAR to UUID will now fail if the text contains a character that is not a hex digit, '-', or whitespace. @changelog_1135_li TriggerAdapter: in "before" triggers, values can be changed using the ResultSet.updateX methods. @changelog_1136_li Creating a table with column data type NULL now works (even if not very useful). @changelog_1137_li ALTER TABLE ALTER COLUMN no longer copies the data for widening conversions (for example if only the precision was increased) unless necessary. 
@changelog_1138_li Multi-threaded kernel: concurrently running an online backup and updating the database resulted in a broken (transactionally incorrect) backup file in some cases. @changelog_1139_li The script created by SCRIPT DROP did not always work if multiple views existed that depended on each other. @changelog_1140_li MathUtils.getSecureRandom could log a warning to System.err if /dev/random is very slow and System.getProperties().toString() returned a string larger than 64 KB. @changelog_1141_li The database file locking mechanism "FS" (;FILE_LOCK=FS) did not work on Linux since version 1.3.161. @changelog_1142_li Sequences: the functions NEXTVAL and CURRVAL did not work as expected when using quoted, mixed-case sequence names. @changelog_1143_li The constructor for Csv objects is now public, and Csv.getInstance() is now deprecated. @changelog_1144_li SimpleResultSet: updating a result set is now supported. @changelog_1145_li Database URL: extra semicolons are not supported. @changelog_1146_h2 Version 1.3.163 (2011-12-30) @changelog_1147_li On out of disk space, the database could sometimes get corrupt if later write operations succeeded. The same problem happened on other kinds of I/O exceptions (where one or some of the writes fail, but subsequent writes succeed). Now the file is closed on the first unsuccessful write operation, so that later requests fail consistently. @changelog_1148_li DatabaseEventListener.diskSpaceIsLow() is no longer supported because it can't be guaranteed that it always works correctly. @changelog_1149_li XMLTEXT now supports an optional parameter to escape newlines. @changelog_1150_li XMLNODE now supports an optional parameter to disable indentation. @changelog_1151_li Csv.write now formats date, time, and timestamp values using java.sql.Date / Time / Timestamp.toString(). Previously, ResultSet.getString() was used, which didn't work well for Oracle.
@changelog_1152_li The shell script h2.sh can now be run from within a different directory. Thanks a lot to Daniel Serodio for the patch! @changelog_1153_li The page size of a persistent database can now be queried using: select * from information_schema.settings where name = 'info.PAGE_SIZE' @changelog_1154_li In the server mode, BLOB and CLOB objects are no longer closed when the result set is closed (as required by the JDBC spec). @changelog_1155_h2 Version 1.3.162 (2011-11-26) @changelog_1156_li The following system properties are no longer supported: h2.allowBigDecimalExtensions, h2.emptyPassword, h2.minColumnNameMap, h2.returnLobObjects, h2.webMaxValueLength. @changelog_1157_li When using a VPN, starting an H2 server did not work (for some VPN software). @changelog_1158_li Oracle compatibility: support for DECODE(...). @changelog_1159_li Lucene fulltext search: creating an index is now faster if the table already contains data. Thanks a lot to Angel Leon from the FrostWire Team for the patch! @changelog_1160_li Update statements with a column list in brackets did not work if the list only contained one column. Example: update test set (id)=(id). @changelog_1161_li Read-only databases in a zip file did not work when using the -baseDir option. @changelog_1162_li Issue 334: SimpleResultSet.getString now also works for Clob columns. @changelog_1163_li Subqueries with an aggregate did not always work. Example: select (select count(*) from test where a = t.a and b = 0) from test t group by a @changelog_1164_li Server: in some (theoretical) cases, exceptions while closing the connection were ignored. @changelog_1165_li Server.createTcpServer, createPgServer, createWebServer: invalid arguments are now detected. @changelog_1166_li The selectivity of LOB columns is no longer calculated because indexes on LOB columns are not supported (however, this should have little effect on performance, as the selectivity is calculated from the hash code and not the data).
@changelog_1167_li New experimental system property "h2.modifyOnWrite": when enabled, the database file is only modified when writing to the database, and the serialized file lock is much faster for read-only operations. @changelog_1168_li A NullPointerException could occur in TableView.isDeterministic for invalid views. @changelog_1169_li Issue 180: when deserializing objects, the context class loader is used instead of the default class loader if the system property "h2.useThreadContextClassLoader" is set. Thanks a lot to Noah Fontes for the patch! @changelog_1170_li When using the exclusive mode, LOB operations could cause the thread to block. This also affected the CreateCluster tool (when using BLOB or CLOB data). @changelog_1171_li The optimization for "group by" was not working correctly if the group by column was aliased in the select list. @changelog_1172_li Issue 326: improved support for case sensitive (mixed case) identifiers without quotes when using DATABASE_TO_UPPER=FALSE. @cheatSheet_1000_h1 H2 Database Engine Cheat Sheet @cheatSheet_1001_h2 Using H2 @cheatSheet_1002_a H2 @cheatSheet_1003_li is open source, free to use and distribute. @cheatSheet_1004_a Download @cheatSheet_1005_li : jar, installer (Windows), zip. @cheatSheet_1006_li To start the H2 Console tool, double click the jar file, or run java -jar h2*.jar, h2.bat, or h2.sh. @cheatSheet_1007_a A new database is automatically created @cheatSheet_1008_a by default @cheatSheet_1009_li . @cheatSheet_1010_a Closing the last connection closes the database @cheatSheet_1011_li .
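The quick-start steps above can be sketched in a few lines of JDBC. This is a minimal example, assuming the h2*.jar is on the classpath; the database name "test" and the table/column names are arbitrary choices for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QuickStart {
    public static void main(String[] args) throws Exception {
        // An in-memory database named "test": it is created automatically
        // on first connect and discarded when the last connection closes.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
             Statement stat = conn.createStatement()) {
            stat.execute("CREATE TABLE GREETING(ID INT PRIMARY KEY, TEXT VARCHAR(255))");
            stat.execute("INSERT INTO GREETING VALUES(1, 'Hello, H2')");
            try (ResultSet rs = stat.executeQuery("SELECT TEXT FROM GREETING WHERE ID = 1")) {
                rs.next();
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

Using jdbc:h2:~/test instead of jdbc:h2:mem:test would create a persistent database file in the user home directory, as described in the URL table.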
@cheatSheet_1012_h2 Documentation @cheatSheet_1013_p Reference: SQL grammar, functions, data types, tools, API @cheatSheet_1014_a Features @cheatSheet_1015_p : fulltext search, encryption, read-only (zip/jar), CSV, auto-reconnect, triggers, user functions @cheatSheet_1016_a Database URLs @cheatSheet_1017_a Embedded @cheatSheet_1018_code jdbc:h2:~/test @cheatSheet_1019_p 'test' in the user home directory @cheatSheet_1020_code jdbc:h2:/data/test @cheatSheet_1021_p 'test' in the directory /data @cheatSheet_1022_code jdbc:h2:test @cheatSheet_1023_p in the current(!) working directory @cheatSheet_1024_a In-Memory @cheatSheet_1025_code jdbc:h2:mem:test @cheatSheet_1026_p multiple connections in one process @cheatSheet_1027_code jdbc:h2:mem: @cheatSheet_1028_p unnamed private; one connection @cheatSheet_1029_a Server Mode @cheatSheet_1030_code jdbc:h2:tcp://localhost/~/test @cheatSheet_1031_p user home dir @cheatSheet_1032_code jdbc:h2:tcp://localhost//data/test @cheatSheet_1033_p absolute dir @cheatSheet_1034_a Server start @cheatSheet_1035_p : java -cp *.jar org.h2.tools.Server @cheatSheet_1036_a Settings @cheatSheet_1037_code jdbc:h2:..;MODE=MySQL @cheatSheet_1038_a compatibility (or HSQLDB,...) @cheatSheet_1039_code jdbc:h2:..;TRACE_LEVEL_FILE=3 @cheatSheet_1040_a log to *.trace.db @cheatSheet_1041_a Using the JDBC API @cheatSheet_1042_a Connection Pool @cheatSheet_1043_a Maven 2 @cheatSheet_1044_a Hibernate @cheatSheet_1045_p hibernate.cfg.xml (or use the HSQLDialect): @cheatSheet_1046_a TopLink and Glassfish @cheatSheet_1047_p Datasource class: org.h2.jdbcx.JdbcDataSource @cheatSheet_1048_code oracle.toplink.essentials.platform.
@cheatSheet_1049_code database.H2Platform @download_1000_h1 Downloads @download_1001_h3 Version 1.3.171 (2013-03-17) @download_1002_a Windows Installer @download_1003_a Platform-Independent Zip @download_1004_h3 Version 1.3.170 (2012-11-30), Last Stable @download_1005_a Windows Installer @download_1006_a Platform-Independent Zip @download_1007_h3 Download Mirror and Older Versions @download_1008_a Platform-Independent Zip @download_1009_h3 Jar File @download_1010_a Maven.org @download_1011_a Sourceforge.net @download_1012_a Latest Automated Build (not released) @download_1013_h3 Maven (Binary, Javadoc, and Source) @download_1014_a Binary @download_1015_a Javadoc @download_1016_a Sources @download_1017_h3 Database Upgrade Helper File @download_1018_a Upgrade database from 1.1 to the current version @download_1019_h3 Subversion Source Repository @download_1020_a Google Code @download_1021_p For details about changes, see the Change Log. @download_1022_h3 News and Project Information @download_1023_a Atom Feed @download_1024_a RSS Feed @download_1025_a DOAP File @download_1026_p (what is this) @faq_1000_h1 Frequently Asked Questions @faq_1001_a I Have a Problem or Feature Request @faq_1002_a Are there Known Bugs? When is the Next Release? @faq_1003_a Is this Database Engine Open Source? @faq_1004_a Is Commercial Support Available? @faq_1005_a How to Create a New Database? @faq_1006_a How to Connect to a Database? @faq_1007_a Where are the Database Files Stored? @faq_1008_a What is the Size Limit (Maximum Size) of a Database? @faq_1009_a Is it Reliable? @faq_1010_a Why is Opening my Database Slow? @faq_1011_a My Query is Slow @faq_1012_a H2 is Very Slow @faq_1013_a Column Names are Incorrect? @faq_1014_a Float is Double? @faq_1015_a Is the GCJ Version Stable? Faster? @faq_1016_a How to Translate this Project? @faq_1017_a How to Contribute to this Project? @faq_1018_h3 I Have a Problem or Feature Request @faq_1019_p Please read the support checklist. 
@faq_1020_h3 Are there Known Bugs? When is the Next Release? @faq_1021_p Usually, bugs get fixed as they are found. There is a release every few weeks. Here is the list of known and confirmed issues: @faq_1022_li When opening a database file in a timezone that has different daylight saving rules: the time part of dates where the daylight saving doesn't match will differ. This is not a problem within regions that use the same rules (such as within the USA, or within Europe), even if the timezone itself is different. As a workaround, export the database to a SQL script using the old timezone, and create a new database in the new timezone. This problem does not occur when using the system property "h2.storeLocalTime" (however, such database files are not compatible with older versions of H2). @faq_1023_li Apache Harmony: there seems to be a bug in Harmony that affects H2. See HARMONY-6505. @faq_1024_li Tomcat and Glassfish 3 set most static fields (final or non-final) to null when unloading a web application. This can cause a NullPointerException in H2 versions 1.1.107 and older, and may still not work in newer versions. Please report it if you run into this issue. In Tomcat >= 6.0 this behavior can be disabled by setting the system property org.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false, however Tomcat may then run out of memory. A known workaround is to put the h2*.jar file in a shared lib directory (common/lib). @faq_1025_li Some problems have been found with right outer joins. Internally, they are converted to left outer joins, which does not always produce the same results as other databases when used in combination with other joins. This problem is fixed in H2 version 1.3. @faq_1026_li When using Install4j before 4.1.4 on Linux and enabling pack200, the h2*.jar becomes corrupted by the install process, causing application failure. A workaround is to add an empty file h2*.jar.nopack next to the h2*.jar file.
This problem is solved in Install4j 4.1.4. @faq_1027_p For a complete list, see Open Issues. @faq_1028_h3 Is this Database Engine Open Source? @faq_1029_p Yes. It is free to use and distribute, and the source code is included. See also under license. @faq_1030_h3 Is Commercial Support Available? @faq_1031_p Yes, commercial support is available, see Commercial Support. @faq_1032_h3 How to Create a New Database? @faq_1033_p By default, a new database is automatically created if it does not yet exist. See Creating New Databases. @faq_1034_h3 How to Connect to a Database? @faq_1035_p The database driver is org.h2.Driver, and the database URL starts with jdbc:h2:. To connect to a database using JDBC, use the following code: @faq_1036_h3 Where are the Database Files Stored? @faq_1037_p When using database URLs like jdbc:h2:~/test, the database is stored in the user directory. For Windows, this is usually C:\Documents and Settings\<userName> or C:\Users\<userName>. If the base directory is not set (as in jdbc:h2:test), the database files are stored in the directory where the application is started (the current working directory). When using the H2 Console application from the start menu, this is <Installation Directory>/bin. The base directory can be set in the database URL. A fixed or relative path can be used. When using the URL jdbc:h2:file:data/sample, the database is stored in the directory data (relative to the current working directory). The directory is created automatically if it does not yet exist. It is also possible to use the fully qualified directory name (and for Windows, the drive name). Example: jdbc:h2:file:C:/data/test @faq_1038_h3 What is the Size Limit (Maximum Size) of a Database? @faq_1039_p See Limits and Limitations. @faq_1040_h3 Is it Reliable? @faq_1041_p That is not easy to say. It is still quite a new product. A lot of tests have been written, and the code coverage of these tests is higher than 80% for each package.
Randomized stress tests are run regularly. But there are probably still bugs that have not yet been found (as with most software). Some features are known to be dangerous; they are only supported for situations where performance is more important than reliability. Those dangerous features are: @faq_1042_li Disabling the transaction log or FileDescriptor.sync() using LOG=0 or LOG=1. @faq_1043_li Using the transaction isolation level READ_UNCOMMITTED (LOCK_MODE 0) while at the same time using multiple connections. @faq_1044_li Disabling database file protection (setting FILE_LOCK to NO in the database URL). @faq_1045_li Disabling referential integrity using SET REFERENTIAL_INTEGRITY FALSE. @faq_1046_p In addition to that, running out of memory should be avoided. In older versions, OutOfMemory errors while using the database could corrupt a database. @faq_1047_p This database is well tested using automated test cases. The tests run every night for more than one hour. But not all areas of this database are equally well tested. When using one of the following features for production, please ensure your use case is well tested (if possible with automated test cases). The areas that are not well tested are: @faq_1048_li Platforms other than Windows XP, Linux, Mac OS X, or JVMs other than Sun 1.6 or 1.7 @faq_1049_li The features AUTO_SERVER and AUTO_RECONNECT. @faq_1050_li The file locking method 'Serialized'. @faq_1051_li Cluster mode, 2-phase commit, savepoints. @faq_1052_li 24/7 operation. @faq_1053_li Fulltext search. @faq_1054_li Operations on LOBs over 2 GB. @faq_1055_li The optimizer may not always select the best plan. @faq_1056_li Using the ICU4J collator. @faq_1057_p Areas considered experimental are: @faq_1058_li The PostgreSQL server @faq_1059_li Clustering (there are cases where transaction isolation can be broken due to timing issues, for example one session overtaking another session).
@faq_1060_li Multi-threading within the engine using SET MULTI_THREADED=1. @faq_1061_li Compatibility modes for other databases (only some features are implemented). @faq_1062_li The soft reference cache (CACHE_TYPE=SOFT_LRU). It might not improve performance, and out of memory issues have been reported. @faq_1063_p Some users have reported that after a power failure, the database sometimes cannot be opened. In this case, use a backup of the database or the Recover tool. Please report such problems. The plan is that the database automatically recovers in all situations. @faq_1064_h3 Why is Opening my Database Slow? @faq_1065_p To find out what the problem is, use the H2 Console and click on "Test Connection" instead of "Login". After "Login Successful" appears, click on it (it's a link). This will list the top stack traces. Then either analyze them yourself, or post those stack traces in the Google Group. @faq_1066_p Other possible reasons are: the database is very big (many GB), or contains linked tables that are slow to open. @faq_1067_h3 My Query is Slow @faq_1068_p A slow SELECT (or DELETE, UPDATE, MERGE) statement can have multiple causes. Follow this checklist: @faq_1069_li Run ANALYZE (see documentation for details). @faq_1070_li Run the query with EXPLAIN and check if indexes are used (see documentation for details). @faq_1071_li If required, create additional indexes and try again using ANALYZE and EXPLAIN. @faq_1072_li If this doesn't help, please report the problem. @faq_1073_h3 H2 is Very Slow @faq_1074_p By default, H2 closes the database when the last connection is closed. If your application closes the only connection after each operation, the database is opened and closed a lot, which is quite slow. There are multiple ways to solve this problem, see Database Performance Tuning. @faq_1075_h3 Column Names are Incorrect? @faq_1076_p For the query SELECT ID AS X FROM TEST the method ResultSetMetaData.getColumnName() returns ID, but I expect it to return X.
What's wrong? @faq_1077_p This is not a bug. According to the JDBC specification, the method ResultSetMetaData.getColumnName() should return the name of the column and not the alias name. If you need the alias name, use ResultSetMetaData.getColumnLabel(). Some other databases don't work like this yet (they don't follow the JDBC specification). If you need compatibility with those databases, use the Compatibility Mode, or append ;ALIAS_COLUMN_NAME=TRUE to the database URL. @faq_1078_p This also applies to DatabaseMetaData calls that return a result set. The columns in the JDBC API are column labels, not column names. @faq_1079_h3 Float is Double? @faq_1080_p For a table defined as CREATE TABLE TEST(X FLOAT) the method ResultSet.getObject() returns a java.lang.Double, but I expect it to return a java.lang.Float. What's wrong? @faq_1081_p This is not a bug. According to the JDBC specification, the JDBC data type FLOAT is equivalent to DOUBLE, and both are mapped to java.lang.Double. See also Mapping SQL and Java Types - 8.3.10 FLOAT. @faq_1082_h3 Is the GCJ Version Stable? Faster? @faq_1083_p The GCJ version is not as stable as the Java version. When running the regression test with the GCJ version, sometimes the application just stops at what seems to be a random point, without an error message. Currently, the GCJ version is also slower than when using the Sun VM. However, the startup of the GCJ version is faster than when using a VM. @faq_1084_h3 How to Translate this Project? @faq_1085_p For more information, see Build/Translating. @faq_1086_h3 How to Contribute to this Project? @faq_1087_p There are various ways to help develop an open source project like H2. The first step could be to translate the error messages and the GUI to your native language. Then, you could provide patches. Please start with small patches. That could be adding a test case to improve the code coverage (the target code coverage for this project is 90%, higher is better).
You will have to develop, build and run the tests. Once you are familiar with the code, you could implement missing features from the feature request list. I suggest starting with very small features that are easy to implement. Remember to provide test cases as well. @features_1000_h1 Features @features_1001_a Feature List @features_1002_a Comparison to Other Database Engines @features_1003_a H2 in Use @features_1004_a Connection Modes @features_1005_a Database URL Overview @features_1006_a Connecting to an Embedded (Local) Database @features_1007_a In-Memory Databases @features_1008_a Database Files Encryption @features_1009_a Database File Locking @features_1010_a Opening a Database Only if it Already Exists @features_1011_a Closing a Database @features_1012_a Ignore Unknown Settings @features_1013_a Changing Other Settings when Opening a Connection @features_1014_a Custom File Access Mode @features_1015_a Multiple Connections @features_1016_a Database File Layout @features_1017_a Logging and Recovery @features_1018_a Compatibility @features_1019_a Auto-Reconnect @features_1020_a Automatic Mixed Mode @features_1021_a Page Size @features_1022_a Using the Trace Options @features_1023_a Using Other Logging APIs @features_1024_a Read Only Databases @features_1025_a Read Only Databases in Zip or Jar File @features_1026_a Computed Columns / Function Based Index @features_1027_a Multi-Dimensional Indexes @features_1028_a User-Defined Functions and Stored Procedures @features_1029_a Pluggable or User-Defined Tables @features_1030_a Triggers @features_1031_a Compacting a Database @features_1032_a Cache Settings @features_1033_h2 Feature List @features_1034_h3 Main Features @features_1035_li Very fast database engine @features_1036_li Open source @features_1037_li Written in Java @features_1038_li Supports standard SQL, JDBC API @features_1039_li Embedded and Server mode, Clustering support @features_1040_li Strong security features @features_1041_li The PostgreSQL
ODBC driver can be used @features_1042_li Multi version concurrency @features_1043_h3 Additional Features @features_1044_li Disk based or in-memory databases and tables, read-only database support, temporary tables @features_1045_li Transaction support (read committed and serializable transaction isolation), 2-phase-commit @features_1046_li Multiple connections, table level locking @features_1047_li Cost based optimizer, using a genetic algorithm for complex queries, zero-administration @features_1048_li Scrollable and updatable result set support, large result set, external result sorting, functions can return a result set @features_1049_li Encrypted database (AES or XTEA), SHA-256 password encryption, encryption functions, SSL @features_1050_h3 SQL Support @features_1051_li Support for multiple schemas, information schema @features_1052_li Referential integrity / foreign key constraints with cascade, check constraints @features_1053_li Inner and outer joins, subqueries, read only views and inline views @features_1054_li Triggers and Java functions / stored procedures @features_1055_li Many built-in functions, including XML and lossless data compression @features_1056_li Wide range of data types including large objects (BLOB/CLOB) and arrays @features_1057_li Sequence and autoincrement columns, computed columns (can be used for function based indexes) @features_1058_code ORDER BY, GROUP BY, HAVING, UNION, LIMIT, TOP @features_1059_li Collation support, including support for the ICU4J library @features_1060_li Support for users and roles @features_1061_li Compatibility modes for IBM DB2, Apache Derby, HSQLDB, MS SQL Server, MySQL, Oracle, and PostgreSQL. 
@features_1062_h3 Security Features @features_1063_li Includes a solution for the SQL injection problem @features_1064_li User password authentication uses SHA-256 and salt @features_1065_li For server mode connections, user passwords are never transmitted in plain text over the network (even when using insecure connections; this only applies to the TCP server and not to the H2 Console however; it also doesn't apply if you set the password in the database URL) @features_1066_li All database files (including script files that can be used to backup data) can be encrypted using AES-128 and XTEA encryption algorithms @features_1067_li The remote JDBC driver supports TCP/IP connections over SSL/TLS @features_1068_li The built-in web server supports connections over SSL/TLS @features_1069_li Passwords can be sent to the database using char arrays instead of Strings @features_1070_h3 Other Features and Tools @features_1071_li Small footprint (smaller than 1.5 MB), low memory requirements @features_1072_li Multiple index types (b-tree, tree, hash) @features_1073_li Support for multi-dimensional indexes @features_1074_li CSV (comma separated values) file support @features_1075_li Support for linked tables, and a built-in virtual 'range' table @features_1076_li Supports the EXPLAIN PLAN statement; sophisticated trace options @features_1077_li Database closing can be delayed or disabled to improve the performance @features_1078_li Web-based Console application (translated to many languages) with autocomplete @features_1079_li The database can generate SQL script files @features_1080_li Contains a recovery tool that can dump the contents of the database @features_1081_li Support for variables (for example to calculate running totals) @features_1082_li Automatic re-compilation of prepared statements @features_1083_li Uses a small number of database files @features_1084_li Uses a checksum for each record and log entry for data integrity @features_1085_li Well tested (high code 
coverage, randomized stress tests) @features_1086_h2 Comparison to Other Database Engines @features_1087_p This comparison is based on H2 1.3, Apache Derby version 10.8, HSQLDB 2.2, MySQL 5.5, PostgreSQL 9.0. @features_1088_th Feature @features_1089_th H2 @features_1090_th Derby @features_1091_th HSQLDB @features_1092_th MySQL @features_1093_th PostgreSQL @features_1094_td Pure Java @features_1095_td Yes @features_1096_td Yes @features_1097_td Yes @features_1098_td No @features_1099_td No @features_1100_td Embedded Mode (Java) @features_1101_td Yes @features_1102_td Yes @features_1103_td Yes @features_1104_td No @features_1105_td No @features_1106_td In-Memory Mode @features_1107_td Yes @features_1108_td Yes @features_1109_td Yes @features_1110_td No @features_1111_td No @features_1112_td Explain Plan @features_1113_td Yes @features_1114_td Yes *12 @features_1115_td Yes @features_1116_td Yes @features_1117_td Yes @features_1118_td Built-in Clustering / Replication @features_1119_td Yes @features_1120_td Yes @features_1121_td No @features_1122_td Yes @features_1123_td Yes @features_1124_td Encrypted Database @features_1125_td Yes @features_1126_td Yes *10 @features_1127_td Yes *10 @features_1128_td No @features_1129_td No @features_1130_td Linked Tables @features_1131_td Yes @features_1132_td No @features_1133_td Partially *1 @features_1134_td Partially *2 @features_1135_td No @features_1136_td ODBC Driver @features_1137_td Yes @features_1138_td No @features_1139_td No @features_1140_td Yes @features_1141_td Yes @features_1142_td Fulltext Search @features_1143_td Yes @features_1144_td No @features_1145_td No @features_1146_td Yes @features_1147_td Yes @features_1148_td Domains (User-Defined Types) @features_1149_td Yes @features_1150_td No @features_1151_td Yes @features_1152_td Yes @features_1153_td Yes @features_1154_td Files per Database @features_1155_td Few @features_1156_td Many @features_1157_td Few @features_1158_td Many @features_1159_td Many 
@features_1160_td Row Level Locking @features_1161_td Yes *9 @features_1162_td Yes @features_1163_td Yes *9 @features_1164_td Yes @features_1165_td Yes @features_1166_td Multi Version Concurrency @features_1167_td Yes @features_1168_td No @features_1169_td Yes @features_1170_td Yes @features_1171_td Yes @features_1172_td Multi-Threaded Statement Processing @features_1173_td No *11 @features_1174_td Yes @features_1175_td Yes @features_1176_td Yes @features_1177_td Yes @features_1178_td Role Based Security @features_1179_td Yes @features_1180_td Yes *3 @features_1181_td Yes @features_1182_td Yes @features_1183_td Yes @features_1184_td Updatable Result Sets @features_1185_td Yes @features_1186_td Yes *7 @features_1187_td Yes @features_1188_td Yes @features_1189_td Yes @features_1190_td Sequences @features_1191_td Yes @features_1192_td Yes @features_1193_td Yes @features_1194_td No @features_1195_td Yes @features_1196_td Limit and Offset @features_1197_td Yes @features_1198_td Yes *13 @features_1199_td Yes @features_1200_td Yes @features_1201_td Yes @features_1202_td Window Functions @features_1203_td No *15 @features_1204_td No *15 @features_1205_td No @features_1206_td No @features_1207_td Yes @features_1208_td Temporary Tables @features_1209_td Yes @features_1210_td Yes *4 @features_1211_td Yes @features_1212_td Yes @features_1213_td Yes @features_1214_td Information Schema @features_1215_td Yes @features_1216_td No *8 @features_1217_td Yes @features_1218_td Yes @features_1219_td Yes @features_1220_td Computed Columns @features_1221_td Yes @features_1222_td Yes @features_1223_td Yes @features_1224_td No @features_1225_td Yes *6 @features_1226_td Case Insensitive Columns @features_1227_td Yes @features_1228_td Yes *14 @features_1229_td Yes @features_1230_td Yes @features_1231_td Yes *6 @features_1232_td Custom Aggregate Functions @features_1233_td Yes @features_1234_td No @features_1235_td Yes @features_1236_td Yes @features_1237_td Yes @features_1238_td CLOB/BLOB 
Compression @features_1239_td Yes @features_1240_td No @features_1241_td No @features_1242_td No @features_1243_td Yes @features_1244_td Footprint (jar/dll size) @features_1245_td ~1.5 MB *5 @features_1246_td ~3 MB @features_1247_td ~1.5 MB @features_1248_td ~4 MB @features_1249_td ~6 MB @features_1250_p *1 HSQLDB supports text tables. @features_1251_p *2 MySQL supports linked MySQL tables under the name 'federated tables'. @features_1252_p *3 Derby supports role-based security and password checking as an option. @features_1253_p *4 Derby only supports global temporary tables. @features_1254_p *5 The default H2 jar file contains debug information; jar files for other databases do not. @features_1255_p *6 PostgreSQL supports functional indexes. @features_1256_p *7 Derby only supports updatable result sets if the query is not sorted. @features_1257_p *8 Derby doesn't support standard-compliant information schema tables. @features_1258_p *9 When using MVCC (multi version concurrency). @features_1259_p *10 Derby and HSQLDB don't hide data patterns well. @features_1260_p *11 The MULTI_THREADED option is not enabled by default, and not yet supported when using MVCC. @features_1261_p *12 Derby doesn't support the EXPLAIN statement, but it supports runtime statistics and retrieving statement execution plans. @features_1262_p *13 Derby doesn't support the syntax LIMIT .. [OFFSET ..], however it supports FETCH FIRST .. ROW[S] ONLY. @features_1263_p *14 Using collations. *15 Derby and H2 support ROW_NUMBER() OVER(). @features_1264_h3 DaffodilDb and One$Db @features_1265_p It looks like the development of this database has stopped. The last release was February 2006. @features_1266_h3 McKoi @features_1267_p It looks like the development of this database has stopped. The last release was August 2004. @features_1268_h2 H2 in Use @features_1269_p For a list of applications that work with or use H2, see: Links.
@features_1270_h2 Connection Modes @features_1271_p The following connection modes are supported: @features_1272_li Embedded mode (local connections using JDBC) @features_1273_li Server mode (remote connections using JDBC or ODBC over TCP/IP) @features_1274_li Mixed mode (local and remote connections at the same time) @features_1275_h3 Embedded Mode @features_1276_p In embedded mode, an application opens a database from within the same JVM using JDBC. This is the fastest and easiest connection mode. The disadvantage is that a database may only be open in one virtual machine (and class loader) at any time. As in all modes, both persistent and in-memory databases are supported. There is no limit on the number of databases open concurrently, or on the number of open connections. @features_1277_h3 Server Mode @features_1278_p When using the server mode (sometimes called remote mode or client/server mode), an application opens a database remotely using the JDBC or ODBC API. A server needs to be started within the same or another virtual machine, or on another computer. Many applications can connect to the same database at the same time, by connecting to this server. Internally, the server process opens the database(s) in embedded mode. @features_1279_p The server mode is slower than the embedded mode, because all data is transferred over TCP/IP. As in all modes, both persistent and in-memory databases are supported. There is no limit on the number of databases open concurrently per server, or on the number of open connections. @features_1280_h3 Mixed Mode @features_1281_p The mixed mode is a combination of the embedded and the server mode. The first application that connects to a database does that in embedded mode, but also starts a server so that other applications (running in different processes or virtual machines) can concurrently access the same data.
The local connections are as fast as if the database is used in just the embedded mode, while the remote connections are a bit slower. @features_1282_p The server can be started and stopped from within the application (using the server API), or automatically (automatic mixed mode). When using the automatic mixed mode, all clients that want to connect to the database (no matter if it's a local or remote connection) can do so using the exact same database URL. @features_1283_h2 Database URL Overview @features_1284_p This database supports multiple connection modes and connection settings. This is achieved using different database URLs. Settings in the URLs are not case sensitive. @features_1285_th Topic @features_1286_th URL Format and Examples @features_1287_a Embedded (local) connection @features_1288_td jdbc:h2:[file:][<path>]<databaseName> @features_1289_td jdbc:h2:~/test @features_1290_td jdbc:h2:file:/data/sample @features_1291_td jdbc:h2:file:C:/data/sample (Windows only) @features_1292_a In-memory (private) @features_1293_td jdbc:h2:mem: @features_1294_a In-memory (named) @features_1295_td jdbc:h2:mem:<databaseName> @features_1296_td jdbc:h2:mem:test_mem @features_1297_a Server mode (remote connections) @features_1298_a using TCP/IP @features_1299_td jdbc:h2:tcp://<server>[:<port>]/[<path>]<databaseName> @features_1300_td jdbc:h2:tcp://localhost/~/test @features_1301_td jdbc:h2:tcp://dbserv:8084/~/sample @features_1302_td jdbc:h2:tcp://localhost/mem:test @features_1303_a Server mode (remote connections) @features_1304_a using SSL/TLS @features_1305_td jdbc:h2:ssl://<server>[:<port>]/<databaseName> @features_1306_td jdbc:h2:ssl://localhost:8085/~/sample; @features_1307_a Using encrypted files @features_1308_td jdbc:h2:<url>;CIPHER=[AES|XTEA] @features_1309_td jdbc:h2:ssl://localhost/~/test;CIPHER=AES @features_1310_td jdbc:h2:file:~/secure;CIPHER=XTEA @features_1311_a File locking methods @features_1312_td jdbc:h2:<url>;FILE_LOCK={FILE|SOCKET|NO} 
@features_1313_td jdbc:h2:file:~/private;CIPHER=XTEA;FILE_LOCK=SOCKET @features_1314_a Only open if it already exists @features_1315_td jdbc:h2:<url>;IFEXISTS=TRUE @features_1316_td jdbc:h2:file:~/sample;IFEXISTS=TRUE @features_1317_a Don't close the database when the VM exits @features_1318_td jdbc:h2:<url>;DB_CLOSE_ON_EXIT=FALSE @features_1319_a Execute SQL on connection @features_1320_td jdbc:h2:<url>;INIT=RUNSCRIPT FROM '~/create.sql' @features_1321_td jdbc:h2:file:~/sample;INIT=RUNSCRIPT FROM '~/create.sql'\;RUNSCRIPT FROM '~/populate.sql' @features_1322_a User name and/or password @features_1323_td jdbc:h2:<url>[;USER=<username>][;PASSWORD=<value>] @features_1324_td jdbc:h2:file:~/sample;USER=sa;PASSWORD=123 @features_1325_a Debug trace settings @features_1326_td jdbc:h2:<url>;TRACE_LEVEL_FILE=<level 0..3> @features_1327_td jdbc:h2:file:~/sample;TRACE_LEVEL_FILE=3 @features_1328_a Ignore unknown settings @features_1329_td jdbc:h2:<url>;IGNORE_UNKNOWN_SETTINGS=TRUE @features_1330_a Custom file access mode @features_1331_td jdbc:h2:<url>;ACCESS_MODE_DATA=rws @features_1332_a Database in a zip file @features_1333_td jdbc:h2:zip:<zipFileName>!/<databaseName> @features_1334_td jdbc:h2:zip:~/db.zip!/test @features_1335_a Compatibility mode @features_1336_td jdbc:h2:<url>;MODE=<databaseType> @features_1337_td jdbc:h2:~/test;MODE=MYSQL @features_1338_a Auto-reconnect @features_1339_td jdbc:h2:<url>;AUTO_RECONNECT=TRUE @features_1340_td jdbc:h2:tcp://localhost/~/test;AUTO_RECONNECT=TRUE @features_1341_a Automatic mixed mode @features_1342_td jdbc:h2:<url>;AUTO_SERVER=TRUE @features_1343_td jdbc:h2:~/test;AUTO_SERVER=TRUE @features_1344_a Page size @features_1345_td jdbc:h2:<url>;PAGE_SIZE=512 @features_1346_a Changing other settings @features_1347_td jdbc:h2:<url>;<setting>=<value>[;<setting>=<value>...] 
@features_1348_td jdbc:h2:file:~/sample;TRACE_LEVEL_SYSTEM_OUT=3 @features_1349_h2 Connecting to an Embedded (Local) Database @features_1350_p The database URL for connecting to a local database is jdbc:h2:[file:][<path>]<databaseName>. The prefix file: is optional. If no path or only a relative path is used, then the current working directory is used as a starting point. The case sensitivity of the path and database name depends on the operating system; however, it is recommended to use lowercase letters only. The database name must be at least three characters long (a limitation of File.createTempFile). To point to the user home directory, use ~/, as in: jdbc:h2:~/test. @features_1351_h2 In-Memory Databases @features_1352_p For certain use cases (for example: rapid prototyping, testing, high performance operations, read-only databases), it may not be required to persist data, or persist changes to the data. This database supports the in-memory mode, where the data is not persisted. @features_1353_p In some cases, only one connection to an in-memory database is required. This means the database to be opened is private. In this case, the database URL is jdbc:h2:mem:. Opening two connections within the same virtual machine means opening two different (private) databases. @features_1354_p Sometimes multiple connections to the same in-memory database are required. In this case, the database URL must include a name. Example: jdbc:h2:mem:db1. Accessing the same database using this URL only works within the same virtual machine and class loader environment. @features_1355_p To access an in-memory database from another process or from another computer, you need to start a TCP server in the same process in which the in-memory database was created. The other processes then need to access the database over TCP/IP or SSL/TLS, using a database URL such as: jdbc:h2:tcp://localhost/mem:db1. @features_1356_p By default, closing the last connection to a database closes the database. 
For an in-memory database, this means the content is lost. To keep the database open, add ;DB_CLOSE_DELAY=-1 to the database URL. To keep the content of an in-memory database as long as the virtual machine is alive, use jdbc:h2:mem:test;DB_CLOSE_DELAY=-1. @features_1357_h2 Database Files Encryption @features_1358_p The database files can be encrypted. Two encryption algorithms are supported: AES and XTEA. To use file encryption, you need to specify the encryption algorithm (the 'cipher') and the file password (in addition to the user password) when connecting to the database. @features_1359_h3 Creating a New Database with File Encryption @features_1360_p By default, a new database is automatically created if it does not exist yet. To create an encrypted database, connect to it as if it already existed. @features_1361_h3 Connecting to an Encrypted Database @features_1362_p The encryption algorithm is set in the database URL, and the file password is specified in the password field, before the user password. A single space separates the file password and the user password; the file password itself may not contain spaces. File passwords and user passwords are case sensitive. Here is an example of how to connect to a password-encrypted database: @features_1363_h3 Encrypting or Decrypting a Database @features_1364_p To encrypt an existing database, use the ChangeFileEncryption tool. This tool can also decrypt an encrypted database, or change the file encryption key. The tool is available from within the H2 Console in the tools section, or you can run it from the command line. The following command line will encrypt the database test in the user home directory with the file password filepwd and the encryption algorithm AES: @features_1365_h2 Database File Locking @features_1366_p Whenever a database is opened, a lock file is created to signal other processes that the database is in use. 
If the database is closed, or if the process that opened the database terminates, this lock file is deleted. @features_1367_p The following file locking methods are implemented: @features_1368_li The default method is FILE and uses a watchdog thread to protect the database file. The watchdog reads the lock file each second. @features_1369_li The second method is SOCKET and opens a server socket. The socket method does not require reading the lock file every second. The socket method should only be used if the database files are only accessed by one (and always the same) computer. @features_1370_li The third method is FS. It uses native file locking via FileChannel.lock. @features_1371_li It is also possible to open the database without file locking; in this case it is up to the application to protect the database files. Failing to do so will result in a corrupted database. Using the method NO forces the database to not create a lock file at all. Please note that this is unsafe, as another process is able to open the same database, possibly leading to data corruption. @features_1372_p To open the database with a different file locking method, use the parameter FILE_LOCK. The following code opens the database with the 'socket' locking method: @features_1373_p For more information about the algorithms, see Advanced / File Locking Protocols. @features_1374_h2 Opening a Database Only if it Already Exists @features_1375_p By default, when an application calls DriverManager.getConnection(url, ...) and the database specified in the URL does not yet exist, a new (empty) database is created. In some situations, it is better to restrict creating new databases, and only allow opening existing databases. To do this, add ;IFEXISTS=TRUE to the database URL. In this case, if the database does not already exist, an exception is thrown when trying to connect. The connection only succeeds when the database already exists. 
The complete URL may look like this: @features_1376_h2 Closing a Database @features_1377_h3 Delayed Database Closing @features_1378_p Usually, a database is closed when the last connection to it is closed. In some situations, this slows down the application, for example when it is not possible to keep at least one connection open. The automatic closing of a database can be delayed or disabled with the SQL statement SET DB_CLOSE_DELAY <seconds>. The parameter <seconds> specifies the number of seconds to keep a database open after the last connection to it was closed. The following statement will keep a database open for 10 seconds after the last connection was closed: @features_1379_p The value -1 means the database is not closed automatically. The value 0 is the default and means the database is closed when the last connection is closed. This setting is persistent and can be set by an administrator only. It is possible to set the value in the database URL: jdbc:h2:~/test;DB_CLOSE_DELAY=10. @features_1380_h3 Don't Close a Database when the VM Exits @features_1381_p By default, a database is closed when the last connection is closed. However, if it is never closed, the database is closed when the virtual machine exits normally, using a shutdown hook. In some situations the database should not be closed in this case, for example because it is still used at virtual machine shutdown (to store the shutdown process in the database). For those cases, the automatic closing of the database can be disabled in the database URL. The first connection (the one that is opening the database) needs to set the option in the database URL (it is not possible to change the setting afterwards). The database URL to disable database closing on exit is: @features_1382_h2 Execute SQL on Connection @features_1383_p Sometimes, particularly for in-memory databases, it is useful to be able to execute DDL or DML commands automatically when a client connects to a database. 
This functionality is enabled via the INIT property. Note that multiple commands may be passed to INIT, but the semicolon delimiter must be escaped, as in the example below. @features_1384_p Please note that the double backslash is only required in a Java or properties file. In a GUI, or in an XML file, only one backslash is required: @features_1385_p Backslashes within the init script (for example within a runscript statement, to specify the folder names in Windows) need to be escaped as well (using a second backslash). It might be simpler to avoid backslashes in folder names for this reason; use forward slashes instead. @features_1386_h2 Ignore Unknown Settings @features_1387_p Some applications (for example OpenOffice.org Base) pass some additional parameters when connecting to the database. Why those parameters are passed is unknown. The parameters PREFERDOSLIKELINEENDS and IGNOREDRIVERPRIVILEGES are such examples; they are simply ignored to improve the compatibility with OpenOffice.org. If an application passes other parameters when connecting to the database, usually the database throws an exception saying the parameter is not supported. It is possible to ignore such parameters by adding ;IGNORE_UNKNOWN_SETTINGS=TRUE to the database URL. @features_1388_h2 Changing Other Settings when Opening a Connection @features_1389_p In addition to the settings already described, other database settings can be passed in the database URL. Adding ;setting=value at the end of a database URL is the same as executing the statement SET setting value just after connecting. For a list of supported settings, see SQL Grammar or the DbSettings javadoc. @features_1390_h2 Custom File Access Mode @features_1391_p Usually, the database opens the database file with the access mode rw, meaning read-write (except for read-only databases, where the mode r is used). To open a database in read-only mode if the database file is not read-only, use ACCESS_MODE_DATA=r. Also supported are rws and rwd. 
This setting must be specified in the database URL: @features_1392_p For more information, see Durability Problems. On many operating systems the access mode rws does not guarantee that the data is written to the disk. @features_1393_h2 Multiple Connections @features_1394_h3 Opening Multiple Databases at the Same Time @features_1395_p An application can open multiple databases at the same time, including multiple connections to the same database. The number of open databases is only limited by the available memory. @features_1396_h3 Multiple Connections to the Same Database: Client/Server @features_1397_p If you want to access the same database at the same time from different processes or computers, you need to use the client/server mode. In this case, one process acts as the server, and the other processes (which could reside on other computers as well) connect to the server via TCP/IP (or SSL/TLS over TCP/IP for improved security). @features_1398_h3 Multithreading Support @features_1399_p This database is multithreading-safe. That means if an application is multi-threaded, it does not need to worry about synchronizing access to the database. Internally, most requests to the same database are synchronized. That means an application can use multiple threads that access the same database at the same time; however, if one thread executes a long-running query, the other threads need to wait. @features_1400_p An application should normally use one connection per thread. This database synchronizes access to the same connection, but other databases may not do this. @features_1401_h3 Locking, Lock-Timeout, Deadlocks @features_1402_p Unless multi-version concurrency is used, the database uses table-level locks to give each connection a consistent state of the data. There are two kinds of locks: read locks (shared locks) and write locks (exclusive locks). All locks are released when the transaction commits or rolls back. 
When using the default transaction isolation level 'read committed', read locks are already released after each statement. @features_1403_p If a connection wants to read from a table, and there is no write lock on the table, then a read lock is added to the table. If there is a write lock, then this connection waits for the other connection to release the lock. If a connection cannot get a lock for a specified time, then a lock timeout exception is thrown. @features_1404_p Usually, SELECT statements will generate read locks. This includes subqueries. Statements that modify data use write locks. It is also possible to lock a table exclusively without modifying data, using the statement SELECT ... FOR UPDATE. The statements COMMIT and ROLLBACK release all open locks. The commands SAVEPOINT and ROLLBACK TO SAVEPOINT don't affect locks. The locks are also released when the autocommit mode changes, and for connections with autocommit set to true (this is the default), locks are released after each statement. The following statements generate locks: @features_1405_th Type of Lock @features_1406_th SQL Statement @features_1407_td Read @features_1408_td SELECT * FROM TEST; @features_1409_td CALL SELECT MAX(ID) FROM TEST; @features_1410_td SCRIPT; @features_1411_td Write @features_1412_td SELECT * FROM TEST WHERE 1=0 FOR UPDATE; @features_1413_td Write @features_1414_td INSERT INTO TEST VALUES(1, 'Hello'); @features_1415_td INSERT INTO TEST SELECT * FROM TEST; @features_1416_td UPDATE TEST SET NAME='Hi'; @features_1417_td DELETE FROM TEST; @features_1418_td Write @features_1419_td ALTER TABLE TEST ...; @features_1420_td CREATE INDEX ... ON TEST ...; @features_1421_td DROP INDEX ...; @features_1422_p The time until a lock timeout exception is thrown can be set separately for each connection using the SQL command SET LOCK_TIMEOUT <milliseconds>. 
The initial lock timeout (that is, the timeout used for new connections) can be set using the SQL command SET DEFAULT_LOCK_TIMEOUT <milliseconds>. The default lock timeout is persistent. @features_1423_h3 Avoiding Deadlocks @features_1424_p To avoid deadlocks, ensure that all transactions lock the tables in the same order (for example in alphabetical order), and avoid upgrading read locks to write locks. Both can be achieved by explicitly locking tables using SELECT ... FOR UPDATE. @features_1425_h2 Database File Layout @features_1426_p The following files are created for persistent databases: @features_1427_th File Name @features_1428_th Description @features_1429_th Number of Files @features_1430_td test.h2.db @features_1431_td Database file. @features_1432_td Contains the transaction log, indexes, and data for all tables. @features_1433_td Format: <database>.h2.db @features_1434_td 1 per database @features_1435_td test.lock.db @features_1436_td Database lock file. @features_1437_td Automatically (re-)created while the database is in use. @features_1438_td Format: <database>.lock.db @features_1439_td 1 per database (only if in use) @features_1440_td test.trace.db @features_1441_td Trace file (if the trace option is enabled). @features_1442_td Contains trace information. @features_1443_td Format: <database>.trace.db @features_1444_td Renamed to <database>.trace.db.old if it gets too big. @features_1445_td 0 or 1 per database @features_1446_td test.lobs.db/* @features_1447_td Directory containing one file for each @features_1448_td BLOB or CLOB value larger than a certain size. @features_1449_td Format: <id>.t<tableId>.lob.db @features_1450_td 1 per large object @features_1451_td test.123.temp.db @features_1452_td Temporary file. @features_1453_td Contains a temporary blob or a large result set. 
@features_1454_td Format: <database>.<id>.temp.db @features_1455_td 1 per object @features_1456_h3 Moving and Renaming Database Files @features_1457_p Database name and location are not stored inside the database files. @features_1458_p While a database is closed, the files can be moved to another directory, and they can be renamed as well (as long as all files of the same database start with the same name and the respective extensions are unchanged). @features_1459_p As there is no platform-specific data in the files, they can be moved to other operating systems without problems. @features_1460_h3 Backup @features_1461_p When the database is closed, it is possible to back up the database files. @features_1462_p To back up data while the database is running, the SQL commands SCRIPT and BACKUP can be used. @features_1463_h2 Logging and Recovery @features_1464_p Whenever data is modified in the database and those changes are committed, the changes are written to the transaction log (except for in-memory objects). The changes to the main data area itself are usually written later on, to optimize disk access. If there is a power failure, the main data area is not up-to-date, but because the changes are in the transaction log, the next time the database is opened, the changes are re-applied automatically. @features_1465_h2 Compatibility @features_1466_p All database engines behave a little differently. Where possible, H2 supports the ANSI SQL standard, and tries to be compatible with other databases. There are still a few differences, however: @features_1467_p In MySQL, text columns are case insensitive by default, while in H2 they are case sensitive. However, H2 supports case-insensitive columns as well. To create the tables with case-insensitive text, append IGNORECASE=TRUE to the database URL (example: jdbc:h2:~/test;IGNORECASE=TRUE). @features_1468_h3 Compatibility Modes @features_1469_p For certain features, this database can emulate the behavior of specific databases. 
However, only a small subset of the differences between databases is implemented in this way. Here is the list of currently supported modes and the differences from the regular mode: @features_1470_h3 DB2 Compatibility Mode @features_1471_p To use the IBM DB2 mode, use the database URL jdbc:h2:~/test;MODE=DB2 or the SQL statement SET MODE DB2. @features_1472_li For aliased columns, ResultSetMetaData.getColumnName() returns the alias name and getTableName() returns null. @features_1473_li Support for the syntax [OFFSET .. ROW] [FETCH ... ONLY] as an alternative for LIMIT .. OFFSET. @features_1474_li Concatenating NULL with another value results in the other value. @features_1475_li Support the pseudo-table SYSIBM.SYSDUMMY1. @features_1476_h3 Derby Compatibility Mode @features_1477_p To use the Apache Derby mode, use the database URL jdbc:h2:~/test;MODE=Derby or the SQL statement SET MODE Derby. @features_1478_li For aliased columns, ResultSetMetaData.getColumnName() returns the alias name and getTableName() returns null. @features_1479_li For unique indexes, NULL is distinct. That means only one row with NULL in one of the columns is allowed. @features_1480_li Concatenating NULL with another value results in the other value. @features_1481_li Support the pseudo-table SYSIBM.SYSDUMMY1. @features_1482_h3 HSQLDB Compatibility Mode @features_1483_p To use the HSQLDB mode, use the database URL jdbc:h2:~/test;MODE=HSQLDB or the SQL statement SET MODE HSQLDB. @features_1484_li For aliased columns, ResultSetMetaData.getColumnName() returns the alias name and getTableName() returns null. @features_1485_li When converting the scale of decimal data, the number is only converted if the new scale is smaller than the current scale. Usually, the scale is converted and 0s are added if required. @features_1486_li For unique indexes, NULL is distinct. That means only one row with NULL in one of the columns is allowed. @features_1487_li Text can be concatenated using '+'. 
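As described above, a compatibility mode can be selected either in the database URL or at runtime with SET MODE. The following minimal sketch shows the URL form; the class and method names are illustrative (not part of H2), and actually connecting requires the H2 driver (org.h2.Driver) on the classpath:

```java
// Sketch: selecting a compatibility mode via the database URL,
// as described above. Class and method names are illustrative.
public class CompatibilityModeUrl {

    // Settings appended to the URL behave like SET statements
    // executed right after connecting; they are not case sensitive.
    static String withMode(String baseUrl, String mode) {
        return baseUrl + ";MODE=" + mode;
    }

    public static void main(String[] args) {
        // Same effect as executing: SET MODE DB2
        System.out.println(withMode("jdbc:h2:~/test", "DB2"));
        // With the H2 driver on the classpath:
        // Connection conn = DriverManager.getConnection(
        //         withMode("jdbc:h2:~/test", "DB2"), "sa", "");
    }
}
```

The same helper works for any of the modes listed here (DB2, Derby, HSQLDB, MSSQLServer, MySQL, Oracle, PostgreSQL).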
@features_1488_h3 MS SQL Server Compatibility Mode @features_1489_p To use the MS SQL Server mode, use the database URL jdbc:h2:~/test;MODE=MSSQLServer or the SQL statement SET MODE MSSQLServer. @features_1490_li For aliased columns, ResultSetMetaData.getColumnName() returns the alias name and getTableName() returns null. @features_1491_li Identifiers may be quoted using square brackets as in [Test]. @features_1492_li For unique indexes, NULL is distinct. That means only one row with NULL in one of the columns is allowed. @features_1493_li Concatenating NULL with another value results in the other value. @features_1494_li Text can be concatenated using '+'. @features_1495_h3 MySQL Compatibility Mode @features_1496_p To use the MySQL mode, use the database URL jdbc:h2:~/test;MODE=MySQL or the SQL statement SET MODE MySQL. @features_1497_li When inserting data, if a column is defined to be NOT NULL and NULL is inserted, then a 0 (or empty string, or the current timestamp for timestamp columns) value is used. Usually, this operation is not allowed and an exception is thrown. @features_1498_li Creating indexes in the CREATE TABLE statement is allowed using INDEX(..) or KEY(..). Example: create table test(id int primary key, name varchar(255), key idx_name(name)); @features_1499_li Metadata calls return identifiers in lower case. @features_1500_li When converting a floating point number to an integer, the fractional digits are not truncated, but the value is rounded. @features_1501_li Concatenating NULL with another value results in the other value. @features_1502_p Text comparison in MySQL is case insensitive by default, while in H2 it is case sensitive (as in most other databases). H2 does support case-insensitive text comparison, but it needs to be set separately, using SET IGNORECASE TRUE. This affects comparison using =, LIKE, REGEXP. 
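Following the MySQL-mode notes above, case-insensitive text comparison has to be enabled separately from the mode itself. A minimal sketch combining both settings in one URL (the class and method names are illustrative; IGNORECASE=TRUE and MODE=MySQL are the settings described in this document):

```java
// Sketch: combining MODE=MySQL with case-insensitive text columns,
// as described above. Class and method names are illustrative.
public class MySqlModeUrl {

    // IGNORECASE=TRUE makes text columns case insensitive, the same
    // effect as executing SET IGNORECASE TRUE before creating tables.
    static String mySqlCompatible(String baseUrl) {
        return baseUrl + ";MODE=MySQL;IGNORECASE=TRUE";
    }

    public static void main(String[] args) {
        System.out.println(mySqlCompatible("jdbc:h2:~/test"));
    }
}
```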
@features_1503_h3 Oracle Compatibility Mode @features_1504_p To use the Oracle mode, use the database URL jdbc:h2:~/test;MODE=Oracle or the SQL statement SET MODE Oracle. @features_1505_li For aliased columns, ResultSetMetaData.getColumnName() returns the alias name and getTableName() returns null. @features_1506_li When using unique indexes, multiple rows with NULL in all columns are allowed; however, multiple rows with the same values are otherwise not allowed. @features_1507_li Concatenating NULL with another value results in the other value. @features_1508_h3 PostgreSQL Compatibility Mode @features_1509_p To use the PostgreSQL mode, use the database URL jdbc:h2:~/test;MODE=PostgreSQL or the SQL statement SET MODE PostgreSQL. @features_1510_li For aliased columns, ResultSetMetaData.getColumnName() returns the alias name and getTableName() returns null. @features_1511_li When converting a floating point number to an integer, the fractional digits are not truncated, but the value is rounded. @features_1512_li The system columns CTID and OID are supported. @features_1513_li LOG(x) is base 10 in this mode. @features_1514_h2 Auto-Reconnect @features_1515_p The auto-reconnect feature causes the JDBC driver to reconnect to the database if the connection is lost. The automatic re-connect only occurs when auto-commit is enabled; if auto-commit is disabled, an exception is thrown. To enable this mode, append ;AUTO_RECONNECT=TRUE to the database URL. @features_1516_p Re-connecting will open a new session. After an automatic re-connect, variables and local temporary table definitions (excluding data) are re-created. The system table INFORMATION_SCHEMA.SESSION_STATE contains all client-side state that is re-created. @features_1517_p If another connection uses the database in exclusive mode (enabled using SET EXCLUSIVE 1 or SET EXCLUSIVE 2), then this connection will try to re-connect until the exclusive mode ends. 
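The auto-reconnect behavior described above is enabled entirely through the database URL. A minimal sketch, with placeholder server and database names (running the commented part requires the H2 driver on the classpath):

```java
// Sketch: enabling auto-reconnect for a remote connection,
// following the description above. Names are illustrative.
public class AutoReconnectUrl {

    static String autoReconnect(String server, String database) {
        // Auto-reconnect only works while auto-commit is enabled;
        // with auto-commit disabled, a lost connection throws instead.
        return "jdbc:h2:tcp://" + server + "/" + database + ";AUTO_RECONNECT=TRUE";
    }

    public static void main(String[] args) {
        String url = autoReconnect("localhost", "~/test");
        System.out.println(url);
        // With the H2 driver on the classpath:
        // try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
        //     // auto-commit is on by default, so reconnect can occur
        // }
    }
}
```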
@features_1518_h2 Automatic Mixed Mode @features_1519_p Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases. Example database URL: @features_1520_p Use the same URL for all connections to this database. Internally, when using this mode, the first connection to the database is made in embedded mode, and additionally a server is started internally (as a daemon thread). If the database is already open in another process, the server mode is used automatically. The IP address and port of the server are stored in the file .lock.db; that's why in-memory databases can't be supported. @features_1521_p The application that opens the first connection to the database uses the embedded mode, which is faster than the server mode. Therefore, the main application should open the database first if possible. The first connection automatically starts a server on a random port. This server allows remote connections, but only to this database (to ensure that, the client reads the .lock.db file and sends the random key that is stored there to the server). When the first connection is closed, the server stops. If other (remote) connections are still open, one of them will then start a server (auto-reconnect is enabled automatically). @features_1522_p All processes need to have access to the database files. If the first connection is closed (the connection that started the server), open transactions of other connections will be rolled back (this may not be a problem if you don't disable autocommit). Explicit client/server connections (using jdbc:h2:tcp:// or ssl://) are not supported. This mode is not supported for in-memory databases. @features_1523_p Here is an example of how to use this mode. 
Applications 1 and 2 are not necessarily started on the same computer, but they need to have access to the database files. Applications 1 and 2 are typically two different processes (however, they could run within the same process). @features_1524_p When using this feature, by default the server uses any free TCP port. The port can be set manually using AUTO_SERVER_PORT=9090. @features_1525_h2 Page Size @features_1526_p The page size for new databases is 2 KB (2048), unless the page size is set explicitly in the database URL using PAGE_SIZE= when the database is created. The page size of existing databases cannot be changed, so this property needs to be set when the database is created. @features_1527_h2 Using the Trace Options @features_1528_p To find problems in an application, it is sometimes good to see what database operations were executed. This database offers the following trace features: @features_1529_li Trace to System.out and/or to a file @features_1530_li Support for trace levels OFF, ERROR, INFO, DEBUG @features_1531_li The maximum size of the trace file can be set @features_1532_li It is possible to generate Java source code from the trace file @features_1533_li Trace can be enabled at runtime by manually creating a file @features_1534_h3 Trace Options @features_1535_p The simplest way to enable the trace option is setting it in the database URL. There are two settings, one for System.out tracing (TRACE_LEVEL_SYSTEM_OUT), and one for file tracing (TRACE_LEVEL_FILE). The trace levels are 0 for OFF, 1 for ERROR (the default), 2 for INFO, and 3 for DEBUG. A database URL with both levels set to DEBUG is: @features_1536_p The trace level can be changed at runtime by executing the SQL command SET TRACE_LEVEL_SYSTEM_OUT level (for System.out tracing) or SET TRACE_LEVEL_FILE level (for file tracing). Example: @features_1537_h3 Setting the Maximum Size of the Trace File @features_1538_p When using a high trace level, the trace file can get very big quickly. 
The default size limit is 16 MB; if the trace file exceeds this limit, it is renamed to .old and a new file is created. If another such file exists, it is deleted. To limit the size to a certain number of megabytes, use SET TRACE_MAX_FILE_SIZE mb. Example: @features_1539_h3 Java Code Generation @features_1540_p When setting the trace level to INFO or DEBUG, Java source code is generated as well. This simplifies reproducing problems. The trace file looks like this: @features_1541_p To filter the Java source code, use the ConvertTraceFile tool as follows: @features_1542_p The generated file Test.java will contain the Java source code. The generated source code may be too large to compile (the size of a Java method is limited). If this is the case, the source code needs to be split into multiple methods. The password is not listed in the trace file and therefore not included in the source code. @features_1543_h2 Using Other Logging APIs @features_1544_p By default, this database uses its own native 'trace' facility. This facility is called 'trace' and not 'log' within this database to avoid confusion with the transaction log. Trace messages can be written to both file and System.out. In most cases, this is sufficient; however, sometimes it is better to use the same facility as the application, for example Log4j. To do that, this database supports SLF4J. @features_1545_a SLF4J @features_1546_p is a simple facade for various logging APIs and allows plugging in the desired implementation at deployment time. SLF4J supports implementations such as Logback, Log4j, Jakarta Commons Logging (JCL), Java logging, x4juli, and Simple Log. @features_1547_p To enable SLF4J, set the file trace level to 4 in the database URL: @features_1548_p Changing the log mechanism is not possible after the database is open; that means executing the SQL statement SET TRACE_LEVEL_FILE 4 when the database is already open will not have the desired effect. 
To use SLF4J, all required jar files need to be in the classpath. The logger name is h2database. If it does not work, check the file <database>.trace.db for error messages. @features_1549_h2 Read Only Databases @features_1550_p If the database files are read-only, then the database is read-only as well. It is not possible to create new tables, or to add or modify data, in this database. Only SELECT and CALL statements are allowed. To create a read-only database, close the database. Then, make the database file read-only. When you open the database now, it is read-only. There are two ways an application can find out whether the database is read-only: by calling Connection.isReadOnly() or by executing the SQL statement CALL READONLY(). @features_1551_p Using the custom access mode r, the database can also be opened in read-only mode, even if the database file is not read-only. @features_1552_h2 Read Only Databases in Zip or Jar File @features_1553_p To create a read-only database in a zip file, first create a regular persistent database, and then create a backup. The database must not have pending changes; that means you need to close all connections to the database first. To speed up opening the read-only database and running queries, the database should be closed using SHUTDOWN DEFRAG. If you are using a database named test, an easy way to create a zip file is using the Backup tool. You can start the tool from the command line, or from within the H2 Console (Tools - Backup). Please note that the database must be closed when the backup is created. Therefore, the SQL statement BACKUP TO can not be used. @features_1554_p When the zip file is created, you can open the database in the zip file using the following database URL: @features_1555_p Databases in zip files are read-only. The performance for some queries will be slower than when using a regular database, because random access in zip files is not supported (only streaming).
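The database URL for a database inside a zip file, mentioned above, has the following form (the zip file and database names are illustrative; here the zip file data.zip contains a database named test):

```
jdbc:h2:zip:~/data.zip!/test
```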
How much this affects the performance depends on the queries and the data. The database is not read in memory; therefore large databases are supported as well. The same indexes are used as when using a regular database. @features_1556_p If the database is larger than a few megabytes, performance is much better if the database file is split into multiple smaller files, because random access in compressed files is not possible. See also the sample application ReadOnlyDatabaseInZip. @features_1557_h3 Opening a Corrupted Database @features_1558_p If a database cannot be opened because the boot info (the SQL script that is run at startup) is corrupted, then the database can be opened by specifying a database event listener. The exceptions are logged, but opening the database will continue. @features_1559_h2 Computed Columns / Function Based Index @features_1560_p A computed column is a column whose value is calculated before storing. The formula is evaluated when the row is inserted, and re-evaluated every time the row is updated. One use case is to automatically update the last-modification time: @features_1561_p Function indexes are not directly supported by this database, but they can be emulated by using computed columns. For example, if an index on the upper-case version of a column is required, create a computed column with the upper-case version of the original column, and create an index for this column: @features_1562_p When inserting data, it is not required (and not allowed) to specify a value for the upper-case version of the column, because the value is generated. But you can use the column when querying the table: @features_1563_h2 Multi-Dimensional Indexes @features_1564_p A tool is provided to execute efficient multi-dimensional (spatial) range queries. This database does not support a specialized spatial index (R-Tree or similar). Instead, the B-Tree index is used.
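The computed-column emulation of a function index described above can be sketched as follows (the table and column names are illustrative):

```sql
CREATE TABLE ADDRESS(
    ID INT PRIMARY KEY,
    NAME VARCHAR,
    UPPER_NAME VARCHAR AS UPPER(NAME)
);
CREATE INDEX IDX_UPPER_NAME ON ADDRESS(UPPER_NAME);

-- UPPER_NAME is generated, so it must not be specified on insert:
INSERT INTO ADDRESS(ID, NAME) VALUES(1, 'Miller');

-- but it can be used (and its index exploited) when querying:
SELECT * FROM ADDRESS WHERE UPPER_NAME = 'MILLER';
```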
For each record, the multi-dimensional key is converted (mapped) to a single dimensional (scalar) value. This value specifies the location on a space-filling curve. @features_1565_p Currently, Z-order (also called N-order or Morton-order) is used; a Hilbert curve could also be used, but the implementation is more complex. The algorithm to convert the multi-dimensional value is called bit-interleaving. The scalar value is indexed using a B-Tree index (usually using a computed column). @features_1566_p The method can result in a drastic performance improvement over just using an index on the first column. Depending on the data and number of dimensions, the improvement is usually more than a factor of 5. The tool generates a SQL query from a specified multi-dimensional range. The method used is not database dependent, and the tool can easily be ported to other databases. For an example of how to use the tool, please have a look at the sample code provided in TestMultiDimension.java. @features_1567_h2 User-Defined Functions and Stored Procedures @features_1568_p In addition to the built-in functions, this database supports user-defined Java functions. In this database, Java functions can be used as stored procedures as well. A function must be declared (registered) before it can be used. A function can be defined using source code, or as a reference to a compiled class that is available in the classpath. By default, the function aliases are stored in the current schema. @features_1569_h3 Referencing a Compiled Method @features_1570_p When referencing a method, the class must already be compiled and included in the classpath where the database is running. Only static Java methods are supported; both the class and the method must be public. Example Java class: @features_1571_p The Java function must be registered in the database by calling CREATE ALIAS ... FOR: @features_1572_p For a complete sample application, see src/test/org/h2/samples/Function.java.
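As a sketch of the registration described above, any public static method can be bound to an alias; here the JDK method java.lang.Math.sqrt is used, and the alias name MY_SQRT is illustrative:

```sql
-- register a public static Java method as a SQL function:
CREATE ALIAS MY_SQRT FOR "java.lang.Math.sqrt";

-- use it like a built-in function, or call it as a stored procedure:
SELECT MY_SQRT(2.0);
CALL MY_SQRT(2.0);
```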
@features_1573_h3 Declaring Functions as Source Code @features_1574_p When defining a function alias with source code, the database tries to compile the source code using the Sun Java compiler (the class com.sun.tools.javac.Main) if the tools.jar is in the classpath. If not, javac is run as a separate process. Only the source code is stored in the database; the class is compiled each time the database is re-opened. Source code is usually passed as dollar quoted text to avoid escaping problems; however, single quotes can be used as well. Example: @features_1575_p By default, the three packages java.util, java.math, java.sql are imported. The method name (nextPrime in the example above) is ignored. Method overloading is not supported when declaring functions as source code; that means only one method may be declared for an alias. If different import statements are required, they must be declared at the beginning and separated with the tag @CODE: @features_1576_p The following template is used to create a complete Java class: @features_1577_h3 Method Overloading @features_1578_p Multiple methods may be bound to a SQL function if the class is already compiled and included in the classpath. Each Java method must have a different number of arguments. Method overloading is not supported when declaring functions as source code. @features_1579_h3 Function Data Type Mapping @features_1580_p Functions that accept non-nullable parameters such as int will not be called if one of those parameters is NULL. Instead, the result of the function is NULL. If the function should be called if a parameter is NULL, you need to use java.lang.Integer instead. @features_1581_p SQL types are mapped to Java classes and vice-versa as in the JDBC API. For details, see Data Types. There are a few special cases: java.lang.Object is mapped to OTHER (a serialized object). Therefore, java.lang.Object can not be used to match all SQL types (matching all SQL types is not supported).
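To illustrate the NULL handling described above: a function that must see NULL arguments itself has to use the wrapper type. The class name, method name, and logic below are illustrative, not part of H2:

```java
// Illustrative user-defined function class (not part of H2).
public class Functions {
    // Declared with java.lang.Integer rather than int, so the database
    // would call this method even when the argument is NULL. With a
    // primitive int parameter the method would not be called at all,
    // and the result of the SQL function would simply be NULL.
    public static Integer addTen(Integer value) {
        return value == null ? null : value + 10;
    }
}
```

Such a method would then be registered with CREATE ALIAS ... FOR, as described above.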
The second special case is Object[]: arrays of any class are mapped to ARRAY. Objects of type org.h2.value.Value (the internal value class) are passed through without conversion. @features_1582_h3 Functions That Require a Connection @features_1583_p If the first parameter of a Java function is a java.sql.Connection, then the connection to the database is provided. This connection does not need to be closed before returning. When calling the method from within the SQL statement, this connection parameter does not need to be (and can not be) specified. @features_1584_h3 Functions Throwing an Exception @features_1585_p If a function throws an exception, then the current statement is rolled back and the exception is thrown to the application. SQLExceptions are directly re-thrown to the calling application; all other exceptions are first converted to a SQLException. @features_1586_h3 Functions Returning a Result Set @features_1587_p Functions may return a result set. Such a function can be called with the CALL statement: @features_1588_h3 Using SimpleResultSet @features_1589_p A function can create a result set using the SimpleResultSet tool: @features_1590_h3 Using a Function as a Table @features_1591_p A function that returns a result set can be used like a table. However, in this case the function is called at least twice: first while parsing the statement to collect the column names (with parameters set to null where not known at compile time), and then while executing the statement to get the data (possibly multiple times if this is a join). If the function is called just to get the column list, the URL of the connection passed to the function is jdbc:columnlist:connection. Otherwise, the URL of the connection is jdbc:default:connection. @features_1592_h2 Pluggable or User-Defined Tables @features_1593_p For situations where you need to expose other data sources to the SQL engine as a table, there are "pluggable tables".
For some examples, have a look at the code in org.h2.test.db.TestTableEngines. @features_1594_p In order to create your own TableEngine, you need to implement the org.h2.api.TableEngine interface, e.g. like this: @features_1595_p and then create the table from SQL like this: @features_1596_h2 Triggers @features_1597_p This database supports Java triggers that are called before or after a row is updated, inserted or deleted. Triggers can be used for complex consistency checks, or to update related data in the database. It is also possible to use triggers to simulate materialized views. For a complete sample application, see src/test/org/h2/samples/TriggerSample.java. A Java trigger must implement the interface org.h2.api.Trigger. The trigger class must be available in the classpath of the database engine (when using the server mode, it must be in the classpath of the server). @features_1598_p The connection can be used to query or update data in other tables. The trigger then needs to be defined in the database: @features_1599_p The trigger can be used to veto a change by throwing a SQLException. @features_1600_p As an alternative to implementing the Trigger interface, an application can extend the abstract class org.h2.tools.TriggerAdapter. This allows using the ResultSet interface within trigger implementations. In this case, only the fire method needs to be implemented: @features_1601_h2 Compacting a Database @features_1602_p Empty space in the database file is re-used automatically. When closing the database, the database is automatically compacted for up to 200 milliseconds by default. To compact more, use the SQL statement SHUTDOWN COMPACT. However, re-creating the database may further reduce the database size because this will re-build the indexes. Here is a sample function to do this: @features_1603_p See also the sample application org.h2.samples.Compact.
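The two compacting options above can be sketched as follows (the script file name is illustrative; the two approaches are alternatives):

```sql
-- option 1: compact as much as possible when closing:
SHUTDOWN COMPACT;

-- option 2: export to a script, then run the script against a
-- newly created (empty) database to re-build all tables and indexes:
SCRIPT TO 'backup.sql';
RUNSCRIPT FROM 'backup.sql';
```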
The commands SCRIPT / RUNSCRIPT can be used as well to create a backup of a database and re-build the database from the script. @features_1604_h2 Cache Settings @features_1605_p The database keeps most frequently used data in the main memory. The amount of memory used for caching can be changed using the setting CACHE_SIZE. This setting can be set in the database connection URL (jdbc:h2:~/test;CACHE_SIZE=131072), or it can be changed at runtime using SET CACHE_SIZE size. The size of the cache, as represented by CACHE_SIZE, is measured in KB (1 KB = 1024 bytes). This setting has no effect for in-memory databases. For persistent databases, the setting is stored in the database and re-used when the database is opened the next time. However, when opening an existing database, the cache size is set to at most half the amount of memory available for the virtual machine (Runtime.getRuntime().maxMemory()), even if the cache size setting stored in the database is larger; however, the setting stored in the database is kept. Setting the cache size in the database URL or explicitly using SET CACHE_SIZE overrides this value (even if larger than the physical memory). To get the current used maximum cache size, use the query SELECT * FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME = 'info.CACHE_MAX_SIZE'. @features_1606_p An experimental scan-resistant cache algorithm "Two Queue" (2Q) is available. To enable it, append ;CACHE_TYPE=TQ to the database URL. The cache might not actually improve performance. If you plan to use it, please run your own test cases first. @features_1607_p Also included is an experimental second level soft reference cache. Rows in this cache are only garbage collected on low memory. By default the second level cache is disabled. To enable it, use the prefix SOFT_. Example: jdbc:h2:~/test;CACHE_TYPE=SOFT_LRU. The cache might not actually improve performance. If you plan to use it, please run your own test cases first.
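The cache settings discussed above can be applied like this (the size value is illustrative):

```sql
-- change the cache size at runtime (size in KB):
SET CACHE_SIZE 65536;

-- query the currently used maximum cache size:
SELECT * FROM INFORMATION_SCHEMA.SETTINGS
WHERE NAME = 'info.CACHE_MAX_SIZE';
```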
@features_1608_p To get information about page reads and writes, and the current caching algorithm in use, call SELECT * FROM INFORMATION_SCHEMA.SETTINGS. The number of pages read / written is listed. @fragments_1000_div    @fragments_1001_label Search: @fragments_1002_label Highlight keyword(s) @fragments_1003_a Home @fragments_1004_a Download @fragments_1005_a Cheat Sheet @fragments_1006_b Documentation @fragments_1007_a Quickstart @fragments_1008_a Installation @fragments_1009_a Tutorial @fragments_1010_a Features @fragments_1011_a Performance @fragments_1012_a Advanced @fragments_1013_b Reference @fragments_1014_a SQL Grammar @fragments_1015_a Functions @fragments_1016_a Data Types @fragments_1017_a Javadoc @fragments_1018_a PDF (1 MB) @fragments_1019_b Support @fragments_1020_a FAQ @fragments_1021_a Error Analyzer @fragments_1022_a Google Group (English) @fragments_1023_a Google Group (Japanese) @fragments_1024_a Google Group (Chinese) @fragments_1025_b Appendix @fragments_1026_a History & Roadmap @fragments_1027_a License @fragments_1028_a Build @fragments_1029_a Links @fragments_1030_a JaQu @fragments_1031_a MVStore @fragments_1032_td   @frame_1000_h1 H2 Database Engine @frame_1001_p Welcome to H2, the free SQL database. The main features of H2 are: @frame_1002_li It is free to use for everybody, source code is included @frame_1003_li Written in Java, but also available as native executable @frame_1004_li JDBC and (partial) ODBC API @frame_1005_li Embedded and client/server modes @frame_1006_li Clustering is supported @frame_1007_li A web client is included @frame_1008_h2 No Javascript @frame_1009_p If you are not automatically redirected to the main page, then Javascript is currently disabled or your browser does not support Javascript. Some features (for example the integrated search) require Javascript.
@frame_1010_p Please enable Javascript, or go ahead without it: H2 Database Engine @history_1000_h1 History and Roadmap @history_1001_a Change Log @history_1002_a Roadmap @history_1003_a History of this Database Engine @history_1004_a Why Java @history_1005_a Supporters @history_1006_h2 Change Log @history_1007_p The up-to-date change log is available at http://www.h2database.com/html/changelog.html @history_1008_h2 Roadmap @history_1009_p The current roadmap is available at http://www.h2database.com/html/roadmap.html @history_1010_h2 History of this Database Engine @history_1011_p The development of H2 was started in May 2004, but it was first published on December 14th 2005. The main author of H2, Thomas Mueller, is also the original developer of Hypersonic SQL. In 2001, he joined PointBase Inc., where he wrote PointBase Micro, a commercial Java SQL database. At that point, he had to discontinue Hypersonic SQL. The HSQLDB Group was formed to continue working on the Hypersonic SQL codebase. The name H2 stands for Hypersonic 2; however, H2 does not share code with Hypersonic SQL or HSQLDB. H2 is built from scratch. @history_1012_h2 Why Java @history_1013_p The main reasons to use a Java database are: @history_1014_li Very simple to integrate in Java applications @history_1015_li Support for many different platforms @history_1016_li More secure than native applications (no buffer overflows) @history_1017_li User defined functions (or triggers) run very fast @history_1018_li Unicode support @history_1019_p Some think Java is too slow for low level operations, but this is no longer true. Garbage collection, for example, is now faster than manual memory management. @history_1020_p Developing Java code is faster than developing C or C++ code. When using Java, most time can be spent on improving the algorithms instead of porting the code to different platforms or doing memory management. Features such as Unicode and network libraries are already built-in.
In Java, writing secure code is easier because buffer overflows can not occur. Features such as reflection can be used for randomized testing. @history_1021_p Java is future proof: a lot of companies support Java. Java is now open source. @history_1022_p To increase the portability and ease of use, this software depends on very few libraries. Features that are not available in open source Java implementations (such as Swing) are not used, or only used for optional features. @history_1023_h2 Supporters @history_1024_p Many thanks to those who reported bugs, gave valuable feedback, spread the word, and translated this project. Also many thanks to the donors: @history_1025_li Martin Wildam, Austria @history_1026_a Code Lutin, France @history_1027_a Code 42 Software, Inc., Minneapolis @history_1028_a NetSuxxess GmbH, Germany @history_1029_a Poker Copilot, Steve McLeod, Germany @history_1030_a SkyCash, Poland @history_1031_a Lumber-mill, Inc., Japan @history_1032_a StockMarketEye, USA @history_1033_a Eckenfelder GmbH & Co.KG, Germany @history_1034_li Richard Hickey, USA @history_1035_li Alessio Jacopo D'Adamo, Italy @history_1036_li Ashwin Jayaprakash, USA @history_1037_li Donald Bleyl, USA @history_1038_li Frank Berger, Germany @history_1039_li Florent Ramiere, France @history_1040_li Jun Iyama, Japan @history_1041_li Antonio Casqueiro, Portugal @history_1042_li Oliver Computing LLC, USA @history_1043_li Harpal Grover Consulting Inc., USA @history_1044_li Elisabetta Berlini, Italy @history_1045_li William Gilbert, USA @history_1046_li Antonio Dieguez Rojas, Chile @history_1047_a Ontology Works, USA @history_1048_li Pete Haidinyak, USA @history_1049_li William Osmond, USA @history_1050_li Joachim Ansorg, Germany @history_1051_li Oliver Soerensen, Germany @history_1052_li Christos Vasilakis, Greece @history_1053_li Fyodor Kupolov, Denmark @history_1054_li Jakob Jenkov, Denmark @history_1055_li Stéphane Chartrand, Switzerland @history_1056_li Glenn Kidd, USA
@history_1057_li Gustav Trede, Sweden @history_1058_li Joonas Pulakka, Finland @history_1059_li Bjorn Darri Sigurdsson, Iceland @history_1060_li Iyama Jun, Japan @history_1061_li Gray Watson, USA @history_1062_li Erik Dick, Germany @history_1063_li Pengxiang Shao, China @history_1064_li Bilingual Marketing Group, USA @history_1065_li Philippe Marschall, Switzerland @history_1066_li Knut Staring, Norway @history_1067_li Theis Borg, Denmark @history_1068_li Joel A. Garringer, USA @history_1069_li Olivier Chafik, France @history_1070_li Rene Schwietzke, Germany @history_1071_li Jalpesh Patadia, USA @history_1072_li Takanori Kawashima, Japan @history_1073_li Terrence JC Huang, China @history_1074_a JiaDong Huang, Australia @history_1075_li Laurent van Roy, Belgium @history_1076_li Qian Chen, China @installation_1000_h1 Installation @installation_1001_a Requirements @installation_1002_a Supported Platforms @installation_1003_a Installing the Software @installation_1004_a Directory Structure @installation_1005_h2 Requirements @installation_1006_p To run this database, the following software stack is known to work. Other software most likely also works, but is not tested as much. @installation_1007_h3 Database Engine @installation_1008_li Windows XP or Vista, Mac OS X, or Linux @installation_1009_li Sun Java 6 or newer @installation_1010_li Recommended Windows file system: NTFS (FAT32 only supports files up to 4 GB) @installation_1011_h3 H2 Console @installation_1012_li Mozilla Firefox @installation_1013_h2 Supported Platforms @installation_1014_p As this database is written in Java, it can run on many different platforms. It is tested with Java 6 and 7. Currently, the database is developed and tested on Windows 8 and Mac OS X using Java 6, but it also works in many other operating systems and using other Java runtime environments. All major operating systems (Windows XP, Windows Vista, Windows 7, Mac OS, Ubuntu,...) are supported. 
@installation_1015_h2 Installing the Software @installation_1016_p To install the software, run the installer or unzip it to a directory of your choice. @installation_1017_h2 Directory Structure @installation_1018_p After installing, you should get the following directory structure: @installation_1019_th Directory @installation_1020_th Contents @installation_1021_td bin @installation_1022_td JAR and batch files @installation_1023_td docs @installation_1024_td Documentation @installation_1025_td docs/html @installation_1026_td HTML pages @installation_1027_td docs/javadoc @installation_1028_td Javadoc files @installation_1029_td ext @installation_1030_td External dependencies (downloaded when building) @installation_1031_td service @installation_1032_td Tools to run the database as a Windows Service @installation_1033_td src @installation_1034_td Source files @installation_1035_td src/docsrc @installation_1036_td Documentation sources @installation_1037_td src/installer @installation_1038_td Installer, shell, and release build script @installation_1039_td src/main @installation_1040_td Database engine source code @installation_1041_td src/test @installation_1042_td Test source code @installation_1043_td src/tools @installation_1044_td Tools and database adapters source code @jaqu_1000_h1 JaQu @jaqu_1001_a What is JaQu @jaqu_1002_a Differences to Other Data Access Tools @jaqu_1003_a Current State @jaqu_1004_a Building the JaQu Library @jaqu_1005_a Requirements @jaqu_1006_a Example Code @jaqu_1007_a Configuration @jaqu_1008_a Natural Syntax @jaqu_1009_a Other Ideas @jaqu_1010_a Similar Projects @jaqu_1011_h2 What is JaQu @jaqu_1012_p Note: This project is currently in maintenance mode. A friendly fork of JaQu is available under the name iciql. @jaqu_1013_p JaQu stands for Java Query and allows accessing databases using pure Java. JaQu provides a fluent interface (or internal DSL).
JaQu is something like LINQ for Java (LINQ stands for "language integrated query" and is a Microsoft .NET technology). The following JaQu code: @jaqu_1014_p stands for the SQL statement: @jaqu_1015_h2 Differences to Other Data Access Tools @jaqu_1016_p Unlike SQL, JaQu can be easily integrated in Java applications. Because JaQu is pure Java, auto-complete in the IDE is supported. Type checking is performed by the compiler. JaQu fully protects against SQL injection. @jaqu_1017_p JaQu is meant as a replacement for JDBC and SQL, not so much as a replacement for tools like Hibernate. With JaQu, you don't write SQL statements as strings. JaQu is much smaller and simpler than other persistence frameworks such as Hibernate, but it also does not provide all the features of those. Unlike iBatis and Hibernate, no XML or annotation based configuration is required; instead the configuration (if required at all) is done in pure Java, within the application. @jaqu_1018_p JaQu does not require or contain any data caching mechanism. Like JDBC and iBatis, JaQu provides full control over when and what SQL statements are executed (but without having to write SQL statements as strings). @jaqu_1019_h3 Restrictions @jaqu_1020_p Primitive types (e.g. boolean, int, long, double) are not supported. Use java.lang.Boolean, Integer, Long, Double instead. @jaqu_1021_h3 Why in Java? @jaqu_1022_p Most applications are written in Java. Mixing Java and another language (for example Scala or Groovy) in the same application is complicated: you would need to split the application and database code, and write adapter / wrapper code. @jaqu_1023_h2 Current State @jaqu_1024_p Currently, JaQu is only tested with the H2 database. The API may change in future versions.
JaQu is not part of the h2 jar file, however the source code is included in H2, under: @jaqu_1025_code src/test/org/h2/test/jaqu/* @jaqu_1026_li (samples and tests) @jaqu_1027_code src/tools/org/h2/jaqu/* @jaqu_1028_li (framework) @jaqu_1029_h2 Building the JaQu Library @jaqu_1030_p To create the JaQu jar file, run: build jarJaqu. This will create the file bin/h2jaqu.jar. @jaqu_1031_h2 Requirements @jaqu_1032_p JaQu requires Java 6. Annotations are not needed. Currently, JaQu is only tested with the H2 database engine, however in theory it should work with any database that supports the JDBC API. @jaqu_1033_h2 Example Code @jaqu_1034_h2 Configuration @jaqu_1035_p JaQu does not require any configuration when using the default field to column mapping. To define table indices, or if you want to map a class to a table with a different name, or a field to a column with another name, create a function called define in the data class. Example: @jaqu_1036_p The method define() contains the mapping definition. It is called once when the class is used for the first time. Like annotations, the mapping is defined in the class itself. Unlike when using annotations, the compiler can check the syntax even for multi-column objects (multi-column indexes, multi-column primary keys and so on). Because the definition is written in Java, the configuration can be set at runtime, which is not possible using annotations. Unlike XML mapping configuration, the configuration is integrated in the class itself. @jaqu_1037_h2 Natural Syntax @jaqu_1038_p The plan is to support a more natural (pure Java) syntax in conditions. To do that, the condition class is de-compiled to a SQL condition. A proof of concept decompiler is included (but it doesn't fully work yet; patches are welcome). The planned syntax is: @jaqu_1039_h2 Other Ideas @jaqu_1040_p This project has just been started, and nothing is fixed yet. Some ideas are: @jaqu_1041_li Support queries on collections (instead of using a database).
@jaqu_1042_li Provide API level compatibility with JPA (so that JaQu can be used as an extension of JPA). @jaqu_1043_li Internally use a JPA implementation (for example Hibernate) instead of SQL directly. @jaqu_1044_li Use PreparedStatements and cache them. @jaqu_1045_h2 Similar Projects @jaqu_1046_a iciql (a friendly fork of JaQu) @jaqu_1047_a Cement Framework @jaqu_1048_a Dreamsource ORM @jaqu_1049_a Empire-db @jaqu_1050_a JEQUEL: Java Embedded QUEry Language @jaqu_1051_a Joist @jaqu_1052_a jOOQ @jaqu_1053_a JoSQL @jaqu_1054_a LIQUidFORM @jaqu_1055_a Quaere (Alias implementation) @jaqu_1056_a Quaere @jaqu_1057_a Querydsl @jaqu_1058_a Squill @license_1000_h1 License @license_1001_a Summary and License FAQ @license_1002_a H2 License - Version 1.0 @license_1003_a Eclipse Public License - Version 1.0 @license_1004_a Export Control Classification Number (ECCN) @license_1005_h2 Summary and License FAQ @license_1006_p H2 is dual licensed and available under a modified version of the MPL 1.1 (Mozilla Public License) or under the (unmodified) EPL 1.0 (Eclipse Public License). The changes to the MPL are @license_1007_em underlined. There is a license FAQ for both the MPL and the EPL, most of that is applicable to the H2 license as well. @license_1008_li You can use H2 for free. You can integrate it into your applications (including in commercial applications), and you can distribute it. @license_1009_li Files containing only your code are not covered by this license (it is 'commercial friendly'). @license_1010_li Modifications to the H2 source code must be published. @license_1011_li You don't need to provide the source code of H2 if you did not modify anything. @license_1012_li If you distribute a binary that includes H2, you need to add a disclaimer of liability - see the example below. @license_1013_p However, nobody is allowed to rename H2, modify it a little, and sell it as a database engine without telling the customers it is in fact H2. 
This happened to HSQLDB: a company called 'bungisoft' copied HSQLDB, renamed it to 'RedBase', and tried to sell it, hiding the fact that it was just HSQLDB. It seems 'bungisoft' does not exist any more, but you can use the Wayback Machine and visit old web pages of http://www.bungisoft.com. @license_1014_p About porting the source code to another language (for example C# or C++): converted source code (even if done manually) stays under the same copyright and license as the original code. The copyright of the ported source code does not (automatically) go to the person who ported the code. @license_1015_p If you distribute a binary that includes H2, you need to add the license and a disclaimer of liability (as you should do for your own code). You should add a disclaimer for each open source library you use. For example, add a file 3rdparty_license.txt in the directory where the jar files are, and list all open source libraries, each one with its license and disclaimer. For H2, a simple solution is to copy the text below. You may also include a copy of the complete license. @license_1016_h2 H2 License - Version 1.0 @license_1017_h3 1. Definitions @license_1018_b 1.0.1. "Commercial Use" @license_1019_p means distribution or otherwise making the Covered Code available to a third party. @license_1020_b 1.1. "Contributor" @license_1021_p means each entity that creates or contributes to the creation of Modifications. @license_1022_b 1.2. "Contributor Version" @license_1023_p means the combination of the Original Code, prior Modifications used by a Contributor, and the Modifications made by that particular Contributor. @license_1024_b 1.3. "Covered Code" @license_1025_p means the Original Code or Modifications or the combination of the Original Code and Modifications, in each case including portions thereof. @license_1026_b 1.4.
"Electronic Distribution Mechanism" @license_1027_p means a mechanism generally accepted in the software development community for the electronic transfer of data. @license_1028_b 1.5. "Executable" @license_1029_p means Covered Code in any form other than Source Code. @license_1030_b 1.6. "Initial Developer" @license_1031_p means the individual or entity identified as the Initial Developer in the Source Code notice required by Exhibit A. @license_1032_b 1.7. "Larger Work" @license_1033_p means a work which combines Covered Code or portions thereof with code not governed by the terms of this License. @license_1034_b 1.8. "License" @license_1035_p means this document. @license_1036_b 1.8.1. "Licensable" @license_1037_p means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently acquired, any and all of the rights conveyed herein. @license_1038_b 1.9. "Modifications" @license_1039_p means any addition to or deletion from the substance or structure of either the Original Code or any previous Modifications. When Covered Code is released as a series of files, a Modification is: @license_1040_p 1.9.a. Any addition to or deletion from the contents of a file containing Original Code or previous Modifications. @license_1041_p 1.9.b. Any new file that contains any part of the Original Code or previous Modifications. @license_1042_b 1.10. "Original Code" @license_1043_p means Source Code of computer software code which is described in the Source Code notice required by Exhibit A as Original Code, and which, at the time of its release under this License is not already Covered Code governed by this License. @license_1044_b 1.10.1. "Patent Claims" @license_1045_p means any patent claim(s), now owned or hereafter acquired, including without limitation, method, process, and apparatus claims, in any patent Licensable by grantor. @license_1046_b 1.11. 
"Source Code" @license_1047_p means the preferred form of the Covered Code for making modifications to it, including all modules it contains, plus any associated interface definition files, scripts used to control compilation and installation of an Executable, or source code differential comparisons against either the Original Code or another well known, available Covered Code of the Contributor's choice. The Source Code can be in a compressed or archival form, provided the appropriate decompression or de-archiving software is widely available for no charge. @license_1048_b 1.12. "You" (or "Your") @license_1049_p means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License or a future version of this License issued under Section 6.1. For legal entities, "You" includes any entity which controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. @license_1050_h3 2. Source Code License @license_1051_h4 2.1. The Initial Developer Grant @license_1052_p The Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license, subject to third party intellectual property claims: @license_1053_p 2.1.a. under intellectual property rights (other than patent or trademark) Licensable by Initial Developer to use, reproduce, modify, display, perform, sublicense and distribute the Original Code (or portions thereof) with or without Modifications, and/or as part of a Larger Work; and @license_1054_p 2.1.b. under Patents Claims infringed by the making, using or selling of Original Code, to make, have made, use, practice, sell, and offer for sale, and/or otherwise dispose of the Original Code (or portions thereof). 
@license_1055_p 2.1.c. the licenses granted in this Section 2.1 (a) and (b) are effective on the date Initial Developer first distributes Original Code under the terms of this License. @license_1056_p 2.1.d. Notwithstanding Section 2.1 (b) above, no patent license is granted: 1) for code that You delete from the Original Code; 2) separate from the Original Code; or 3) for infringements caused by: i) the modification of the Original Code or ii) the combination of the Original Code with other software or devices. @license_1057_h4 2.2. Contributor Grant @license_1058_p Subject to third party intellectual property claims, each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license @license_1059_p 2.2.a. under intellectual property rights (other than patent or trademark) Licensable by Contributor, to use, reproduce, modify, display, perform, sublicense and distribute the Modifications created by such Contributor (or portions thereof) either on an unmodified basis, with other Modifications, as Covered Code and/or as part of a Larger Work; and @license_1060_p 2.2.b. under Patent Claims infringed by the making, using, or selling of Modifications made by that Contributor either alone and/or in combination with its Contributor Version (or portions of such combination), to make, use, sell, offer for sale, have made, and/or otherwise dispose of: 1) Modifications made by that Contributor (or portions thereof); and 2) the combination of Modifications made by that Contributor with its Contributor Version (or portions of such combination). @license_1061_p 2.2.c. the licenses granted in Sections 2.2 (a) and 2.2 (b) are effective on the date Contributor first makes Commercial Use of the Covered Code. @license_1062_p 2.2.d. 
Notwithstanding Section 2.2 (b) above, no patent license is granted: 1) for any code that Contributor has deleted from the Contributor Version; 2) separate from the Contributor Version; 3) for infringements caused by: i) third party modifications of Contributor Version or ii) the combination of Modifications made by that Contributor with other software (except as part of the Contributor Version) or other devices; or 4) under Patent Claims infringed by Covered Code in the absence of Modifications made by that Contributor. @license_1063_h3 3. Distribution Obligations @license_1064_h4 3.1. Application of License @license_1065_p The Modifications which You create or to which You contribute are governed by the terms of this License, including without limitation Section 2.2. The Source Code version of Covered Code may be distributed only under the terms of this License or a future version of this License released under Section 6.1, and You must include a copy of this License with every copy of the Source Code You distribute. You may not offer or impose any terms on any Source Code version that alters or restricts the applicable version of this License or the recipients' rights hereunder. However, You may include an additional document offering the additional rights described in Section 3.5. @license_1066_h4 3.2. Availability of Source Code @license_1067_p Any Modification which You create or to which You contribute must be made available in Source Code form under the terms of this License either on the same media as an Executable version or via an accepted Electronic Distribution Mechanism to anyone to whom you made an Executable version available; and if made available via Electronic Distribution Mechanism, must remain available for at least twelve (12) months after the date it initially became available, or at least six (6) months after a subsequent version of that particular Modification has been made available to such recipients. 
You are responsible for ensuring that the Source Code version remains available even if the Electronic Distribution Mechanism is maintained by a third party. @license_1068_h4 3.3. Description of Modifications @license_1069_p You must cause all Covered Code to which You contribute to contain a file documenting the changes You made to create that Covered Code and the date of any change. You must include a prominent statement that the Modification is derived, directly or indirectly, from Original Code provided by the Initial Developer and including the name of the Initial Developer in (a) the Source Code, and (b) in any notice in an Executable version or related documentation in which You describe the origin or ownership of the Covered Code. @license_1070_h4 3.4. Intellectual Property Matters @license_1071_b 3.4.a. Third Party Claims: @license_1072_p If Contributor has knowledge that a license under a third party's intellectual property rights is required to exercise the rights granted by such Contributor under Sections 2.1 or 2.2, Contributor must include a text file with the Source Code distribution titled "LEGAL" which describes the claim and the party making the claim in sufficient detail that a recipient will know whom to contact. If Contributor obtains such knowledge after the Modification is made available as described in Section 3.2, Contributor shall promptly modify the LEGAL file in all copies Contributor makes available thereafter and shall take other steps (such as notifying appropriate mailing lists or newsgroups) reasonably calculated to inform those who received the Covered Code that new knowledge has been obtained. @license_1073_b 3.4.b. Contributor APIs: @license_1074_p If Contributor's Modifications include an application programming interface and Contributor has knowledge of patent licenses which are reasonably necessary to implement that API, Contributor must also include this information in the legal file. @license_1075_b 3.4.c. 
Representations: @license_1076_p Contributor represents that, except as disclosed pursuant to Section 3.4 (a) above, Contributor believes that Contributor's Modifications are Contributor's original creation(s) and/or Contributor has sufficient rights to grant the rights conveyed by this License. @license_1077_h4 3.5. Required Notices @license_1078_p You must duplicate the notice in Exhibit A in each file of the Source Code. If it is not possible to put such notice in a particular Source Code file due to its structure, then You must include such notice in a location (such as a relevant directory) where a user would be likely to look for such a notice. If You created one or more Modification(s) You may add your name as a Contributor to the notice described in Exhibit A. You must also duplicate this License in any documentation for the Source Code where You describe recipients' rights or ownership rights relating to Covered Code. You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Code. However, You may do so only on Your own behalf, and not on behalf of the Initial Developer or any Contributor. You must make it absolutely clear that any such warranty, support, indemnity or liability obligation is offered by You alone, and You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of warranty, support, indemnity or liability terms You offer. @license_1079_h4 3.6. Distribution of Executable Versions @license_1080_p You may distribute Covered Code in Executable form only if the requirements of Sections 3.1, 3.2, 3.3, 3.4 and 3.5 have been met for that Covered Code, and if You include a notice stating that the Source Code version of the Covered Code is available under the terms of this License, including a description of how and where You have fulfilled the obligations of Section 3.2. 
The notice must be conspicuously included in any notice in an Executable version, related documentation or collateral in which You describe recipients' rights relating to the Covered Code. You may distribute the Executable version of Covered Code or ownership rights under a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable version does not attempt to limit or alter the recipient's rights in the Source Code version from the rights set forth in this License. If You distribute the Executable version under a different license You must make it absolutely clear that any terms which differ from this License are offered by You alone, not by the Initial Developer or any Contributor. You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of any such terms You offer. @license_1081_h4 3.7. Larger Works @license_1082_p You may create a Larger Work by combining Covered Code with other code not governed by the terms of this License and distribute the Larger Work as a single product. In such a case, You must make sure the requirements of this License are fulfilled for the Covered Code. @license_1083_h3 4. Inability to Comply Due to Statute or Regulation. @license_1084_p If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Code due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be included in the legal file described in Section 3.4 and must be included with all distributions of the Source Code. 
Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. @license_1085_h3 5. Application of this License. @license_1086_p This License applies to code to which the Initial Developer has attached the notice in Exhibit A and to related Covered Code. @license_1087_h3 6. Versions of the License. @license_1088_h4 6.1. New Versions @license_1089_p The @license_1090_em H2 Group may publish revised and/or new versions of the License from time to time. Each version will be given a distinguishing version number. @license_1091_h4 6.2. Effect of New Versions @license_1092_p Once Covered Code has been published under a particular version of the License, You may always continue to use it under the terms of that version. You may also choose to use such Covered Code under the terms of any subsequent version of the License published by the @license_1093_em H2 Group. No one other than the @license_1094_em H2 Group has the right to modify the terms applicable to Covered Code created under this License. @license_1095_h4 6.3. Derivative Works @license_1096_p If You create or use a modified version of this License (which you may only do in order to apply it to code which is not already Covered Code governed by this License), You must (a) rename Your license so that the phrases @license_1097_em "H2 Group", "H2" or any confusingly similar phrase do not appear in your license (except to note that your license differs from this License) and (b) otherwise make it clear that Your version of the license contains terms which differ from the @license_1098_em H2 License. (Filling in the name of the Initial Developer, Original Code or Contributor in the notice described in Exhibit A shall not of themselves be deemed to be modifications of this License.) @license_1099_h3 7. 
Disclaimer of Warranty @license_1100_p Covered code is provided under this license on an "as is" basis, without warranty of any kind, either expressed or implied, including, without limitation, warranties that the covered code is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the covered code is with you. Should any covered code prove defective in any respect, you (not the initial developer or any other contributor) assume the cost of any necessary servicing, repair or correction. This disclaimer of warranty constitutes an essential part of this license. No use of any covered code is authorized hereunder except under this disclaimer. @license_1101_h3 8. Termination @license_1102_p 8.1. This License and the rights granted hereunder will terminate automatically if You fail to comply with terms herein and fail to cure such breach within 30 days of becoming aware of the breach. All sublicenses to the Covered Code which are properly granted shall survive any termination of this License. Provisions which, by their nature, must remain in effect beyond the termination of this License shall survive. @license_1103_p 8.2. If You initiate litigation by asserting a patent infringement claim (excluding declaratory judgment actions) against Initial Developer or a Contributor (the Initial Developer or Contributor against whom You file such action is referred to as "Participant") alleging that: @license_1104_p 8.2.a. 
such Participant's Contributor Version directly or indirectly infringes any patent, then any and all rights granted by such Participant to You under Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from Participant terminate prospectively, unless if within 60 days after receipt of notice You either: (i) agree in writing to pay Participant a mutually agreeable reasonable royalty for Your past and future use of Modifications made by such Participant, or (ii) withdraw Your litigation claim with respect to the Contributor Version against such Participant. If within 60 days of notice, a reasonable royalty and payment arrangement are not mutually agreed upon in writing by the parties or the litigation claim is not withdrawn, the rights granted by Participant to You under Sections 2.1 and/or 2.2 automatically terminate at the expiration of the 60 day notice period specified above. @license_1105_p 8.2.b. any software, hardware, or device, other than such Participant's Contributor Version, directly or indirectly infringes any patent, then any rights granted to You by such Participant under Sections 2.1(b) and 2.2(b) are revoked effective as of the date You first made, used, sold, distributed, or had made, Modifications made by that Participant. @license_1106_p 8.3. If You assert a patent infringement claim against Participant alleging that such Participant's Contributor Version directly or indirectly infringes any patent where such claim is resolved (such as by license or settlement) prior to the initiation of patent infringement litigation, then the reasonable value of the licenses granted by such Participant under Sections 2.1 or 2.2 shall be taken into account in determining the amount or value of any payment or license. @license_1107_p 8.4. 
In the event of termination under Sections 8.1 or 8.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or any distributor hereunder prior to termination shall survive termination. @license_1108_h3 9. Limitation of Liability @license_1109_p Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall you, the initial developer, any other contributor, or any distributor of covered code, or any supplier of any of such parties, be liable to any person for any indirect, special, incidental, or consequential damages of any character including, without limitation, damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to you. @license_1110_h3 10. United States Government End Users @license_1111_p The Covered Code is a "commercial item", as that term is defined in 48 C.F.R. 2.101 (October 1995), consisting of "commercial computer software" and "commercial computer software documentation", as such terms are used in 48 C.F.R. 12.212 (September 1995). Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), all U.S. Government End Users acquire Covered Code with only those rights set forth herein. @license_1112_h3 11. Miscellaneous @license_1113_p This License represents the complete agreement concerning subject matter hereof. 
If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. This License shall be governed by California law provisions (except to the extent applicable law, if any, provides otherwise), excluding its conflict-of-law provisions. With respect to disputes in which at least one party is a citizen of, or an entity chartered or registered to do business in United States of America, any litigation relating to this License shall be subject to the jurisdiction of the Federal Courts of the Northern District of California, with venue lying in Santa Clara County, California, with the losing party responsible for costs, including without limitation, court costs and reasonable attorneys' fees and expenses. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not apply to this License. @license_1114_h3 12. Responsibility for Claims @license_1115_p As between Initial Developer and the Contributors, each party is responsible for claims and damages arising, directly or indirectly, out of its utilization of rights under this License and You agree to work with Initial Developer and Contributors to distribute such responsibility on an equitable basis. Nothing herein is intended or shall be deemed to constitute any admission of liability. @license_1116_h3 13. Multiple-Licensed Code @license_1117_p Initial Developer may designate portions of the Covered Code as "Multiple-Licensed". "Multiple-Licensed" means that the Initial Developer permits you to utilize portions of the Covered Code under Your choice of this or the alternative licenses, if any, specified by the Initial Developer in the file described in Exhibit A. 
@license_1118_h3 Exhibit A @license_1119_h2 Eclipse Public License - Version 1.0 @license_1120_p THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. @license_1121_h3 1. DEFINITIONS @license_1122_p "Contribution" means: @license_1123_p a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and @license_1124_p b) in the case of each subsequent Contributor: @license_1125_p i) changes to the Program, and @license_1126_p ii) additions to the Program; @license_1127_p where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program. @license_1128_p "Contributor" means any person or entity that distributes the Program. @license_1129_p "Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. @license_1130_p "Program" means the Contributions distributed in accordance with this Agreement. @license_1131_p "Recipient" means anyone who receives the Program under this Agreement, including all Contributors. @license_1132_h3 2. 
GRANT OF RIGHTS @license_1133_p a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form. @license_1134_p b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. @license_1135_p c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. 
@license_1136_p d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. @license_1137_h3 3. REQUIREMENTS @license_1138_p A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that: @license_1139_p a) it complies with the terms and conditions of this Agreement; and @license_1140_p b) its license agreement: @license_1141_p i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; @license_1142_p ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; @license_1143_p iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and @license_1144_p iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange. @license_1145_p When the Program is made available in source code form: @license_1146_p a) it must be made available under this Agreement; and @license_1147_p b) a copy of this Agreement must be included with each copy of the Program. @license_1148_p Contributors may not remove or alter any copyright notices contained within the Program. @license_1149_p Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. @license_1150_h3 4. 
COMMERCIAL DISTRIBUTION @license_1151_p Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. @license_1152_p For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. 
Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. @license_1153_h3 5. NO WARRANTY @license_1154_p EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. @license_1155_h3 6. DISCLAIMER OF LIABILITY @license_1156_p EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. @license_1157_h3 7. 
GENERAL @license_1158_p If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. @license_1159_p If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. @license_1160_p All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. @license_1161_p Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. 
The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. @license_1162_p This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. @license_1163_h2 Export Control Classification Number (ECCN) @license_1164_p As far as we know, the U.S. Export Control Classification Number (ECCN) for this software is 5D002. However, for legal reasons, we can make no warranty that this information is correct. For details, see also the Apache Software Foundation Export Classifications page. @links_1000_h1 Links @links_1001_p If you want to add a link, please send it to the support email address or post it to the group. @links_1002_a Commercial Support @links_1003_a Quotes @links_1004_a Books @links_1005_a Extensions @links_1006_a Blog Articles, Videos @links_1007_a Database Frontends / Tools @links_1008_a Products and Projects @links_1009_h2 Commercial Support @links_1010_a Commercial support for H2 is available @links_1011_p from Steve McLeod (steve dot mcleod at gmail dot com). Please note he is not one of the main developers of H2. He describes himself as follows: @links_1012_li I'm a long time user of H2, routinely working with H2 databases several gigabytes in size. 
@links_1013_li I'm the creator of popular commercial desktop software that uses H2. @links_1014_li I'm a certified Java developer (SCJP). @links_1015_li I have a decade and more of IT consulting experience with large and small clients in Australia, the UK, and Germany. @links_1016_li I'm based in Germany, and willing to travel within Europe. I can work remotely with teams in the USA and other locations." @links_1017_h2 Quotes @links_1018_a Quote @links_1019_p : "This is by far the easiest and fastest database that I have ever used. Originally the web application that I am working on is using SQL server. But, in less than 15 minutes I had H2 up and working with little recoding of the SQL. Thanks..... " @links_1020_h2 Books @links_1021_a Seam In Action @links_1022_h2 Extensions @links_1023_a Grails H2 Database Plugin @links_1024_a h2osgi: OSGi for the H2 Database @links_1025_a H2Sharp: ADO.NET interface for the H2 database engine @links_1026_a H2 Spatial: spatial functions to H2 database @links_1027_h2 Blog Articles, Videos @links_1028_a Youtube: Minecraft 1.7.3 / How to install Bukkit Server with xAuth and H2 @links_1029_a Analyzing CSVs with H2 in under 10 minutes (2009-12-07) @links_1030_a Efficient sorting and iteration on large databases (2009-06-15) @links_1031_a Porting Flexive to the H2 Database (2008-12-05) @links_1032_a H2 Database with GlassFish (2008-11-24) @links_1033_a Using H2 Database with Glassfish and Toplink (2008-08-07) @links_1034_a H2 Database - Performance Tracing (2008-04-30) @links_1035_a Testing your JDBC data access layer with DBUnit and H2 (2007-09-18) @links_1036_a Open Source Databases Comparison (2007-09-11) @links_1037_a The Codist: The Open Source Frameworks I Use (2007-07-23) @links_1038_a The Codist: SQL Injections: How Not To Get Stuck (2007-05-08) @links_1039_a One Man Band: (Helma + H2) == "to easy" (2007-03-11) @links_1040_a David Coldrick's Weblog: New Version of H2 Database Released (2007-01-06) @links_1041_a The Codist: Write 
Your Own Database, Again (2006-11-13) @links_1042_h2 Project Pages @links_1043_a Ohloh @links_1044_a Freshmeat Project Page @links_1045_a Free Open Source Software For Us @links_1046_a Wikipedia @links_1047_a Java Source Net @links_1048_a Linux Package Manager @links_1049_h2 Database Frontends / Tools @links_1050_a Dataflyer @links_1051_p A tool to browse databases and export data. @links_1052_a DB Solo @links_1053_p SQL query tool. @links_1054_a DbVisualizer @links_1055_p Database tool. @links_1056_a Execute Query @links_1057_p Database utility written in Java. @links_1058_a Flyway @links_1059_p The agile database migration framework for Java. @links_1060_a [fleXive] @links_1061_p JavaEE 5 open source framework for the development of complex and evolving (web-)applications. @links_1062_a HenPlus @links_1063_p HenPlus is a SQL shell written in Java. @links_1064_a OpenOffice @links_1065_p Base is OpenOffice.org's database application. It provides access to relational data sources. @links_1066_a RazorSQL @links_1067_p An SQL query tool, database browser, SQL editor, and database administration tool. @links_1068_a SQL Developer @links_1069_p Universal Database Frontend. @links_1070_a SQL Workbench/J @links_1071_p Free DBMS-independent SQL tool. @links_1072_a SQuirreL SQL Client @links_1073_p Graphical tool to view the structure of a database, browse the data, issue SQL commands etc. @links_1074_a SQuirreL DB Copy Plugin @links_1075_p Tool to copy data from one database to another. @links_1076_h2 Products and Projects @links_1077_a AccuProcess @links_1078_p Visual business process modeling and simulation software for business users. @links_1079_a Adeptia BPM @links_1080_p A Business Process Management (BPM) suite to quickly and easily automate business processes and workflows. @links_1081_a Adeptia Integration @links_1082_p Process-centric, services-based application integration suite. 
@links_1083_a Aejaks @links_1084_p A server-side scripting environment to build AJAX enabled web applications. @links_1085_a Axiom Stack @links_1086_p A web framework that lets you write dynamic web applications with Zen-like simplicity. @links_1087_a Apache Cayenne @links_1088_p Open source persistence framework providing object-relational mapping (ORM) and remoting services. @links_1089_a Apache Jackrabbit @links_1090_p Open source implementation of the Java Content Repository API (JCR). @links_1091_a Apache OpenJPA @links_1092_p Open source implementation of the Java Persistence API (JPA). @links_1093_a AppFuse @links_1094_p Helps build web applications. @links_1095_a BGBlitz @links_1096_p The Swiss army knife of Backgammon. @links_1097_a Blojsom @links_1098_p Java-based multi-blog, multi-user software package (Mac OS X Weblog Server). @links_1099_a Bonita @links_1100_p Open source workflow solution for handling long-running, user-oriented processes, providing out-of-the-box workflow and business process management features. @links_1101_a Bookmarks Portlet @links_1102_p JSR 168 compliant bookmarks management portlet application. @links_1103_a Claros inTouch @links_1104_p Ajax communication suite with mail, addresses, notes, IM, and RSS reader. @links_1105_a CrashPlan PRO Server @links_1106_p Easy and cross-platform backup solution for businesses and service providers. @links_1107_a DbUnit @links_1108_p A JUnit extension (also usable with Ant) targeted at database-driven projects. @links_1109_a DiffKit @links_1110_p DiffKit is a tool for comparing two tables of data, field by field. DiffKit is like the Unix diff utility, but for tables instead of lines of text. @links_1111_a Dinamica Framework @links_1112_p Ajax/J2EE framework for RAD development (mainly oriented toward Hispanic markets). 
@links_1113_a District Health Information Software 2 (DHIS) @links_1114_p The DHIS 2 is a tool for collection, validation, analysis, and presentation of aggregate statistical data, tailored (but not limited) to integrated health information management activities. @links_1115_a Ebean ORM Persistence Layer @links_1116_p Open source Java Object Relational Mapping tool. @links_1117_a Eclipse CDO @links_1118_p The CDO (Connected Data Objects) Model Repository is a distributed shared model framework for EMF models, and a fast server-based O/R mapping solution. @links_1119_a Epictetus @links_1120_p Free cross-platform database tool. @links_1121_a Fabric3 @links_1122_p Fabric3 is a project implementing a federated service network based on the Service Component Architecture specification (http://www.osoa.org). @links_1123_a FIT4Data @links_1124_p A testing framework for data management applications built on the Java implementation of FIT. @links_1125_a Flux @links_1126_p Java job scheduler, file transfer, workflow, and BPM. @links_1127_a GeoServer @links_1128_p GeoServer is a Java-based software server that allows users to view and edit geospatial data. Using open standards set forth by the Open Geospatial Consortium (OGC), GeoServer allows for great flexibility in map creation and data sharing. @links_1129_a GBIF Integrated Publishing Toolkit (IPT) @links_1130_p The GBIF IPT is an open source, Java-based web application that connects and serves three types of biodiversity data: taxon primary occurrence data, taxon checklists, and general resource metadata. @links_1131_a GNU Gluco Control @links_1132_p Helps you to manage your diabetes. @links_1133_a Golden T Studios @links_1134_p Fun-to-play games with a simple interface. @links_1135_a GridGain @links_1136_p GridGain is an easy-to-use Cloud Application Platform that enables development of highly scalable distributed Java and Scala applications that auto-scale on any grid or cloud infrastructure. 
@links_1137_a Group Session @links_1138_p Open source web groupware. @links_1139_a HA-JDBC @links_1140_p High-Availability JDBC: A JDBC proxy that provides light-weight, transparent, fault-tolerant clustering capability to any underlying JDBC driver. @links_1141_a Harbor @links_1142_p Pojo Application Server. @links_1143_a Hibernate @links_1144_p Relational persistence for idiomatic Java (O-R mapping tool). @links_1145_a Hibicius @links_1146_p Online Banking Client for the HBCI protocol. @links_1147_a ImageMapper @links_1148_p ImageMapper frees users from having to use file browsers to view their images. They get fast access to images and easy cataloguing of them via a user-friendly interface. @links_1149_a JAMWiki @links_1150_p Java-based Wiki engine. @links_1151_a Jala @links_1152_p Open source collection of JavaScript modules. @links_1153_a Jaspa @links_1154_p Java Spatial. Jaspa potentially brings around 200 spatial functions. @links_1155_a Java Simon @links_1156_p Simple Monitoring API. @links_1157_a JBoss jBPM @links_1158_p A platform for executable process languages ranging from business process management (BPM) through workflow to service orchestration. @links_1159_a JBoss Jopr @links_1160_p An enterprise management solution for JBoss middleware projects and other application technologies. @links_1161_a JGeocoder @links_1162_p Free Java geocoder. Geocoding is the process of estimating a latitude and longitude for a given location. @links_1163_a JGrass @links_1164_p Java Geographic Resources Analysis Support System. Free, multi-platform, open source GIS based on the GIS framework of uDig. @links_1165_a Jena @links_1166_p Java framework for building Semantic Web applications. @links_1167_a JMatter @links_1168_p Framework for constructing workgroup business applications based on the Naked Objects Architectural Pattern. 
@links_1169_a jOOQ (Java Object Oriented Querying) @links_1170_p jOOQ is a fluent API for typesafe SQL query construction and execution. @links_1171_a JotBot @links_1172_p Records your day at user-defined intervals. @links_1173_a JPOX @links_1174_p Java persistent objects. @links_1175_a Liftweb @links_1176_p A Scala-based, secure, developer-friendly web framework. @links_1177_a LiquiBase @links_1178_p A tool to manage database changes and refactorings. @links_1179_a Luntbuild @links_1180_p Build automation and management tool. @links_1181_a localdb @links_1182_p A tool that locates the full file path of the folder containing the database files. @links_1183_a Magnolia @links_1184_p Microarray Data Management and Export System for PFGRC (Pathogen Functional Genomics Resource Center) Microarrays. @links_1185_a MiniConnectionPoolManager @links_1186_p A lightweight standalone JDBC connection pool manager. @links_1187_a Mr. Persister @links_1188_p Simple, small and fast object relational mapping. @links_1189_a Myna Application Server @links_1190_p Java web app that provides dynamic web content and access to Java libraries from JavaScript. @links_1191_a MyTunesRss @links_1192_p MyTunesRSS lets you listen to your music wherever you are. @links_1193_a NCGC CurveFit @links_1194_p From: NIH Chemical Genomics Center, National Institutes of Health, USA. An open source application in the life sciences research field. This application handles chemical structures and biological responses of thousands of compounds with the potential to handle million+ compounds. It utilizes an embedded H2 database to enable flexible query/retrieval of all data including advanced chemical substructure and similarity searching. The application highlights an automated curve fitting and classification algorithm that outperforms commercial packages in the field. Commercial alternatives are typically small desktop software that handle a few dose response curves at a time. 
A couple of commercial packages that do handle several thousand curves are very expensive tools (>60k USD) that require manual curation of analysis by the user, require a license to Oracle, lack advanced query/retrieval, and cannot handle chemical structures. @links_1195_a Nuxeo @links_1196_p Standards-based, open source platform for building ECM applications. @links_1197_a nWire @links_1198_p Eclipse plug-in which expedites Java development. Its main purpose is to help developers find code quicker and easily understand how it relates to the rest of the application, and thus understand the application structure. @links_1199_a Ontology Works @links_1200_p This company provides semantic technologies including deductive information repositories (the Ontology Works Knowledge Servers), semantic information fusion and semantic federation of legacy databases, ontology-based domain modeling, and management of the distributed enterprise. @links_1201_a Ontoprise OntoBroker @links_1202_p SemanticWeb-Middleware. It supports all W3C Semantic Web recommendations: OWL, RDF, RDFS, SPARQL, and F-Logic. @links_1203_a Open Anzo @links_1204_p Semantic Application Server. @links_1205_a OpenTelegard @links_1206_p An open source BBS software package written in JRuby. @links_1207_a OpenGroove @links_1208_p OpenGroove is a groupware program that allows users to synchronize data. @links_1209_a OpenSocial Development Environment (OSDE) @links_1210_p Development tool for OpenSocial applications. @links_1211_a Orion @links_1212_p J2EE Application Server. @links_1213_a P5H2 @links_1214_p A library for the Processing programming language and environment. @links_1215_a Phase-6 @links_1216_p Computer-based learning software. @links_1217_a Pickle @links_1218_p Pickle is a Java library containing classes for persistence, concurrency, and logging. @links_1219_a Piman @links_1220_p Water treatment projects data management. @links_1221_a PolePosition @links_1222_p Open source database benchmark. 
@links_1223_a Poormans @links_1224_p Very basic CMS running as an SWT application and generating static HTML pages. @links_1225_a Railo @links_1226_p Railo is an alternative engine for the Cold Fusion Markup Language that compiles code programmed in CFML into Java bytecode and executes it on a servlet engine. @links_1227_a Razuna @links_1228_p Open source Digital Asset Management System with integrated Web Content Management. @links_1229_a RIFE @links_1230_p A full-stack web application framework with tools and APIs to implement most common web features. @links_1231_a Rutema @links_1232_p Rutema is a test execution and management tool for heterogeneous development environments written in Ruby. @links_1233_a Sava @links_1234_p Open-source web-based content management system. @links_1235_a Scriptella @links_1236_p ETL (Extract-Transform-Load) and script execution tool. @links_1237_a Sesar @links_1238_p Dependency Injection Container with Aspect Oriented Programming. @links_1239_a SemmleCode @links_1240_p Eclipse plugin to help you improve software quality. @links_1241_a SeQuaLite @links_1242_p A free, light-weight Java data access framework. @links_1243_a ShapeLogic @links_1244_p Toolkit for declarative programming, image processing and computer vision. @links_1245_a Shellbook @links_1246_p Desktop publishing application. @links_1247_a Signsoft intelliBO @links_1248_p Persistence middleware supporting the JDO specification. @links_1249_a SimpleORM @links_1250_p Simple Java Object Relational Mapping. @links_1251_a SymmetricDS @links_1252_p Web-enabled, database-independent data synchronization/replication software. @links_1253_a SmartFoxServer @links_1254_p Platform for developing multiuser applications and games with Macromedia Flash. @links_1255_a Social Bookmarks Friend Finder @links_1256_p A GUI application that allows you to find users with similar bookmarks to the user specified (for delicious.com). 
@links_1257_a sormula @links_1258_p Simple object relational mapping. @links_1259_a Springfuse @links_1260_p Code generation for Spring, Spring MVC & Hibernate. @links_1261_a SQLOrm @links_1262_p Java Object Relation Mapping. @links_1263_a StelsCSV and StelsXML @links_1264_p StelsCSV is a CSV JDBC type 4 driver that allows performing SQL queries and other JDBC operations on text files. StelsXML is an XML JDBC type 4 driver that allows performing SQL queries and other JDBC operations on XML files. Both use H2 as the SQL engine. @links_1265_a StorYBook @links_1266_p A summary-based tool for novelists and script writers. It helps keep an overview of the various threads of a story. @links_1267_a StreamCruncher @links_1268_p Event (stream) processing kernel. @links_1269_a SUSE Manager, part of Linux Enterprise Server 11 @links_1270_p The SUSE Manager eases the burden of compliance with regulatory requirements and corporate policies. @links_1271_a Tune Backup @links_1272_p Easy-to-use backup solution for your iTunes library. @links_1273_a weblica @links_1274_p Desktop CMS. @links_1275_a Web of Web @links_1276_p Collaborative and real-time interactive media platform for the web. @links_1277_a Werkzeugkasten @links_1278_p Minimum Java Toolset. @links_1279_a VPDA @links_1280_p View Providers Driven Applications is a Java-based application framework for building applications composed of server components (view providers). @links_1281_a Volunteer database @links_1282_p A database front end to register volunteers, partnerships, and donations for a non-profit organization. @links_1283_a 974 Application Server @links_1284_p A clusterable application server. @mainWeb_1000_h1 H2 Database Engine @mainWeb_1001_p Welcome to H2, the Java SQL database. 
The main features of H2 are: @mainWeb_1002_li Very fast, open source, JDBC API @mainWeb_1003_li Embedded and server modes; in-memory databases @mainWeb_1004_li Browser based Console application @mainWeb_1005_li Small footprint: around 1.5 MB jar file size @mainWeb_1006_h2 Download @mainWeb_1007_td Version 1.3.171 (2013-03-17) @mainWeb_1008_a Windows Installer (4 MB) @mainWeb_1009_a All Platforms (zip, 5 MB) @mainWeb_1010_a All Downloads @mainWeb_1011_td     @mainWeb_1012_h2 Support @mainWeb_1013_a Stack Overflow (tag H2) @mainWeb_1014_a Google Group English @mainWeb_1015_p , Japanese @mainWeb_1016_p For non-technical issues, use: @mainWeb_1017_h2 Features @mainWeb_1018_th H2 @mainWeb_1019_a Derby @mainWeb_1020_a HSQLDB @mainWeb_1021_a MySQL @mainWeb_1022_a PostgreSQL @mainWeb_1023_td Pure Java @mainWeb_1024_td Yes @mainWeb_1025_td Yes @mainWeb_1026_td Yes @mainWeb_1027_td No @mainWeb_1028_td No @mainWeb_1029_td Memory Mode @mainWeb_1030_td Yes @mainWeb_1031_td Yes @mainWeb_1032_td Yes @mainWeb_1033_td No @mainWeb_1034_td No @mainWeb_1035_td Encrypted Database @mainWeb_1036_td Yes @mainWeb_1037_td Yes @mainWeb_1038_td Yes @mainWeb_1039_td No @mainWeb_1040_td No @mainWeb_1041_td ODBC Driver @mainWeb_1042_td Yes @mainWeb_1043_td No @mainWeb_1044_td No @mainWeb_1045_td Yes @mainWeb_1046_td Yes @mainWeb_1047_td Fulltext Search @mainWeb_1048_td Yes @mainWeb_1049_td No @mainWeb_1050_td No @mainWeb_1051_td Yes @mainWeb_1052_td Yes @mainWeb_1053_td Multi Version Concurrency @mainWeb_1054_td Yes @mainWeb_1055_td No @mainWeb_1056_td Yes @mainWeb_1057_td Yes @mainWeb_1058_td Yes @mainWeb_1059_td Footprint (jar/dll size) @mainWeb_1060_td ~1 MB @mainWeb_1061_td ~2 MB @mainWeb_1062_td ~1 MB @mainWeb_1063_td ~4 MB @mainWeb_1064_td ~6 MB @mainWeb_1065_p See also the detailed comparison. @mainWeb_1066_h2 News @mainWeb_1067_b Newsfeeds: @mainWeb_1068_a Full text (Atom) @mainWeb_1069_p or Header only (RSS). 
@mainWeb_1070_b Email Newsletter: @mainWeb_1071_p Subscribe to H2 Database News (Google account required) to get informed about new releases. Your email address is only used in this context. @mainWeb_1072_td   @mainWeb_1073_h2 Contribute @mainWeb_1074_p You can contribute to the development of H2 by sending feedback and bug reports, or by translating the H2 Console application (for details, start the H2 Console and select Options / Translate). To donate money, click on the PayPal button below. You will be listed as a supporter: @main_1000_h1 H2 Database Engine @main_1001_p Welcome to H2, the free Java SQL database engine. @main_1002_a Quickstart @main_1003_p Get a fast overview. @main_1004_a Tutorial @main_1005_p Go through the samples. @main_1006_a Features @main_1007_p See what this database can do and how to use these features. @mvstore_1000_h1 MVStore @mvstore_1001_a Overview @mvstore_1002_a Example Code @mvstore_1003_a Store Builder @mvstore_1004_a R-Tree @mvstore_1005_a Features @mvstore_1006_div - Maps @mvstore_1007_div - Versions @mvstore_1008_div - Transactions @mvstore_1009_div - In-Memory Performance and Usage @mvstore_1010_div - Pluggable Data Types @mvstore_1011_div - BLOB Support @mvstore_1012_div - R-Tree and Pluggable Map Implementations @mvstore_1013_div - Concurrent Operations and Caching @mvstore_1014_div - Log Structured Storage @mvstore_1015_div - File System Abstraction, File Locking and Online Backup @mvstore_1016_div - Encrypted Files @mvstore_1017_div - Tools @mvstore_1018_div - Exception Handling @mvstore_1019_a Similar Projects and Differences to Other Storage Engines @mvstore_1020_a Current State @mvstore_1021_a Requirements @mvstore_1022_h2 Overview @mvstore_1023_p The MVStore is work in progress, and is planned to be the next storage subsystem of H2. But it can also be used directly within an application, without using JDBC or SQL. @mvstore_1024_li MVStore stands for "multi-version store". 
@mvstore_1025_li Each store contains a number of maps (using the java.util.Map interface). @mvstore_1026_li Both file-based persistence and in-memory operation are supported. @mvstore_1027_li It is intended to be fast, simple to use, and small. @mvstore_1028_li Old versions of the data can be read concurrently with all other operations. @mvstore_1029_li Transactions are supported (including concurrent transactions and two-phase commit). @mvstore_1030_li The tool is very modular. It supports pluggable data types / serialization, pluggable map implementations (B-tree, R-tree, concurrent B-tree currently), BLOB storage, and a file system abstraction to support encrypted files and zip files. @mvstore_1031_h2 Example Code @mvstore_1032_p The following sample code shows how to create a store, open a map, add some data, and access the current and an old version: @mvstore_1033_h2 Store Builder @mvstore_1034_p The MVStore.Builder provides a fluent interface to build a store if more complex configuration options are used. The following code contains all supported configuration options: @mvstore_1035_li cacheSizeMB: the cache size in MB. @mvstore_1036_li compressData: compress the data when storing. @mvstore_1037_li encryptionKey: the encryption key for file encryption. @mvstore_1038_li fileName: the name of the file, for file-based stores. @mvstore_1039_li readOnly: open the file in read-only mode. @mvstore_1040_li writeBufferSize: the size of the write buffer in MB. @mvstore_1041_li writeDelay: the maximum delay until committed changes are stored (unless stored explicitly). @mvstore_1042_h2 R-Tree @mvstore_1043_p The MVRTreeMap is an R-tree implementation that supports fast spatial queries. It can be used as follows: @mvstore_1044_p The default number of dimensions is 2. To use a different number of dimensions, call new MVRTreeMap.Builder&lt;String&gt;().dimensions(3). The minimum number of dimensions is 1, the maximum is 255. 
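The sample code referred to above is a short program against the org.h2.mvstore API. The following sketch assumes the H2 jar of this release is on the classpath; the method names (getCurrentVersion, incrementVersion, openVersion) follow the API as described here and may differ in later releases:

```java
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public class MVStoreExample {
    public static void main(String[] args) {
        // open the store (passing null as the file name creates an in-memory store)
        MVStore s = MVStore.open("test.mv.db");
        // create/open the map named "data"
        MVMap<Integer, String> map = s.openMap("data");
        // add some data
        map.put(1, "Hello");
        map.put(2, "World");
        // remember the current version for later use
        long oldVersion = s.getCurrentVersion();
        // from now on, the old version is read-only
        s.incrementVersion();
        // more changes, in the new version
        map.put(1, "Hi");
        map.remove(2);
        // access the old data (as of oldVersion)
        MVMap<Integer, String> oldMap = map.openVersion(oldVersion);
        System.out.println(oldMap.get(1)); // the value before the change
        // close the store
        s.close();
    }
}
```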
@mvstore_1045_h2 Features @mvstore_1046_h3 Maps @mvstore_1047_p Each store supports a set of named maps. A map is sorted by key, and supports the common lookup operations, including access to the first and last key, iterating over some or all keys, and so on. @mvstore_1048_p Also supported, and very uncommon for maps, is fast index lookup: the keys of the map can be accessed like a list (get the key at the given index, get the index of a certain key). That means getting the median of two keys is trivial, and ranges of keys can be counted very quickly. The iterator supports fast skipping. This is possible because internally, each map is organized in the form of a counted B+-tree. @mvstore_1049_p In database terms, a map can be used like a table, where the key of the map is the primary key of the table, and the value is the row. A map can also represent an index, where the key of the map is the key of the index, and the value of the map is the primary key of the table (for non-unique indexes, the key of the map must also contain the primary key). @mvstore_1050_h3 Versions @mvstore_1051_p Multiple versions are supported. A version is a snapshot of all the data of all maps at a given point in time. A transaction is a number of actions between two versions. @mvstore_1052_p Versions are not immediately persisted; instead, only the version counter is incremented. If there is a change after switching to a new version, a snapshot of the old version is kept in memory, so that it can still be read. @mvstore_1053_p Old persisted versions are readable until the old data is explicitly overwritten. Creating a snapshot is fast: only the pages that are changed after a snapshot are copied. This behavior is also called COW (copy on write). @mvstore_1054_p Rollback is supported (rollback to any old in-memory version or an old persisted version). @mvstore_1055_h3 Transactions @mvstore_1056_p The multi-version support is the basis for the transaction support. 
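The table/index scheme described above can be sketched in plain Java, with java.util.TreeMap standing in for MVMap. The class, the sample data, and the "/" composite-key separator are all illustrative, not part of the MVStore API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class MapAsTable {
    // "table": primary key -> row
    static TreeMap<Integer, String> table = new TreeMap<>();
    // non-unique "index": the key must include the primary key to stay unique;
    // the map value carries no information
    static TreeMap<String, Boolean> index = new TreeMap<>();

    static void insert(int id, String name) {
        table.put(id, name);
        index.put(name + "/" + id, Boolean.TRUE);
    }

    // find all primary keys for a given name via a range scan on the index
    static List<Integer> lookup(String name) {
        List<Integer> result = new ArrayList<>();
        // all keys starting with "name/" sort between "name/" and "name0"
        // ('/' is the character just before '0' in ASCII)
        for (String key : index.subMap(name + "/", name + "0").keySet()) {
            result.add(Integer.parseInt(key.substring(name.length() + 1)));
        }
        return result;
    }

    public static void main(String[] args) {
        insert(1, "Smith");
        insert(2, "Jones");
        insert(3, "Smith");
        System.out.println(lookup("Smith")); // [1, 3]
    }
}
```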
In the simple case, when only one transaction is open at a time, rolling back the transaction only requires reverting to an old version. @mvstore_1057_p To support multiple concurrent open transactions, a transaction utility is included, the TransactionStore. This utility stores the changed entries in a separate map, similar to a transaction log (except that only the key of a changed row is stored, and the entries of a transaction are removed when the transaction is committed). The storage overhead of this utility is very small compared to the overhead of a regular transaction log. The tool supports PostgreSQL-style "read committed" transaction isolation. There is no limit on the size of a transaction (the log is not kept in memory). The tool supports savepoints, two-phase commit, and other features typically available in a database. @mvstore_1058_h3 In-Memory Performance and Usage @mvstore_1059_p Performance of in-memory operations is comparable with java.util.TreeMap (many operations are actually faster), but usually slower than java.util.HashMap. @mvstore_1060_p The memory overhead for large maps is slightly better than for the regular map implementations, but there is a higher overhead per map. For maps with fewer than 25 entries, the regular map implementations use less memory on average. @mvstore_1061_p If no file name is specified, the store operates purely in memory. Except for persisting data, all features are supported in this mode (multi-versioning, index lookup, R-tree and so on). If a file name is specified, all operations occur in memory (with the same performance characteristics) until data is persisted. @mvstore_1062_h3 Pluggable Data Types @mvstore_1063_p Serialization is pluggable. The default serialization currently supports many common data types, and uses Java serialization for other objects. 
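The "rollback = revert to an old version" idea for the single-transaction case can be illustrated with a toy sketch using plain JDK maps. Note the simplification: MVStore snapshots per page with copy-on-write, whereas this sketch copies the whole map:

```java
import java.util.HashMap;
import java.util.Map;

public class RollbackSketch {
    // a toy "transaction": snapshot the current version, modify,
    // then roll back by reverting to the snapshot
    static Map<Integer, String> demo() {
        Map<Integer, String> current = new HashMap<>();
        current.put(1, "Hello");
        // begin: keep the old version (full copy only for illustration)
        Map<Integer, String> snapshot = new HashMap<>(current);
        // changes inside the transaction
        current.put(1, "Hi");
        current.put(2, "World");
        // rollback: simply revert to the old version
        current = snapshot;
        return current;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // {1=Hello}
    }
}
```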
The following classes are currently directly supported: Boolean, Byte, Short, Character, Integer, Long, Float, Double, BigInteger, BigDecimal, String, UUID, Date and arrays (both primitive arrays and object arrays). @mvstore_1064_p Parameterized data types are supported (for example, one could build a string data type that limits the maximum length). @mvstore_1065_p The storage engine itself does not have any length limits, so that keys, values, pages, and chunks can be very big (as big as fits in memory). Also, there is no inherent limit to the number of maps and chunks. Due to using a log structured storage, there is no special case handling for large keys or pages. @mvstore_1066_h3 BLOB Support @mvstore_1067_p There is a mechanism that stores large binary objects by splitting them into smaller blocks. This allows storing objects that don't fit in memory. Streaming as well as random access reads on such objects are supported. This tool is written on top of the store (only using the map interface). @mvstore_1068_h3 R-Tree and Pluggable Map Implementations @mvstore_1069_p The map implementation is pluggable. In addition to the default MVMap (multi-version map), there is a multi-version R-tree map implementation for spatial operations (containment and intersection; nearest neighbor is not yet implemented). @mvstore_1070_h3 Concurrent Operations and Caching @mvstore_1071_p The default map implementation supports concurrent reads on old versions of the data. All such read operations can occur in parallel. Concurrent reads from the page cache, as well as concurrent reads from the file system are supported. @mvstore_1072_p Storing changes can occur concurrently with modifying the data, as it operates on a snapshot. @mvstore_1073_p Caching is done on the page level. The page cache is a concurrent LIRS cache, which should be resistant against scan operations. 
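The block-splitting mechanism behind BLOB support can be sketched with plain JDK types. The block size, key scheme, and class names here are illustrative (the real implementation stores its blocks in maps on top of the store and uses much larger blocks):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.TreeMap;

public class BlobBlocks {
    static final int BLOCK_SIZE = 4; // tiny, for the example only

    // split the bytes of one "BLOB" into blocks keyed by block index
    static TreeMap<Integer, byte[]> write(byte[] data) {
        TreeMap<Integer, byte[]> blocks = new TreeMap<>();
        for (int i = 0; i * BLOCK_SIZE < data.length; i++) {
            int from = i * BLOCK_SIZE;
            int to = Math.min(from + BLOCK_SIZE, data.length);
            blocks.put(i, Arrays.copyOfRange(data, from, to));
        }
        return blocks;
    }

    // stream the blocks back in order; random access would
    // seek directly to block index (offset / BLOCK_SIZE)
    static byte[] read(TreeMap<Integer, byte[]> blocks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] block : blocks.values()) {
            out.write(block, 0, block.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "Hello, World!".getBytes();
        TreeMap<Integer, byte[]> blocks = write(data);
        System.out.println(blocks.size() + " blocks"); // 4 blocks
        System.out.println(new String(read(blocks)));  // Hello, World!
    }
}
```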
@mvstore_1074_p The default map implementation does not support concurrent modification operations on a map (the same as HashMap and TreeMap). Similar to those classes, the map tries to detect concurrent modification. @mvstore_1075_p With the MVMapConcurrent implementation, read operations even on the newest version can happen concurrently with all other operations, without risk of corruption. This comes with slightly reduced speed in single-threaded mode, as with other concurrent map implementations such as ConcurrentHashMap. Write operations first read the relevant area from disk to memory (this can happen concurrently), and only then modify the data. The in-memory part of write operations is synchronized. @mvstore_1076_p For fully scalable concurrent write operations to a map (in-memory and to disk), the map could be split into multiple maps in different stores ('sharding'). The plan is to add such a mechanism later when needed. @mvstore_1077_h3 Log Structured Storage @mvstore_1078_p Changes are buffered in memory, and once enough changes have accumulated, they are written in one continuous disk write operation. (According to a test, write throughput of a common SSD gets higher the larger the block size, until a block size of 2 MB, and then does not further increase.) By default, committed changes are automatically written once every second in a background thread, even if only a little data was changed. Changes can also be written explicitly by calling store(). To avoid running out of memory, uncommitted changes are also written when needed; however, they are rolled back when closing the store, or at the latest (when the store was not correctly closed) when opening the store. @mvstore_1079_p When storing, all changed pages are serialized, optionally compressed using the LZF algorithm, and written sequentially to a free area of the file. Each such change set is called a chunk. 
All parent pages of the changed B-trees are stored in this chunk as well, so that each chunk also contains the root of each changed map (which is the entry point to read this version of the data). There is no separate index: all data is stored as a list of pages. Per store, there is one additional map that contains the metadata (the list of maps, where the root page of each map is stored, and the list of chunks). @mvstore_1080_p There are usually two write operations per chunk: one to store the chunk data (the pages), and one to update the file header (so it points to the latest chunk). If the chunk is appended at the end of the file, the file header is only written at the end of the chunk. @mvstore_1081_p There is no transaction log, no undo log, and there are no in-place updates (however, unused chunks are overwritten by default). @mvstore_1082_p Old data is kept for at least 45 seconds (configurable), so that there are no explicit sync operations required to guarantee data consistency, but an application can also sync explicitly when needed. To reuse disk space, the chunks with the lowest amount of live data are compacted (the live data is simply stored again in the next chunk). To improve data locality and disk space usage, the plan is to automatically defragment and compact data. @mvstore_1083_p Compared to traditional storage engines (that use a transaction log, undo log, and main storage area), the log structured storage is simpler, more flexible, and typically needs fewer disk operations per change, as data is only written once instead of twice or three times, and because the B-tree pages are always full (they are stored next to each other) and can be easily compressed. But temporarily, disk space usage might actually be a bit higher than for a regular database, as disk space is not immediately re-used (there are no in-place updates). 
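As a toy model of this log structured scheme, the following sketch appends each set of buffered changes as a "chunk" whose root is the entry point to that version, with the newest chunk defining the current state. All names are hypothetical and everything stays in memory; the real store serializes B-tree pages and only copies changed pages into a chunk:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChunkLogSketch {
    // a chunk holds what was written in one continuous write, plus the
    // new root (a full map snapshot stands in for the B-tree root here)
    static class Chunk {
        final Map<String, String> root;
        Chunk(Map<String, String> root) { this.root = root; }
    }

    static final List<Chunk> file = new ArrayList<>(); // the append-only "file"

    // buffer changes in memory, then append them as one chunk
    static void store(Map<String, String> buffered) {
        file.add(new Chunk(new HashMap<>(buffered)));
    }

    // "opening the store": the latest chunk points at the current root
    static Map<String, String> open() {
        return file.get(file.size() - 1).root;
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("1", "Hello");
        store(map);            // chunk 0
        map.put("2", "World");
        store(map);            // chunk 1
        System.out.println(open()); // {1=Hello, 2=World}
        // old chunks remain readable until their space is reclaimed
        System.out.println(file.get(0).root); // {1=Hello}
    }
}
```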
@mvstore_1084_h3 File System Abstraction, File Locking and Online Backup @mvstore_1085_p The file system is pluggable (it uses the same file system abstraction as H2). The file can be encrypted using an encrypting file system. Other file system implementations support reading from a compressed zip or jar file. @mvstore_1086_p Each store may only be opened once within a JVM. When opening a store, the file is locked in exclusive mode, so that the file can only be changed from within one process. Files can be opened in read-only mode, in which case a shared lock is used. @mvstore_1087_p The persisted data can be backed up to a different file at any time, even during write operations (online backup). To do that, automatic disk space reuse first needs to be disabled, so that new data is always appended at the end of the file. Then, the file can be copied (the file handle is available to the application). @mvstore_1088_h3 Encrypted Files @mvstore_1089_p File encryption ensures the data can only be read with the correct password. Data can be encrypted as follows: @mvstore_1090_p The following algorithms and settings are used: @mvstore_1091_li The password char array is cleared after use, to reduce the risk that the password is stolen even if the attacker has access to the main memory. @mvstore_1092_li The password is hashed according to the PBKDF2 standard, using the SHA-256 hash algorithm. @mvstore_1093_li The length of the salt is 64 bits, so that an attacker cannot use a pre-calculated password hash table (rainbow table). It is generated using a cryptographically secure random number generator. @mvstore_1094_li To speed up opening an encrypted store on Android, the number of PBKDF2 iterations is 10. The higher the value, the better the protection against brute-force password cracking attacks, but the slower opening a file becomes. @mvstore_1095_li The file itself is encrypted using the standardized disk encryption mode XTS-AES.
Only slightly more than one AES-128 round per block is needed. @mvstore_1096_h3 Tools @mvstore_1097_p There is a tool (MVStoreTool) to dump the contents of a file. @mvstore_1098_h3 Exception Handling @mvstore_1099_p This tool does not throw checked exceptions. Instead, unchecked exceptions are thrown if needed. The error message always contains the version of the tool. The following exceptions can occur: @mvstore_1100_code IllegalStateException @mvstore_1101_li if a map was already closed or an IO exception occurred, for example if the file was locked, is already closed, could not be opened or closed, if reading or writing failed, if the file is corrupt, or if there is an internal error in the tool. @mvstore_1102_code IllegalArgumentException @mvstore_1103_li if a method was called with an illegal argument. @mvstore_1104_code UnsupportedOperationException @mvstore_1105_li if a method was called that is not supported, for example trying to modify a read-only map or view. @mvstore_1106_code ConcurrentModificationException @mvstore_1107_li if the object is modified concurrently. @mvstore_1108_h2 Similar Projects and Differences to Other Storage Engines @mvstore_1109_p Unlike similar storage engines such as LevelDB and Kyoto Cabinet, the MVStore is written in Java and can easily be embedded in Java and Android applications. @mvstore_1110_p The MVStore is somewhat similar to the Berkeley DB Java Edition because it is also written in Java, and is also a log structured storage, but the H2 license is more liberal. @mvstore_1111_p Like SQLite, the MVStore keeps all data in one file. Unlike SQLite, the MVStore uses a log structured storage. The plan is to make the MVStore both easier to use as well as faster than SQLite. In a recent (very simple) test, the MVStore was about twice as fast as SQLite on Android. @mvstore_1112_p The API of the MVStore is similar to MapDB (previously known as JDBM) from Jan Kotek, and some code is shared between MapDB and JDBM.
However, unlike MapDB, the MVStore uses a log structured storage. The MVStore does not have a record size limit. @mvstore_1113_h2 Current State @mvstore_1114_p The code is still experimental at this stage. The API as well as the behavior may partially change. Features may be added and removed (even though the main features will stay). @mvstore_1115_h2 Requirements @mvstore_1116_p The MVStore is included in the latest H2 jar file. @mvstore_1117_p There are no special requirements to use it. The MVStore should run on any JVM as well as on Android. @mvstore_1118_p To build just the MVStore (without the database engine), run: @mvstore_1119_p This will create the file bin/h2mvstore-1.3.171.jar (about 130 KB). @performance_1000_h1 Performance @performance_1001_a Performance Comparison @performance_1002_a PolePosition Benchmark @performance_1003_a Database Performance Tuning @performance_1004_a Using the Built-In Profiler @performance_1005_a Application Profiling @performance_1006_a Database Profiling @performance_1007_a Statement Execution Plans @performance_1008_a How Data is Stored and How Indexes Work @performance_1009_a Fast Database Import @performance_1010_h2 Performance Comparison @performance_1011_p In many cases H2 is faster than other (open source and not open source) database engines. Please note this is mostly a single connection benchmark run on one computer, with many very simple operations running against the database. This benchmark does not include very complex queries. The embedded mode of H2 is faster than the client-server mode because the per-statement overhead is greatly reduced.
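The "statements per second" rows in the benchmark tables below are derived directly from the executed-statement count and the total time. A minimal sketch of the calculation (the class and method names are invented for illustration):

```java
// Sketch of how the benchmark's throughput figure is derived:
// statements per second = executed statements / total time in seconds.
public class ThroughputSketch {
    public static long perSecond(long statements, long totalMillis) {
        // Multiply first to keep integer precision before dividing.
        return statements * 1000 / totalMillis;
    }
}
```

For example, with 322929 statements in 5337 ms (the embedded-mode H2 result), this yields about 60507 statements per second, matching the table.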
@performance_1012_h3 Embedded @performance_1013_th Test Case @performance_1014_th Unit @performance_1015_th H2 @performance_1016_th HSQLDB @performance_1017_th Derby @performance_1018_td Simple: Init @performance_1019_td ms @performance_1020_td 241 @performance_1021_td 431 @performance_1022_td 1027 @performance_1023_td Simple: Query (random) @performance_1024_td ms @performance_1025_td 193 @performance_1026_td 267 @performance_1027_td 748 @performance_1028_td Simple: Query (sequential) @performance_1029_td ms @performance_1030_td 89 @performance_1031_td 179 @performance_1032_td 658 @performance_1033_td Simple: Update (random) @performance_1034_td ms @performance_1035_td 406 @performance_1036_td 772 @performance_1037_td 12175 @performance_1038_td Simple: Delete (sequential) @performance_1039_td ms @performance_1040_td 155 @performance_1041_td 266 @performance_1042_td 6281 @performance_1043_td Simple: Memory Usage @performance_1044_td MB @performance_1045_td 7 @performance_1046_td 13 @performance_1047_td 16 @performance_1048_td BenchA: Init @performance_1049_td ms @performance_1050_td 200 @performance_1051_td 251 @performance_1052_td 1075 @performance_1053_td BenchA: Transactions @performance_1054_td ms @performance_1055_td 1071 @performance_1056_td 1458 @performance_1057_td 8142 @performance_1058_td BenchA: Memory Usage @performance_1059_td MB @performance_1060_td 8 @performance_1061_td 14 @performance_1062_td 12 @performance_1063_td BenchB: Init @performance_1064_td ms @performance_1065_td 787 @performance_1066_td 1584 @performance_1067_td 4163 @performance_1068_td BenchB: Transactions @performance_1069_td ms @performance_1070_td 465 @performance_1071_td 875 @performance_1072_td 2744 @performance_1073_td BenchB: Memory Usage @performance_1074_td MB @performance_1075_td 17 @performance_1076_td 13 @performance_1077_td 10 @performance_1078_td BenchC: Init @performance_1079_td ms @performance_1080_td 348 @performance_1081_td 225 @performance_1082_td 922 
@performance_1083_td BenchC: Transactions @performance_1084_td ms @performance_1085_td 1382 @performance_1086_td 865 @performance_1087_td 3527 @performance_1088_td BenchC: Memory Usage @performance_1089_td MB @performance_1090_td 12 @performance_1091_td 20 @performance_1092_td 11 @performance_1093_td Executed statements @performance_1094_td # @performance_1095_td 322929 @performance_1096_td 322929 @performance_1097_td 322929 @performance_1098_td Total time @performance_1099_td ms @performance_1100_td 5337 @performance_1101_td 7173 @performance_1102_td 41462 @performance_1103_td Statements per second @performance_1104_td # @performance_1105_td 60507 @performance_1106_td 45020 @performance_1107_td 7788 @performance_1108_h3 Client-Server @performance_1109_th Test Case @performance_1110_th Unit @performance_1111_th H2 @performance_1112_th HSQLDB @performance_1113_th Derby @performance_1114_th PostgreSQL @performance_1115_th MySQL @performance_1116_td Simple: Init @performance_1117_td ms @performance_1118_td 1715 @performance_1119_td 2096 @performance_1120_td 3008 @performance_1121_td 3093 @performance_1122_td 3084 @performance_1123_td Simple: Query (random) @performance_1124_td ms @performance_1125_td 2615 @performance_1126_td 2119 @performance_1127_td 4450 @performance_1128_td 3201 @performance_1129_td 3313 @performance_1130_td Simple: Query (sequential) @performance_1131_td ms @performance_1132_td 2531 @performance_1133_td 1944 @performance_1134_td 4019 @performance_1135_td 3163 @performance_1136_td 3295 @performance_1137_td Simple: Update (random) @performance_1138_td ms @performance_1139_td 1862 @performance_1140_td 2486 @performance_1141_td 13929 @performance_1142_td 4404 @performance_1143_td 4391 @performance_1144_td Simple: Delete (sequential) @performance_1145_td ms @performance_1146_td 778 @performance_1147_td 1118 @performance_1148_td 7032 @performance_1149_td 1682 @performance_1150_td 1882 @performance_1151_td Simple: Memory Usage @performance_1152_td MB 
@performance_1153_td 8 @performance_1154_td 14 @performance_1155_td 18 @performance_1156_td 1 @performance_1157_td 2 @performance_1158_td BenchA: Init @performance_1159_td ms @performance_1160_td 1264 @performance_1161_td 1686 @performance_1162_td 2734 @performance_1163_td 2867 @performance_1164_td 3225 @performance_1165_td BenchA: Transactions @performance_1166_td ms @performance_1167_td 5998 @performance_1168_td 6829 @performance_1169_td 14323 @performance_1170_td 11491 @performance_1171_td 10571 @performance_1172_td BenchA: Memory Usage @performance_1173_td MB @performance_1174_td 9 @performance_1175_td 18 @performance_1176_td 14 @performance_1177_td 1 @performance_1178_td 2 @performance_1179_td BenchB: Init @performance_1180_td ms @performance_1181_td 5571 @performance_1182_td 7553 @performance_1183_td 11636 @performance_1184_td 12226 @performance_1185_td 12553 @performance_1186_td BenchB: Transactions @performance_1187_td ms @performance_1188_td 1931 @performance_1189_td 3417 @performance_1190_td 3435 @performance_1191_td 2407 @performance_1192_td 2149 @performance_1193_td BenchB: Memory Usage @performance_1194_td MB @performance_1195_td 18 @performance_1196_td 16 @performance_1197_td 13 @performance_1198_td 2 @performance_1199_td 2 @performance_1200_td BenchC: Init @performance_1201_td ms @performance_1202_td 1333 @performance_1203_td 968 @performance_1204_td 1769 @performance_1205_td 1693 @performance_1206_td 2645 @performance_1207_td BenchC: Transactions @performance_1208_td ms @performance_1209_td 6508 @performance_1210_td 4330 @performance_1211_td 7910 @performance_1212_td 7564 @performance_1213_td 6108 @performance_1214_td BenchC: Memory Usage @performance_1215_td MB @performance_1216_td 12 @performance_1217_td 21 @performance_1218_td 14 @performance_1219_td 2 @performance_1220_td 2 @performance_1221_td Executed statements @performance_1222_td # @performance_1223_td 322929 @performance_1224_td 322929 @performance_1225_td 322929 @performance_1226_td 
322929 @performance_1227_td 322929 @performance_1228_td Total time @performance_1229_td ms @performance_1230_td 32106 @performance_1231_td 34546 @performance_1232_td 74245 @performance_1233_td 53791 @performance_1234_td 53216 @performance_1235_td Statements per second @performance_1236_td # @performance_1237_td 10058 @performance_1238_td 9347 @performance_1239_td 4349 @performance_1240_td 6003 @performance_1241_td 6068 @performance_1242_h3 Benchmark Results and Comments @performance_1243_h4 H2 @performance_1244_p Version 1.2.137 (2010-06-06) was used for the test. For most operations, the performance of H2 is about the same as for HSQLDB. One situation where H2 is slow is large result sets, because they are buffered to disk if more than a certain number of records are returned. The advantage of buffering is: there is no limit on the result set size. @performance_1245_h4 HSQLDB @performance_1246_p Version 2.0.0 was used for the test. Cached tables are used in this test (hsqldb.default_table_type=cached), and the write delay is 1 second (SET WRITE_DELAY 1). @performance_1247_h4 Derby @performance_1248_p Version 10.6.1.0 was used for the test. Derby is clearly the slowest embedded database in this test. This seems to be a structural problem, because all operations are really slow. It will be hard for the developers of Derby to improve the performance to a reasonable level. A few problems have been identified: leaving autocommit on is a problem for Derby. If it is switched off during the whole test, the results are about 20% better for Derby. Derby calls FileChannel.force(false), but only twice per log file (not on each commit). Disabling this call improves performance for Derby by about 2%. Unlike H2, Derby does not call FileDescriptor.sync() on each checkpoint. Derby supports a testing mode (system property derby.system.durability=test) where durability is disabled. 
According to the documentation, this setting should be used for testing only, as the database may not recover after a crash. Enabling this setting improves performance by a factor of 2.6 (embedded mode) or 1.4 (server mode). Even if enabled, Derby is still less than half as fast as H2 in default mode. @performance_1249_h4 PostgreSQL @performance_1250_p Version 8.4.4 was used for the test. The following options were changed in postgresql.conf: fsync = off, commit_delay = 1000. PostgreSQL is run in server mode. The memory usage number is incorrect, because only the memory usage of the JDBC driver is measured. @performance_1251_h4 MySQL @performance_1252_p Version 5.1.47-community was used for the test. MySQL was run with the InnoDB backend. The setting innodb_flush_log_at_trx_commit (found in the my.ini / my.cnf file) was set to 0. Otherwise (and by default), MySQL is slow (around 140 statements per second in this test) because it tries to flush the data to disk for each commit. For small transactions (when autocommit is on) this is really slow. But many use cases use small or relatively small transactions. Unfortunately, this setting is not listed in the configuration wizard, and it is always overwritten when using the wizard. You need to change this setting manually in the file my.ini / my.cnf, and then restart the service. The memory usage number is incorrect, because only the memory usage of the JDBC driver is measured. @performance_1253_h4 Firebird @performance_1254_p Firebird 1.5 (default installation) was tested, but the results are not published currently. It is possible to run the performance test with the Firebird database, and any information on how to configure Firebird for higher performance is welcome. @performance_1255_h4 Why Oracle / MS SQL Server / DB2 are Not Listed @performance_1256_p The license of these databases does not allow publishing benchmark results. This doesn't mean that they are fast. They are in fact quite slow, and need a lot of memory.
But you will need to test this yourself. SQLite was not tested because the JDBC driver doesn't support transactions. @performance_1257_h3 About this Benchmark @performance_1258_h4 How to Run @performance_1259_p This test was run as follows: @performance_1260_h4 Separate Process per Database @performance_1261_p For each database, a new process is started, to ensure the previous test does not impact the current test. @performance_1262_h4 Number of Connections @performance_1263_p This is mostly a single-connection benchmark. BenchB uses multiple connections; the other tests use one connection. @performance_1264_h4 Real-World Tests @performance_1265_p Good benchmarks emulate real-world use cases. This benchmark includes 4 test cases: BenchSimple uses one table and many small updates / deletes. BenchA is similar to the TPC-A test, but single connection / single threaded (see also: www.tpc.org). BenchB is similar to the TPC-B test, using multiple connections (one thread per connection). BenchC is similar to the TPC-C test, but single connection / single threaded. @performance_1266_h4 Comparing Embedded with Server Databases @performance_1267_p This is mainly a benchmark for embedded databases (where the application runs in the same virtual machine as the database engine). However, MySQL and PostgreSQL are not Java databases and cannot be embedded into a Java application. For the Java databases, both embedded and server modes are tested. @performance_1268_h4 Test Platform @performance_1269_p This test is run on Mac OS X 10.6. No virus scanner was used, and disk indexing was disabled. The JVM used is Sun JDK 1.6. @performance_1270_h4 Multiple Runs @performance_1271_p When a Java benchmark is run first, the code is not fully compiled and therefore runs slower than when running multiple times. A benchmark should always run the same test multiple times and ignore the first run(s). This benchmark runs three times, but only the last run is measured.
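The "run multiple times, measure only the last run" advice above can be sketched as follows. The workload here is an arbitrary stand-in (a real benchmark would execute SQL statements); the point is that timings from the warm-up runs are discarded so JIT compilation does not distort the result.

```java
// Sketch of the "measure only the last run" pattern used by the benchmark.
public class WarmupSketch {

    // Arbitrary stand-in workload, for illustration only.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Run the same test several times; keep only the last run's time,
    // so earlier (not yet fully compiled) runs are ignored.
    public static long measureLastRunNanos(int runs, int n) {
        long elapsed = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload(n);
            elapsed = System.nanoTime() - start; // overwritten each run
        }
        return elapsed;
    }
}
```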
@performance_1272_h4 Memory Usage @performance_1273_p It is not enough to measure the time taken, the memory usage is important as well. Performance can be improved by using a bigger cache, but the amount of memory is limited. HSQLDB tables are kept fully in memory by default; this benchmark uses 'disk based' tables for all databases. Unfortunately, it is not so easy to calculate the memory usage of PostgreSQL and MySQL, because they run in a different process than the test. This benchmark currently does not print memory usage of those databases. @performance_1274_h4 Delayed Operations @performance_1275_p Some databases delay some operations (for example flushing the buffers) until after the benchmark is run. This benchmark waits between each database tested, and each database runs in a different process (sequentially). @performance_1276_h4 Transaction Commit / Durability @performance_1277_p Durability means a transaction committed to the database will not be lost. Some databases (for example MySQL) try to enforce this by default by calling fsync() to flush the buffers, but most hard drives don't actually flush all data. Calling the method slows down transaction commit a lot, but doesn't always make data durable. When comparing the results, it is important to think about this effect. Many databases suggest 'batching' operations when possible. This benchmark switches off autocommit when loading the data, and calls commit after each 1000 inserts. However many applications need 'short' transactions at runtime (a commit after each update). This benchmark commits after each update / delete in the simple benchmark, and after each business transaction in the other benchmarks. For databases that support delayed commits, a delay of one second is used. @performance_1278_h4 Using Prepared Statements @performance_1279_p Wherever possible, the test cases use prepared statements.
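The commit-batching strategy described above (autocommit off while loading, commit after every 1000 inserts) can be quantified with a small sketch. The class and method names are invented; the JDBC calls themselves are omitted since no live connection is assumed here.

```java
// Sketch: how many commits a bulk load issues when committing after
// every 'batchSize' inserts, plus one final commit for a partial batch.
public class BatchCommitSketch {
    public static int commitsForLoad(int rows, int batchSize) {
        int commits = rows / batchSize;
        if (rows % batchSize != 0) {
            commits++; // final commit for the remaining rows
        }
        return commits;
    }
}
```

Loading 100000 rows with a batch size of 1000 thus needs only 100 commits, instead of 100000 with autocommit on, which is why batching matters so much for databases that fsync() on each commit.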
@performance_1280_h4 Currently Not Tested: Startup Time @performance_1281_p The startup time of a database engine is important as well for embedded use. This time is not measured currently. Also not tested is the time used to create a database and open an existing database. Here, one (wrapper) connection is opened at the start, and for each step a new connection is opened and then closed. @performance_1282_h2 PolePosition Benchmark @performance_1283_p PolePosition is an open source benchmark. The algorithms are all quite simple. It was developed / sponsored by db4o. This test has not been run for some time, so please be aware that the results below are for older database versions (H2 version 1.1, HSQLDB 1.8, Java 1.4). @performance_1284_th Test Case @performance_1285_th Unit @performance_1286_th H2 @performance_1287_th HSQLDB @performance_1288_th MySQL @performance_1289_td Melbourne write @performance_1290_td ms @performance_1291_td 369 @performance_1292_td 249 @performance_1293_td 2022 @performance_1294_td Melbourne read @performance_1295_td ms @performance_1296_td 47 @performance_1297_td 49 @performance_1298_td 93 @performance_1299_td Melbourne read_hot @performance_1300_td ms @performance_1301_td 24 @performance_1302_td 43 @performance_1303_td 95 @performance_1304_td Melbourne delete @performance_1305_td ms @performance_1306_td 147 @performance_1307_td 133 @performance_1308_td 176 @performance_1309_td Sepang write @performance_1310_td ms @performance_1311_td 965 @performance_1312_td 1201 @performance_1313_td 3213 @performance_1314_td Sepang read @performance_1315_td ms @performance_1316_td 765 @performance_1317_td 948 @performance_1318_td 3455 @performance_1319_td Sepang read_hot @performance_1320_td ms @performance_1321_td 789 @performance_1322_td 859 @performance_1323_td 3563 @performance_1324_td Sepang delete @performance_1325_td ms @performance_1326_td 1384 @performance_1327_td 1596 @performance_1328_td 6214 @performance_1329_td Bahrain write
@performance_1330_td ms @performance_1331_td 1186 @performance_1332_td 1387 @performance_1333_td 6904 @performance_1334_td Bahrain query_indexed_string @performance_1335_td ms @performance_1336_td 336 @performance_1337_td 170 @performance_1338_td 693 @performance_1339_td Bahrain query_string @performance_1340_td ms @performance_1341_td 18064 @performance_1342_td 39703 @performance_1343_td 41243 @performance_1344_td Bahrain query_indexed_int @performance_1345_td ms @performance_1346_td 104 @performance_1347_td 134 @performance_1348_td 678 @performance_1349_td Bahrain update @performance_1350_td ms @performance_1351_td 191 @performance_1352_td 87 @performance_1353_td 159 @performance_1354_td Bahrain delete @performance_1355_td ms @performance_1356_td 1215 @performance_1357_td 729 @performance_1358_td 6812 @performance_1359_td Imola retrieve @performance_1360_td ms @performance_1361_td 198 @performance_1362_td 194 @performance_1363_td 4036 @performance_1364_td Barcelona write @performance_1365_td ms @performance_1366_td 413 @performance_1367_td 832 @performance_1368_td 3191 @performance_1369_td Barcelona read @performance_1370_td ms @performance_1371_td 119 @performance_1372_td 160 @performance_1373_td 1177 @performance_1374_td Barcelona query @performance_1375_td ms @performance_1376_td 20 @performance_1377_td 5169 @performance_1378_td 101 @performance_1379_td Barcelona delete @performance_1380_td ms @performance_1381_td 388 @performance_1382_td 319 @performance_1383_td 3287 @performance_1384_td Total @performance_1385_td ms @performance_1386_td 26724 @performance_1387_td 53962 @performance_1388_td 87112 @performance_1389_p There are a few problems with the PolePosition test: @performance_1390_li HSQLDB uses in-memory tables by default while H2 uses persistent tables. 
The HSQLDB version included in PolePosition does not support changing this, so you need to replace poleposition-0.20/lib/hsqldb.jar with a newer version (for example hsqldb-1.8.0.7.jar), and then use the setting hsqldb.connecturl=jdbc:hsqldb:file:data/hsqldb/dbbench2;hsqldb.default_table_type=cached;sql.enforce_size=true in the file Jdbc.properties. @performance_1391_li HSQLDB keeps the database open between tests, while H2 closes the database (losing all the cache). To change that, use the database URL jdbc:h2:file:data/h2/dbbench;DB_CLOSE_DELAY=-1 @performance_1392_li The amount of cache memory is quite important, especially for the PolePosition test. Unfortunately, the PolePosition test does not take this into account. @performance_1393_h2 Database Performance Tuning @performance_1394_h3 Keep Connections Open or Use a Connection Pool @performance_1395_p If your application opens and closes connections a lot (for example, for each request), you should consider using a connection pool. Opening a connection using DriverManager.getConnection is especially slow if the database is closed. By default the database is closed if the last connection is closed. @performance_1396_p If you open and close connections a lot but don't want to use a connection pool, consider keeping a 'sentinel' connection open for as long as the application runs, or use delayed database closing. See also Closing a database. @performance_1397_h3 Use a Modern JVM @performance_1398_p Newer JVMs are faster. Upgrading to the latest version of your JVM can provide a "free" boost to performance. Switching from the default Client JVM to the Server JVM using the -server command-line option improves performance at the cost of a slight increase in start-up time. @performance_1399_h3 Virus Scanners @performance_1400_p Some virus scanners scan files every time they are accessed. It is very important for performance that database files are not scanned for viruses.
The database engine never interprets the data stored in the files as programs; this means that even if somebody stored a virus in a database file, it would be harmless (as the virus never runs, it cannot spread). Some virus scanners allow excluding files by suffix. Ensure files ending with .db are not scanned. @performance_1401_h3 Using the Trace Options @performance_1402_p If the performance hot spots are in the database engine, in many cases the performance can be optimized by creating additional indexes, or changing the schema. Sometimes the application does not directly generate the SQL statements, for example if an O/R mapping tool is used. To view the SQL statements and JDBC API calls, you can use the trace options. For more information, see Using the Trace Options. @performance_1403_h3 Index Usage @performance_1404_p This database uses indexes to improve the performance of SELECT, UPDATE, and DELETE statements. If a column is used in the WHERE clause of a query, and if an index exists on this column, then the index can be used. Multi-column indexes are used if all columns, or the leading columns, of the index are used. Both equality lookups and range scans are supported. Indexes are used to order result sets, but only if the condition uses the same index or no index at all. The results are sorted in memory if required. Indexes are created automatically for primary key and unique constraints. Indexes are also created for foreign key constraints, if required. For other columns, indexes need to be created manually using the CREATE INDEX statement. @performance_1405_h3 How Data is Stored Internally @performance_1406_p For persistent databases, if a table is created with a single column primary key of type BIGINT, INT, SMALLINT, TINYINT, then the data of the table is organized in this way. This is sometimes also called a "clustered index" or "index organized table". @performance_1407_p H2 internally stores table data and indexes in the form of b-trees.
Each b-tree stores entries as a list of unique keys (one or more columns) and data (zero or more columns). The table data is always organized in the form of a "data b-tree" with a single column key of type long. If a single column primary key of type BIGINT, INT, SMALLINT, TINYINT is specified when creating the table (or just after creating the table, but before inserting any rows), then this column is used as the key of the data b-tree. If no primary key has been specified, if the primary key column is of another data type, or if the primary key contains more than one column, then a hidden auto-increment column of type BIGINT is added to the table, which is used as the key for the data b-tree. All other columns of the table are stored within the data area of this data b-tree (except for large BLOB, CLOB columns, which are stored externally). @performance_1408_p For each additional index, one new "index b-tree" is created. The key of this b-tree consists of the indexed columns, plus the key of the data b-tree. If a primary key is created after the table has been created, or if the primary key contains multiple columns, or if the primary key is not of the data types listed above, then the primary key is stored in a new index b-tree. @performance_1409_h3 Optimizer @performance_1410_p This database uses a cost based optimizer. For simple queries and queries of medium complexity (less than 7 tables in the join), the expected cost (running time) of all possible plans is calculated, and the plan with the lowest cost is used. For more complex queries, the algorithm first tries all possible combinations for the first few tables, and the remaining tables are added using a greedy algorithm (this works well for most joins). Afterwards a genetic algorithm is used to test at most 2000 distinct plans. Only left-deep plans are evaluated.
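The greedy part of the plan search described above can be sketched like this. The costs are invented for illustration; the real optimizer estimates cost from indexes, row counts, and join conditions, and this sketch only shows the "always take the cheapest remaining table next" idea.

```java
import java.util.*;

// Sketch of greedy join ordering: repeatedly append the remaining table
// with the lowest estimated cost (illustration only, not H2's optimizer).
public class GreedyJoinOrderSketch {
    public static List<String> order(Map<String, Integer> estimatedCosts) {
        List<String> remaining = new ArrayList<>(estimatedCosts.keySet());
        List<String> plan = new ArrayList<>();
        while (!remaining.isEmpty()) {
            String best = remaining.get(0);
            for (String table : remaining) {
                if (estimatedCosts.get(table) < estimatedCosts.get(best)) {
                    best = table;
                }
            }
            remaining.remove(best);
            plan.add(best); // left-deep: each table is joined to the result so far
        }
        return plan;
    }
}
```

A real optimizer also has to re-estimate costs after each choice (a table's cost depends on what was already joined), which is why H2 additionally tries exhaustive and genetic search for the harder cases.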
@performance_1411_h3 Expression Optimization @performance_1412_p After the statement is parsed, all expressions are simplified automatically if possible. Operations are evaluated only once if all parameters are constant. Functions are also optimized, but only if the function is constant (always returns the same result for the same parameter values). If the WHERE clause is always false, then the table is not accessed at all. @performance_1413_h3 COUNT(*) Optimization @performance_1414_p If the query only counts all rows of a table, then the data is not accessed. However, this is only possible if no WHERE clause is used, that means it only works for queries of the form SELECT COUNT(*) FROM table. @performance_1415_h3 Updating Optimizer Statistics / Column Selectivity @performance_1416_p When executing a query, at most one index per join can be used. If the same table is joined multiple times, for each join only one index is used (the same index could be used for both joins, or each join could use a different index). Example: for the query SELECT * FROM TEST T1, TEST T2 WHERE T1.NAME='A' AND T2.ID=T1.ID, two indexes can be used, in this case the index on NAME for T1 and the index on ID for T2. @performance_1417_p If a table has multiple indexes, sometimes more than one index could be used. Example: if there is a table TEST(ID, NAME, FIRSTNAME) and an index on each column, then two indexes could be used for the query SELECT * FROM TEST WHERE NAME='A' AND FIRSTNAME='B', the index on NAME or the index on FIRSTNAME. It is not possible to use both indexes at the same time. Which index is used depends on the selectivity of the column. The selectivity describes the 'uniqueness' of values in a column. A selectivity of 100 means each value appears only once, and a selectivity of 1 means the same value appears in many or most rows. For the query above, the index on NAME should be used if the table contains more distinct names than first names.
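The selectivity measure described above can be computed as in the following sketch (simplified: ANALYZE uses an estimate over a sample of rows rather than an exact distinct count, and the class name here is invented):

```java
import java.util.*;

// Sketch of column selectivity: 100 means all values are distinct,
// low values mean the same value appears in many rows.
public class SelectivitySketch {
    public static int selectivity(List<String> columnValues) {
        if (columnValues.isEmpty()) {
            return 0;
        }
        Set<String> distinct = new HashSet<>(columnValues);
        // Scale distinct/total to 1..100.
        return Math.max(1, 100 * distinct.size() / columnValues.size());
    }
}
```

With this measure, a column of unique names scores 100 and a column where half the rows share a value scores lower, which is why the optimizer prefers the index on the column with more distinct values.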
@performance_1418_p The SQL statement ANALYZE can be used to automatically estimate the selectivity of the columns in the tables. This command should be run from time to time to improve the query plans generated by the optimizer. @performance_1419_h3 In-Memory (Hash) Indexes @performance_1420_p Using in-memory indexes, especially in-memory hash indexes, can speed up queries and data manipulation. @performance_1421_p In-memory indexes are automatically used for in-memory databases, but can also be created for persistent databases using CREATE MEMORY TABLE. In many cases, the rows themselves will also be kept in memory. Please note this may cause memory problems for large tables. @performance_1422_p In-memory hash indexes are backed by a hash table and are usually faster than regular indexes. However, hash indexes only support direct lookups (WHERE ID = ?) but not range scans (WHERE ID < ?). To use hash indexes, use HASH as in: CREATE UNIQUE HASH INDEX and CREATE TABLE ...(ID INT PRIMARY KEY HASH,...). @performance_1423_h3 Use Prepared Statements @performance_1424_p If possible, use prepared statements with parameters. @performance_1425_h3 Prepared Statements and IN(...) @performance_1426_p Avoid generating SQL statements with a variable size IN(...) list. Instead, use a prepared statement with arrays as in the following example: @performance_1427_h3 Optimization Examples @performance_1428_p See src/test/org/h2/samples/optimizations.sql for a few examples of queries that benefit from special optimizations built into the database. @performance_1429_h3 Cache Size and Type @performance_1430_p By default the cache size of H2 is quite small. Consider using a larger cache size, or enable the second level soft reference cache. See also Cache Settings.
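The hash index limitation mentioned above (direct lookups only, no range scans) mirrors the difference between the JDK's own HashMap and TreeMap, which can serve as a rough mental model (the class and method names below are invented for illustration):

```java
import java.util.*;

// Rough analogy for index types (illustration only, not H2 internals):
// a hash index behaves like a HashMap, a regular b-tree index like a TreeMap.
public class IndexLookupSketch {

    // A hash index supports only direct lookup: WHERE ID = ?
    public static Integer directLookup(HashMap<Integer, Integer> hashIndex, int id) {
        return hashIndex.get(id);
    }

    // A sorted (b-tree) index also supports range scans: WHERE ID < ?
    public static Collection<Integer> rangeScan(TreeMap<Integer, Integer> sortedIndex, int below) {
        return sortedIndex.headMap(below).values();
    }
}
```

This is why CREATE UNIQUE HASH INDEX is a good fit for equality-only lookups, while range conditions still need a regular index.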
@performance_1431_h3 Data Types @performance_1432_p Each data type has different storage and performance characteristics: @performance_1433_li The DECIMAL/NUMERIC type is slower and requires more storage than the REAL and DOUBLE types. @performance_1434_li Text types are slower to read, write, and compare than numeric types and generally require more storage. @performance_1435_li See Large Objects for information on BINARY vs. BLOB and VARCHAR vs. CLOB performance. @performance_1436_li Parsing and formatting take longer for the TIME, DATE, and TIMESTAMP types than for the numeric types. @performance_1437_code SMALLINT/TINYINT/BOOLEAN @performance_1438_li are not significantly smaller or faster to work with than INTEGER in most modes. @performance_1439_h3 Sorted Insert Optimization @performance_1440_p To reduce disk space usage and speed up table creation, an optimization for sorted inserts is available. When used, b-tree pages are split at the insertion point. To use this optimization, add SORTED before the SELECT statement: @performance_1441_h2 Using the Built-In Profiler @performance_1442_p A very simple Java profiler is built-in. To use it, use the following template: @performance_1443_h2 Application Profiling @performance_1444_h3 Analyze First @performance_1445_p Before trying to optimize performance, it is important to understand where the problem is (what part of the application is slow). Blind optimization or optimization based on guesses should be avoided, because it is usually not an efficient strategy. There are various ways to analyze an application. Sometimes two implementations can be compared using System.currentTimeMillis(). But this does not work for complex applications with many modules, or for memory problems. @performance_1446_p A simple way to profile an application is to use the built-in profiling tool of Java. Example: @performance_1447_p Unfortunately, it is only possible to profile the application from start to end. 
Another solution is to create a number of full thread dumps. To do that, first run jps -l to get the process id, and then run jstack <pid> or kill -QUIT <pid> (Linux) or press Ctrl+C (Windows). @performance_1448_p A simple profiling tool is included in H2. To use it, the application needs to be changed slightly. Example: @performance_1449_p The profiler is built into the H2 Console tool, to analyze databases that open slowly. To use it, run the H2 Console, and then click on 'Test Connection'. Afterwards, click on "Test successful" and you get the most common stack traces, which helps to find out why it took so long to connect. You will only get the stack traces if opening the database took more than a few seconds. @performance_1450_h2 Database Profiling @performance_1451_p The ConvertTraceFile tool generates SQL statement statistics at the end of the SQL script file. The format used is similar to the profiling data generated when using java -Xrunhprof. For this to work, the trace level needs to be 2 or higher (TRACE_LEVEL_FILE=2). The easiest way to set the trace level is to append the setting to the database URL, for example: jdbc:h2:~/test;TRACE_LEVEL_FILE=2 or jdbc:h2:tcp://localhost/~/test;TRACE_LEVEL_FILE=2. As an example, execute the following script using the H2 Console: @performance_1452_p After running the test case, convert the .trace.db file using the ConvertTraceFile tool. The trace file is located in the same directory as the database file. @performance_1453_p The generated file test.sql will contain the SQL statements as well as the following profiling data (results vary): @performance_1454_h2 Statement Execution Plans @performance_1455_p The SQL statement EXPLAIN displays the indexes and optimizations the database uses for a statement. The following statements support EXPLAIN: SELECT, UPDATE, DELETE, MERGE, INSERT. 
The following query shows that the database uses the primary key index to search for rows: @performance_1456_p For joins, the tables in the execution plan are sorted in the order they are processed. The following query shows the database first processes the table INVOICE (using the primary key). For each row, it will additionally check that the value of the column AMOUNT is larger than zero, and for those rows the database will search in the table CUSTOMER (using the primary key). The query plan contains some redundancy so that it is a valid statement. @performance_1457_h3 Displaying the Scan Count @performance_1458_code EXPLAIN ANALYZE @performance_1459_p additionally shows the scanned rows per table and pages read from disk per table or index. This will actually execute the query, unlike EXPLAIN which only prepares it. The following query scanned 1000 rows, and to do that had to read 85 pages from the data area of the table. Running the query twice will not list the pages read from disk, because they are now in the cache. The tableScan means this query doesn't use an index. @performance_1460_p The cache prevents the pages from being read twice. H2 reads all columns of the row unless only the columns in the index are read. Except for large CLOB and BLOB values, which are not stored in the table. @performance_1461_h3 Special Optimizations @performance_1462_p For certain queries, the database doesn't need to read all rows, or doesn't need to sort the result even if ORDER BY is used. @performance_1463_p For queries of the form SELECT COUNT(*), MIN(ID), MAX(ID) FROM TEST, the query plan includes the line /* direct lookup */ if the data can be read from an index. @performance_1464_p For queries of the form SELECT DISTINCT CUSTOMER_ID FROM INVOICE, the query plan includes the line /* distinct */ if there is a non-unique or multi-column index on this column, and if this column has a low selectivity. 
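The /* direct lookup */ optimization mentioned above can be sketched with an ordered map standing in for an index on ID. This is only an illustration of the idea, not H2's actual code:

```java
import java.util.TreeMap;

// Sketch of the /* direct lookup */ optimization: with an ordered index
// on ID, COUNT(*), MIN(ID) and MAX(ID) can be answered from the index
// structure alone, without scanning any table rows.
public class DirectLookup {

    static int[] countMinMax(TreeMap<Integer, String> idIndex) {
        return new int[] {
            idIndex.size(),      // COUNT(*)
            idIndex.firstKey(),  // MIN(ID)
            idIndex.lastKey()    // MAX(ID)
        };
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> idIndex = new TreeMap<>();
        for (int id = 10; id <= 50; id += 10) {
            idIndex.put(id, "row" + id);
        }
        int[] result = countMinMax(idIndex);
        // 5, 10, 50
        System.out.println(result[0] + ", " + result[1] + ", " + result[2]);
    }
}
```

Because the index is ordered, the count and both extremes come from the index structure itself, which is why the query plan can report that no rows were read.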
@performance_1465_p For queries of the form SELECT * FROM TEST ORDER BY ID, the query plan includes the line /* index sorted */ to indicate there is no separate sorting required. @performance_1466_p For queries of the form SELECT * FROM TEST GROUP BY ID ORDER BY ID, the query plan includes the line /* group sorted */ to indicate there is no separate sorting required. @performance_1467_h2 How Data is Stored and How Indexes Work @performance_1468_p Internally, each row in a table is identified by a unique number, the row id. The rows of a table are stored with the row id as the key. The row id is a number of type long. If a table has a single column primary key of type INT or BIGINT, then the value of this column is the row id, otherwise the database generates the row id automatically. There is a (non-standard) way to access the row id: using the _ROWID_ pseudo-column: @performance_1469_p The data is stored in the database as follows: @performance_1470_th _ROWID_ @performance_1471_th FIRST_NAME @performance_1472_th NAME @performance_1473_th CITY @performance_1474_th PHONE @performance_1475_td 1 @performance_1476_td John @performance_1477_td Miller @performance_1478_td Berne @performance_1479_td 123 456 789 @performance_1480_td 2 @performance_1481_td Philip @performance_1482_td Jones @performance_1483_td Berne @performance_1484_td 123 012 345 @performance_1485_p Access by row id is fast because the data is sorted by this key. If the query condition does not contain the row id (and if no other index can be used), then all rows of the table are scanned. A table scan iterates over all rows in the table, in the order of the row id. To find out what strategy the database uses to retrieve the data, use EXPLAIN SELECT: @performance_1486_h3 Indexes @performance_1487_p An index internally is basically just a table that contains the indexed column(s), plus the row id: @performance_1488_p In the index, the data is sorted by the indexed columns. 
So this index contains the following data: @performance_1489_th CITY @performance_1490_th NAME @performance_1491_th FIRST_NAME @performance_1492_th _ROWID_ @performance_1493_td Berne @performance_1494_td Jones @performance_1495_td Philip @performance_1496_td 2 @performance_1497_td Berne @performance_1498_td Miller @performance_1499_td John @performance_1500_td 1 @performance_1501_p When the database uses an index to query the data, it searches the index for the given data, and (if required) reads the remaining columns in the main data table (retrieved using the row id). An index on city, name, and first name (a multi-column index) makes it possible to quickly search for rows when the city, name, and first name are known. If only the city and name, or only the city is known, then this index is also used (so creating an additional index on just the city is not needed). This index is also used when reading all rows, sorted by the indexed columns. However, if only the first name is known, then this index is not used: @performance_1502_p If your application often queries the table for a phone number, then it makes sense to create an additional index on it: @performance_1503_p This index contains the phone number, and the row id: @performance_1504_th PHONE @performance_1505_th _ROWID_ @performance_1506_td 123 012 345 @performance_1507_td 2 @performance_1508_td 123 456 789 @performance_1509_td 1 @performance_1510_h3 Using Multiple Indexes @performance_1511_p Within a query, only one index per logical table is used. Using the condition PHONE = '123 567 789' OR CITY = 'Berne' would use a table scan instead of first using the index on the phone number and then the index on the city. It makes sense to write two queries and combine them using UNION. 
In this case, each individual query uses a different index: @performance_1512_h2 Fast Database Import @performance_1513_p To speed up large imports, consider using the following options temporarily: @performance_1514_code SET LOG 0 @performance_1515_li (disabling the transaction log) @performance_1516_code SET CACHE_SIZE @performance_1517_li (a large cache is faster) @performance_1518_code SET LOCK_MODE 0 @performance_1519_li (disable locking) @performance_1520_code SET UNDO_LOG 0 @performance_1521_li (disable the session undo log) @performance_1522_p These options can be set in the database URL: jdbc:h2:~/test;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0. Most of those options are not recommended for regular use, which means you need to reset them after use. @performance_1523_p If you have to import a lot of rows, use a PreparedStatement or use CSV import. Please note that CREATE TABLE(...) ... AS SELECT ... is faster than CREATE TABLE(...); INSERT INTO ... SELECT .... @quickstart_1000_h1 Quickstart @quickstart_1001_a Embedding H2 in an Application @quickstart_1002_a The H2 Console Application @quickstart_1003_h2 Embedding H2 in an Application @quickstart_1004_p This database can be used in embedded mode, or in server mode. To use it in embedded mode, you need to: @quickstart_1005_li Add the h2*.jar to the classpath (H2 does not have any dependencies) @quickstart_1006_li Use the JDBC driver class: org.h2.Driver @quickstart_1007_li The database URL jdbc:h2:~/test opens the database test in your user home directory @quickstart_1008_li A new database is automatically created @quickstart_1009_h2 The H2 Console Application @quickstart_1010_p The Console lets you access a SQL database using a browser interface. @quickstart_1011_p If you don't have Windows XP, or if something does not work as expected, please see the detailed description in the Tutorial. 
@quickstart_1012_h3 Step-by-Step @quickstart_1013_h4 Installation @quickstart_1014_p Install the software using the Windows Installer (if you did not yet do that). @quickstart_1015_h4 Start the Console @quickstart_1016_p Click [Start], [All Programs], [H2], and [H2 Console (Command Line)]: @quickstart_1017_p A new console window appears: @quickstart_1018_p Also, a new browser page should open with the URL http://localhost:8082. You may get a security warning from the firewall. If you don't want other computers in the network to access the database on your machine, you can let the firewall block these connections. Only local connections are required at this time. @quickstart_1019_h4 Login @quickstart_1020_p Select [Generic H2] and click [Connect]: @quickstart_1021_p You are now logged in. @quickstart_1022_h4 Sample @quickstart_1023_p Click on the [Sample SQL Script]: @quickstart_1024_p The SQL commands appear in the command area. @quickstart_1025_h4 Execute @quickstart_1026_p Click [Run] @quickstart_1027_p On the left side, a new entry TEST is added below the database icon. The operations and results of the statements are shown below the script. @quickstart_1028_h4 Disconnect @quickstart_1029_p Click on [Disconnect]: @quickstart_1030_p to close the connection. @quickstart_1031_h4 End @quickstart_1032_p Close the console window. For more information, see the Tutorial. @roadmap_1000_h1 Roadmap @roadmap_1001_p New (feature) requests will usually be added at the very end of the list. The priority is increased for important and popular requests. Of course, patches are always welcome, but are not always applied as is. See also Providing Patches. @roadmap_1002_h2 Version 1.4.x: Planned Changes @roadmap_1003_li Build the jar file for Java 6 by default (JDBC API 4.1). @roadmap_1004_li Enable the new storage format for dates (system property "h2.storeLocalTime"). Document time literals: between minus 2 million and 2 million hours with nanosecond resolution. 
@roadmap_1005_li Remove the old connection pool logic (system property "h2.fastConnectionPool"). @roadmap_1006_li Enable "h2.modifyOnWrite". @roadmap_1007_li Enable Mode.supportOffsetFetch by default, so that "select 1 fetch first 1 row only" works. @roadmap_1008_li The default user name should be an empty string and not "sa". @roadmap_1009_li Deprecate Csv.getInstance() (use the public constructor instead). @roadmap_1010_li Move ErrorCode class to org.h2.api. @roadmap_1011_li Deprecate the encryption algorithm XTEA. @roadmap_1012_li Sort order for byte arrays: currently x'99' is smaller than x'09', which is unexpected. Change if possible. @roadmap_1013_li Remove support for the old-style outer join syntax using "(+)" because it is buggy. @roadmap_1014_h2 Priority 1 @roadmap_1015_li Bugfixes. @roadmap_1016_li More tests with MULTI_THREADED=1 (and MULTI_THREADED with MVCC): Online backup (using the 'backup' statement). @roadmap_1017_li Server side cursors. @roadmap_1018_h2 Priority 2 @roadmap_1019_li Support hints for the optimizer (which index to use, enforce the join order). @roadmap_1020_li Full outer joins. @roadmap_1021_li Access rights: remember the owner of an object. Create, alter and drop privileges. COMMENT: allow owner of object to change it. Issue 208: Access rights for schemas. @roadmap_1022_li Test multi-threaded in-memory db access. @roadmap_1023_li MySQL, MS SQL Server compatibility: support case sensitive (mixed case) identifiers without quotes. @roadmap_1024_li Support GRANT SELECT, UPDATE ON [schemaName.] *. @roadmap_1025_li Migrate database tool (also from other database engines). For Oracle, maybe use DBMS_METADATA.GET_DDL / GET_DEPENDENT_DDL. @roadmap_1026_li Clustering: support mixed clustering mode (one embedded, others in server mode). @roadmap_1027_li Clustering: reads should be randomly distributed (optional) or to a designated database on RAM (parameter: READ_FROM=3). 
@roadmap_1028_li Window functions: RANK() and DENSE_RANK(), partition using OVER(). select *, count(*) over() as fullCount from ... limit 4; @roadmap_1029_li PostgreSQL catalog: use BEFORE SELECT triggers instead of views over metadata tables. @roadmap_1030_li Compatibility: automatically load functions from a script depending on the mode - see FunctionsMySQL.java, issue 211. @roadmap_1031_li Test very large databases and LOBs (up to 256 GB). @roadmap_1032_li Store all temp files in the temp directory. @roadmap_1033_li Don't use temp files, especially not deleteOnExit (bug 4513817: File.deleteOnExit consumes memory). Also to allow opening client / server (remote) connections when using LOBs. @roadmap_1034_li Sequence: add features [NO] MINVALUE, MAXVALUE, CYCLE. @roadmap_1035_li Make DDL (Data Definition) operations transactional. @roadmap_1036_li Deferred integrity checking (DEFERRABLE INITIALLY DEFERRED). @roadmap_1037_li Groovy Stored Procedures: http://groovy.codehaus.org/GSQL @roadmap_1038_li Add a migration guide (list differences between databases). @roadmap_1039_li Optimization: automatic index creation suggestion using the trace file? @roadmap_1040_li Fulltext search Lucene: analyzer configuration, mergeFactor. @roadmap_1041_li Compression performance: don't allocate buffers, compress / expand in to out buffer. @roadmap_1042_li Rebuild index functionality to shrink index size and improve performance. @roadmap_1043_li Console: add accesskey to most important commands (A, AREA, BUTTON, INPUT, LABEL, LEGEND, TEXTAREA). @roadmap_1044_li Test performance again with SQL Server, Oracle, DB2. @roadmap_1045_li Test with Spatial DB in a box / JTS: http://www.opengeospatial.org/standards/sfs - OpenGIS Implementation Specification. @roadmap_1046_li Write more tests and documentation for MVCC (Multi Version Concurrency Control). @roadmap_1047_li Find a tool to view large text files (larger than 100 MB), with find, page up and down (like less), truncate before / after. 
@roadmap_1048_li Implement, test, document XAConnection and so on. @roadmap_1049_li Pluggable data type (for streaming, hashing, compression, validation, conversion, encryption). @roadmap_1050_li CHECK: find out what makes CHECK=TRUE slow, move to CHECK2. @roadmap_1051_li Drop with invalidate views (so that source code is not lost). Check what other databases do exactly. @roadmap_1052_li Index usage for (ID, NAME)=(1, 'Hi'); document. @roadmap_1053_li Set a connection read only (Connection.setReadOnly) or using a connection parameter. @roadmap_1054_li Access rights: finer grained access control (grant access for specific functions). @roadmap_1055_li ROW_NUMBER() OVER([PARTITION BY columnName][ORDER BY columnName]). @roadmap_1056_li Version check: docs / web console (using Javascript), and maybe in the library (using TCP/IP). @roadmap_1057_li Web server classloader: override findResource / getResourceFrom. @roadmap_1058_li Cost for embedded temporary view is calculated wrong, if result is constant. @roadmap_1059_li Count index range query (count(*) where id between 10 and 20). @roadmap_1060_li Performance: update in-place. @roadmap_1061_li Clustering: when a database is back alive, automatically synchronize with the master (requires readable transaction log). @roadmap_1062_li Database file name suffix: a way to use no or a different suffix (for example using a slash). @roadmap_1063_li Eclipse plugin. @roadmap_1064_li Asynchronous queries to support publish/subscribe: SELECT ... FOR READ WAIT [maxMillisToWait]. See also MS SQL Server "Query Notification". @roadmap_1065_li Fulltext search (native): reader / tokenizer / filter. @roadmap_1066_li Linked schema using CSV files: one schema for a directory of files; support indexes for CSV files. @roadmap_1067_li iReport to support H2. @roadmap_1068_li Include SMTP (mail) client (alert on cluster failure, low disk space,...). 
@roadmap_1069_li Option for SCRIPT to only process one or a set of schemas or tables, and append to a file. @roadmap_1070_li JSON parser and functions. @roadmap_1071_li Copy database: tool with config GUI and batch mode, extensible (example: compare). @roadmap_1072_li Document, implement tool for long running transactions using user-defined compensation statements. @roadmap_1073_li Support SET TABLE DUAL READONLY. @roadmap_1074_li GCJ: what is the state now? @roadmap_1075_li Events for: database Startup, Connections, Login attempts, Disconnections, Prepare (after parsing), Web Server. See http://docs.openlinksw.com/virtuoso/fn_dbev_startup.html @roadmap_1076_li Optimization: simpler log compression. @roadmap_1077_li Support standard INFORMATION_SCHEMA tables, as defined in http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt - especially KEY_COLUMN_USAGE: http://dev.mysql.com/doc/refman/5.0/en/information-schema.html, http://www.xcdsql.org/Misc/INFORMATION_SCHEMA%20With%20Rolenames.gif @roadmap_1078_li Compatibility: in MySQL, HSQLDB, /0.0 is NULL; in PostgreSQL, Derby: division by zero. HSQLDB: 0.0e1 / 0.0e1 is NaN. @roadmap_1079_li Functional tables should accept parameters from other tables (see FunctionMultiReturn) SELECT * FROM TEST T, P2C(T.A, T.R). @roadmap_1080_li Custom class loader to reload functions on demand. @roadmap_1081_li Test http://mysql-je.sourceforge.net/ @roadmap_1082_li H2 Console: the webclient could support more features like phpMyAdmin. @roadmap_1083_li Support Oracle functions: TRUNC, NVL2, TO_CHAR, TO_DATE, TO_NUMBER. @roadmap_1084_li Work on the Java to C converter. @roadmap_1085_li The HELP information schema can be directly exposed in the Console. @roadmap_1086_li Maybe use the 0x1234 notation for binary fields, see MS SQL Server. 
@roadmap_1087_li Support Oracle CONNECT BY in some way: http://www.adp-gmbh.ch/ora/sql/connect_by.html http://philip.greenspun.com/sql/trees.html @roadmap_1088_li SQL Server 2005, Oracle: support COUNT(*) OVER(). See http://www.orafusion.com/art_anlytc.htm @roadmap_1089_li SQL 2003: http://www.wiscorp.com/sql_2003_standard.zip @roadmap_1090_li Version column (number/sequence and timestamp based). @roadmap_1091_li Optimize getGeneratedKey: send last identity after each execute (server). @roadmap_1092_li Test and document UPDATE TEST SET (ID, NAME) = (SELECT ID*10, NAME || '!' FROM TEST T WHERE T.ID=TEST.ID). @roadmap_1093_li Max memory rows / max undo log size: use block count / row size not row count. @roadmap_1094_li Support 123L syntax as in Java; example: SELECT (2000000000*2). @roadmap_1095_li Implement point-in-time recovery. @roadmap_1096_li Support PL/SQL (programming language / control flow statements). @roadmap_1097_li LIKE: improved version for larger texts (currently using naive search). @roadmap_1098_li Throw an exception when the application calls getInt on a Long (optional). @roadmap_1099_li Default date format for input and output (local date constants). @roadmap_1100_li Document ROWNUM usage for reports: SELECT ROWNUM, * FROM (subquery). @roadmap_1101_li File system that writes to two file systems (replication, replicating file system). @roadmap_1102_li Standalone tool to get relevant system properties and add it to the trace output. @roadmap_1103_li Support 'call proc(1=value)' (PostgreSQL, Oracle). @roadmap_1104_li Console: improve editing data (Tab, Shift-Tab, Enter, Up, Down, Shift+Del?). @roadmap_1105_li Console: autocomplete Ctrl+Space inserts template. @roadmap_1106_li Option to encrypt .trace.db file. @roadmap_1107_li Auto-Update feature for database, .jar file. @roadmap_1108_li ResultSet SimpleResultSet.readFromURL(String url): id varchar, state varchar, released timestamp. @roadmap_1109_li Partial indexing (see PostgreSQL). 
@roadmap_1110_li Add GUI to build a custom version (embedded, fulltext,...) using build flags. @roadmap_1111_li http://rubyforge.org/projects/hypersonic/ @roadmap_1112_li Add a sample application that runs the H2 unit test and writes the result to a file (so it can be included in the user app). @roadmap_1113_li Table order: ALTER TABLE TEST ORDER BY NAME DESC (MySQL compatibility). @roadmap_1114_li Backup tool should work with other databases as well. @roadmap_1115_li Console: -ifExists doesn't work for the console. Add a flag to disable other dbs. @roadmap_1116_li Check if 'FSUTIL behavior set disablelastaccess 1' improves the performance (fsutil behavior query disablelastaccess). @roadmap_1117_li Java static code analysis: http://pmd.sourceforge.net/ @roadmap_1118_li Java static code analysis: http://www.eclipse.org/tptp/ @roadmap_1119_li Compatibility for CREATE SCHEMA AUTHORIZATION. @roadmap_1120_li Implement Clob / Blob truncate and the remaining functionality. @roadmap_1121_li Add multiple columns at the same time with ALTER TABLE .. ADD .. ADD ... @roadmap_1122_li File locking: writing a system property to detect concurrent access from the same VM (different classloaders). @roadmap_1123_li Pure SQL triggers (example: update parent table if the child table is changed). @roadmap_1124_li Add H2 to Gem (Ruby install system). @roadmap_1125_li Support linked JCR tables. @roadmap_1126_li Native fulltext search: min word length; store word positions. @roadmap_1127_li Add an option to the SCRIPT command to generate only portable / standard SQL. @roadmap_1128_li Updatable views: create 'instead of' triggers automatically if possible (simple cases first). @roadmap_1129_li Improve create index performance. @roadmap_1130_li Compact databases without having to close the database (vacuum). @roadmap_1131_li Implement more JDBC 4.0 features. @roadmap_1132_li Support TRANSFORM / PIVOT as in MS Access. @roadmap_1133_li SELECT * FROM (VALUES (...), (...), ....) 
AS alias(f1, ...). @roadmap_1134_li Support updatable views with join on primary keys (to extend a table). @roadmap_1135_li Public interface for functions (not public static). @roadmap_1136_li Support reading the transaction log. @roadmap_1137_li Feature matrix as in i-net software. @roadmap_1138_li Updatable result set on table without primary key or unique index. @roadmap_1139_li Compatibility with Derby and PostgreSQL: VALUES(1), (2); SELECT * FROM (VALUES (1), (2)) AS myTable(c1). Issue 221. @roadmap_1140_li Allow execution time prepare for SELECT * FROM CSVREAD(?, 'columnNameString') @roadmap_1141_li Support data type INTERVAL @roadmap_1142_li Support nested transactions (possibly using savepoints internally). @roadmap_1143_li Add a benchmark for bigger databases, and one for many users. @roadmap_1144_li Compression in the result set over TCP/IP. @roadmap_1145_li Support curtimestamp (like curtime, curdate). @roadmap_1146_li Support ANALYZE {TABLE|INDEX} tableName COMPUTE|ESTIMATE|DELETE STATISTICS ptnOption options. @roadmap_1147_li Release locks (shared or exclusive) on demand @roadmap_1148_li Support OUTER UNION @roadmap_1149_li Support parameterized views (similar to CSVREAD, but using just SQL for the definition) @roadmap_1150_li A way (JDBC driver) to map a URL (jdbc:h2map:c1) to a connection object @roadmap_1151_li Support dynamic linked schema (automatically adding/updating/removing tables) @roadmap_1152_li Clustering: adding a node should be very fast and without interrupting clients (very short lock) @roadmap_1153_li Compatibility: # is the start of a single line comment (MySQL) but date quote (Access). Mode specific @roadmap_1154_li Run benchmarks with Android, Java 7, java -server @roadmap_1155_li Optimizations: faster hash function for strings. 
@roadmap_1156_li DatabaseEventListener: callback for all operations (including expected time, RUNSCRIPT) and cancel functionality @roadmap_1157_li Benchmark: add a graph to show how databases scale (performance/database size) @roadmap_1158_li Implement a SQLData interface to map your data over to a custom object @roadmap_1159_li In the MySQL and PostgreSQL mode, use lower case identifiers by default (DatabaseMetaData.storesLowerCaseIdentifiers = true) @roadmap_1160_li Support multiple directories (on different hard drives) for the same database @roadmap_1161_li Server protocol: use challenge response authentication, but client sends hash(user+password) encrypted with response @roadmap_1162_li Support EXEC[UTE] (doesn't return a result set, compatible with MS SQL Server) @roadmap_1163_li Support native XML data type - see http://en.wikipedia.org/wiki/SQL/XML @roadmap_1164_li Support triggers with a string property or option: SpringTrigger, OSGITrigger @roadmap_1165_li MySQL compatibility: update test1 t1, test2 t2 set t1.id = t2.id where t1.id = t2.id; @roadmap_1166_li Ability to resize the cache array when resizing the cache @roadmap_1167_li Time based cache writing (one second after writing the log) @roadmap_1168_li Check state of H2 driver for DDLUtils: https://issues.apache.org/jira/browse/DDLUTILS-185 @roadmap_1169_li Index usage for REGEXP LIKE. @roadmap_1170_li Compatibility: add a role DBA (like ADMIN). @roadmap_1171_li Better support multiple processors for in-memory databases. @roadmap_1172_li Support N'text' @roadmap_1173_li Support compatibility for jdbc:hsqldb:res: @roadmap_1174_li HSQLDB compatibility: automatically convert to the next 'higher' data type. Example: cast(2000000000 as int) + cast(2000000000 as int); (HSQLDB: long; PostgreSQL: integer out of range) @roadmap_1175_li Provide a Java SQL builder with standard and H2 syntax @roadmap_1176_li Trace: write OS, file system, JVM,... 
when opening the database @roadmap_1177_li Support indexes for views (probably requires materialized views) @roadmap_1178_li Document SET SEARCH_PATH, BEGIN, EXECUTE, parameters @roadmap_1179_li Server: use one listener (detect if the request comes from a PG or TCP client) @roadmap_1180_li Optimize SELECT MIN(ID), MAX(ID), COUNT(*) FROM TEST WHERE ID BETWEEN 100 AND 200 @roadmap_1181_li Sequence: PostgreSQL compatibility (rename, create) http://www.postgresql.org/docs/8.2/static/sql-altersequence.html @roadmap_1182_li DISTINCT: support large result sets by sorting on all columns (additionally) and then removing duplicates. @roadmap_1183_li Support a special trigger on all tables to allow building a transaction log reader. @roadmap_1184_li File system with a background writer thread; test if this is faster @roadmap_1185_li Better document the source code (high level documentation). @roadmap_1186_li Support select * from dual a left join dual b on b.x=(select max(x) from dual) @roadmap_1187_li Optimization: don't lock when the database is read-only @roadmap_1188_li Issue 146: Support merge join. @roadmap_1189_li Integrate spatial functions from http://geosysin.iict.ch/irstv-trac/wiki/H2spatial/Download @roadmap_1190_li Cluster: hot deploy (adding a node at runtime). @roadmap_1191_li Support DatabaseMetaData.insertsAreDetected: updatable result sets should detect inserts. @roadmap_1192_li Oracle: support DECODE method (convert to CASE WHEN). @roadmap_1193_li Native search: support "phrase search", wildcard search (* and ?), case-insensitive search, boolean operators, and grouping @roadmap_1194_li Improve documentation of access rights. @roadmap_1195_li Support opening a database that is in the classpath, maybe using a new file system. Workaround: detect jar file using getClass().getProtectionDomain().getCodeSource().getLocation(). @roadmap_1196_li Support ENUM data type (see MySQL, PostgreSQL, MS SQL Server, maybe others). 
@roadmap_1197_li Remember the user defined data type (domain) of a column. @roadmap_1198_li MVCC: support multi-threaded kernel with multi-version concurrency. @roadmap_1199_li Auto-server: add option to define the port range or list. @roadmap_1200_li Support Jackcess (MS Access databases) @roadmap_1201_li Built-in methods to write large objects (BLOB and CLOB): FILE_WRITE('test.txt', 'Hello World') @roadmap_1202_li Improve time to open large databases (see mail 'init time for distributed setup') @roadmap_1203_li Move Maven 2 repository from hsql.sf.net to h2database.sf.net @roadmap_1204_li Java 1.5 tool: JdbcUtils.closeSilently(s1, s2,...) @roadmap_1205_li Optimize A=? OR B=? to UNION if the cost is lower. @roadmap_1206_li Javadoc: document design patterns used @roadmap_1207_li Support custom collators, for example for natural sort (for text that contains numbers). @roadmap_1208_li Write an article about SQLInjection (h2/src/docsrc/html/images/SQLInjection.txt) @roadmap_1209_li Convert SQL-injection-2.txt to html document, include SQLInjection.java sample @roadmap_1210_li Support OUT parameters in user-defined procedures. @roadmap_1211_li Web site design: http://www.igniterealtime.org/projects/openfire/index.jsp @roadmap_1212_li HSQLDB compatibility: Openfire server uses: CREATE SCHEMA PUBLIC AUTHORIZATION DBA; CREATE USER SA PASSWORD ""; GRANT DBA TO SA; SET SCHEMA PUBLIC @roadmap_1213_li Translation: use ?? in help.csv @roadmap_1214_li Translated .pdf @roadmap_1215_li Recovery tool: bad blocks should be converted to INSERT INTO SYSTEM_ERRORS(...), and things should go into the .trace.db file @roadmap_1216_li Issue 357: support getGeneratedKeys to return multiple rows when used with batch updates. This is supported by MySQL, but not Derby. Both PostgreSQL and HSQLDB don't support getGeneratedKeys. Also support it when using INSERT ... SELECT. 
@roadmap_1217_li RECOVER=2 to backup the database, run recovery, open the database @roadmap_1218_li Recovery should work with encrypted databases @roadmap_1219_li Corruption: new error code, add help @roadmap_1220_li Space reuse: after init, scan all storages and free those that don't belong to a live database object @roadmap_1221_li Access rights: add missing features (users should be 'owner' of objects; missing rights for sequences; dropping objects) @roadmap_1222_li Support NOCACHE table option (Oracle). @roadmap_1223_li Support table partitioning. @roadmap_1224_li Add regular javadocs (using the default doclet, but another css) to the homepage. @roadmap_1225_li The database should be kept open for a longer time when using the server mode. @roadmap_1226_li Javadocs: for each tool, add a copy & paste sample at the class level. @roadmap_1227_li Javadocs: add @author tags. @roadmap_1228_li Fluent API for tools: Server.createTcpServer().setPort(9081).setPassword(password).start(); @roadmap_1229_li MySQL compatibility: real SQL statement for DESCRIBE TEST @roadmap_1230_li Use a default delay of 1 second before closing a database. @roadmap_1231_li Write (log) to system table before adding to internal data structures. @roadmap_1232_li Support direct lookup for MIN and MAX when using WHERE (see todo.txt / Direct Lookup). @roadmap_1233_li Support other array types (String[], double[]) in PreparedStatement.setObject(int, Object) (with test case). @roadmap_1234_li MVCC should not be memory bound (uncommitted data is kept in memory in the delta index; maybe using a regular b-tree index solves the problem). @roadmap_1235_li Oracle compatibility: support NLS_DATE_FORMAT. @roadmap_1236_li Support for Thread.interrupt to cancel running statements. @roadmap_1237_li Cluster: add feature to make sure cluster nodes cannot get out of sync (for example by stopping one process). @roadmap_1238_li H2 Console: support CLOB/BLOB download using a link. 
@roadmap_1239_li Support flashback queries as in Oracle. @roadmap_1240_li Import / export of fixed width text files. @roadmap_1241_li HSQLDB compatibility: automatic data type for SUM if the value is too big (by default use the same type as the data). @roadmap_1242_li Improve the optimizer to select the right index for special cases: where id between 2 and 4 and booleanColumn @roadmap_1243_li Linked tables: make hidden columns available (Oracle: rowid and ora_rowscn columns). @roadmap_1244_li H2 Console: in-place autocomplete. @roadmap_1245_li Support large databases: split database files to multiple directories / disks (similar to tablespaces). @roadmap_1246_li H2 Console: support configuration option for fixed width (monospace) font. @roadmap_1247_li Native fulltext search: support analyzers (especially for Chinese, Japanese). @roadmap_1248_li Automatically compact databases from time to time (as a background process). @roadmap_1249_li Test Eclipse DTP. @roadmap_1250_li H2 Console: autocomplete: keep the previous setting @roadmap_1251_li executeBatch: option to stop at the first failed statement. @roadmap_1252_li Implement OLAP features as described here: http://www.devx.com/getHelpOn/10MinuteSolution/16573/0/page/5 @roadmap_1253_li Support Oracle ROWID (unique identifier for each row). @roadmap_1254_li MySQL compatibility: alter table add index i(c), add constraint c foreign key(c) references t(c); @roadmap_1255_li Server mode: improve performance for batch updates. @roadmap_1256_li Applets: support read-only databases in a zip file (accessed as a resource). @roadmap_1257_li Long running queries / errors / trace system table. @roadmap_1258_li H2 Console should support JaQu directly. @roadmap_1259_li Better document FTL_SEARCH, FTL_SEARCH_DATA. @roadmap_1260_li Sequences: CURRVAL should be session specific. Compatibility with PostgreSQL. @roadmap_1261_li Index creation using deterministic functions. 
@roadmap_1262_li ANALYZE: for unique indexes that allow null, count the number of nulls. @roadmap_1263_li MySQL compatibility: multi-table delete: DELETE .. FROM .. [,...] USING - See http://dev.mysql.com/doc/refman/5.0/en/delete.html @roadmap_1264_li AUTO_SERVER: support changing IP addresses (disable a network while the database is open). @roadmap_1265_li Avoid using java.util.Calendar internally because it's slow, complicated, and buggy. @roadmap_1266_li Support TRUNCATE .. CASCADE like PostgreSQL. @roadmap_1267_li Fulltext search: lazy result generation using SimpleRowSource. @roadmap_1268_li Fulltext search: support alternative syntax: WHERE FTL_CONTAINS(name, 'hello'). @roadmap_1269_li MySQL compatibility: support REPLACE, see http://dev.mysql.com/doc/refman/6.0/en/replace.html and issue 73. @roadmap_1270_li MySQL compatibility: support INSERT INTO table SET column1 = value1, column2 = value2 @roadmap_1271_li Docs: add a one line description for each function and SQL statement at the top (in the link section). @roadmap_1272_li Javadoc search: weight for titles should be higher ('random' should list Functions as the best match). @roadmap_1273_li Replace information_schema tables with regular tables that are automatically re-built when needed. Use indexes. @roadmap_1274_li Issue 50: Oracle compatibility: support calling 0-parameters functions without parentheses. Make constants obsolete. @roadmap_1275_li MySQL, HSQLDB compatibility: support where 'a'=1 (not supported by Derby, PostgreSQL) @roadmap_1276_li Support a data type "timestamp with timezone" using java.util.Calendar. @roadmap_1277_li Finer granularity for SLF4J trace - See http://code.google.com/p/h2database/issues/detail?id=62 @roadmap_1278_li Add database creation date and time to the database. @roadmap_1279_li Support ASSERTION. 
@roadmap_1280_li MySQL compatibility: support comparing 1='a' @roadmap_1281_li Support PostgreSQL lock modes: http://www.postgresql.org/docs/8.3/static/explicit-locking.html @roadmap_1282_li PostgreSQL compatibility: test DbVisualizer and Squirrel SQL using a new PostgreSQL JDBC driver. @roadmap_1283_li RunScript should be able to read from system in (or quiet mode for Shell). @roadmap_1284_li Natural join: support select x from dual natural join dual. @roadmap_1285_li Support using system properties in database URLs (may be a security problem). @roadmap_1286_li Natural join: somehow support this: select a.x, b.x, x from dual a natural join dual b @roadmap_1287_li Use the Java service provider mechanism to register file systems and function libraries. @roadmap_1288_li MySQL compatibility: for auto_increment columns, convert 0 to next value (as when inserting NULL). @roadmap_1289_li Optimization for multi-column IN: use an index if possible. Example: (A, B) IN((1, 2), (2, 3)). @roadmap_1290_li Optimization for EXISTS: convert to inner join or IN(..) if possible. @roadmap_1291_li Functions: support hashcode(value); cryptographic and fast @roadmap_1292_li Serialized file lock: support long running queries. @roadmap_1293_li Network: use 127.0.0.1 if other addresses don't work. @roadmap_1294_li Pluggable network protocol (currently Socket/ServerSocket over TCP/IP) - see also TransportServer with master slave replication. @roadmap_1295_li Support reading JCR data: one table per node type; query table; cache option @roadmap_1296_li OSGi: create a sample application, test, document. @roadmap_1297_li help.csv: use complete examples for functions; run as test case. @roadmap_1298_li Functions to calculate the memory and disk space usage of a table, a row, or a value. @roadmap_1299_li Re-implement PooledConnection; use a lightweight connection object. @roadmap_1300_li Doclet: convert tests in javadocs to a java class. 
@roadmap_1301_li Doclet: format fields like methods, but support sorting by name and value. @roadmap_1302_li Doclet: shrink the html files. @roadmap_1303_li MySQL compatibility: support SET NAMES 'latin1' - See also http://code.google.com/p/h2database/issues/detail?id=56 @roadmap_1304_li Allow to scan index backwards starting with a value (to better support ORDER BY DESC). @roadmap_1305_li Java Service Wrapper: try http://yajsw.sourceforge.net/ @roadmap_1306_li Batch parameter for INSERT, UPDATE, and DELETE, and commit after each batch. See also MySQL DELETE. @roadmap_1307_li MySQL compatibility: support ALTER TABLE .. MODIFY COLUMN. @roadmap_1308_li Use a lazy and auto-close input stream (open resource when reading, close on eof). @roadmap_1309_li PostgreSQL compatibility: generate_series. @roadmap_1310_li Connection pool: 'reset session' command (delete temp tables, rollback, auto-commit true). @roadmap_1311_li Improve SQL documentation, see http://www.w3schools.com/sql/ @roadmap_1312_li MySQL compatibility: DatabaseMetaData.stores*() methods should return the same values. Test with SquirrelSQL. @roadmap_1313_li MS SQL Server compatibility: support DATEPART syntax. @roadmap_1314_li Sybase/DB2/Oracle compatibility: support out parameters in stored procedures - See http://code.google.com/p/h2database/issues/detail?id=83 @roadmap_1315_li Support INTERVAL data type (see Oracle and others). @roadmap_1316_li Combine Server and Console tool (only keep Server). @roadmap_1317_li Store the Lucene index in the database itself. @roadmap_1318_li Support standard MERGE statement: http://en.wikipedia.org/wiki/Merge_%28SQL%29 @roadmap_1319_li Oracle compatibility: support DECODE(x, ...). @roadmap_1320_li MVCC: compare concurrent update behavior with PostgreSQL and Oracle. @roadmap_1321_li HSQLDB compatibility: CREATE FUNCTION (maybe using a Function interface). 
@roadmap_1322_li HSQLDB compatibility: support CALL "java.lang.Math.sqrt"(2.0) @roadmap_1323_li Support comma as the decimal separator in the CSV tool. @roadmap_1324_li Compatibility: Java functions with SQLJ Part1 http://www.acm.org/sigmod/record/issues/9912/standards.pdf.gz @roadmap_1325_li Compatibility: Java functions with SQL/PSM (Persistent Stored Modules) - need to find the documentation. @roadmap_1326_li CACHE_SIZE: automatically use a fraction of Runtime.maxMemory - maybe automatically configure the second level cache. @roadmap_1327_li Support date/time/timestamp as documented in http://en.wikipedia.org/wiki/ISO_8601 @roadmap_1328_li PostgreSQL compatibility: when in PG mode, treat BYTEA data like PG. @roadmap_1329_li Support =ANY(array) as in PostgreSQL. See also http://www.postgresql.org/docs/8.0/interactive/arrays.html @roadmap_1330_li IBM DB2 compatibility: support PREVIOUS VALUE FOR sequence. @roadmap_1331_li Compatibility: use different LIKE ESCAPE characters depending on the mode (disable for Derby, HSQLDB, DB2, Oracle, MSSQLServer). @roadmap_1332_li Oracle compatibility: support CREATE SYNONYM table FOR schema.table. @roadmap_1333_li FTP: document the server, including -ftpTask option to execute / kill remote processes @roadmap_1334_li FTP: problems with multithreading? @roadmap_1335_li FTP: implement SFTP / FTPS @roadmap_1336_li FTP: access to a database (.csv for a table, a directory for a schema, a file for a lob, a script.sql file). @roadmap_1337_li More secure default configuration if remote access is enabled. @roadmap_1338_li Improve database file locking (maybe use native file locking). The current approach seems to be problematic if the file system is on a remote share (see Google Group 'Lock file modification time is in the future'). @roadmap_1339_li Document internal features such as BELONGS_TO_TABLE, NULL_TO_DEFAULT, SEQUENCE. @roadmap_1340_li Issue 107: Prefer using the ORDER BY index if LIMIT is used. 
@roadmap_1341_li An index on (id, name) should be used for a query: select * from t where s=? order by i @roadmap_1342_li Support reading sequences using DatabaseMetaData.getTables(null, null, null, new String[]{"SEQUENCE"}). See PostgreSQL. @roadmap_1343_li Add option to enable TCP_NODELAY using Socket.setTcpNoDelay(true). @roadmap_1344_li Maybe disallow = within database names (jdbc:h2:mem:MODE=DB2 means database name MODE=DB2). @roadmap_1345_li Fast alter table add column. @roadmap_1346_li Improve concurrency for in-memory database operations. @roadmap_1347_li Issue 122: Support for connection aliases for remote tcp connections. @roadmap_1348_li Fast scrambling (strong encryption doesn't help if the password is included in the application). @roadmap_1349_li H2 Console: support -webPassword to require a password to access preferences or shutdown. @roadmap_1350_li Issue 126: The index name should be "IDX_" plus the constraint name unless there is a conflict, in which case append a number. @roadmap_1351_li Issue 127: Support activation/deactivation of triggers @roadmap_1352_li Issue 130: Custom log event listeners @roadmap_1353_li Issue 131: IBM DB2 compatibility: sysibm.sysdummy1 @roadmap_1354_li Issue 132: Use Java enum trigger type. @roadmap_1355_li Issue 134: IBM DB2 compatibility: session global variables. @roadmap_1356_li Cluster: support load balance with values for each server / auto detect. @roadmap_1357_li FTL_SET_OPTION(keyString, valueString) with key stopWords at first. @roadmap_1358_li Pluggable access control mechanism. @roadmap_1359_li Fulltext search (Lucene): support streaming CLOB data. @roadmap_1360_li Document/example how to create and read an encrypted script file. @roadmap_1361_li Check state of https://issues.apache.org/jira/browse/OPENJPA-1367 (H2 does support cross joins). @roadmap_1362_li Fulltext search (Lucene): only prefix column names with _ if they already start with _. 
Instead of DATA / QUERY / modified use _DATA, _QUERY, _MODIFIED if possible. @roadmap_1363_li Support a way to create or read compressed encrypted script files using an API. @roadmap_1364_li Scripting language support (Javascript). @roadmap_1365_li The network client should better detect if the server is not an H2 server and fail early. @roadmap_1366_li H2 Console: support CLOB/BLOB upload. @roadmap_1367_li Database file lock: detect hibernate / standby / very slow threads (compare system time). @roadmap_1368_li Automatic detection of redundant indexes. @roadmap_1369_li Maybe reject join without "on" (except natural join). @roadmap_1370_li Implement GiST (Generalized Search Tree for Secondary Storage). @roadmap_1371_li Function to read a number of bytes/characters from a BLOB or CLOB. @roadmap_1372_li Issue 156: Support SELECT ? UNION SELECT ?. @roadmap_1373_li Automatic mixed mode: support a port range list (to avoid firewall problems). @roadmap_1374_li Support the pseudo column rowid, oid, _rowid_. @roadmap_1375_li H2 Console / large result sets: stream early instead of keeping a whole result in-memory @roadmap_1376_li Support TRUNCATE for linked tables. @roadmap_1377_li UNION: evaluate INTERSECT before UNION (like most other databases except Oracle). @roadmap_1378_li Delay creating the information schema, and share metadata columns. @roadmap_1379_li TCP Server: use a nonce (number used once) to protect unencrypted channels against replay attacks. @roadmap_1380_li Simplify running scripts and recovery: CREATE FORCE USER (overwrites an existing user). @roadmap_1381_li Support CREATE DATABASE LINK (a custom JDBC driver is already supported). @roadmap_1382_li Support large GROUP BY operations. Issue 216. @roadmap_1383_li Issue 163: Allow to create foreign keys on metadata types. @roadmap_1384_li Logback: write a native DBAppender. @roadmap_1385_li Cache size: don't use more cache than what is available. 
@roadmap_1386_li Allow to defragment at runtime (similar to SHUTDOWN DEFRAG) in a background thread. @roadmap_1387_li Tree index: Instead of an AVL tree, use a general balanced tree or a scapegoat tree. @roadmap_1388_li User defined functions: allow to store the bytecode (of just the class, or the jar file of the extension) in the database. @roadmap_1389_li Compatibility: ResultSet.getObject() on a CLOB (TEXT) should return String for PostgreSQL and MySQL. @roadmap_1390_li Optimizer: WHERE X=? AND Y IN(?), it always uses the index on Y. Should be cost based. @roadmap_1391_li Common Table Expression (CTE) / recursive queries: support parameters. Issue 314. @roadmap_1392_li Oracle compatibility: support INSERT ALL. @roadmap_1393_li Issue 178: Optimizer: index usage when both ascending and descending indexes are available. @roadmap_1394_li Issue 179: Related subqueries in HAVING clause. @roadmap_1395_li IBM DB2 compatibility: NOT NULL WITH DEFAULT. Similar to MySQL Mode.convertInsertNullToZero. @roadmap_1396_li Creating primary key: always create a constraint. @roadmap_1397_li Maybe use a different page layout: keep the data at the head of the page, and ignore the tail (don't store / read it). This may increase write / read performance depending on the file system. @roadmap_1398_li Indexes of temporary tables are currently kept in-memory. Is this how it should be? @roadmap_1399_li The Shell tool should support the same built-in commands as the H2 Console. @roadmap_1400_li Maybe use PhantomReference instead of finalize. @roadmap_1401_li Database file name suffix: should only have one dot by default. Example: .h2db @roadmap_1402_li Issue 196: Function based indexes @roadmap_1403_li ALTER TABLE ... ADD COLUMN IF NOT EXISTS columnName. @roadmap_1404_li Fix the disk space leak (killing the process at the exact right moment will increase the disk space usage; this space is not re-used). 
See TestDiskSpaceLeak.java @roadmap_1405_li ROWNUM: Oracle compatibility when used within a subquery. Issue 198. @roadmap_1406_li Allow to access the database over HTTP (possibly using port 80) and a servlet in a REST way. @roadmap_1407_li ODBC: encrypted databases are not supported because the ;CIPHER= can not be set. @roadmap_1408_li Support CLOB and BLOB update, especially conn.createBlob().setBinaryStream(1); @roadmap_1409_li Optimizer: index usage when both ascending and descending indexes are available. Issue 178. @roadmap_1410_li Issue 306: Support schema specific domains. @roadmap_1411_li Triggers: support user defined execution order. Oracle: CREATE OR REPLACE TRIGGER TEST_2 BEFORE INSERT ON TEST FOR EACH ROW FOLLOWS TEST_1. SQL specifies that multiple triggers should be fired in time-of-creation order. PostgreSQL uses name order, which was judged to be more convenient. Derby: triggers are fired in the order in which they were created. @roadmap_1412_li PostgreSQL compatibility: combine "users" and "roles". See: http://www.postgresql.org/docs/8.1/interactive/user-manag.html @roadmap_1413_li Improve documentation of system properties: only list the property names, default values, and description. @roadmap_1414_li Support running totals / cumulative sum using SUM(..) OVER(..). @roadmap_1415_li Improve object memory size calculation. Use constants for known VMs, or use reflection to call java.lang.instrument.Instrumentation.getObjectSize(Object objectToSize) @roadmap_1416_li Triggers: NOT NULL checks should be done after running triggers (Oracle behavior, maybe others). @roadmap_1417_li Common Table Expression (CTE) / recursive queries: support INSERT INTO ... SELECT ... Issue 219. @roadmap_1418_li Common Table Expression (CTE) / recursive queries: support non-recursive queries. Issue 217. @roadmap_1419_li Common Table Expression (CTE) / recursive queries: avoid endless loop. Issue 218. 
@roadmap_1420_li Common Table Expression (CTE) / recursive queries: support multiple named queries. Issue 220. @roadmap_1421_li Common Table Expression (CTE) / recursive queries: identifier scope may be incorrect. Issue 222. @roadmap_1422_li Log long running transactions (similar to long running statements). @roadmap_1423_li Parameter data type should be the data type of the other operand. Issue 205. @roadmap_1424_li Some combinations of nested join with right outer join are not supported. @roadmap_1425_li DatabaseEventListener.openConnection(id) and closeConnection(id). @roadmap_1426_li Listener or authentication module for new connections, or a way to restrict the number of different connections to a tcp server, or to prevent logging in with the same username and password from different IPs. Possibly using the DatabaseEventListener API, or a new API. @roadmap_1427_li Compatibility for data type CHAR (Derby, HSQLDB). Issue 212. @roadmap_1428_li Compatibility with MySQL TIMESTAMPDIFF. Issue 209. @roadmap_1429_li Optimizer: use a histogram of the data, especially for non-normal distributions. @roadmap_1430_li Trigger: allow declaring as source code (like functions). @roadmap_1431_li User defined aggregate: allow declaring as source code (like functions). @roadmap_1432_li The error "table not found" is sometimes caused by using the wrong database. Add "(this database is empty)" to the exception message if applicable. @roadmap_1433_li MySQL + PostgreSQL compatibility: support string literal escape with \n. @roadmap_1434_li PostgreSQL compatibility: support string literal escape with double \\. @roadmap_1435_li Document the TCP server "management_db". Maybe include the IP address of the client. @roadmap_1436_li Use javax.tools.JavaCompilerTool instead of com.sun.tools.javac.Main @roadmap_1437_li If a database object was not found in the current schema, but one with the same name existed in another schema, include that in the error message. 
@roadmap_1438_li Optimization to use an index for OR when using multiple keys: where (key1 = ? and key2 = ?) OR (key1 = ? and key2 = ?) @roadmap_1439_li Issue 302: Support optimizing queries with both inner and outer joins, as in: select * from test a inner join test b on a.id=b.id inner join o on o.id=a.id where b.x=1 (the optimizer should swap a and b here). See also TestNestedJoins, tag "swapInnerJoinTables". @roadmap_1440_li JaQu should support a DataSource and a way to create a Db object using a Connection (for multi-threaded usage with a connection pool). @roadmap_1441_li Move table to a different schema (rename table to a different schema), possibly using ALTER TABLE ... SET SCHEMA ...; @roadmap_1442_li nioMapped file system: automatically fall back to regular (non mapped) IO if there is a problem (out of memory exception for example). @roadmap_1443_li Column as parameter of function table. Issue 228. @roadmap_1444_li Connection pool: detect ;AUTOCOMMIT=FALSE in the database URL, and if set, disable autocommit for all connections. @roadmap_1445_li Compatibility with MS Access: support "&" to concatenate text. @roadmap_1446_li The BACKUP statement should not synchronize on the database, and therefore should not block other users. @roadmap_1447_li Document the database file format. @roadmap_1448_li Support reading LOBs. @roadmap_1449_li Require appending DANGEROUS=TRUE when using certain dangerous settings such as LOG=0, LOG=1, LOCK_MODE=0, disabling FILE_LOCK,... @roadmap_1450_li Support UDT (user defined types) similar to how Apache Derby supports it: check constraint, allow to use it in Java functions as parameters (return values already seem to work). @roadmap_1451_li Encrypted file system (use ciphertext stealing so the file length doesn't change; 4 KB header per file, optional compatibility with current encrypted database files). @roadmap_1452_li Issue 229: SELECT with simple OR tests uses tableScan when it could use indexes. 
@roadmap_1453_li GROUP BY queries should use a temporary table if there are too many rows. @roadmap_1454_li BLOB: support random access when reading. @roadmap_1455_li CLOB: support random access when reading (this is harder than for BLOB as data is stored in UTF-8 form). @roadmap_1456_li Compatibility: support SELECT INTO (as an alias for CREATE TABLE ... AS SELECT ...). @roadmap_1457_li Compatibility with MySQL: support SELECT INTO OUTFILE (cannot be an existing file) as an alias for CSVWRITE(...). @roadmap_1458_li Compatibility with MySQL: support non-strict mode (sql_mode = ""): any data that is too large for the column will just be truncated or set to the default value. @roadmap_1459_li The full condition should be sent to the linked table, not just the indexed condition. Example: TestLinkedTableFullCondition @roadmap_1460_li Compatibility with IBM DB2: CREATE PROCEDURE. @roadmap_1461_li Compatibility with IBM DB2: SQL cursors. @roadmap_1462_li Single-column primary key values are always stored explicitly. This is not required. @roadmap_1463_li Compatibility with MySQL: support CREATE TABLE TEST(NAME VARCHAR(255) CHARACTER SET UTF8). @roadmap_1464_li CALL is incompatible with other databases because it returns a result set, so that CallableStatement.execute() returns true. @roadmap_1465_li Optimization for large lists for column IN(1, 2, 3, 4,...) - currently a list is used, could potentially use a hash set (maybe only for a part of the values - the ones that can be evaluated). @roadmap_1466_li Compatibility for ARRAY data type (Oracle: VARRAY(n) of VARCHAR(m); HSQLDB: VARCHAR(n) ARRAY; Postgres: VARCHAR(n)[]). @roadmap_1467_li PostgreSQL compatible array literal syntax: ARRAY[['a', 'b'], ['c', 'd']] @roadmap_1468_li PostgreSQL compatibility: UPDATE with FROM. @roadmap_1469_li Issue 297: Oracle compatibility for "at time zone". @roadmap_1470_li IBM DB2 compatibility: IDENTITY_VAL_LOCAL(). @roadmap_1471_li Support SQL/XML. 
@roadmap_1472_li Support concurrent opening of databases. @roadmap_1473_li Improved error message and diagnostics in case of network configuration problems. @roadmap_1474_li TRUNCATE should reset the identity columns as in MySQL and MS SQL Server (and possibly other databases). @roadmap_1475_li Adding a primary key should make the columns 'not null' unless there is a row with null (compatibility with MySQL, PostgreSQL, HSQLDB; not Derby). @roadmap_1476_li ARRAY data type: support Integer[] and so on in Java functions (currently only Object[] is supported). @roadmap_1477_li MySQL compatibility: LOCK TABLES a READ, b READ - see also http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html @roadmap_1478_li Oracle compatibility: convert empty strings to null. Also convert an empty byte array to null, but not an empty varray. @roadmap_1479_li The HTML to PDF converter should use http://code.google.com/p/wkhtmltopdf/ @roadmap_1480_li Issue 303: automatically convert "X NOT IN(SELECT...)" to "NOT EXISTS(...)". @roadmap_1481_li MySQL compatibility: update test1 t1, test2 t2 set t1.name=t2.name where t1.id=t2.id. @roadmap_1482_li Issue 283: Improve performance of H2 on Android. @roadmap_1483_li Support INSERT INTO / UPDATE / MERGE ... RETURNING to retrieve the generated key(s). @roadmap_1484_li Column compression option - see http://groups.google.com/group/h2-database/browse_thread/thread/3e223504e52671fa/243da82244343f5d @roadmap_1485_li PostgreSQL compatibility: ALTER TABLE ADD combined with adding a foreign key constraint, as in ALTER TABLE FOO ADD COLUMN PARENT BIGINT REFERENCES FOO(ID). @roadmap_1486_li MS SQL Server compatibility: support @@ROWCOUNT. @roadmap_1487_li PostgreSQL compatibility: LOG(x) is LOG10(x) and not LN(x). @roadmap_1488_li Issue 311: Serialized lock mode: executeQuery of write operations fails. @roadmap_1489_li PostgreSQL compatibility: support PgAdmin III (especially the function current_setting). 
@roadmap_1490_li MySQL compatibility: support TIMESTAMPADD. @roadmap_1491_li Support SELECT ... FOR UPDATE with joins (supported by PostgreSQL, MySQL, and HSQLDB; but not Derby). @roadmap_1492_li Support SELECT ... FOR UPDATE OF [field-list] (supported by PostgreSQL, MySQL, and HSQLDB; but not Derby). @roadmap_1493_li Support SELECT ... FOR UPDATE OF [table-list] (supported by PostgreSQL, HSQLDB, Sybase). @roadmap_1494_li TRANSACTION_ID() for in-memory databases. @roadmap_1495_li TRANSACTION_ID() should be long (same as HSQLDB and PostgreSQL). @roadmap_1496_li Support [INNER | OUTER] JOIN USING(column [,...]). @roadmap_1497_li Support NATURAL [ { LEFT | RIGHT } [ OUTER ] | INNER ] JOIN (Derby, Oracle) @roadmap_1498_li GROUP BY columnNumber (similar to ORDER BY columnNumber) (MySQL, PostgreSQL, SQLite; not by HSQLDB and Derby). @roadmap_1499_li Sybase / MS SQL Server compatibility: CONVERT(..) parameters are swapped. @roadmap_1500_li Index conditions: WHERE AGE>1 should not scan through all rows with AGE=1. @roadmap_1501_li PHP support: H2 should support PDO, or test with PostgreSQL PDO. @roadmap_1502_li Outer joins: if no column of the outer join table is referenced, the outer join table could be removed from the query. @roadmap_1503_li Cluster: allow using auto-increment and identity columns by ensuring they are executed in lock-step. @roadmap_1504_li MySQL compatibility: index names only need to be unique for the given table. @roadmap_1505_li Issue 352: constraints: distinguish between 'no action' and 'restrict'. Currently, only restrict is supported, and 'no action' is internally mapped to 'restrict'. The database meta data returns 'restrict' in all cases. @roadmap_1506_li Oracle compatibility: support MEDIAN aggregate function. @roadmap_1507_li Issue 348: Oracle compatibility: division should return a decimal result. @roadmap_1508_li Read rows on demand: instead of reading the whole row, only read up to the column that is requested. 
Keep a pointer to the data area and the column id that is already read. @roadmap_1509_li Long running transactions: log session id when detected. @roadmap_1510_li Optimization: "select id from test" should use the index on id even without "order by". @roadmap_1511_li Issue 362: LIMIT support for UPDATE statements (MySQL compatibility). @roadmap_1512_li Sybase SQL Anywhere compatibility: SELECT TOP ... START AT ... @roadmap_1513_li Use Java 6 SQLException subclasses. @roadmap_1514_li Issue 390: RUNSCRIPT FROM '...' CONTINUE_ON_ERROR @roadmap_1515_li Use Java 6 exceptions: SQLDataException, SQLSyntaxErrorException, SQLTimeoutException,.. @roadmap_1516_li MySQL compatibility: support REPLACE INTO as an alias for MERGE INTO. @roadmap_1517_h2 Not Planned @roadmap_1518_li HSQLDB (did) support this: select id i from test where i<0 (other databases don't). Supporting it may break compatibility. @roadmap_1519_li String.intern (so that Strings can be compared with ==) will not be used because some VMs have problems when used extensively. @roadmap_1520_li In prepared statements, identifier names (table names and so on) can not be parameterized. Adding such a feature would complicate the source code without providing reasonable speedup, and would slow down regular prepared statements. 
@sourceError_1000_h1 Error Analyzer @sourceError_1001_a Home @sourceError_1002_a Input @sourceError_1003_h2   Details  Source Code @sourceError_1004_p Paste the error message and stack trace below and click on 'Details' or 'Source Code': @sourceError_1005_b Error Code: @sourceError_1006_b Product Version: @sourceError_1007_b Message: @sourceError_1008_b More Information: @sourceError_1009_b Stack Trace: @sourceError_1010_b Source File: @sourceError_1011_p Raw file @sourceError_1012_p (fast; only Firefox) @tutorial_1000_h1 Tutorial @tutorial_1001_a Starting and Using the H2 Console @tutorial_1002_a Special H2 Console Syntax @tutorial_1003_a Settings of the H2 Console @tutorial_1004_a Connecting to a Database using JDBC @tutorial_1005_a Creating New Databases @tutorial_1006_a Using the Server @tutorial_1007_a Using Hibernate @tutorial_1008_a Using TopLink and Glassfish @tutorial_1009_a Using EclipseLink @tutorial_1010_a Using Apache ActiveMQ @tutorial_1011_a Using H2 within NetBeans @tutorial_1012_a Using H2 with jOOQ @tutorial_1013_a Using Databases in Web Applications @tutorial_1014_a Android @tutorial_1015_a CSV (Comma Separated Values) Support @tutorial_1016_a Upgrade, Backup, and Restore @tutorial_1017_a Command Line Tools @tutorial_1018_a The Shell Tool @tutorial_1019_a Using OpenOffice Base @tutorial_1020_a Java Web Start / JNLP @tutorial_1021_a Using a Connection Pool @tutorial_1022_a Fulltext Search @tutorial_1023_a User-Defined Variables @tutorial_1024_a Date and Time @tutorial_1025_a Using Spring @tutorial_1026_a OSGi @tutorial_1027_a Java Management Extension (JMX) @tutorial_1028_h2 Starting and Using the H2 Console @tutorial_1029_p The H2 Console application lets you access a database using a browser. This can be an H2 database, or another database that supports the JDBC API. @tutorial_1030_p This is a client/server application, so both a server and a client (a browser) are required to run it. 
@tutorial_1031_p Depending on your platform and environment, there are multiple ways to start the H2 Console: @tutorial_1032_th OS @tutorial_1033_th Start @tutorial_1034_td Windows @tutorial_1035_td Click [Start], [All Programs], [H2], and [H2 Console (Command Line)] @tutorial_1036_td An icon will be added to the system tray: @tutorial_1037_td If you don't get the window and the system tray icon, then maybe Java is not installed correctly (in this case, try another way to start the application). A browser window should open and point to the login page at http://localhost:8082. @tutorial_1038_td Windows @tutorial_1039_td Open a file browser, navigate to h2/bin, and double click on h2.bat. @tutorial_1040_td A console window appears. If there is a problem, you will see an error message in this window. A browser window will open and point to the login page (URL: http://localhost:8082). @tutorial_1041_td Any @tutorial_1042_td Double click on the h2*.jar file. This only works if the .jar suffix is associated with Java. @tutorial_1043_td Any @tutorial_1044_td Open a console window, navigate to the directory h2/bin, and type: @tutorial_1045_h3 Firewall @tutorial_1046_p If you start the server, you may get a security warning from the firewall (if you have installed one). If you don't want other computers in the network to access the application on your machine, you can let the firewall block those connections. The connection from the local machine will still work. Only if you want other computers to access the database on this computer do you need to allow remote connections in the firewall. @tutorial_1047_p It has been reported that when using Kaspersky 7.0 with firewall, the H2 Console is very slow when connecting over the IP address. A workaround is to connect using 'localhost'. @tutorial_1048_p A small firewall is already built into the server: other computers may not connect to the server by default. 
To change this, go to 'Preferences' and select 'Allow connections from other computers'. @tutorial_1049_h3 Testing Java @tutorial_1050_p To find out which version of Java is installed, open a command prompt and type: @tutorial_1051_p If you get an error message, you may need to add the Java binary directory to the path environment variable. @tutorial_1052_h3 Error Message 'Port may be in use' @tutorial_1053_p You can only start one instance of the H2 Console, otherwise you will get the following error message: "The Web server could not be started. Possible cause: another server is already running...". It is possible to start multiple console applications on the same computer (using different ports), but this is usually not required as the console supports multiple concurrent connections. @tutorial_1054_h3 Using another Port @tutorial_1055_p If the default port of the H2 Console is already in use by another application, then a different port needs to be configured. The settings are stored in a properties file. For details, see Settings of the H2 Console. The relevant entry is webPort. @tutorial_1056_p If no port is specified for the TCP and PG servers, each service will try to listen on its default port. If the default port is already in use, a random port is used. @tutorial_1057_h3 Connecting to the Server using a Browser @tutorial_1058_p If the server started successfully, you can connect to it using a web browser. JavaScript needs to be enabled. If you started the server on the same computer as the browser, open the URL http://localhost:8082. If you want to connect to the application from another computer, you need to provide the IP address of the server, for example: http://192.168.0.2:8082. If you enabled SSL on the server side, the URL needs to start with https://. @tutorial_1059_h3 Multiple Concurrent Sessions @tutorial_1060_p Multiple concurrent browser sessions are supported. 
As the database objects reside on the server, the amount of concurrent work is limited by the memory available to the server application. @tutorial_1061_h3 Login @tutorial_1062_p At the login page, you need to provide connection information to connect to a database. Set the JDBC driver class of your database, the JDBC URL, user name, and password. When you are done, click [Connect]. @tutorial_1063_p You can save the connection settings and reuse previously saved settings. The settings are stored in a properties file (see Settings of the H2 Console). @tutorial_1064_h3 Error Messages @tutorial_1065_p Error messages are shown in red. You can show/hide the stack trace of the exception by clicking on the message. @tutorial_1066_h3 Adding Database Drivers @tutorial_1067_p To register additional JDBC drivers (MySQL, PostgreSQL, HSQLDB,...), add the jar file names to the environment variables H2DRIVERS or CLASSPATH. Example (Windows): to add the HSQLDB JDBC driver C:\Programs\hsqldb\lib\hsqldb.jar, set the environment variable H2DRIVERS to C:\Programs\hsqldb\lib\hsqldb.jar. @tutorial_1068_p Multiple drivers can be set; entries need to be separated by ; (Windows) or : (other operating systems). Spaces in the path names are supported. The settings must not be quoted. @tutorial_1069_h3 Using the H2 Console @tutorial_1070_p The H2 Console application has three main panels: the toolbar on top, the tree on the left, and the query/result panel on the right. The database objects (for example, tables) are listed on the left. Type a SQL command in the query panel and click [Run]. The result appears just below the command. @tutorial_1071_h3 Inserting Table Names or Column Names @tutorial_1072_p To insert table and column names into the script, click on the item in the tree. If you click on a table while the query is empty, then SELECT * FROM ... is added. While typing a query, the table that was used is expanded in the tree. For example, if you type SELECT * FROM TEST T WHERE T. 
then the table TEST is expanded. @tutorial_1073_h3 Disconnecting and Stopping the Application @tutorial_1074_p To log out of the database, click [Disconnect] in the toolbar panel. However, the server is still running and ready to accept new sessions. @tutorial_1075_p To stop the server, right click on the system tray icon and select [Exit]. If you don't have the system tray icon, navigate to [Preferences] and click [Shutdown], press [Ctrl]+[C] in the console where the server was started (Windows), or close the console window. @tutorial_1076_h2 Special H2 Console Syntax @tutorial_1077_p The H2 Console supports a few built-in commands. Those are interpreted within the H2 Console, so they work with any database. Built-in commands need to be at the beginning of a statement (before any remarks), otherwise they are not parsed correctly. If in doubt, add ; before the command. @tutorial_1078_th Command(s) @tutorial_1079_th Description @tutorial_1080_td @autocommit_true; @tutorial_1081_td @autocommit_false; @tutorial_1082_td Enable or disable autocommit. @tutorial_1083_td @cancel; @tutorial_1084_td Cancel the currently running statement. @tutorial_1085_td @columns null null TEST; @tutorial_1086_td @index_info null null TEST; @tutorial_1087_td @tables; @tutorial_1088_td @tables null null TEST; @tutorial_1089_td Call the corresponding DatabaseMetaData.get method. Patterns are case sensitive (usually identifiers are uppercase). For information about the parameters, see the Javadoc documentation. Missing parameters at the end of the line are set to null. 
The complete list of metadata commands is: @attributes, @best_row_identifier, @catalogs, @columns, @column_privileges, @cross_references, @exported_keys, @imported_keys, @index_info, @primary_keys, @procedures, @procedure_columns, @schemas, @super_tables, @super_types, @tables, @table_privileges, @table_types, @type_info, @udts, @version_columns @tutorial_1090_td @edit select * from test; @tutorial_1091_td Use an updatable result set. @tutorial_1092_td @generated insert into test() values(); @tutorial_1093_td Show the result of Statement.getGeneratedKeys(). @tutorial_1094_td @history; @tutorial_1095_td List the command history. @tutorial_1096_td @info; @tutorial_1097_td Display the result of various Connection and DatabaseMetaData methods. @tutorial_1098_td @list select * from test; @tutorial_1099_td Show the result set in list format (each column on its own line, with row numbers). @tutorial_1100_td @loop 1000 select ?, ?/*rnd*/; @tutorial_1101_td @loop 1000 @statement select ?; @tutorial_1102_td Run the statement this many times. Parameters (?) are set using a loop from 0 up to x - 1. Random values are used for each ?/*rnd*/. A Statement object is used instead of a PreparedStatement if @statement is used. Result sets are read until ResultSet.next() returns false. Timing information is printed. @tutorial_1103_td @maxrows 20; @tutorial_1104_td Set the maximum number of rows to display. @tutorial_1105_td @memory; @tutorial_1106_td Show the used and free memory. This will call System.gc(). @tutorial_1107_td @meta select 1; @tutorial_1108_td List the ResultSetMetaData after running the query. @tutorial_1109_td @parameter_meta select ?; @tutorial_1110_td Show the result of the PreparedStatement.getParameterMetaData() calls. The statement is not executed. @tutorial_1111_td @prof_start; @tutorial_1112_td call hash('SHA256', '', 1000000); @tutorial_1113_td @prof_stop; @tutorial_1114_td Start/stop the built-in profiling tool. 
The top 3 stack traces of the statement(s) between start and stop are listed (if there are 3). @tutorial_1115_td @prof_start; @tutorial_1116_td @sleep 10; @tutorial_1117_td @prof_stop; @tutorial_1118_td Sleep for a number of seconds. Used to profile a long running query or operation that is running in another session (but in the same process). @tutorial_1119_td @transaction_isolation; @tutorial_1120_td @transaction_isolation 2; @tutorial_1121_td Display (without parameters) or change (with parameters 1, 2, 4, 8) the transaction isolation level. @tutorial_1122_h2 Settings of the H2 Console @tutorial_1123_p The settings of the H2 Console are stored in a configuration file called .h2.server.properties in your user home directory. For Windows installations, the user home directory is usually C:\Documents and Settings\[username] or C:\Users\[username]. The configuration file contains the settings of the application and is automatically created when the H2 Console is first started. Supported settings are: @tutorial_1124_code webAllowOthers @tutorial_1125_li : allow other computers to connect. @tutorial_1126_code webPort @tutorial_1127_li : the port of the H2 Console. @tutorial_1128_code webSSL @tutorial_1129_li : use encrypted (HTTPS) connections. @tutorial_1130_p In addition to those settings, the properties of the most recently used connections are listed in the form <number>=<name>|<driver>|<url>|<user> using the escape character \. Example: 1=Generic H2 (Embedded)|org.h2.Driver|jdbc\:h2\:~/test|sa @tutorial_1131_h2 Connecting to a Database using JDBC @tutorial_1132_p To connect to a database, a Java application first needs to load the database driver, and then get a connection. A simple way to do that is using the following code: @tutorial_1133_p This code first loads the driver (Class.forName(...)) and then opens a connection (using DriverManager.getConnection()). The driver name is "org.h2.Driver". 
The database URL always needs to start with jdbc:h2: to be recognized by this database. The second parameter in the getConnection() call is the user name (sa for System Administrator in this example). The third parameter is the password. In this database, user names are not case sensitive, but passwords are. @tutorial_1134_h2 Creating New Databases @tutorial_1135_p By default, if the database specified in the URL does not yet exist, a new (empty) database is created automatically. The user that created the database automatically becomes the administrator of this database. @tutorial_1136_p Auto-creating new databases can be disabled, see Opening a Database Only if it Already Exists. @tutorial_1137_h2 Using the Server @tutorial_1138_p H2 currently supports three servers: a web server (for the H2 Console), a TCP server (for client/server connections) and a PG server (for PostgreSQL clients). Please note that only the web server supports browser connections. The servers can be started in different ways, one of them being the Server tool. Starting the server doesn't open a database - databases are opened as soon as a client connects. @tutorial_1139_h3 Starting the Server Tool from Command Line @tutorial_1140_p To start the Server tool from the command line with the default settings, run: @tutorial_1141_p This will start the tool with the default options. To get the list of options and default values, run: @tutorial_1142_p There are options available to use other ports, and to start or skip individual services. @tutorial_1143_h3 Connecting to the TCP Server @tutorial_1144_p To remotely connect to a database using the TCP server, use the following driver and database URL: @tutorial_1145_li JDBC driver class: org.h2.Driver @tutorial_1146_li Database URL: jdbc:h2:tcp://localhost/~/test @tutorial_1147_p For details about the database URL, see also the Features documentation. Please note that you can't connect with a web browser to this URL. You can only connect using an H2 client (over JDBC). 
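As a sketch, a few common variants of the TCP database URL (the host names, port, and database names are examples; 9092 is the default TCP port):

```
jdbc:h2:tcp://localhost/~/test        database 'test' in the user home directory
jdbc:h2:tcp://dbserv:9092/~/sample    explicit port number
jdbc:h2:tcp://localhost/mem:test      in-memory database on the server
```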
@tutorial_1148_h3 Starting the TCP Server within an Application @tutorial_1149_p Servers can also be started and stopped from within an application. Sample code: @tutorial_1150_h3 Stopping a TCP Server from Another Process @tutorial_1151_p The TCP server can be stopped from another process. To stop the server from the command line, run: @tutorial_1152_p To stop the server from a user application, use the following code: @tutorial_1153_p This function will only stop the TCP server. If other servers were started in the same process, they will continue to run. To avoid recovery when the databases are opened the next time, all connections to the databases should be closed before calling this method. To stop a remote server, remote connections must be enabled on the server. Shutting down a TCP server can be protected using the option -tcpPassword (the same password must be used to start and stop the TCP server). @tutorial_1154_h2 Using Hibernate @tutorial_1155_p This database supports Hibernate version 3.1 and newer. You can use the HSQLDB Dialect, or the native H2 Dialect. Unfortunately the H2 Dialect included in some old versions of Hibernate was buggy. A patch for Hibernate has been submitted and is now applied. You can rename it to H2Dialect.java and include this as a patch in your application, or upgrade to a version of Hibernate where this is fixed. @tutorial_1156_p When using Hibernate, try to use the H2Dialect if possible. When using the H2Dialect, compatibility modes such as MODE=MySQL are not supported. When using such a compatibility mode, use the Hibernate dialect for the corresponding database instead of the H2Dialect; but please note that H2 does not support all features of all databases. @tutorial_1157_h2 Using TopLink and Glassfish @tutorial_1158_p To use H2 with Glassfish (or Sun AS), set the Datasource Classname to org.h2.jdbcx.JdbcDataSource. 
You can set this in the GUI at Application Server - Resources - JDBC - Connection Pools, or by editing the file sun-resources.xml: at element jdbc-connection-pool, set the attribute datasource-classname to org.h2.jdbcx.JdbcDataSource. @tutorial_1159_p The H2 database is compatible with HSQLDB and PostgreSQL. To take advantage of H2 specific features, use the H2Platform. The source code of this platform is included in H2 at src/tools/oracle/toplink/essentials/platform/database/DatabasePlatform.java.txt. You will need to copy this file to your application, and rename it to .java. To enable it, change the following setting in persistence.xml: @tutorial_1160_p In old versions of Glassfish, the property name is toplink.platform.class.name. @tutorial_1161_p To use H2 within Glassfish, copy the h2*.jar to the directory glassfish/glassfish/lib. @tutorial_1162_h2 Using EclipseLink @tutorial_1163_p To use H2 in EclipseLink, use the platform class org.eclipse.persistence.platform.database.H2Platform. If this platform is not available in your version of EclipseLink, you can use the OraclePlatform instead in many cases. See also H2Platform. @tutorial_1164_h2 Using Apache ActiveMQ @tutorial_1165_p When using H2 as the backend database for Apache ActiveMQ, please use the TransactDatabaseLocker instead of the default locking mechanism. Otherwise the database file will grow without bounds. The problem is that the default locking mechanism uses an uncommitted UPDATE transaction, which keeps the transaction log from shrinking (causing the database file to grow). Instead of using an UPDATE statement, the TransactDatabaseLocker uses SELECT ... FOR UPDATE, which is not problematic. To use it, in the ActiveMQ configuration, set the property databaseLocker="org.apache.activemq.store.jdbc.adapter.TransactDatabaseLocker" on the <jdbcPersistenceAdapter> element. However, using the MVCC mode will again result in the same problem. Therefore, please do not use the MVCC mode in this case. 
Another (more dangerous) solution is to set useDatabaseLock to false. @tutorial_1166_h2 Using H2 within NetBeans @tutorial_1167_p The project H2 Database Engine Support For NetBeans allows you to start and stop the H2 server from within the IDE. @tutorial_1168_p There is a known issue when using the NetBeans SQL Execution Window: before executing a query, another query in the form SELECT COUNT(*) FROM <query> is run. This is a problem for queries that modify state, such as SELECT SEQ.NEXTVAL. In this case, two sequence values are allocated instead of just one. @tutorial_1169_h2 Using H2 with jOOQ @tutorial_1170_p jOOQ adds a thin layer on top of JDBC, allowing for type-safe SQL construction, including advanced SQL, stored procedures and advanced data types. jOOQ takes your database schema as a base for code generation. If this is your example schema: @tutorial_1171_p then run the jOOQ code generator on the command line using this command: @tutorial_1172_p ...where codegen.xml is on the classpath and contains this information: @tutorial_1173_p Using the generated source, you can query the database as follows: @tutorial_1174_p See more details on the jOOQ Homepage and in the jOOQ Tutorial. @tutorial_1175_h2 Using Databases in Web Applications @tutorial_1176_p There are multiple ways to access a database from within web applications. Here are some examples if you use Tomcat or JBoss. @tutorial_1177_h3 Embedded Mode @tutorial_1178_p The (currently) simplest solution is to use the database in the embedded mode, that is, open a connection in your application when it starts (a good solution is using a Servlet Listener, see below), or when a session starts. A database can be accessed from multiple sessions and applications at the same time, as long as they run in the same process. Most Servlet Containers (for example Tomcat) use just one process, so this is not a problem (unless you run Tomcat in clustered mode). Tomcat uses multiple threads and multiple classloaders. 
If multiple applications access the same database at the same time, you need to put the database jar in the shared/lib or server/lib directory. It is a good idea to open the database when the web application starts, and close it when the web application stops. If using multiple applications, only one (any) of them needs to do that. In the application, one approach is to use one connection per session, or even one connection per request (action). Those connections should be closed after use if possible (but it's not that bad if they don't get closed). @tutorial_1179_h3 Server Mode @tutorial_1180_p The server mode is similar, but it allows you to run the server in another process. @tutorial_1181_h3 Using a Servlet Listener to Start and Stop a Database @tutorial_1182_p Add the h2*.jar file to your web application, and add the following snippet to your web.xml file (between the context-param and the filter section): @tutorial_1183_p For details on how to access the database, see the file DbStarter.java. By default this tool opens an embedded connection using the database URL jdbc:h2:~/test, user name sa, and password sa. If you want to use this connection within your servlet, you can access it as follows: @tutorial_1184_code DbStarter @tutorial_1185_p can also start the TCP server, however this is disabled by default. To enable it, use the parameter db.tcpServer in the file web.xml. Here is the complete list of options. These options need to be placed between the description tag and the listener / filter tags: @tutorial_1186_p When the web application is stopped, the database connection will be closed automatically. If the TCP server is started within the DbStarter, it will also be stopped automatically. @tutorial_1187_h3 Using the H2 Console Servlet @tutorial_1188_p The H2 Console is a standalone application and includes its own web server, but it can be used as a servlet as well. 
To do that, include the h2*.jar file in your application, and add the following configuration to your web.xml: @tutorial_1189_p For details, see also src/tools/WEB-INF/web.xml. @tutorial_1190_p To create a web application with just the H2 Console, run the following command: @tutorial_1191_h2 Android @tutorial_1192_p You can use this database on an Android device (using the Dalvik VM) instead of or in addition to SQLite. So far, only very few tests and benchmarks have been run, but it seems that performance is similar to SQLite, except for opening and closing a database, which is not yet optimized in H2 (H2 takes about 0.2 seconds, and SQLite about 0.02 seconds). Read operations seem to be a bit faster than SQLite, and write operations seem to be slower. In the tests run so far, everything seems to work as expected. Fulltext search has not yet been tested, however the native fulltext search should work. @tutorial_1193_p Reasons to use H2 instead of SQLite are: @tutorial_1194_li Full Unicode support including UPPER() and LOWER(). @tutorial_1195_li Streaming API for BLOB and CLOB data. @tutorial_1196_li Fulltext search. @tutorial_1197_li Multiple connections. @tutorial_1198_li User-defined functions and triggers. @tutorial_1199_li Database file encryption. @tutorial_1200_li Reading and writing CSV files (this feature can be used outside the database as well). @tutorial_1201_li Referential integrity and check constraints. @tutorial_1202_li Better data type and SQL support. @tutorial_1203_li In-memory databases, read-only databases, linked tables. @tutorial_1204_li Better compatibility with other databases, which simplifies porting applications. @tutorial_1205_li Possibly better performance (so far for read operations). @tutorial_1206_li Server mode (accessing a database on a different machine over TCP/IP). @tutorial_1207_p Currently only the JDBC API is supported (it is planned to support the Android database API in future releases). 
Both the regular H2 jar file and the smaller h2small-*.jar can be used. To create the smaller jar file, run the command ./build.sh jarSmall (Linux / Mac OS) or build.bat jarSmall (Windows). @tutorial_1208_p The database files need to be stored in a place that is accessible for the application. Example: @tutorial_1209_p Limitations: Using a connection pool is currently not supported, because the required javax.sql classes are not available on Android. @tutorial_1210_h2 CSV (Comma Separated Values) Support @tutorial_1211_p The CSV file support can be used inside the database using the functions CSVREAD and CSVWRITE, or it can be used outside the database as a standalone tool. @tutorial_1212_h3 Reading a CSV File from Within a Database @tutorial_1213_p A CSV file can be read using the function CSVREAD. Example: @tutorial_1214_p Please note that, for performance reasons, CSVREAD should not be used inside a join. Instead, import the data first (possibly into a temporary table), create the required indexes if necessary, and then query this table. @tutorial_1215_h3 Importing Data from a CSV File @tutorial_1216_p A fast way to load or import data (sometimes called 'bulk load') from a CSV file is to combine table creation with import. Optionally, the column names and data types can be set when creating the table. Another option is to use INSERT INTO ... SELECT. @tutorial_1217_h3 Writing a CSV File from Within a Database @tutorial_1218_p The built-in function CSVWRITE can be used to create a CSV file from a query. Example: @tutorial_1219_h3 Writing a CSV File from a Java Application @tutorial_1220_p The Csv tool can be used in a Java application even when not using a database at all. Example: @tutorial_1221_h3 Reading a CSV File from a Java Application @tutorial_1222_p It is possible to read a CSV file without opening a database. 
Example: @tutorial_1223_h2 Upgrade, Backup, and Restore @tutorial_1224_h3 Database Upgrade @tutorial_1225_p The recommended way to upgrade from one version of the database engine to the next version is to create a backup of the database (in the form of a SQL script) using the old engine, and then execute the SQL script using the new engine. @tutorial_1226_h3 Backup using the Script Tool @tutorial_1227_p The recommended way to backup a database is to create a compressed SQL script file. This will result in a small, human readable, and database version independent backup. Creating the script will also verify the checksums of the database file. The Script tool is run as follows: @tutorial_1228_p It is also possible to use the SQL command SCRIPT to create the backup of the database. For more information about the options, see the SQL command SCRIPT. The backup can be done remotely, however the file will be created on the server side. The built-in FTP server could be used to retrieve the file from the server. @tutorial_1229_h3 Restore from a Script @tutorial_1230_p To restore a database from a SQL script file, you can use the RunScript tool: @tutorial_1231_p For more information about the options, see the SQL command RUNSCRIPT. The restore can be done remotely, however the file needs to be on the server side. The built-in FTP server could be used to copy the file to the server. It is also possible to use the SQL command RUNSCRIPT to execute a SQL script. SQL script files may contain references to other script files, in the form of RUNSCRIPT commands. However, when using the server mode, the referenced script files need to be available on the server side. @tutorial_1232_h3 Online Backup @tutorial_1233_p The BACKUP SQL statement and the Backup tool both create a zip file with the database file. However, the contents of this file are not human readable. @tutorial_1234_p The resulting backup is transactionally consistent, meaning the consistency and atomicity rules apply. 
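As an illustration, the backup and restore steps can also be performed directly with SQL commands (the file names are examples):

```sql
-- Create a compressed, database version independent SQL script backup:
SCRIPT TO 'backup.zip' COMPRESSION ZIP;
-- Restore the database by executing the script:
RUNSCRIPT FROM 'backup.zip' COMPRESSION ZIP;
-- Online backup: create a zip file with the database files while the database is in use:
BACKUP TO 'backup-files.zip';
```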
@tutorial_1235_p The Backup tool (org.h2.tools.Backup) cannot be used to create an online backup; the database must not be in use while running this program. @tutorial_1236_p Creating a backup by copying the database files while the database is running is not supported, except if the file systems support creating snapshots. With other file systems, it can't be guaranteed that the data is copied in the right order. @tutorial_1237_h2 Command Line Tools @tutorial_1238_p This database comes with a number of command line tools. To get more information about a tool, start it with the parameter '-?', for example: @tutorial_1239_p The command line tools are: @tutorial_1240_code Backup @tutorial_1241_li creates a backup of a database. @tutorial_1242_code ChangeFileEncryption @tutorial_1243_li allows changing the file encryption password or algorithm of a database. @tutorial_1244_code Console @tutorial_1245_li starts the browser based H2 Console. @tutorial_1246_code ConvertTraceFile @tutorial_1247_li converts a .trace.db file to a Java application and SQL script. @tutorial_1248_code CreateCluster @tutorial_1249_li creates a cluster from a standalone database. @tutorial_1250_code DeleteDbFiles @tutorial_1251_li deletes all files belonging to a database. @tutorial_1252_code Recover @tutorial_1253_li helps to recover a corrupted database. @tutorial_1254_code Restore @tutorial_1255_li restores a backup of a database. @tutorial_1256_code RunScript @tutorial_1257_li runs a SQL script against a database. @tutorial_1258_code Script @tutorial_1259_li allows converting a database to a SQL script for backup or migration. @tutorial_1260_code Server @tutorial_1261_li is used in the server mode to start an H2 server. @tutorial_1262_code Shell @tutorial_1263_li is a command line database tool. @tutorial_1264_p The tools can also be called from an application by calling the main or another public method. For details, see the Javadoc documentation. 
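For example, a tool is started by calling its main class with the h2 jar on the classpath; a sketch using the Script tool (the jar file name and the connection settings are examples, and -? prints the list of options):

```
java -cp h2*.jar org.h2.tools.Script -?
java -cp h2*.jar org.h2.tools.Script -url jdbc:h2:~/test -user sa -script backup.zip -options compression zip
```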
@tutorial_1265_h2 The Shell Tool @tutorial_1266_p The Shell tool is a simple interactive command line tool. To start it, type: @tutorial_1267_p You will be asked for a database URL, JDBC driver, user name, and password. The connection settings can also be passed as command line parameters. After connecting, you will get the list of options. The built-in commands don't need to end with a semicolon, but SQL statements are only executed if the line ends with a semicolon ;. This allows entering multi-line statements: @tutorial_1268_p By default, results are printed as a table. For results with many columns, consider using the list mode: @tutorial_1269_h2 Using OpenOffice Base @tutorial_1270_p OpenOffice.org Base supports database access over the JDBC API. To connect to an H2 database using OpenOffice Base, you first need to add the JDBC driver to OpenOffice. The steps to connect to an H2 database are: @tutorial_1271_li Start OpenOffice Writer, go to [Tools], [Options] @tutorial_1272_li Make sure you have selected a Java runtime environment in OpenOffice.org / Java @tutorial_1273_li Click [Class Path...], [Add Archive...] @tutorial_1274_li Select your h2 jar file (location is up to you, could be wherever you choose) @tutorial_1275_li Click [OK] (as much as needed), stop OpenOffice (including the Quickstarter) @tutorial_1276_li Start OpenOffice Base @tutorial_1277_li Connect to an existing database; select [JDBC]; [Next] @tutorial_1278_li Example datasource URL: jdbc:h2:~/test @tutorial_1279_li JDBC driver class: org.h2.Driver @tutorial_1280_p Now you can access the database stored in the current user's home directory. @tutorial_1281_p To use H2 in NeoOffice (OpenOffice without X11): @tutorial_1282_li In NeoOffice, go to [NeoOffice], [Preferences] @tutorial_1283_li Look for the page under [NeoOffice], [Java] @tutorial_1284_li Click [Class Path], [Add Archive...] 
@tutorial_1285_li Select your h2 jar file (location is up to you, could be wherever you choose) @tutorial_1286_li Click [OK] (as much as needed), restart NeoOffice. @tutorial_1287_p Now, when creating a new database using the "Database Wizard": @tutorial_1288_li Click [File], [New], [Database]. @tutorial_1289_li Select [Connect to existing database] and then select [JDBC]. Click next. @tutorial_1290_li Example datasource URL: jdbc:h2:~/test @tutorial_1291_li JDBC driver class: org.h2.Driver @tutorial_1292_p Another solution to use H2 in NeoOffice is: @tutorial_1293_li Package the h2 jar within an extension package @tutorial_1294_li Install it as a Java extension in NeoOffice @tutorial_1295_p This can be done by creating it using the NetBeans OpenOffice plugin. See also Extensions Development. @tutorial_1296_h2 Java Web Start / JNLP @tutorial_1297_p When using Java Web Start / JNLP (Java Network Launch Protocol), permissions tags must be set in the .jnlp file, and the application .jar file must be signed. Otherwise, when trying to write to the file system, the following exception will occur: java.security.AccessControlException: access denied (java.io.FilePermission ... read). Example permission tags: @tutorial_1298_h2 Using a Connection Pool @tutorial_1299_p For H2, opening a connection is fast if the database is already open. Still, using a connection pool improves performance if you open and close connections a lot. A simple connection pool is included in H2. It is based on the Mini Connection Pool Manager from Christian d'Heureuse. There are other, more complex, open source connection pools available, for example the Apache Commons DBCP. For H2, it is about twice as fast to get a connection from the built-in connection pool than to get one using DriverManager.getConnection(). The built-in connection pool is used as follows: @tutorial_1300_h2 Fulltext Search @tutorial_1301_p H2 includes two fulltext search implementations. 
One uses Apache Lucene, and the other (the native implementation) stores the index data in special tables in the database. @tutorial_1302_h3 Using the Native Fulltext Search @tutorial_1303_p To initialize, call: @tutorial_1304_p You need to initialize it in each database where you want to use it. Afterwards, you can create a fulltext index for a table using: @tutorial_1305_p PUBLIC is the schema name, TEST is the table name. The list of column names (comma separated) is optional; in this case all columns are indexed. The index is updated in real time. To search the index, use the following query: @tutorial_1306_p This will produce a result set that contains the query needed to retrieve the data: @tutorial_1307_p To drop an index on a table: @tutorial_1308_p To get the raw data, use FT_SEARCH_DATA('Hello', 0, 0);. The result contains the columns SCHEMA (the schema name), TABLE (the table name), COLUMNS (an array of column names), and KEYS (an array of objects). To join a table, use a join as in: SELECT T.* FROM FT_SEARCH_DATA('Hello', 0, 0) FT, TEST T WHERE FT.TABLE='TEST' AND T.ID=FT.KEYS[0]; @tutorial_1309_p You can also call the index from within a Java application: @tutorial_1310_h3 Using the Lucene Fulltext Search @tutorial_1311_p To use the Lucene fulltext search, you need the Lucene library in the classpath. Currently Apache Lucene version 2.x is used by default for H2 version 1.2.x, and Lucene version 3.x is used by default for H2 version 1.3.x. How to do that depends on the application; if you use the H2 Console, you can add the Lucene jar file to the environment variables H2DRIVERS or CLASSPATH. To initialize the Lucene fulltext search in a database, call: @tutorial_1312_p You need to initialize it in each database where you want to use it. Afterwards, you can create a fulltext index for a table using: @tutorial_1313_p PUBLIC is the schema name, TEST is the table name. 
The list of column names (comma separated) is optional; in this case all columns are indexed. The index is updated in real time. To search the index, use the following query: @tutorial_1314_p This will produce a result set that contains the query needed to retrieve the data: @tutorial_1315_p To drop an index on a table (be warned that this will re-index all of the full-text indices for the entire database): @tutorial_1316_p To get the raw data, use FTL_SEARCH_DATA('Hello', 0, 0);. The result contains the columns SCHEMA (the schema name), TABLE (the table name), COLUMNS (an array of column names), and KEYS (an array of objects). To join a table, use a join as in: SELECT T.* FROM FTL_SEARCH_DATA('Hello', 0, 0) FT, TEST T WHERE FT.TABLE='TEST' AND T.ID=FT.KEYS[0]; @tutorial_1317_p You can also call the index from within a Java application: @tutorial_1318_p The Lucene fulltext search supports searching in specific columns only. Column names must be uppercase (except if the original columns are double quoted). For column names starting with an underscore (_), another underscore needs to be added. Example: @tutorial_1319_p The Lucene fulltext search implementation is not synchronized internally. If you update the database and query the fulltext search concurrently (directly using the Java API of H2 or Lucene itself), you need to ensure operations are properly synchronized. If this is not the case, you may get exceptions such as org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed. @tutorial_1320_h2 User-Defined Variables @tutorial_1321_p This database supports user-defined variables. Variables start with @ and can be used wherever expressions or parameters are allowed. Variables are not persisted and are session scoped, that is, they are only visible within the session in which they are defined. A value is usually assigned using the SET command: @tutorial_1322_p The value can also be changed using the SET() method. 
This is useful in queries: @tutorial_1323_p Variables that are not set evaluate to NULL. The data type of a user-defined variable is the data type of the value assigned to it; that means it is not necessary (or possible) to declare variables before using them. There are no restrictions on the assigned values; large objects (LOBs) are supported as well. Rolling back a transaction does not affect the value of a user-defined variable. @tutorial_1324_h2 Date and Time @tutorial_1325_p Date, time and timestamp values support ISO 8601 formatting, including time zone: @tutorial_1326_p If the time zone is not set, the value is parsed using the current time zone setting of the system. Date and time information is stored in H2 database files without time zone information. If the database is opened using another system time zone, the date and time will be the same. That means if you store the value '2000-01-01 12:00:00' in one time zone, then close the database and open the database again in a different time zone, you will still get '2000-01-01 12:00:00'. Please note that changing the time zone after the H2 driver is loaded is not supported. @tutorial_1327_h2 Using Spring @tutorial_1328_h3 Using the TCP Server @tutorial_1329_p Use the following configuration to start and stop the H2 TCP server using the Spring Framework: @tutorial_1330_p The destroy-method will help prevent exceptions on hot-redeployment or when restarting the server. @tutorial_1331_h3 Error Code Incompatibility @tutorial_1332_p There is an incompatibility between the Spring JdbcTemplate and H2 version 1.3.154 and newer, because of a change in the error code. This causes the JdbcTemplate to not detect a duplicate key condition, so a DataIntegrityViolationException is thrown instead of a DuplicateKeyException. See also the issue SPR-8235. 
The workaround is to add the following XML file to the root of the classpath: @tutorial_1333_h2 OSGi @tutorial_1334_p The standard H2 jar can be dropped in as a bundle in an OSGi container. H2 implements the JDBC Service defined in the OSGi Service Platform Release 4 Version 4.2 Enterprise Specification. The H2 Data Source Factory service is registered with the following properties: OSGI_JDBC_DRIVER_CLASS=org.h2.Driver and OSGI_JDBC_DRIVER_NAME=H2. The OSGI_JDBC_DRIVER_VERSION property reflects the version of the driver as is. @tutorial_1335_p The following standard configuration properties are supported: JDBC_USER, JDBC_PASSWORD, JDBC_DESCRIPTION, JDBC_DATASOURCE_NAME, JDBC_NETWORK_PROTOCOL, JDBC_URL, JDBC_SERVER_NAME, JDBC_PORT_NUMBER. Any other standard property will be rejected. Non-standard properties will be passed on to H2 in the connection URL. @tutorial_1336_h2 Java Management Extension (JMX) @tutorial_1337_p Management over JMX is supported, but not enabled by default. To enable JMX, append ;JMX=TRUE to the database URL when opening the database. Various tools support JMX; one such tool is jconsole. When opening jconsole, connect to the process where the database is open (when using the server mode, you need to connect to the server process). Then go to the MBeans section. Under org.h2 you will find one entry per database. The object name of the entry is the database short name, plus the path (each colon is replaced with an underscore character). @tutorial_1338_p The following attributes and operations are supported: @tutorial_1339_code CacheSize @tutorial_1340_li : the cache size currently in use, in KB. @tutorial_1341_code CacheSizeMax @tutorial_1342_li (read/write): the maximum cache size in KB. @tutorial_1343_code Exclusive @tutorial_1344_li : whether this database is open in exclusive mode or not. @tutorial_1345_code FileReadCount @tutorial_1346_li : the number of file read operations since the database was opened. 
@tutorial_1347_code FileSize @tutorial_1348_li : the file size in KB. @tutorial_1349_code FileWriteCount @tutorial_1350_li : the number of file write operations since the database was opened. @tutorial_1351_code FileWriteCountTotal @tutorial_1352_li : the number of file write operations since the database was created. @tutorial_1353_code LogMode @tutorial_1354_li (read/write): the current transaction log mode. See SET LOG for details. @tutorial_1355_code Mode @tutorial_1356_li : the compatibility mode (REGULAR if no compatibility mode is used). @tutorial_1357_code MultiThreaded @tutorial_1358_li : true if multi-threaded mode is enabled. @tutorial_1359_code Mvcc @tutorial_1360_li : true if MVCC is enabled. @tutorial_1361_code ReadOnly @tutorial_1362_li : true if the database is read-only. @tutorial_1363_code TraceLevel @tutorial_1364_li (read/write): the file trace level. @tutorial_1365_code Version @tutorial_1366_li : the database version in use. @tutorial_1367_code listSettings @tutorial_1368_li : list the database settings. @tutorial_1369_code listSessions @tutorial_1370_li : list the open sessions, including the currently executing statement (if any) and locked tables (if any). @tutorial_1371_p To enable JMX, you may need to set the system properties com.sun.management.jmxremote and com.sun.management.jmxremote.port as required by the JVM.
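As a sketch of the setup described above, starting the H2 TCP server in a JVM with remote JMX enabled might look like the following. The port number 9999, the jar file name h2.jar, and the database path ~/test are arbitrary example values, not defaults mandated by H2:

```shell
# Start the H2 TCP server with remote JMX enabled on this JVM.
# Port 9999 and the classpath entry h2.jar are example values; adjust as needed.
# Disabling SSL and authentication is only suitable for local testing.
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9999 \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -cp h2.jar org.h2.tools.Server -tcp
```

A client would then open the database with a URL such as jdbc:h2:tcp://localhost/~/test;JMX=TRUE, after which jconsole can attach to the server process and browse the per-database entries under org.h2 in the MBeans section.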