advanced_1000_h1=Advanced advanced_1001_a=\ Result Sets advanced_1002_a=\ Large Objects advanced_1003_a=\ Linked Tables advanced_1004_a=\ Spatial Features advanced_1005_a=\ Recursive Queries advanced_1006_a=\ Updatable Views advanced_1007_a=\ Transaction Isolation advanced_1008_a=\ Multi-Version Concurrency Control (MVCC) advanced_1009_a=\ Clustering / High Availability advanced_1010_a=\ Two Phase Commit advanced_1011_a=\ Compatibility advanced_1012_a=\ Standards Compliance advanced_1013_a=\ Run as Windows Service advanced_1014_a=\ ODBC Driver advanced_1015_a=\ Using H2 in Microsoft .NET advanced_1016_a=\ ACID advanced_1017_a=\ Durability Problems advanced_1018_a=\ Using the Recover Tool advanced_1019_a=\ File Locking Protocols advanced_1020_a=\ Using Passwords advanced_1021_a=\ Password Hash advanced_1022_a=\ Protection against SQL Injection advanced_1023_a=\ Protection against Remote Access advanced_1024_a=\ Restricting Class Loading and Usage advanced_1025_a=\ Security Protocols advanced_1026_a=\ TLS Connections advanced_1027_a=\ Universally Unique Identifiers (UUID) advanced_1028_a=\ Settings Read from System Properties advanced_1029_a=\ Setting the Server Bind Address advanced_1030_a=\ Pluggable File System advanced_1031_a=\ Split File System advanced_1032_a=\ Database Upgrade advanced_1033_a=\ Java Objects Serialization advanced_1034_a=\ Custom Data Types Handler API advanced_1035_a=\ Limits and Limitations advanced_1036_a=\ Glossary and Links advanced_1037_h2=Result Sets advanced_1038_h3=Statements that Return a Result Set advanced_1039_p=\ The following statements return a result set\: <code>SELECT, EXPLAIN, CALL, SCRIPT, SHOW, HELP</code>. All other statements return an update count. advanced_1040_h3=Limiting the Number of Rows advanced_1041_p=\ Before the result is returned to the application, all rows are read by the database. Server side cursors are not supported currently. If only the first few rows are of interest to the application, then the result set size should be limited to improve the performance. This can be done using <code>LIMIT</code> in a query (example\: <code>SELECT * FROM TEST LIMIT 100</code>), or by using <code>Statement.setMaxRows(max)</code>. advanced_1042_h3=Large Result Sets and External Sorting advanced_1043_p=\ For large result sets, the result is buffered to disk. The threshold can be defined using the statement <code>SET MAX_MEMORY_ROWS</code>. If <code>ORDER BY</code> is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together. advanced_1044_h2=Large Objects advanced_1045_h3=Storing and Reading Large Objects advanced_1046_p=\ If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory; instead, they are accessed using streams. To store a BLOB, use <code>PreparedStatement.setBinaryStream</code>. To store a CLOB, use <code>PreparedStatement.setCharacterStream</code>. To read a BLOB, use <code>ResultSet.getBinaryStream</code>, and to read a CLOB, use <code>ResultSet.getCharacterStream</code>. When using the client/server mode, large BLOB and CLOB data is stored in a temporary file on the client side. advanced_1047_h3=When to use CLOB/BLOB advanced_1048_p=\ By default, this database stores large LOB (CLOB and BLOB) objects separately from the main table data. 
Small LOB objects are stored in-place; the threshold can be set using <a href\="grammar.html\#set_max_length_inplace_lob" class\="notranslate" >MAX_LENGTH_INPLACE_LOB</a>, but there is still an overhead to using CLOB/BLOB. Because of this, BLOB and CLOB should never be used for columns with a maximum size below about 200 bytes. The best threshold depends on the use case; reading in-place objects is faster than reading from separate files, but slows down the performance of operations that don't involve this column. advanced_1049_h3=Large Object Compression advanced_1050_p=\ The following feature is only available for the PageStore storage engine. For the MVStore engine (the default for H2 version 1.4.x), append <code>;COMPRESS\=TRUE</code> to the database URL instead. CLOB and BLOB values can be compressed by using <a href\="grammar.html\#set_compress_lob" class\="notranslate" >SET COMPRESS_LOB</a>. The LZF algorithm is faster but needs more disk space. By default compression is disabled, which usually speeds up write operations. If you store many large compressible values such as XML, HTML, text, and uncompressed binary files, then compressing can save a lot of disk space (sometimes more than 50%), and read operations may even be faster. advanced_1051_h2=Linked Tables advanced_1052_p=\ This database supports linked tables, which means tables that don't exist in the current database but are just links to another database. To create such a link, use the <code>CREATE LINKED TABLE</code> statement\: advanced_1053_p=\ You can then access the table in the usual way. Whenever the linked table is accessed, the database issues specific queries over JDBC. Using the example above, if you issue the query <code>SELECT * FROM LINK WHERE ID\=1</code>, then the following query is run against the PostgreSQL database\: <code>SELECT * FROM TEST WHERE ID\=?</code>. The same happens for insert and update statements. Only simple statements are executed against the target database; that means no joins (queries that contain joins are converted to simple queries). Prepared statements are used where possible. advanced_1054_p=\ To view the statements that are executed against the target table, set the trace level to 3. advanced_1055_p=\ If multiple linked tables point to the same database (using the same database URL), the connection is shared. To disable this, set the system property <code>h2.shareLinkedConnections\=false</code>. advanced_1056_p=\ The statement <a href\="grammar.html\#create_linked_table" class\="notranslate" >CREATE LINKED TABLE</a> supports an optional schema name parameter. advanced_1057_p=\ The following are not supported because they may result in a deadlock\: creating a linked table to the same database, and creating a linked table to another database using the server mode if the other database is open in the same server (use the embedded mode instead). advanced_1058_p=\ Data types that are not supported in H2 are also not supported for linked tables, for example unsigned data types if the value is outside the range of the signed type. In such cases, the columns need to be cast to a supported type. advanced_1059_h2=Updatable Views advanced_1060_p=\ By default, views are not updatable. To make a view updatable, use an "instead of" trigger as follows\: advanced_1061_p=\ Update the base table(s) within the trigger as required. For details, see the sample application <code>org.h2.samples.UpdatableView</code>. 
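# A minimal sketch of such an "instead of" trigger, assuming a hypothetical view NAME_VIEW defined over a base table NAME_TABLE with two columns; the complete version is the sample application mentioned above.
#
# import java.sql.Connection;
# import java.sql.PreparedStatement;
# import java.sql.SQLException;
# import org.h2.api.Trigger;
#
# public class NameViewTrigger implements Trigger {
#     public void init(Connection conn, String schemaName, String triggerName,
#             String tableName, boolean before, int type) {
#         // nothing to initialize in this sketch
#     }
#     public void fire(Connection conn, Object[] oldRow, Object[] newRow)
#             throws SQLException {
#         // write to the base table instead of the (otherwise read-only) view
#         try (PreparedStatement prep = conn.prepareStatement(
#                 "INSERT INTO NAME_TABLE VALUES(?, ?)")) {
#             prep.setObject(1, newRow[0]);
#             prep.setObject(2, newRow[1]);
#             prep.execute();
#         }
#     }
#     public void close() {
#         // nothing to clean up
#     }
#     public void remove() {
#         // nothing to clean up
#     }
# }
#
# The trigger class is then registered on the view, for example with a statement like CREATE TRIGGER NAME_VIEW_INSERT INSTEAD OF INSERT ON NAME_VIEW FOR EACH ROW CALL "NameViewTrigger".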
advanced_1062_h2=Transaction Isolation advanced_1063_p=\ Please note that most data definition language (DDL) statements, such as "create table", commit the current transaction. See the <a href\="grammar.html">Grammar</a> for details. advanced_1064_p=\ Transaction isolation is provided for all data manipulation language (DML) statements. advanced_1065_p=\ Please note that MVCC is enabled by default in version 1.4.x when using the MVStore. In this case, table level locking is not used. Instead, rows are locked for update, and read committed is used in all cases (changing the isolation level has no effect). advanced_1066_p=\ This database supports the following transaction isolation levels\: advanced_1067_b=Read Committed advanced_1068_li=\ This is the default level. Read locks are released immediately after executing the statement, but write locks are kept until the transaction commits. Higher concurrency is possible when using this level. advanced_1069_li=\ To enable, execute the SQL statement <code>SET LOCK_MODE 3</code> advanced_1070_li=\ or append <code>;LOCK_MODE\=3</code> to the database URL\: <code>jdbc\:h2\:~/test;LOCK_MODE\=3</code> advanced_1071_b=Serializable advanced_1072_li=\ Both read locks and write locks are kept until the transaction commits. To enable, execute the SQL statement <code>SET LOCK_MODE 1</code> advanced_1073_li=\ or append <code>;LOCK_MODE\=1</code> to the database URL\: <code>jdbc\:h2\:~/test;LOCK_MODE\=1</code> advanced_1074_b=Read Uncommitted advanced_1075_li=\ This level means that transaction isolation is disabled. advanced_1076_li=\ To enable, execute the SQL statement <code>SET LOCK_MODE 0</code> advanced_1077_li=\ or append <code>;LOCK_MODE\=0</code> to the database URL\: <code>jdbc\:h2\:~/test;LOCK_MODE\=0</code> advanced_1078_p=\ When using the isolation level 'serializable', dirty reads, non-repeatable reads, and phantom reads are prohibited. advanced_1079_b=Dirty Reads advanced_1080_li=\ Means a connection can read uncommitted changes made by another connection. advanced_1081_li=\ Possible with\: read uncommitted advanced_1082_b=Non-Repeatable Reads advanced_1083_li=\ A connection reads a row, another connection changes that row and commits, and the first connection re-reads the same row and gets the new result. advanced_1084_li=\ Possible with\: read uncommitted, read committed advanced_1085_b=Phantom Reads advanced_1086_li=\ A connection reads a set of rows using a condition, another connection inserts a row that falls in this condition and commits, then the first connection re-reads using the same condition and gets the new row. advanced_1087_li=\ Possible with\: read uncommitted, read committed advanced_1088_h3=Table Level Locking advanced_1089_p=\ The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used by default. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. Other connections may also hold a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, no other connection may hold any lock on the object. 
After the connection commits, all locks are released. This database keeps all locks in memory. When a lock is released, and multiple connections are waiting for it, one of them is picked at random. advanced_1090_h3=Lock Timeout advanced_1091_p=\ If a connection cannot get a lock on an object, the connection waits for some amount of time (the lock timeout). During this time, hopefully the connection holding the lock commits and it is then possible to get the lock. If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection. advanced_1092_h2=Multi-Version Concurrency Control (MVCC) advanced_1093_p=\ The MVCC feature allows higher concurrency than using (table level or row level) locks. When using MVCC in this database, delete, insert and update operations will only issue a shared lock on the table. An exclusive lock is still used when adding or removing columns, when dropping the table, and when using <code>SELECT ... FOR UPDATE</code>. Connections only 'see' committed data, and their own changes. That means, if connection A updates a row but doesn't commit this change yet, connection B will see the old value. Only when the change is committed does the new value become visible to other connections (read committed). If multiple connections concurrently try to update the same row, the database waits until it can apply the change, but at most until the lock timeout expires. advanced_1094_p=\ To use the MVCC feature, append <code>;MVCC\=TRUE</code> to the database URL\: advanced_1095_p=\ The setting must be specified in the first connection (the one that opens the database). It is not possible to enable or disable this setting while the database is already open. advanced_1096_p=\ If MVCC is enabled, changing the lock mode (<code>LOCK_MODE</code>) has no effect. advanced_1097_div=\ The MVCC mode is enabled by default in version 1.4.x, with the default MVStore storage engine. MVCC is disabled by default when using the PageStore storage engine (which is the default in version 1.3.x). The following applies when using the PageStore storage engine\: The MVCC feature is not fully tested yet. The limitations of the MVCC mode are\: with the PageStore storage engine, it can not be used at the same time as <code>MULTI_THREADED\=TRUE</code>; the complete undo log (the list of uncommitted changes) must fit in memory when using multi-version concurrency. The setting <code>MAX_MEMORY_UNDO</code> has no effect. advanced_1097_h2=Clustering / High Availability advanced_1098_p=\ This database supports a simple clustering / high availability mechanism. The architecture is\: two database servers run on two different computers, and on both computers is a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed only on one server until the other server is back up. advanced_1099_p=\ Clustering can only be used in the server mode (the embedded mode does not support clustering). The cluster can be re-created using the <code>CreateCluster</code> tool without stopping the remaining server. Applications that are still connected are automatically disconnected; however, when appending <code>;AUTO_RECONNECT\=TRUE</code>, they will recover from that. 
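# As a brief illustration of the cluster URL format and the AUTO_RECONNECT setting mentioned above, a client could connect as follows. This is a minimal sketch; host names, ports, database name, and credentials are examples only.
#
# import java.sql.Connection;
# import java.sql.DriverManager;
#
# public class ClusterConnect {
#     public static void main(String[] args) throws Exception {
#         // both cluster nodes are listed in the URL; AUTO_RECONNECT lets the
#         // client recover if it is disconnected, for example after a node failure
#         String url = "jdbc:h2:tcp://server1:9101,server2:9101/~/test;AUTO_RECONNECT=TRUE";
#         try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
#             conn.createStatement().execute("CREATE TABLE IF NOT EXISTS TEST(ID INT)");
#         }
#     }
# }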
advanced_1100_p=\ To initialize the cluster, use the following steps\: advanced_1101_li=Create a database advanced_1102_li=Use the <code>CreateCluster</code> tool to copy the database to another location and initialize the clustering. Afterwards, you have two databases containing the same data. advanced_1103_li=Start two servers (one for each copy of the database) advanced_1104_li=You are now ready to connect to the databases with the client application(s) advanced_1105_h3=Using the CreateCluster Tool advanced_1106_p=\ To understand how clustering works, please try out the following example. In this example, the two databases reside on the same computer, but usually, the databases will be on different servers. advanced_1107_li=Create two directories\: <code>server1, server2</code>. Each directory will simulate a directory on a computer. advanced_1108_li=Start a TCP server pointing to the first directory. You can do this using the command line\: advanced_1109_li=Start a second TCP server pointing to the second directory. This will simulate a server running on a second (redundant) computer. You can do this using the command line\: advanced_1110_li=Use the <code>CreateCluster</code> tool to initialize clustering. This will automatically create a new, empty database if it does not exist. Run the tool on the command line\: advanced_1111_li=You can now connect to the databases using an application or the H2 Console using the JDBC URL <code>jdbc\:h2\:tcp\://localhost\:9101,localhost\:9102/~/test</code> advanced_1112_li=If you stop a server (by killing the process), you will notice that the other machine continues to work, and therefore the database is still accessible. advanced_1113_li=To restore the cluster, you first need to delete the database that failed, then restart the server that was stopped, and re-run the <code>CreateCluster</code> tool. advanced_1114_h3=Detect Which Cluster Instances are Running advanced_1115_p=\ To find out which cluster nodes are currently running, execute the following SQL statement\: advanced_1116_p=\ If the result is <code>''</code> (two single quotes), then the cluster mode is disabled. Otherwise, the list of servers is returned, enclosed in single quote. Example\: <code>'server1\:9191,server2\:9191'</code>. advanced_1117_p=\ It is also possible to get the list of servers by using Connection.getClientInfo(). advanced_1118_p=\ The property list returned from <code>getClientInfo()</code> contains a <code>numServers</code> property that returns the number of servers that are in the connection list. To get the actual servers, <code>getClientInfo()</code> also has properties <code>server0</code>..<code>serverX</code>, where serverX is the number of servers minus 1. advanced_1119_p=\ Example\: To get the 2nd server in the connection list one uses <code>getClientInfo('server1')</code>. <b>Note\:</b> The <code>serverX</code> property only returns IP addresses and ports and not hostnames. advanced_1120_h3=Clustering Algorithm and Limitations advanced_1121_p=\ Read-only queries are only executed against the first cluster node, but all other statements are executed against all nodes. There is currently no load balancing made to avoid problems with transactions. The following functions may yield different results on different cluster nodes and must be executed with care\: <code>UUID(), RANDOM_UUID(), SECURE_RAND(), SESSION_ID(), MEMORY_FREE(), MEMORY_USED(), CSVREAD(), CSVWRITE(), RAND()</code> [when not using a seed]. 
Those functions should not be used directly in modifying statements (for example <code>INSERT, UPDATE, MERGE</code>). However, they can be used in read-only statements and the result can then be used for modifying statements. Using auto-increment and identity columns is currently not supported. Instead, sequence values need to be manually requested and then used to insert data (using two statements). advanced_1122_p=\ When using the cluster modes, result sets are read fully in memory by the client, so that there is no problem if the server dies that executed the query. Result sets must fit in memory on the client side. advanced_1123_p=\ The SQL statement <code>SET AUTOCOMMIT FALSE</code> is not supported in the cluster mode. To disable autocommit, the method <code>Connection.setAutoCommit(false)</code> needs to be called. advanced_1124_p=\ It is possible that a transaction from one connection overtakes a transaction from a different connection. Depending on the operations, this might result in different results, for example when conditionally incrementing a value in a row. advanced_1125_h2=Two Phase Commit advanced_1126_p=\ The two phase commit protocol is supported. 2-phase-commit works as follows\: advanced_1127_li=Autocommit needs to be switched off advanced_1128_li=A transaction is started, for example by inserting a row advanced_1129_li=The transaction is marked 'prepared' by executing the SQL statement <code>PREPARE COMMIT transactionName</code> advanced_1130_li=The transaction can now be committed or rolled back advanced_1131_li=If a problem occurs before the transaction was successfully committed or rolled back (for example because a network problem occurred), the transaction is in the state 'in-doubt' advanced_1132_li=When re-connecting to the database, the in-doubt transactions can be listed with <code>SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT</code> advanced_1133_li=Each transaction in this list must now be committed or rolled back by executing <code>COMMIT TRANSACTION transactionName</code> or <code>ROLLBACK TRANSACTION transactionName</code> advanced_1134_li=The database needs to be closed and re-opened to apply the changes advanced_1135_h2=Compatibility advanced_1136_p=\ This database is (up to a certain point) compatible to other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible. advanced_1137_h3=Transaction Commit when Autocommit is On advanced_1138_p=\ At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case when the result set is closed. advanced_1139_h3=Keywords / Reserved Words advanced_1140_p=\ There is a list of keywords that can't be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). 
The list is currently\: advanced_1141_code=\ CROSS, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DISTINCT, EXCEPT, EXISTS, FALSE, FETCH, FOR, FROM, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, LIMIT, MINUS, NATURAL, NOT, NULL, OFFSET, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, UNIQUE, WHERE advanced_1142_p=\ Certain words of this list are keywords because they are functions that can be used without '()' for compatibility, for example <code>CURRENT_TIMESTAMP</code>. advanced_1143_h2=Standards Compliance advanced_1144_p=\ This database tries to be as standards compliant as possible. For the SQL language, ANSI/ISO is the main standard. There are several versions that refer to the release date\: SQL-92, SQL\:1999, and SQL\:2003. Unfortunately, the standard documentation is not freely available. Another problem is that important features are not standardized. Whenever this is the case, this database tries to be compatible with other databases. advanced_1145_h3=Supported Character Sets, Character Encoding, and Unicode advanced_1146_p=\ H2 internally uses Unicode, and supports all character encoding systems and character sets supported by the virtual machine you use. advanced_1147_h2=Run as Windows Service advanced_1148_p=\ Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from <a href\="http\://wrapper.tanukisoftware.org">Tanuki Software, Inc.</a> is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory <code>h2/service</code>. advanced_1149_p=\ The service wrapper bundled with H2 is a 32-bit version. To run it on a 64-bit version of Windows (x64), you need to use a 64-bit version of the wrapper, for example the one from <a href\="http\://www.krenger.ch/blog/java-service-wrapper-3-5-14-for-windows-x64/"> Simon Krenger</a>. advanced_1150_p=\ When running the database as a service, absolute paths should be used. Using <code>~</code> in the database URL is problematic in this case, because it refers to the home directory of the current user. The service might run as the wrong user, or without one, so that the database files might end up in an unexpected place. advanced_1151_h3=Install the Service advanced_1152_p=\ The service needs to be registered as a Windows Service first. To do that, double click on <code>1_install_service.bat</code>. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear. advanced_1153_h3=Start the Service advanced_1154_p=\ You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on <code>2_start_service.bat</code>. Please note that the batch file does not print an error message if the service is not installed. advanced_1155_h3=Connect to the H2 Console advanced_1156_p=\ After installing and starting the service, you can connect to the H2 Console application using a browser. Double click on <code>3_start_browser.bat</code> to do that. The default port (8082) is hard coded in the batch file. advanced_1157_h3=Stop the Service advanced_1158_p=\ To stop the service, double click on <code>4_stop_service.bat</code>. Please note that the batch file does not print an error message if the service is not installed or started. 
advanced_1159_h3=Uninstall the Service advanced_1160_p=\ To uninstall the service, double click on <code>5_uninstall_service.bat</code>. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear. advanced_1161_h3=Additional JDBC drivers advanced_1162_p=\ To use other databases (for example MySQL), the location of the JDBC drivers of those databases need to be added to the environment variables <code>H2DRIVERS</code> or <code>CLASSPATH</code> before installing the service. Multiple drivers can be set; each entry needs to be separated with a <code>;</code> (Windows) or <code>\:</code> (other operating systems). Spaces in the path names are supported. The settings must not be quoted. advanced_1163_h2=ODBC Driver advanced_1164_p=\ This database does not come with its own ODBC driver at this time, but it supports the PostgreSQL network protocol. Therefore, the PostgreSQL ODBC driver can be used. Support for the PostgreSQL network protocol is quite new and should be viewed as experimental. It should not be used for production applications. advanced_1165_p=\ To use the PostgreSQL ODBC driver on 64 bit versions of Windows, first run <code>c\:/windows/syswow64/odbcad32.exe</code>. At this point you set up your DSN just like you would on any other system. See also\: <a href\="http\://archives.postgresql.org/pgsql-odbc/2005-09/msg00125.php">Re\: ODBC Driver on Windows 64 bit</a> advanced_1166_h3=ODBC Installation advanced_1167_p=\ First, the ODBC driver must be installed. Any recent PostgreSQL ODBC driver should work, however version 8.2 (<code>psqlodbc-08_02*</code>) or newer is recommended. The Windows version of the PostgreSQL ODBC driver is available at <a href\="http\://www.postgresql.org/ftp/odbc/versions/msi">http\://www.postgresql.org/ftp/odbc/versions/msi</a>. advanced_1168_h3=Starting the Server advanced_1169_p=\ After installing the ODBC driver, start the H2 Server using the command line\: advanced_1170_p=\ The PG Server (PG for PostgreSQL protocol) is started as well. By default, databases are stored in the current working directory where the server is started. Use <code>-baseDir</code> to save databases in another directory, for example the user home directory\: advanced_1171_p=\ The PG server can be started and stopped from within a Java application as follows\: advanced_1172_p=\ By default, only connections from localhost are allowed. To allow remote connections, use <code>-pgAllowOthers</code> when starting the server. advanced_1173_p=\ To map an ODBC database name to a different JDBC database name, use the option <code>-key</code> when starting the server. Please note only one mapping is allowed. The following will map the ODBC database named <code>TEST</code> to the database URL <code>jdbc\:h2\:~/data/test;cipher\=aes</code>\: advanced_1174_h3=ODBC Configuration advanced_1175_p=\ After installing the driver, a new Data Source must be added. In Windows, run <code>odbcad32.exe</code> to open the Data Source Administrator. Then click on 'Add...' and select the PostgreSQL Unicode driver. Then click 'Finish'. You will be able to change the connection properties. The property column represents the property key in the <code>odbc.ini</code> file (which may be different from the GUI). 
advanced_1176_th=Property advanced_1177_th=Example advanced_1178_th=Remarks advanced_1179_td=Data Source advanced_1180_td=H2 Test advanced_1181_td=The name of the ODBC Data Source advanced_1182_td=Database advanced_1183_td=~/test;ifexists\=true advanced_1184_td=\ The database name. This can include connection settings. By default, the database is stored in the current working directory where the Server is started, except when the -baseDir setting is used. The name must be at least 3 characters. advanced_1185_td=Servername advanced_1186_td=localhost advanced_1187_td=The server name or IP address. advanced_1188_td=By default, only local connections are allowed advanced_1189_td=Username advanced_1190_td=sa advanced_1191_td=The database user name. advanced_1192_td=SSL advanced_1193_td=false (disabled) advanced_1194_td=At this time, SSL is not supported. advanced_1195_td=Port advanced_1196_td=5435 advanced_1197_td=The port where the PG Server is listening. advanced_1198_td=Password advanced_1199_td=sa advanced_1200_td=The database password. advanced_1201_p=\ To improve performance, please enable 'server side prepare' under Options / Datasource / Page 2 / Server side prepare. advanced_1202_p=\ Afterwards, you may use this data source. advanced_1203_h3=PG Protocol Support Limitations advanced_1204_p=\ At this time, only a subset of the PostgreSQL network protocol is implemented. Also, there may be compatibility problems on the SQL level, with the catalog, or with text encoding. Problems are fixed as they are found. Currently, statements can not be canceled when using the PG protocol. Also, H2 does not provide index metadata over ODBC. advanced_1205_p=\ PostgreSQL ODBC Driver Setup requires a database password; that means it is not possible to connect to H2 databases without a password. This is a limitation of the ODBC driver. advanced_1206_h3=Security Considerations advanced_1207_p=\ Currently, the PG Server does not support challenge-response authentication or password encryption. This may be a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable to the attacker. Also, it is currently not possible to use encrypted SSL connections. Therefore the ODBC driver should not be used where security is important. advanced_1208_p=\ The first connection that opens a database using the PostgreSQL server needs to be an administrator user. Subsequent connections don't need to be opened by an administrator. advanced_1209_h3=Using Microsoft Access advanced_1210_p=\ When using Microsoft Access to edit data in a linked H2 table, you may need to enable the following option\: Tools - Options - Edit/Find - ODBC fields. advanced_1211_h2=Using H2 in Microsoft .NET advanced_1212_p=\ The database can be used from Microsoft .NET even without using Java, by using IKVM.NET. You can access an H2 database on .NET using the JDBC API, or using the ADO.NET interface. advanced_1213_h3=Using the ADO.NET API on .NET advanced_1214_p=\ An implementation of the ADO.NET interface is available in the open source project <a href\="http\://code.google.com/p/h2sharp">H2Sharp</a>. advanced_1215_h3=Using the JDBC API on .NET advanced_1216_li=Install the .NET Framework from <a href\="http\://www.microsoft.com">Microsoft</a>. Mono has not yet been tested. advanced_1217_li=Install <a href\="http\://www.ikvm.net">IKVM.NET</a>. 
advanced_1218_li=Copy the <code>h2*.jar</code> file to <code>ikvm/bin</code> advanced_1219_li=Run the H2 Console using\: <code>ikvm -jar h2*.jar</code> advanced_1220_li=Convert the H2 Console to an <code>.exe</code> file using\: <code>ikvmc -target\:winexe h2*.jar</code>. You may ignore the warnings. advanced_1221_li=Create a <code>.dll</code> file using (change the version accordingly)\: <code>ikvmc.exe -target\:library -version\:1.0.69.0 h2*.jar</code> advanced_1222_p=\ If you want your C\# application to use H2, you need to add the <code>h2.dll</code> and the <code>IKVM.OpenJDK.ClassLibrary.dll</code> to your C\# solution. Here is some sample code\: advanced_1223_h2=ACID advanced_1224_p=\ In the database world, ACID stands for\: advanced_1225_li=Atomicity\: transactions must be atomic, meaning either all tasks are performed or none. advanced_1226_li=Consistency\: all operations must comply with the defined constraints. advanced_1227_li=Isolation\: transactions must be isolated from each other. advanced_1228_li=Durability\: committed transactions will not be lost. advanced_1229_h3=Atomicity advanced_1230_p=\ Transactions in this database are always atomic. advanced_1231_h3=Consistency advanced_1232_p=\ By default, this database is always in a consistent state. Referential integrity rules are enforced except when explicitly disabled. advanced_1233_h3=Isolation advanced_1234_p=\ For H2, as with most other database systems, the default isolation level is 'read committed'. This provides better performance, but also means that transactions are not completely isolated. H2 supports the transaction isolation levels 'serializable', 'read committed', and 'read uncommitted'. advanced_1235_h3=Durability advanced_1236_p=\ This database does not guarantee that all committed transactions survive a power failure. Tests show that all databases sometimes lose transactions on power failure (for details, see below). Where losing transactions is not acceptable, a laptop or UPS (uninterruptible power supply) should be used. If durability is required for all possible cases of hardware failure, clustering should be used, such as the H2 clustering mode. advanced_1237_h2=Durability Problems advanced_1238_p=\ Complete durability means all committed transactions survive a power failure. Some databases claim they can guarantee durability, but such claims are wrong. A durability test was run against H2, HSQLDB, PostgreSQL, and Derby. All of those databases sometimes lose committed transactions. The test is included in the H2 download, see <code>org.h2.test.poweroff.Test</code>. advanced_1239_h3=Ways to (Not) Achieve Durability advanced_1240_p=\ Making sure that committed transactions are not lost is more complicated than it first seems. To guarantee complete durability, a database must ensure that the log record is on the hard drive before the commit call returns. To do that, databases use different methods. One is to use the 'synchronous write' file access mode. In Java, <code>RandomAccessFile</code> supports the modes <code>rws</code> and <code>rwd</code>\: advanced_1241_code=rwd advanced_1242_li=\: every update to the file's content is written synchronously to the underlying storage device. advanced_1243_code=rws advanced_1244_li=\: in addition to <code>rwd</code>, every update to the metadata is written synchronously. advanced_1245_p=\ A test (<code>org.h2.test.poweroff.TestWrite</code>) with one of those modes achieves around 50 thousand write operations per second. 
Even when the operating system write buffer is disabled, the write rate is around 50 thousand operations per second. This feature does not force changes to disk because it does not flush all buffers. The test updates the same byte in the file again and again. If the hard drive was able to write at this rate, then the disk would need to make at least 50 thousand revolutions per second, or 3 million RPM (revolutions per minute). There are no such hard drives. The hard drive used for the test is about 7200 RPM, or about 120 revolutions per second. There is an overhead, so the maximum write rate must be lower than that. advanced_1246_p=\ Calling <code>fsync</code> flushes the buffers. There are two ways to do that in Java\: advanced_1247_code=FileDescriptor.sync() advanced_1248_li=. The documentation says that this forces all system buffers to synchronize with the underlying device. This method is supposed to return after all in-memory modified copies of buffers associated with this file descriptor have been written to the physical medium. advanced_1249_code=FileChannel.force() advanced_1250_li=. This method is supposed to force any updates to this channel's file to be written to the storage device that contains it. advanced_1251_p=\ By default, MySQL calls <code>fsync</code> for each commit. When using one of those methods, only around 60 write operations per second can be achieved, which is consistent with the RPM rate of the hard drive used. Unfortunately, even when calling <code>FileDescriptor.sync()</code> or <code>FileChannel.force()</code>, data is not always persisted to the hard drive, because most hard drives do not obey <code>fsync()</code>\: see <a href\="http\://hardware.slashdot.org/article.pl?sid\=05/05/13/0529252">Your Hard Drive Lies to You</a>. In Mac OS X, <code>fsync</code> does not flush hard drive buffers. See <a href\="http\://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html">Bad fsync?</a>. So the situation is confusing, and tests prove there is a problem. advanced_1252_p=\ Trying to flush hard drive buffers is hard, and if you do the performance is very bad. First you need to make sure that the hard drive actually flushes all buffers. Tests show that this can not be done in a reliable way. Then the maximum number of transactions is around 60 per second. Because of those reasons, the default behavior of H2 is to delay writing committed transactions. advanced_1253_p=\ In H2, after a power failure, a bit more than one second of committed transactions may be lost. To change the behavior, use <code>SET WRITE_DELAY</code> and <code>CHECKPOINT SYNC</code>. Most other databases support commit delay as well. In the performance comparison, commit delay was used for all databases that support it. advanced_1254_h3=Running the Durability Test advanced_1255_p=\ To test the durability / non-durability of this and other databases, you can use the test application in the package <code>org.h2.test.poweroff</code>. Two computers with network connection are required to run this test. One computer just listens, while the test application is run (and power is cut) on the other computer. The computer with the listener application opens a TCP/IP port and listens for an incoming connection. The second computer first connects to the listener, and then created the databases and starts inserting records. The connection is set to 'autocommit', which means after each inserted record a commit is performed automatically. 
Afterwards, the test computer notifies the listener that this record was inserted successfully. The listener computer displays the last inserted record number every 10 seconds. Now, switch off the power manually, then restart the computer, and run the application again. You will find out that in most cases, none of the databases contains all the records that the listener computer knows about. For details, please consult the source code of the listener and test application. advanced_1256_h2=Using the Recover Tool advanced_1257_p=\ The <code>Recover</code> tool can be used to extract the contents of a database file, even if the database is corrupted. It also extracts the content of the transaction log and large objects (CLOB or BLOB). To run the tool, type on the command line\: advanced_1258_p=\ For each database in the current directory, a text file will be created. This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file can be executed using the <code>RunScript</code> tool or a <code>RUNSCRIPT FROM</code> SQL statement. The script includes at least one <code>CREATE USER</code> statement. If you run the script against a database that was created with the same user, or if there are conflicting users, running the script will fail. Consider running the script against a database that was created with a user name that is not in the script. advanced_1259_p=\ The <code>Recover</code> tool creates a SQL script from database file. It also processes the transaction log. advanced_1260_p=\ To verify the database can recover at any time, append <code>;RECOVER_TEST\=64</code> to the database URL in your test environment. This will simulate an application crash after each 64 writes to the database file. A log file named <code>databaseName.h2.db.log</code> is created that lists the operations. The recovery is tested using an in-memory file system, that means it may require a larger heap setting. advanced_1261_h2=File Locking Protocols advanced_1262_p=\ Multiple concurrent connections to the same database are supported, however a database file can only be open for reading and writing (in embedded mode) by one process at the same time. Otherwise, the processes would overwrite each others data and corrupt the database file. To protect against this problem, whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database stops normally, this lock file is deleted. advanced_1263_p=\ In special cases (if the process did not terminate normally, for example because there was a power failure), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files. There are two methods (algorithms) implemented to provide both security (that is, the same database files cannot be opened by two processes at the same time) and simplicity (that is, the lock file does not need to be deleted manually by the user). The two methods are 'file method' and 'socket methods'. advanced_1264_p=\ The file locking protocols (except the file locking method 'FS') have the following limitation\: if a shared file system is used, and the machine with the lock owner is sent to sleep (standby or hibernate), another machine may take over. 
If the machine that originally held the lock wakes up, the database may become corrupt. If this situation can occur, the application must ensure the database is closed when the application is put to sleep. advanced_1265_h3=File Locking Method 'File' advanced_1266_p=\ The default method for database file locking for version 1.3 and older is the 'File Method'. The algorithm is\: advanced_1267_li=If the lock file does not exist, it is created (using the atomic operation <code>File.createNewFile</code>). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition when one process deletes the lock file just after another one creates it, and a third process creates the file again. It does not occur if there are only two writers. advanced_1268_li=\ If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (by default once per second) if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not get through undetected even if the system is very busy. However, the watchdog thread uses very few resources (CPU time), because it waits most of the time. Also, the watchdog only reads from the hard disk and does not write to it. advanced_1269_li=\ If the lock file exists and was recently modified, the process waits for some time (up to two seconds). If it is still being modified, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case and the file is locked. advanced_1270_p=\ This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. That said, using that many concurrent threads / processes is not the common use case. Generally, an application should throw an error to the user if it cannot open a database, and not try again in a (fast) loop. advanced_1271_h3=File Locking Method 'Socket' advanced_1272_p=\ There is a second locking mechanism implemented, but disabled by default. To use it, append <code>;FILE_LOCK\=SOCKET</code> to the database URL. The algorithm is\: advanced_1273_li=If the lock file does not exist, it is created. Then a server socket is opened on a defined port, and kept open. The port and IP address of the process that opened the database is written into the lock file. advanced_1274_li=If the lock file exists, and the lock method is 'file', then the software switches to the 'file' method. advanced_1275_li=If the lock file exists, and the lock method is 'socket', then the process checks if the port is in use. 
If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example due to a power failure, or abnormal termination of the virtual machine), then the port was released. The new process deletes the lock file and starts again. advanced_1276_p=\ This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is, if the file is stored on a network share, two processes (running on different computers) could still open the same database files, if they do not have a direct TCP/IP connection. advanced_1277_h3=File Locking Method 'FS' advanced_1278_p=\ This is the default mode for version 1.4 and newer. This database file locking mechanism uses native file system lock on the database file. No *.lock.db file is created in this case, and no background thread is started. This mechanism may not work on all systems as expected. Some systems allow to lock the same file multiple times within the same virtual machine, and on some system native file locking is not supported or files are not unlocked after a power failure. advanced_1279_p=\ To enable this feature, append <code>;FILE_LOCK\=FS</code> to the database URL. advanced_1280_p=\ This feature is relatively new. When using it for production, please ensure your system does in fact lock files as expected. advanced_1281_h2=Using Passwords advanced_1282_h3=Using Secure Passwords advanced_1283_p=\ Remember that weak passwords can be broken regardless of the encryption and security protocols. Don't use passwords that can be found in a dictionary. Appending numbers does not make passwords secure. A way to create good passwords that can be remembered is\: take the first letters of a sentence, use upper and lower case characters, and creatively include special characters (but it's more important to use a long password than to use special characters). Example\: advanced_1284_code=i'sE2rtPiUKtT advanced_1285_p=\ from the sentence <code>it's easy to remember this password if you know the trick</code>. advanced_1286_h3=Passwords\: Using Char Arrays instead of Strings advanced_1287_p=\ Java strings are immutable objects and cannot be safely 'destroyed' by the application. After creating a string, it will remain in the main memory of the computer at least until it is garbage collected. The garbage collection cannot be controlled by the application, and even if it is garbage collected the data may still remain in memory. It might also be possible that the part of memory containing the password is swapped to disk (if not enough main memory is available), which is a problem if the attacker has access to the swap file of the operating system. advanced_1288_p=\ It is a good idea to use char arrays instead of strings for passwords. Char arrays can be cleared (filled with zeros) after use, and therefore the password will not be stored in the swap file. advanced_1289_p=\ This database supports using char arrays instead of string to pass user and file passwords. The following code can be used to do that\: advanced_1290_p=\ This example requires Java 1.6. When using Swing, use <code>javax.swing.JPasswordField</code>. advanced_1291_h3=Passing the User Name and/or Password in the URL advanced_1292_p=\ Instead of passing the user name as a separate parameter as in <code> Connection conn \= DriverManager. 
getConnection("jdbc\:h2\:~/test", "sa", "123"); </code> the user name (and/or password) can be supplied in the URL itself\: <code> Connection conn \= DriverManager. getConnection("jdbc\:h2\:~/test;USER\=sa;PASSWORD\=123"); </code> The settings in the URL override the settings passed as a separate parameter. advanced_1293_h2=Password Hash advanced_1294_p=\ Sometimes the database password needs to be stored in a configuration file (for example in the <code>web.xml</code> file). In addition to connecting with the plain text password, this database supports connecting with the password hash. This means that only the hash of the password (and not the plain text password) needs to be stored in the configuration file. This will only protect others from reading or re-constructing the plain text password (even if they have access to the configuration file); it does not protect others from accessing the database using the password hash. advanced_1295_p=\ To connect using the password hash instead of plain text password, append <code>;PASSWORD_HASH\=TRUE</code> to the database URL, and replace the password with the password hash. To calculate the password hash from a plain text password, run the following command within the H2 Console tool\: <code>@password_hash <upperCaseUserName> <password></code>. As an example, if the user name is <code>sa</code> and the password is <code>test</code>, run the command <code>@password_hash SA test</code>. Then use the resulting password hash as you would use the plain text password. When using an encrypted database, then the user password and file password need to be hashed separately. To calculate the hash of the file password, run\: <code>@password_hash file <filePassword></code>. advanced_1296_h2=Protection against SQL Injection advanced_1297_h3=What is SQL Injection advanced_1298_p=\ This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as\: advanced_1299_p=\ If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password\: <code>' OR ''\='</code>. In this case the statement becomes\: advanced_1300_p=\ Which is always true no matter what the password stored in the database is. For more information about SQL Injection, see <a href\="\#glossary_links">Glossary and Links</a>. advanced_1301_h3=Disabling Literals advanced_1302_p=\ SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a prepared statement\: advanced_1303_p=\ This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement\: advanced_1304_p=\ Afterwards, SQL statements with text and number literals are not allowed any more. That means, SQL statement of the form <code>WHERE NAME\='abc'</code> or <code>WHERE CustomerId\=10</code> will fail. It is still possible to use prepared statements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. 
There is also a second mode where number literals are allowed\: <code>SET ALLOW_LITERALS NUMBERS</code>. To allow all literals, execute <code>SET ALLOW_LITERALS ALL</code> (this is the default setting). Literals can only be enabled or disabled by an administrator. advanced_1305_h3=Using Constants advanced_1306_p=\ Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the <code>CREATE CONSTANT</code> command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas\: advanced_1307_p=\ Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, the source code is easier to understand and change. advanced_1308_h3=Using the ZERO() Function advanced_1309_p=\ It is not required to create a constant for the number 0 as there is already a built-in function <code>ZERO()</code>\: advanced_1310_h2=Protection against Remote Access advanced_1311_p=\ By default this database does not allow connections from other machines when starting the H2 Console, the TCP server, or the PG server. Remote access can be enabled using the command line options <code>-webAllowOthers, -tcpAllowOthers, -pgAllowOthers</code>. advanced_1312_p=\ If you enable remote access using <code>-tcpAllowOthers</code> or <code>-pgAllowOthers</code>, please also consider using the options <code>-baseDir, -ifExists</code>, so that remote users can not create new databases or access existing databases with weak passwords. When using the option <code>-baseDir</code>, only databases within that directory may be accessed. Ensure the existing accessible databases are protected using strong passwords. advanced_1313_p=\ If you enable remote access using <code>-webAllowOthers</code>, please ensure the web server can only be accessed from trusted networks. The options <code>-baseDir, -ifExists</code> don't protect access to the tools section, prevent remote shutdown of the web server, changes to the preferences, the saved connection settings, or access to other databases accessible from the system. advanced_1314_h2=Restricting Class Loading and Usage advanced_1315_p=\ By default there is no restriction on loading classes and executing Java code for admins. That means an admin may call system functions such as <code>System.setProperty</code> by executing\: advanced_1316_p=\ To restrict users (including admins) from loading classes and executing code, the list of allowed classes can be set in the system property <code>h2.allowedClasses</code> in the form of a comma separated list of classes or patterns (items ending with <code>*</code>). By default all classes are allowed. Example\: advanced_1317_p=\ This mechanism is used for all user classes, including database event listeners, trigger classes, user-defined functions, user-defined aggregate functions, and JDBC driver classes (with the exception of the H2 driver) when using the H2 Console. advanced_1318_h2=Security Protocols advanced_1319_p=\ The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives. 
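# As a short illustration of the literal blocking described under "Protection against SQL Injection" above, the following sketch disables literals and passes user input only through parameters. The USERS table and its columns are hypothetical.
#
# import java.sql.Connection;
# import java.sql.DriverManager;
# import java.sql.PreparedStatement;
# import java.sql.ResultSet;
#
# public class LiteralFreeLogin {
#     public static boolean checkPassword(String user, String password) throws Exception {
#         try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "")) {
#             // executed once by an administrator; from now on, statements that
#             // embed text or number literals are rejected
#             conn.createStatement().execute("SET ALLOW_LITERALS NONE");
#             try (PreparedStatement prep = conn.prepareStatement(
#                     "SELECT * FROM USERS WHERE NAME=? AND PASSWORD=?")) {
#                 prep.setString(1, user);
#                 prep.setString(2, password);
#                 try (ResultSet rs = prep.executeQuery()) {
#                     return rs.next();
#                 }
#             }
#         }
#     }
# }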
advanced_1320_h3=User Password Encryption advanced_1321_p=\ When a user tries to connect to a database, the combination of user name, @, and password are hashed using SHA-256, and this hash value is transmitted to the database. This step does not protect against an attacker that re-uses the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication\: Basic and Digest Access Authentication' for more information. advanced_1322_p=\ When a new database or user is created, a new random salt value is generated. The size of the salt is 64 bits. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords. advanced_1323_p=\ The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is\: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only accessible remotely, then the iteration count is not required at all. advanced_1324_h3=File Encryption advanced_1325_p=\ The database files can be encrypted using the AES-128 algorithm. advanced_1326_p=\ When a user tries to connect to an encrypted database, the combination of <code>file@</code> and the file password is hashed using SHA-256. This hash value is transmitted to the server. advanced_1327_p=\ When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bits. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords. advanced_1328_p=\ The resulting hash value is used as the key for the block cipher algorithm. Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks. advanced_1329_p=\ Before saving a block of data (each block is 8 bytes long), the following operations are executed\: first, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 algorithm. advanced_1330_p=\ When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated combined with the decrypted text using XOR. 
advanced_1331_p=\ Therefore, the block cipher mode of operation is CBC (cipher-block chaining), but each chain is only one block long. The advantage over the ECB (electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped ciphertext bits are not propagated to flipped plaintext bits in the next block. advanced_1332_p=\ Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to files while the database is in use. When he has write access, he can for example replace pieces of files with pieces of older versions and manipulate data in this way. advanced_1333_p=\ File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.5 times longer using AES (embedded mode). advanced_1334_h3=Wrong Password / User Name Delay advanced_1335_p=\ To protect against remote brute force password attacks, the delay after each unsuccessful login doubles. Use the system properties <code>h2.delayWrongPasswordMin</code> and <code>h2.delayWrongPasswordMax</code> to change the minimum (the default is 250 milliseconds) or maximum delay (the default is 4000 milliseconds, or 4 seconds). The delay only applies for those using the wrong password. Normally there is no delay for a user who knows the correct password, with one exception\: after using the wrong password, there is a randomly distributed delay of up to the same length as for a wrong password. This is to protect against parallel brute force attacks, so that an attacker needs to wait for the whole delay. Delays are synchronized. This is also required to protect against parallel attacks. advanced_1336_p=\ There is only one exception message for both a wrong user name and a wrong password, to make it harder to get the list of user names. It is not possible from the stack trace to see if the user name was wrong or the password. advanced_1337_h3=HTTPS Connections advanced_1338_p=\ The web server supports HTTP and HTTPS connections using <code>SSLServerSocket</code>. There is a default self-signed certificate to support an easy starting point, but custom certificates are supported as well. advanced_1339_h2=TLS Connections advanced_1340_p=\ Remote TLS connections are supported using the Java Secure Socket Extension (<code>SSLServerSocket, SSLSocket</code>). By default, anonymous TLS is enabled. advanced_1341_p=\ To use your own keystore, set the system properties <code>javax.net.ssl.keyStore</code> and <code>javax.net.ssl.keyStorePassword</code> before starting the H2 server and client. See also <a href\="http\://java.sun.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html\#CustomizingStores"> Customizing the Default Key and Trust Stores, Store Types, and Store Passwords</a> for more information. advanced_1342_p=\ To disable anonymous TLS, set the system property <code>h2.enableAnonymousTLS</code> to false. advanced_1343_h2=Universally Unique Identifiers (UUID) advanced_1344_p=\ This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo-random number generator. With random UUIDs, the chance of two having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits.
4 bits are used for the version (Randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function <code>RANDOM_UUID()</code> or <code>UUID()</code>. Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values\: advanced_1345_p=\ Some values are\: advanced_1346_th=Number of UUIDs advanced_1347_th=Probability of Duplicates advanced_1348_td=2^36\=68'719'476'736 advanced_1349_td=0.000'000'000'000'000'4 advanced_1350_td=2^41\=2'199'023'255'552 advanced_1351_td=0.000'000'000'000'4 advanced_1352_td=2^46\=70'368'744'177'664 advanced_1353_td=0.000'000'000'4 advanced_1354_p=\ To help non-mathematicians understand what those numbers mean, here is a comparison\: one's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion; that means the probability is about 0.000'000'000'06. advanced_1355_h2=Spatial Features advanced_1356_p=\ H2 supports the geometry data type and spatial indexes if the <a href\="http\://tsusiatsoftware.net/jts/main.html">JTS Topology Suite</a> is in the classpath. To run the H2 Console tool with JTS support, you need to download the <a href\="http\://search.maven.org/remotecontent?filepath\=com/vividsolutions/jts-core/1.14.0/jts-core-1.14.0.jar">JTS-CORE 1.14.0 jar file</a> and place it in the h2 bin directory. Then edit the <code>h2.sh</code> file as follows\: advanced_1357_p=\ Here is an example SQL script to create a table with a spatial column and index\: advanced_1358_p=\ To query the table using geometry envelope intersection, use the operation <code>&&</code>, as in PostGIS\: advanced_1359_p=\ You can verify that the spatial index is used using the "explain plan" feature\: advanced_1360_p=\ For persistent databases, the spatial index is stored on disk; for in-memory databases, the index is kept in memory. advanced_1361_h2=Recursive Queries advanced_1362_p=\ H2 has experimental support for recursive queries using so-called "common table expressions" (CTE). Examples\: advanced_1363_p=\ Limitations\: Recursive queries need to be of the type <code>UNION ALL</code>, and the recursion needs to be on the second part of the query. No tables or views with the name of the table expression may exist. Different table expression names need to be used when using multiple distinct table expressions within the same transaction and for the same session. All columns of the table expression are of type <code>VARCHAR</code>, and may need to be cast to the required data type. Views with recursive queries are not supported. Subqueries and <code>INSERT INTO ... FROM</code> with recursive queries are not supported. Parameters are only supported within the last <code>SELECT</code> statement (a workaround is to use session variables like <code>@start</code> within the table expression). The syntax is\: advanced_1364_h2=Settings Read from System Properties advanced_1365_p=\ Some settings of the database can be set on the command line using <code>-DpropertyName\=value</code>. It is usually not required to change those settings manually. The settings are case sensitive. Example\: advanced_1366_p=\ The current value of the settings can be read in the table <code>INFORMATION_SCHEMA.SETTINGS</code>. advanced_1367_p=\ For a complete list of settings, see <a href\="../javadoc/org/h2/engine/SysProperties.html">SysProperties</a>.
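As a small sketch of reading these settings (assuming only the standard JDBC API and the NAME and VALUE columns of that table), the following program prints the current values; a system property could be supplied on the command line as shown in the comment\:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ShowSettings {
        public static void main(String[] args) throws Exception {
            // a setting could be passed on the command line, for example:
            //   java -Dh2.bindAddress=127.0.0.1 ShowSettings
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
                    Statement stat = conn.createStatement();
                    ResultSet rs = stat.executeQuery(
                            "SELECT NAME, VALUE FROM INFORMATION_SCHEMA.SETTINGS ORDER BY NAME")) {
                while (rs.next()) {
                    System.out.println(rs.getString("NAME") + " = " + rs.getString("VALUE"));
                }
            }
        }
    }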
advanced_1368_h2=Setting the Server Bind Address advanced_1369_p=\ Usually server sockets accept connections on any/all local addresses. This may be a problem on multi-homed hosts. To bind only to one address, use the system property <code>h2.bindAddress</code>. This setting is used for both regular server sockets and for TLS server sockets. IPv4 and IPv6 address formats are supported. advanced_1370_h2=Pluggable File System advanced_1371_p=\ This database supports a pluggable file system API. The file system implementation is selected using a file name prefix. Internally, the interfaces are very similar to the Java 7 NIO2 API, but do not (yet) use or require Java 7. The following file systems are included\: advanced_1372_code=zip\: advanced_1373_li=\ read-only zip-file based file system. Format\: <code>zip\:/zipFileName\!/fileName</code>. advanced_1374_code=split\: advanced_1375_li=\ file system that splits files into 1 GB files (stackable with other file systems). advanced_1376_code=nio\: advanced_1377_li=\ file system that uses <code>FileChannel</code> instead of <code>RandomAccessFile</code> (faster in some operating systems). advanced_1378_code=nioMapped\: advanced_1379_li=\ file system that uses memory-mapped files (faster in some operating systems). Please note that there currently is a file size limitation of 2 GB when using this file system. To work around this limitation, combine it with the split file system\: <code>split\:nioMapped\:test</code>. advanced_1380_code=memFS\: advanced_1381_li=\ in-memory file system (slower than mem; experimental; mainly used for testing the database engine itself). advanced_1382_code=memLZF\: advanced_1383_li=\ compressing in-memory file system (slower than memFS but uses less memory; experimental; mainly used for testing the database engine itself). advanced_1384_code=nioMemFS\: advanced_1385_li=\ stores data outside of the VM's heap - useful for large memory DBs without incurring GC costs. advanced_1386_code=nioMemLZF\: advanced_1387_li=\ stores compressed data outside of the VM's heap - useful for large memory DBs without incurring GC costs. Use "nioMemLZF\:12\:" to tweak the % of blocks that are stored uncompressed. If you size this to your working set correctly, compressed storage has roughly the same performance as uncompressed. The default value is 1%. advanced_1388_p=\ As an example, to use the <code>nio</code> file system, use the following database URL\: <code>jdbc\:h2\:nio\:~/test</code>. advanced_1389_p=\ To register a new file system, extend the classes <code>org.h2.store.fs.FilePath, FileBase</code>, and call the method <code>FilePath.register</code> before using it. advanced_1390_p=\ For input streams (but not for random access files), URLs may be used in addition to the registered file systems. Example\: <code>jar\:file\:///c\:/temp/example.zip\!/org/example/nested.csv</code>. To read a stream from the classpath, use the prefix <code>classpath\:</code>, as in <code>classpath\:/org/h2/samples/newsfeed.sql</code>. advanced_1391_h2=Split File System advanced_1392_p=\ The file system prefix <code>split\:</code> is used to split logical files into multiple physical files, for example so that a database can get larger than the maximum file system size of the operating system.
If the logical file is larger than the maximum file size, then the file is split as follows\: advanced_1393_code=<fileName> advanced_1394_li=\ (first block, always created) advanced_1395_code=<fileName>.1.part advanced_1396_li=\ (second block) advanced_1397_p=\ More physical files (<code>*.2.part, *.3.part</code>) are automatically created / deleted if needed. The maximum physical file size of a block is 2^30 bytes, which is also called 1 GiB or 1 GB. However, this can be changed if required by specifying the block size in the file name. The file name format is\: <code>split\:<x>\:<fileName></code> where the file size per block is 2^x. For 1 MiB block sizes, use x \= 20 (because 2^20 is 1 MiB). The following file name means the logical file is split into 1 MiB blocks\: <code>split\:20\:test.h2.db</code>. An example database URL for this case is <code>jdbc\:h2\:split\:20\:~/test</code>. advanced_1398_h2=Database Upgrade advanced_1399_p=\ In version 1.2, H2 introduced a new file store implementation which is incompatible with the one used in versions < 1.2. To automatically convert databases to the new file store, it is necessary to include an additional jar file. The file can be found at <a href\="http\://h2database.com/h2mig_pagestore_addon.jar">http\://h2database.com/h2mig_pagestore_addon.jar</a>. If this file is in the classpath, every connection to an older database will result in a conversion process. advanced_1400_p=\ The conversion itself is done internally via <code>'script to'</code> and <code>'runscript from'</code>. After the conversion process, the files will be renamed from advanced_1401_code=dbName.data.db advanced_1402_li=\ to <code>dbName.data.db.backup</code> advanced_1403_code=dbName.index.db advanced_1404_li=\ to <code>dbName.index.db.backup</code> advanced_1405_p=\ by default. Also, the temporary script will be written to the database directory instead of a temporary directory. Both defaults can be customized via advanced_1406_code=org.h2.upgrade.DbUpgrade.setDeleteOldDb(boolean) advanced_1407_code=org.h2.upgrade.DbUpgrade.setScriptInTmpDir(boolean) advanced_1408_p=\ prior to opening a database connection. advanced_1409_p=\ Since version 1.2.140, it is possible to let the old h2 classes (v 1.2.128) connect to the database. The automatic upgrade .jar file must be present, and the URL must start with <code>jdbc\:h2v1_1\:</code> (the JDBC driver class is <code>org.h2.upgrade.v1_1.Driver</code>). If the database should automatically connect using the old version if a database with the old format exists (without upgrade), and use the new version otherwise, then append <code>;NO_UPGRADE\=TRUE</code> to the database URL. Please note the old driver did not process the system property <code>"h2.baseDir"</code> correctly, so using this setting is not supported when upgrading. advanced_1410_h2=Java Objects Serialization advanced_1411_p=\ Java object serialization is enabled by default for columns of type <code>OTHER</code>, using standard Java serialization/deserialization semantics. advanced_1412_p=\ To disable this feature, set the system property <code>h2.serializeJavaObject\=false</code> (default\: true).
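The following minimal sketch shows the default behavior with a column of type OTHER; the Point class is just a made-up serializable payload for this example\:

    import java.io.Serializable;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class OtherColumnExample {

        // hypothetical payload class used only for this illustration
        public static class Point implements Serializable {
            private static final long serialVersionUID = 1L;
            int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {
                conn.createStatement().execute(
                        "CREATE TABLE ITEMS(ID INT PRIMARY KEY, DATA OTHER)");
                try (PreparedStatement prep =
                        conn.prepareStatement("INSERT INTO ITEMS VALUES(?, ?)")) {
                    prep.setInt(1, 1);
                    prep.setObject(2, new Point(3, 4)); // serialized by the database
                    prep.execute();
                }
                try (ResultSet rs = conn.createStatement()
                        .executeQuery("SELECT DATA FROM ITEMS WHERE ID = 1")) {
                    rs.next();
                    Point p = (Point) rs.getObject(1); // deserialized on read
                    System.out.println(p.x + "," + p.y);
                }
            }
        }
    }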
advanced_1413_p=\ Serialization and deserialization of Java objects is customizable both at the system level and at the database level by providing a <a href\="../javadoc/org/h2/api/JavaObjectSerializer.html">JavaObjectSerializer</a> implementation\: advanced_1414_li=\ At the system level, set the system property <code>h2.javaObjectSerializer</code> to the fully qualified name of the <code>JavaObjectSerializer</code> interface implementation. It will be used over the entire JVM session to (de)serialize Java objects stored in columns of type OTHER. Example\: <code>h2.javaObjectSerializer\=com.acme.SerializerClassName</code>. advanced_1415_li=\ At the database level, execute the SQL statement <code>SET JAVA_OBJECT_SERIALIZER 'com.acme.SerializerClassName'</code> or append <code>;JAVA_OBJECT_SERIALIZER\='com.acme.SerializerClassName'</code> to the database URL\: <code>jdbc\:h2\:~/test;JAVA_OBJECT_SERIALIZER\='com.acme.SerializerClassName'</code>. advanced_1416_p=\ Please note that this SQL statement can only be executed before any tables are defined. advanced_1417_h2=Custom Data Types Handler API advanced_1418_p=\ It is possible to extend the type system of the database by providing your own implementation of a minimal required API, basically consisting of type identification and conversion routines. advanced_1419_p=\ In order to enable this feature, set the system property <code>h2.customDataTypesHandler</code> (default\: null) to the fully qualified name of the class providing the <a href\="../javadoc/org/h2/api/CustomDataTypesHandler.html">CustomDataTypesHandler</a> interface implementation. advanced_1420_p=\ The instance of that class will be created by H2 and used to\: advanced_1421_li=resolve the names and identifiers of extrinsic data types. advanced_1422_li=convert values of extrinsic data types to and from values of built-in types. advanced_1423_li=provide the ordering of the data types. advanced_1424_p=This is a system-level setting, i.e. it affects all databases. advanced_1425_b=Note\: advanced_1426_p=Please keep in mind that this feature may not provide the same ABI stability level as other features, as it exposes many of the H2 internals. You may be required to update your code occasionally due to internal changes in H2 if you are going to use this feature. advanced_1427_h2=Limits and Limitations advanced_1428_p=\ This database has the following known limitations\: advanced_1429_li=Database file size limit\: 4 TB (using the default page size of 2 KB) or higher (when using a larger page size). This limit includes CLOB and BLOB data. advanced_1430_li=The maximum file size for FAT or FAT32 file systems is 4 GB. That means when using FAT or FAT32, the limit is 4 GB for the data. This is a limitation of the file system. The database provides a workaround for this problem\: use the file name prefix <code>split\:</code>. In that case files are split into files of 1 GB by default. An example database URL is\: <code>jdbc\:h2\:split\:~/test</code>. advanced_1431_li=The maximum number of rows per table is 2^64. advanced_1432_li=The maximum number of open transactions is 65535. advanced_1433_li=Main memory requirements\: The larger the database, the more main memory is required. With the current storage mechanism (the page store), the minimum main memory required is around 1 MB for each 8 GB database file size. advanced_1434_li=Limit on the complexity of SQL statements.
Statements of the following form will result in a stack overflow exception\: advanced_1435_li=There is no limit for the following entities, except the memory and storage capacity\: maximum identifier length (table name, column name, and so on); maximum number of tables, columns, indexes, triggers, and other database objects; maximum statement length, number of parameters per statement, tables per statement, expressions in order by, group by, having, and so on; maximum rows per query; maximum columns per table, columns per index, indexes per table, lob columns per table, and so on; maximum row length, index row length, select row length; maximum length of a varchar column, decimal column, literal in a statement. advanced_1436_li=Querying from the metadata tables is slow if there are many tables (thousands). advanced_1437_li=For limitations on data types, see the documentation of the respective Java data type or the data type documentation of this database. advanced_1438_h2=Glossary and Links advanced_1439_th=Term advanced_1440_th=Description advanced_1441_td=AES-128 advanced_1442_td=A block encryption algorithm. See also\: <a href\="http\://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia\: AES</a> advanced_1443_td=Birthday Paradox advanced_1444_td=Describes the higher than expected probability that two persons in a room have the same birthday. Also valid for randomly generated UUIDs. See also\: <a href\="http\://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia\: Birthday Paradox</a> advanced_1445_td=Digest advanced_1446_td=Protocol to protect a password (but not to protect data). See also\: <a href\="http\://www.faqs.org/rfcs/rfc2617.html">RFC 2617\: HTTP Digest Access Authentication</a> advanced_1447_td=GCJ advanced_1448_td=Compiler for Java. <a href\="http\://gcc.gnu.org/java">GNU Compiler for the Java</a> and <a href\="http\://www.dobysoft.com/products/nativej">NativeJ (commercial)</a> advanced_1449_td=HTTPS advanced_1450_td=A protocol to provide security to HTTP connections. See also\: <a href\="http\://www.ietf.org/rfc/rfc2818.txt">RFC 2818\: HTTP Over TLS</a> advanced_1451_td=Modes of Operation advanced_1452_a=Wikipedia\: Block cipher modes of operation advanced_1453_td=Salt advanced_1454_td=Random number to increase the security of passwords. See also\: <a href\="http\://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia\: Key derivation function</a> advanced_1455_td=SHA-256 advanced_1456_td=A cryptographic one-way hash function. See also\: <a href\="http\://en.wikipedia.org/wiki/SHA_family">Wikipedia\: SHA hash functions</a> advanced_1457_td=SQL Injection advanced_1458_td=A security vulnerability where an application embeds SQL statements or expressions in user input. See also\: <a href\="http\://en.wikipedia.org/wiki/SQL_injection">Wikipedia\: SQL Injection</a> advanced_1459_td=Watermark Attack advanced_1460_td=Security problem of certain encryption programs where the existence of certain data can be proven without decrypting. For more information, search in the internet for 'watermark attack cryptoloop' advanced_1461_td=SSL/TLS advanced_1462_td=Secure Sockets Layer / Transport Layer Security. 
See also\: <a href\="http\://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a> architecture_1000_h1=Architecture architecture_1001_a=\ Introduction architecture_1002_a=\ Top-down overview architecture_1003_a=\ JDBC driver architecture_1004_a=\ Connection/session management architecture_1005_a=\ Command execution and planning architecture_1006_a=\ Table/index/constraints architecture_1007_a=\ Undo log, redo log, and transactions layer architecture_1008_a=\ B-tree engine and page-based storage allocation architecture_1009_a=\ Filesystem abstraction architecture_1010_h2=Introduction architecture_1011_p=\ H2 implements an embedded and standalone ANSI-SQL89 compliant SQL engine on top of a B-tree based disk store. architecture_1012_p=\ As of October 2013, Thomas is still working on our next-generation storage engine called MVStore. This will in time replace the B-tree based storage engine. architecture_1013_h2=Top-down Overview architecture_1014_p=\ Working from the top down, the layers look like this\: architecture_1015_li=JDBC driver. architecture_1016_li=Connection/session management. architecture_1017_li=SQL Parser. architecture_1018_li=Command execution and planning. architecture_1019_li=Table/Index/Constraints. architecture_1020_li=Undo log, redo log, and transactions layer. architecture_1021_li=B-tree engine and page-based storage allocation. architecture_1022_li=Filesystem abstraction. architecture_1023_h2=JDBC Driver architecture_1024_p=\ The JDBC driver implementation lives in <code>org.h2.jdbc, org.h2.jdbcx</code> architecture_1025_h2=Connection/session management architecture_1026_p=\ The primary classes of interest are\: architecture_1027_th=Package architecture_1028_th=Description architecture_1029_td=org.h2.engine.Database architecture_1030_td=the root/global class architecture_1031_td=org.h2.engine.SessionInterface architecture_1032_td=abstracts over the differences between embedded and remote sessions architecture_1033_td=org.h2.engine.Session architecture_1034_td=local/embedded session architecture_1035_td=org.h2.engine.SessionRemote architecture_1036_td=remote session architecture_1037_h2=Parser architecture_1038_p=\ The parser lives in <code>org.h2.command.Parser</code>. It uses a straightforward recursive-descent design. architecture_1039_p=\ See Wikipedia <a href\="http\://en.wikipedia.org/wiki/Recursive_descent_parser">Recursive-descent parser</a> page. architecture_1040_h2=Command execution and planning architecture_1041_p=\ Unlike other databases, we do not have an intermediate step where we generate some kind of IR (intermediate representation) of the query. The parser class directly generates a command execution object. Then we run some optimisation steps over the command to possibly generate a more efficient command. The primary packages of interest are\: architecture_1042_th=Package architecture_1043_th=Description architecture_1044_td=org.h2.command.ddl architecture_1045_td=Commands that modify schema data structures architecture_1046_td=org.h2.command.dml architecture_1047_td=Commands that modify data architecture_1048_h2=Table/Index/Constraints architecture_1049_p=\ One thing to note here is that indexes are simply stored as special kinds of tables. 
architecture_1050_p=\ The primary packages of interest are\: architecture_1051_th=Package architecture_1052_th=Description architecture_1053_td=org.h2.table architecture_1054_td=Implementations of different kinds of tables architecture_1055_td=org.h2.index architecture_1056_td=Implementations of different kinds of indices architecture_1057_h2=Undo log, redo log, and transactions layer architecture_1058_p=\ We have a transaction log, which is shared among all sessions. See also http\://en.wikipedia.org/wiki/Transaction_log http\://h2database.com/html/grammar.html\#set_log architecture_1059_p=\ We also have an undo log, which is per session, to undo an operation (an update that fails for example) and to roll back a transaction. Theoretically, the transaction log could be used, but for simplicity, H2 currently uses its own "list of operations" (usually in-memory). architecture_1060_p=\ With the MVStore, this is no longer needed (just the transaction log). architecture_1061_h2=B-tree engine and page-based storage allocation. architecture_1062_p=\ The primary package of interest is <code>org.h2.store</code>. architecture_1063_p=\ This implements a storage mechanism which allocates pages of storage (typically 2 KB in size) and also implements a b-tree over those pages to allow fast retrieval and update. architecture_1064_h2=Filesystem abstraction. architecture_1065_p=\ The primary class of interest is <code>org.h2.store.FileStore</code>. architecture_1066_p=\ This implements an abstraction of a random-access file. This allows the higher layers to treat in-memory vs. on-disk vs. zip-file databases the same. build_1000_h1=Build build_1001_a=\ Portability build_1002_a=\ Environment build_1003_a=\ Building the Software build_1004_a=\ Build Targets build_1005_a=\ Using Maven 2 build_1006_a=\ Using Eclipse build_1007_a=\ Translating build_1008_a=\ Submitting Source Code Changes build_1009_a=\ Reporting Problems or Requests build_1010_a=\ Automated Build build_1011_a=\ Generating Railroad Diagrams build_1012_h2=Portability build_1013_p=\ This database is written in Java and therefore works on many platforms. It can also be compiled to a native executable using GCJ. build_1014_h2=Environment build_1015_p=\ To run this database, a Java Runtime Environment (JRE) version 1.7 or higher is required. build_1016_p=\ To create the database executables, the following software stack was used. However, it is not required to install this software to use the database. build_1017_li=Mac OS X and Windows build_1018_a=Oracle JDK Version 1.7 build_1019_a=Eclipse build_1020_li=Eclipse Plugins\: <a href\="http\://subclipse.tigris.org">Subclipse</a>, <a href\="http\://eclipse-cs.sourceforge.net">Eclipse Checkstyle Plug-in</a>, <a href\="http\://www.eclemma.org">EclEmma Java Code Coverage</a> build_1021_a=Emma Java Code Coverage build_1022_a=Mozilla Firefox build_1023_a=OpenOffice build_1024_a=NSIS build_1025_li=\ (Nullsoft Scriptable Install System) build_1026_a=Maven build_1027_h2=Building the Software build_1028_p=\ You need to install a JDK, for example the Oracle JDK version 1.7 or 1.8. Ensure that the Java binary directory is included in the <code>PATH</code> environment variable, and that the environment variable <code>JAVA_HOME</code> points to your Java installation. On the command line, go to the directory <code>h2</code> and execute the following command\: build_1029_p=\ For Linux and OS X, use <code>./build.sh</code> instead of <code>build</code>. build_1030_p=\ You will get a list of targets.
If you want to build the <code>jar</code> file, execute (Windows)\: build_1031_p=\ To run the build tool in shell mode, use the command line option <code>-</code> as in <code>./build.sh -</code>. build_1032_h3=Switching the Source Code build_1033_p=\ The source code uses Java 1.7 features. To switch the source code to the installed version of Java, run\: build_1034_h2=Build Targets build_1035_p=\ The build system can generate smaller jar files as well. The following targets are currently supported\: build_1036_code=jarClient build_1037_li=\ creates the file <code>h2client.jar</code>. This only contains the JDBC client. build_1038_code=jarSmall build_1039_li=\ creates the file <code>h2small.jar</code>. This only contains the embedded database. Debug information is disabled. build_1040_code=jarJaqu build_1041_li=\ creates the file <code>h2jaqu.jar</code>. This only contains the JaQu (Java Query) implementation. All other jar files do not include JaQu. build_1042_code=javadocImpl build_1043_li=\ creates the Javadocs of the implementation. build_1044_p=\ To create the file <code>h2client.jar</code>, go to the directory <code>h2</code> and execute the following command\: build_1045_h3=Using Apache Lucene build_1046_p=\ Apache Lucene 3.6.2 is used for testing. Newer versions may work; however, they are not tested. build_1047_h2=Using Maven 2 build_1048_h3=Using a Central Repository build_1049_p=\ You can include the database in your Maven 2 project as a dependency. Example\: build_1050_p=\ New versions of this database are first uploaded to http\://hsql.sourceforge.net/m2-repo/ and then automatically synchronized with the main <a href\="http\://repo2.maven.org/maven2/com/h2database/h2/">Maven repository</a>; however, after a new release it may take a few hours before they are available there. build_1051_h3=Maven Plugin to Start and Stop the TCP Server build_1052_p=\ A Maven plugin to start and stop the H2 TCP server is available from <a href\="http\://github.com/ljnelson/h2-maven-plugin">Laird Nelson at GitHub</a>. To start the H2 server, use\: build_1053_p=\ To stop the H2 server, use\: build_1054_h3=Using Snapshot Version build_1055_p=\ To build a <code>h2-*-SNAPSHOT.jar</code> file and upload it to the local Maven 2 repository, execute the following command\: build_1056_p=\ Afterwards, you can include the database in your Maven 2 project as a dependency\: build_1057_h2=Using Eclipse build_1058_p=\ To create an Eclipse project for H2, use the following steps\: build_1059_li=Install Git and <a href\="http\://www.eclipse.org">Eclipse</a>. build_1060_li=Get the H2 source code from GitHub\: build_1061_code=git clone https\://github.com/h2database/h2database build_1062_li=Download all dependencies\: build_1063_code=build.bat download build_1064_li=(Windows) build_1065_code=./build.sh download build_1066_li=(otherwise) build_1067_li=In Eclipse, create a new Java project from existing source code\: <code>File, New, Project, Java Project, Create project from existing source</code>. build_1068_li=Select the <code>h2</code> folder, click <code>Next</code> and <code>Finish</code>. build_1069_li=To resolve <code>com.sun.javadoc</code> import statements, you may need to manually add the file <code><java.home>/../lib/tools.jar</code> to the build path.
build_1070_h2=Translating build_1071_p=\ The translation of this software is split into the following parts\: build_1072_li=H2 Console\: <code>src/main/org/h2/server/web/res/_text_*.prop</code> build_1073_li=Error messages\: <code>src/main/org/h2/res/_messages_*.prop</code> build_1074_p=\ To translate the H2 Console, start it and select Preferences / Translate. After you are done, send the translated <code>*.prop</code> file to the Google Group. The web site is currently translated using Google. build_1075_h2=Submitting Source Code Changes build_1076_p=\ If you'd like to contribute bug fixes or new features, please consider the following guidelines to simplify merging them\: build_1077_li=Only use Java 7 features (do not use Java 8/9/etc) (see <a href\="\#environment">Environment</a>). build_1078_li=Follow the coding style used in the project, and use Checkstyle (see above) to verify. For example, do not use tabs (use spaces instead). The checkstyle configuration is in <code>src/installer/checkstyle.xml</code>. build_1079_li=A template of the Eclipse settings is in <code>src/installer/eclipse.settings/*</code>. If you want to use them, you need to copy them to the <code>.settings</code> directory. The formatting options (<code>eclipseCodeStyle</code>) are also included. build_1080_li=Please provide test cases and integrate them into the test suite. For Java level tests, see <code>src/test/org/h2/test/TestAll.java</code>. For SQL level tests, see <code>src/test/org/h2/test/test.in.txt</code> or <code>testSimple.in.txt</code>. build_1081_li=The test cases should cover at least 90% of the changed and new code; use a code coverage tool to verify that (see above), or use the build target <code>coverage</code>. build_1082_li=Verify that you did not break other features\: run the test cases by executing <code>build test</code>. build_1083_li=Provide end user documentation if required (<code>src/docsrc/html/*</code>). build_1084_li=Document grammar changes in <code>src/docsrc/help/help.csv</code>. build_1085_li=Provide a change log entry (<code>src/docsrc/html/changelog.html</code>). build_1086_li=Verify the spelling using <code>build spellcheck</code>. If required, add the new words to <code>src/tools/org/h2/build/doc/dictionary.txt</code>. build_1087_li=Run <code>src/installer/buildRelease</code> to find and fix formatting errors. build_1088_li=Verify the formatting using <code>build docs</code> and <code>build javadoc</code>. build_1089_li=Submit changes using GitHub's "pull requests". You'll require a free <a href\="https\://github.com/">GitHub</a> account. If you are not familiar with pull requests, please read GitHub's <a href\="https\://help.github.com/articles/using-pull-requests/">Using pull requests</a> page. build_1090_p=\ For legal reasons, patches need to be public in the form of an <a href\="https\://github.com/h2database/h2database/issues">issue report or attachment</a> or in the form of an email to the <a href\="http\://groups.google.com/group/h2-database">group</a>. Significant contributions need to include the following statement\: build_1091_p=\ "I wrote the code, it's mine, and I'm contributing it to H2 for distribution multiple-licensed under the MPL 2.0, and the EPL 1.0 (http\://h2database.com/html/license.html)."
build_1092_h2=Reporting Problems or Requests build_1093_p=\ Please consider the following checklist if you have a question, want to report a problem, or if you have a feature request\: build_1094_li=For bug reports, please provide a <a href\="http\://sscce.org/">short, self-contained, correct (compilable) example</a> of the problem. build_1095_li=Feature requests are always welcome, even if the feature is already on the <a href\="roadmap.html">roadmap</a>. Your mail will help prioritize feature requests. If you urgently need a feature, consider <a href\="\#providing_patches">providing a patch</a>. build_1096_li=Before posting problems, check the <a href\="faq.html">FAQ</a> and do a <a href\="http\://google.com">Google search</a>. build_1097_li=When you get an unexpected exception, please try the <a href\="sourceError.html">Error Analyzer tool</a>. If this doesn't help, please report the problem, including the complete error message and stack trace, and the root cause stack trace(s). build_1098_li=When sending source code, please use a public web clipboard such as <a href\="http\://pastebin.com">Pastebin</a>, <a href\="http\://cl1p.net">Cl1p</a>, or <a href\="http\://www.mysticpaste.com/new">Mystic Paste</a> to avoid formatting problems. Please keep test cases as simple and short as possible, while still reproducing the problem. As a template, use\: <a href\="https\://github.com/h2database/h2database/tree/master/h2/src/test/org/h2/samples/HelloWorld.java">HelloWorld.java</a>. Methods that simply call other methods should be avoided, as well as unnecessary exception handling. Please use the JDBC API and no external tools or libraries. The test should include all required initialization code, and should be started with the main method. build_1099_li=For large attachments, use a public temporary storage such as <a href\="http\://rapidshare.com">Rapidshare</a>. build_1100_li=Google Group versus issue tracking\: Use the <a href\="http\://groups.google.com/group/h2-database">Google Group</a> for questions or if you are not sure it's a bug. If you are sure it's a bug, you can create an <a href\="https\://github.com/h2database/h2database/issues">issue</a>, but you don't need to (sending an email to the group is enough). Please note that only a few people monitor the issue tracking system. build_1101_li=For out-of-memory problems, please analyze the problem yourself first, for example using the command line option <code>-XX\:+HeapDumpOnOutOfMemoryError</code> (to create a heap dump file on out of memory) and a memory analysis tool such as the <a href\="http\://www.eclipse.org/mat">Eclipse Memory Analyzer (MAT)</a>. build_1102_li=It may take a few days to get an answer. Please do not double post. build_1103_h2=Automated Build build_1104_p=\ This build process is automated and runs regularly. The build process includes running the tests and code coverage, using the command line <code>./build.sh clean jar coverage -Dh2.ftpPassword\=... uploadBuild</code>. The last results are available here\: build_1105_a=Test Output build_1106_a=Code Coverage Summary build_1107_a=Code Coverage Details (download, 1.3 MB) build_1108_a=Build Newsfeed build_1109_h2=Generating Railroad Diagrams build_1110_p=\ The railroad diagrams of the <a href\="grammar.html">SQL grammar</a> are HTML, formatted as nested tables. The diagrams are generated as follows\: build_1111_li=The BNF parser (<code>org.h2.bnf.Bnf</code>) reads and parses the BNF from the file <code>help.csv</code>.
build_1112_li=The page parser (<code>org.h2.server.web.PageParser</code>) reads the template HTML file and fills in the diagrams. build_1113_li=The rail images (one straight, four junctions, two turns) are generated using a simple Java application. build_1114_p=\ To generate railroad diagrams for other grammars, see the package <code>org.h2.jcr</code>. This package is used to generate the SQL-2 railroad diagrams for the JCR 2.0 specification. changelog_1000_h1=Change Log changelog_1001_h2=Next Version (unreleased) changelog_1002_li=Issue \#654\: List ENUM type values in INFORMATION_SCHEMA.COLUMNS changelog_1003_li=Issue \#668\: Fail of an update command on large table with ENUM column changelog_1004_li=Issue \#662\: column called CONSTRAINT is not properly escaped when storing to metadata changelog_1005_li=Issue \#660\: Outdated java version mentioned on http\://h2database.com/html/build.html\#providing_patches changelog_1006_li=Issue \#643\: H2 doesn't use index when I use IN and EQUAL in one query changelog_1007_li=Reset transaction start timestamp on ROLLBACK changelog_1008_li=Issue \#632\: CREATE OR REPLACE VIEW creates incorrect columns names changelog_1009_li=Issue \#630\: Integer overflow in CacheLRU can cause unrestricted cache growth changelog_1010_li=Issue \#497\: Fix TO_DATE in cases of 'inline' text. E.g. the "T" and "Z" in to_date('2017-04-21T00\:00\:00Z', 'YYYY-MM-DD"T"HH24\:MI\:SS"Z"') changelog_1011_li=Fix bug in MySQL/ORACLE-syntax silently corrupting the modified column in cases of setting the 'NULL'- or 'NOT NULL'-constraint. E.g. alter table T modify C NULL; changelog_1012_li=Issue \#570\: MySQL compatibility for ALTER TABLE .. DROP INDEX changelog_1013_li=Issue \#537\: Include the COLUMN name in message "Numeric value out of range" changelog_1014_li=Issue \#600\: ROW_NUMBER() behaviour change in H2 1.4.195 changelog_1015_li=Fix a bunch of race conditions found by vmlens.com, thank you to vmlens for giving us a license. changelog_1016_li=PR \#597\: Support more types in getObject changelog_1017_li=Issue \#591\: Generated SQL from WITH-CTEs does not include a table identifier changelog_1018_li=PR \#593\: Make it possible to create a cluster without using temporary files. changelog_1019_li=PR \#592\: "Connection is broken\: "unexpected status 16777216" [90067-192]" message when using older h2 releases as client changelog_1020_li=Issue \#585\: MySQL mode DELETE statements compatibility changelog_1021_li=PR \#586\: remove extra tx preparation changelog_1022_li=PR \#568\: Implement MetaData.getColumns() for synonyms. changelog_1023_li=Issue \#581\: org.h2.tools.RunScript assumes -script parameter is part of protocol changelog_1024_li=Fix a deadlock in the TransactionStore changelog_1025_li=PR \#579\: Disallow BLOB type in PostgreSQL mode changelog_1026_li=Issue \#576\: Common Table Expression (CTE)\: WITH supports INSERT, UPDATE, MERGE, DELETE, CREATE TABLE ... 
changelog_1027_li=Issue \#493\: Query with distinct/limit/offset subquery returns unexpected rows changelog_1028_li=Issue \#575\: Support for full text search in multithreaded mode changelog_1029_li=Issue \#569\: ClassCastException when filtering on ENUM value in WHERE clause changelog_1030_li=Issue \#539\: Allow override of builtin functions/aliases changelog_1031_li=Issue \#535\: Allow explicit paths on Windows without drive letter changelog_1032_li=Issue \#549\: Removed UNION ALL requirements for CTE changelog_1033_li=Issue \#548\: Table synonym support changelog_1034_li=Issue \#531\: Rollback and delayed meta save. changelog_1035_li=Issue \#515\: "Unique index or primary key violation" in TestMvccMultiThreaded changelog_1036_li=Issue \#458\: TIMESTAMPDIFF() test failing. Handling of timestamp literals. changelog_1037_li=PR \#546\: Fixes the missing file tree.js in the web console changelog_1038_li=Issue \#543\: Prepare statement with regexp will not refresh parameter after metadata change changelog_1039_li=PR \#536\: Support TIMESTAMP_WITH_TIMEZONE 2014 JDBC type changelog_1040_li=Fix bug in parsing ANALYZE TABLE xxx SAMPLE_SIZE yyy changelog_1041_li=Add padding for CHAR(N) values in PostgreSQL mode changelog_1042_li=Issue \#89\: Add DB2 timestamp format compatibility changelog_1043_h2=Version 1.4.196 (2017-06-10) changelog_1044_li=Issue\#479 Allow non-recursive CTEs (WITH statements), patch from stumc changelog_1045_li=Fix startup issue when using "CHECK" as a column name. changelog_1046_li=Issue \#423\: ANALYZE performed multiple times on one table during execution of the same statement. changelog_1047_li=Issue \#426\: Support ANALYZE TABLE statement changelog_1048_li=Issue \#438\: Fix slow logging via SLF4J (TRACE_LEVEL_FILE\=4). changelog_1049_li=Issue \#472\: Support CREATE SEQUENCE ... ORDER as a NOOP for Oracle compatibility changelog_1050_li=Issue \#479\: Allow non-recursive Common Table Expressions (CTE) changelog_1051_li=On Mac OS X, with IPv6 and no network connection, the Console tool was not working as expected. changelog_1052_h2=Version 1.4.195 (2017-04-23) changelog_1053_li=Lazy query execution support. changelog_1054_li=Added API for handling custom data types (System property "h2.customDataTypesHandler", API org.h2.api.CustomDataTypesHandler). changelog_1055_li=Added support for invisible columns. changelog_1056_li=Added an ENUM data type, with syntax similar to that of MySQL. changelog_1057_li=MVStore\: for object data types, the cache size memory estimation was sometimes far off in a read-only scenario. This could result in inefficient cache usage. changelog_1058_h2=Version 1.4.194 (2017-03-10) changelog_1059_li=Issue \#453\: MVStore setCacheSize() should also limit the cacheChunkRef. changelog_1060_li=Issue \#448\: Newly added TO_DATE and TO_TIMESTAMP functions have wrong datatype. changelog_1061_li=The "nioMemLZF" filesystem now supports an extra option "nioMemLZF\:12\:" to tweak the size of the compress later cache. changelog_1062_li=Various multi-threading fixes and optimisations to the "nioMemLZF" filesystem. changelog_1063_strong=[API CHANGE]</strong> \#439\: the JDBC type of TIMESTAMP WITH TIME ZONE changed from Types.OTHER (1111) to Types.TIMESTAMP_WITH_TIMEZONE (2014) changelog_1064_li=\#430\: Subquery not cached if number of rows exceeds MAX_MEMORY_ROWS. changelog_1065_li=\#411\: "TIMEZONE" should be "TIME ZONE" in type "TIMESTAMP WITH TIMEZONE". changelog_1066_li=PR \#418, Implement Connection\#createArrayOf and PreparedStatement\#setArray. 
changelog_1067_li=PR \#427, Add MySQL compatibility functions UNIX_TIMESTAMP, FROM_UNIXTIME and DATE. changelog_1068_li=\#429\: Tables not found \: Fix some Turkish locale bugs around uppercasing. changelog_1069_li=Fixed bug in metadata locking, obscure combination of DDL and SELECT SEQUENCE.NEXTVAL required. changelog_1070_li=Added index hints\: SELECT * FROM TEST USE INDEX (idx1, idx2). changelog_1071_li=Add a test case to ensure that spatial index is used with and order by command by Fortin N. changelog_1072_li=Fix multi-threaded mode update exception "NullPointerException", test case by Anatolii K. changelog_1073_li=Fix multi-threaded mode insert exception "Unique index or primary key violation", test case by Anatolii K. changelog_1074_li=Implement ILIKE operator for case-insensitive matching. changelog_1075_li=Optimise LIKE queries for the common cases of '%Foo' and '%Foo%'. changelog_1076_li=Issue \#387\: H2 MSSQL Compatibility Mode - Support uniqueidentifier. changelog_1077_li=Issue \#401\: NPE in "SELECT DISTINCT * ORDER BY". changelog_1078_li=Added BITGET function. changelog_1079_li=Fixed bug in FilePathRetryOnInterrupt that caused infinite loop. changelog_1080_li=PR \#389, Handle LocalTime with nanosecond resolution, patch by katzyn. changelog_1081_li=PR \#382, Recover for "page store" H2 breaks LOBs consistency, patch by vitalus. changelog_1082_li=PR \#393, Run tests on Travis, patch by marschall. changelog_1083_li=Fix bug in REGEX_REPLACE, not parsing the mode parameter. changelog_1084_li=ResultSet.getObject(..., Class) threw a ClassNotFoundException if the JTS suite was not in the classpath. changelog_1085_li=File systems\: the "cache\:" file system, and the compressed in-memory file systems memLZF and nioMemLZF did not correctly support concurrent reading and writing. changelog_1086_li=TIMESTAMP WITH TIMEZONE\: serialization for the PageStore was broken. changelog_1087_h2=Version 1.4.193 (2016-10-31) changelog_1088_li=PR \#386\: Add JSR-310 Support (introduces JTS dependency fixed in 1.4.194) changelog_1089_li=WARNING\: THE MERGE BELOW WILL AFFECT ANY 'TIMESTAMP WITH TIMEZONE' INDEXES. You will need to drop and recreate any such indexes. changelog_1090_li=PR \#364\: fix compare TIMESTAMP WITH TIMEZONE changelog_1091_li=Fix bug in picking the right index for INSERT..ON DUPLICATE KEY UPDATE when there are both UNIQUE and PRIMARY KEY constraints. changelog_1092_li=Issue \#380\: Error Analyzer doesn't show source code changelog_1093_li=Remove the "TIMESTAMP UTC" datatype, an experiment that was never finished. changelog_1094_li=PR \#363\: Added support to define last IDENTIFIER on a Trigger. changelog_1095_li=PR \#366\: Tests for timestamps changelog_1096_li=PR \#361\: Improve TimestampWithTimeZone javadoc changelog_1097_li=PR \#360\: Change getters in TimestampWithTimeZone to int changelog_1098_li=PR \#359\: Added missing source encoding. Assuming UTF-8. 
changelog_1099_li=PR \#353\: Add support for converting JAVA_OBJECT to UUID changelog_1100_li=PR \#358\: Add support for getObject(int|String, Class) changelog_1101_li=PR \#357\: Server\: use xdg-open to open the WebConsole in the user's preferred browser on Linux changelog_1102_li=PR \#356\: Support for BEFORE and AFTER clauses when using multiple columns in ALTER TABLE ADD changelog_1103_li=PR \#351\: Respect format codes from Bind message when sending results changelog_1104_li=ignore summary line when compiling stored procedure changelog_1105_li=PR \#348\: pg\: send RowDescription in response to Describe (statement variant), patch by kostya-sh changelog_1106_li=PR \#337\: Update russian translation, patch by avp1983 changelog_1107_li=PR \#329\: Update to servlet API version 3.1.0 from 3.0.1, patch by Mat Booth changelog_1108_li=PR \#331\: ChangeFileEncryption progress logging ignores -quiet flag, patch by Stefan Bodewig changelog_1109_li=PR \#325\: Make Row an interface changelog_1110_li=PR \#323\: Regular expression functions (REGEXP_REPLACE, REGEXP_LIKE) enhancement, patch by Akkuzin changelog_1111_li=Use System.nanoTime for measuring query statistics changelog_1112_li=Issue \#324\: Deadlock when sending BLOBs over TCP changelog_1113_li=Fix for creating and accessing views in MULTITHREADED mode, test-case courtesy of Daniel Rosenbaum changelog_1114_li=Issue \#266\: Spatial index not updating, fixed by merging PR \#267 changelog_1115_li=PR \#302\: add support for "with"-subqueries into "join" & "sub-query" statements changelog_1116_li=Issue \#299\: Nested derived tables did not always work as expected. changelog_1117_li=Use interfaces to replace the java version templating, idea from Lukas Eder. changelog_1118_li=Issue \#295\: JdbcResultSet.getObject(int, Class) returns null instead of throwing. changelog_1119_li=Mac OS X\: Console tool process did not stop on exit. changelog_1120_li=MVStoreTool\: add "repair" feature. changelog_1121_li=Garbage collection of unused chunks should be faster still. changelog_1122_li=MVStore / transaction store\: opening a store in read-only mode does no longer loop. changelog_1123_li=MVStore\: disabled the file system cache by default, because it limits concurrency when using larger databases and many threads. To re-enable, use the file name prefix "cache\:". changelog_1124_li=MVStore\: add feature to set the cache concurrency. changelog_1125_li=File system nioMemFS\: support concurrent reads. changelog_1126_li=File systems\: the compressed in-memory file systems now compress better. changelog_1127_li=LIRS cache\: improved hit rate because now added entries get hot if they were in the non-resident part of the cache before. changelog_1128_h2=Version 1.4.192 Beta (2016-05-26) changelog_1129_li=Java 6 is no longer supported (the jar files are compiled for Java 7). changelog_1130_li=Garbage collection of unused chunks should now be faster. changelog_1131_li=Prevent people using unsupported combination of auto-increment columns and clustering mode. changelog_1132_li=Support for DB2 time format, patch by Niklas Mehner changelog_1133_li=Added support for Connection.setClientInfo() in compatibility modes for DB2, Postgresql, Oracle and MySQL. changelog_1134_li=Issue \#249\: Clarify license declaration in Maven POM xml changelog_1135_li=Fix NullPointerException in querying spatial data through a sub-select. changelog_1136_li=Fix bug where a lock on the SYS table was not released when closing a session that contained a temp table with an LOB column. 
changelog_1137_li=Issue \#255\: ConcurrentModificationException with multiple threads in embedded mode and temporary LOBs changelog_1138_li=Issue \#235\: Anonymous SSL connections fail in many situations changelog_1139_li=Fix race condition in FILE_LOCK\=SOCKET, which could result in the watchdog thread not running changelog_1140_li=Experimental support for datatype TIMESTAMP WITH TIMEZONE changelog_1141_li=Add support for ALTER TABLE ... RENAME CONSTRAINT .. TO ... changelog_1142_li=Add support for PostgreSQL ALTER TABLE ... RENAME COLUMN .. TO ... changelog_1143_li=Add support for ALTER SCHEMA [ IF EXISTS ] changelog_1144_li=Add support for ALTER TABLE [ IF EXISTS ] changelog_1145_li=Add support for ALTER VIEW [ IF EXISTS ] changelog_1146_li=Add support for ALTER INDEX [ IF EXISTS ] changelog_1147_li=Add support for ALTER SEQUENCE [ IF EXISTS ] changelog_1148_li=Improve performance of cleaning up temp tables - patch from Eric Faulhaber. changelog_1149_li=Fix bug where table locks were not dropped when the connection closed changelog_1150_li=Fix extra CPU usage caused by query planner enhancement in 1.4.191 changelog_1151_li=improve performance of queries that use LIKE 'foo%' - 10x in the case of one of my queries changelog_1152_li=The function IFNULL did not always return the result in the right data type. changelog_1153_li=Issue \#231\: Possible infinite loop when initializing the ObjectDataType class when concurrently writing into MVStore. changelog_1154_h2=Version 1.4.191 Beta (2016-01-21) changelog_1155_li=TO_DATE and TO_TIMESTAMP functions. Thanks a lot to Sam Blume for the patch\! changelog_1156_li=Issue \#229\: DATEDIFF does not work for 'WEEK'. changelog_1157_li=Issue \#156\: Add support for getGeneratedKeys() when executing commands via PreparedStatement\#executeBatch. changelog_1158_li=Issue \#195\: The new Maven uses a .cmd file instead of a .bat file. changelog_1159_li=Issue \#212\: EXPLAIN PLAN for UPDATE statement did not display LIMIT expression. changelog_1160_li=Support OFFSET without LIMIT in SELECT. changelog_1161_li=Improve error message for METHOD_NOT_FOUND_1/90087. changelog_1162_li=CLOB and BLOB objects of removed rows were sometimes kept in the database file. changelog_1163_li=Server mode\: executing "shutdown" left a thread on the server. changelog_1164_li=The condition "in(select...)" did not work correctly in some cases if the subquery had an "order by". changelog_1165_li=Issue \#184\: The Platform-independent zip had Windows line endings in Linux scripts. changelog_1166_li=Issue \#186\: The "script" command did not include sequences of temporary tables. changelog_1167_li=Issue \#115\: to_char fails with pattern FM0D099. changelog_1168_h2=Version 1.4.190 Beta (2015-10-11) changelog_1169_li=Pull request \#183\: optimizer hints (so far without special SQL syntax). changelog_1170_li=Issue \#180\: In MVCC mode, executing UPDATE and SELECT ... FOR UPDATE simultaneously silently can drop rows. changelog_1171_li=PageStore storage\: the cooperative file locking mechanism did not always work as expected (with very slow computers). changelog_1172_li=Temporary CLOB and BLOB objects are now removed while the database is open (and not just when closing the database). changelog_1173_li=MVStore CLOB and BLOB larger than about 25 MB\: An exception could be thrown when using the MVStore storage. changelog_1174_li=Add FILE_WRITE function. 
Patch provided by Nicolas Fortin (Lab-STICC - CNRS UMR 6285 and Ecole Centrale de Nantes) changelog_1175_h2=Version 1.4.189 Beta (2015-09-13) changelog_1176_li=Add support for dropping multiple columns in ALTER TABLE DROP COLUMN... changelog_1177_li=Fix bug in XA management when doing rollback after prepare. Patch by Stephane Lacoin. changelog_1178_li=MVStore CLOB and BLOB\: An exception with the message "Block not found" could be thrown when using the MVStore storage, when copying LOB objects (for example due to "alter table" on a table with a LOB object), and then re-opening the database. changelog_1179_li=Fix for issue \#171\: Broken QueryStatisticsData duration data when trace level smaller than TraceSystem.INFO changelog_1180_li=Pull request \#170\: Added SET QUERY_STATISTICS_MAX_ENTRIES changelog_1181_li=Pull request \#165\: Fix compatibility postgresql function string_agg changelog_1182_li=Pull request \#163\: improved performance when not using the default timezone. changelog_1183_li=Local temporary tables with many rows did not work correctly due to automatic analyze. changelog_1184_li=Server mode\: concurrently using the same connection could throw an exception "Connection is broken\: unexpected status". changelog_1185_li=Performance improvement for metadata queries that join against the COLUMNS metadata table. changelog_1186_li=An ArrayIndexOutOfBoundsException was thrown in some cases when opening an old version 1.3 database, or an 1.4 database with both "mv_store\=false" and the system property "h2.storeLocalTime" set to false. It mainly showed up with an index on a time, date, or timestamp column. The system property "h2.storeLocalTime" is no longer supported (MVStore databases always store local time, and PageStore now databases never do). changelog_1187_h2=Version 1.4.188 Beta (2015-08-01) changelog_1188_li=Server mode\: CLOB processing for texts larger than about 1 MB sometimes did not work. changelog_1189_li=Server mode\: BLOB processing for binaries larger than 2 GB did not work. changelog_1190_li=Multi-threaded processing\: concurrent deleting the same row could throw the exception "Row not found when trying to delete". changelog_1191_li=MVStore transactions\: a thread could see a change of a different thread within a different map. Pull request \#153. changelog_1192_li=H2 Console\: improved IBM DB2 compatibility. changelog_1193_li=A thread deadlock detector (disabled by default) can help detect and analyze Java level deadlocks. To enable, set the system property "h2.threadDeadlockDetector" to true. changelog_1194_li=Performance improvement for metadata queries that join against the COLUMNS metadata table. changelog_1195_li=MVStore\: power failure could corrupt the store, if writes were re-ordered. changelog_1196_li=For compatibility with other databases, support for (double and float) -0.0 has been removed. 0.0 is used instead. changelog_1197_li=Fix for \#134, Column name with a \# character. Patch by bradmesserle. changelog_1198_li=In version 1.4.186, "order by" was broken in some cases due to the change "Make the planner use indexes for sorting when doing a GROUP BY". The change was reverted. changelog_1199_li=Pull request \#146\: Improved CompareMode. changelog_1200_li=Fix for \#144, JdbcResultSet.setFetchDirection() throws "Feature not supported". changelog_1201_li=Fix for issue \#143, deadlock between two sessions hitting the same sequence on a column. changelog_1202_li=Pull request \#137\: SourceCompiler should not throw a syntax error on javac warning. 
changelog_1203_li=MVStore\: out of memory while storing could corrupt the store (theoretically, a rollback would be possible, but this case is not yet implemented). changelog_1204_li=The compressed in-memory file systems (memLZF\:) could not be used in the MVStore. changelog_1205_li=The in-memory file systems (memFS\: and memLZF\:) did not support files larger than 2 GB due to an integer overflow. changelog_1206_li=Pull request \#138\: Added the simple Oracle function\: ORA_HASH (+ tests) \#138 changelog_1207_li=Timestamps in the trace log follow the format (yyyy-MM-dd HH\:mm\:ss) instead of the old format (MM-dd HH\:mm\:ss). Patch by Richard Bull. changelog_1208_li=Pull request \#125\: Improved Oracle compatibility with "truncate" with timestamps and dates. changelog_1209_li=Pull request \#127\: Linked tables now support geometry columns. changelog_1210_li=ABS(CAST(0.0 AS DOUBLE)) returned -0.0 instead of 0.0. changelog_1211_li=BNF auto-completion failed with unquoted identifiers. changelog_1212_li=Oracle compatibility\: empty strings were not converted to NULL when using prepared statements. changelog_1213_li=PostgreSQL compatibility\: new syntax "create index ... using ...". changelog_1214_li=There was a bug in DataType.convertToValue when reading a ResultSet from a ResultSet. changelog_1215_li=Pull request \#116\: Improved concurrency in the trace system. changelog_1216_li=Issue 609\: the spatial index did not support NULL. changelog_1217_li=Granting a schema is now supported. changelog_1218_li=Linked tables did not work when a function-based index is present (Oracle). changelog_1219_li=Creating a user with a null password, salt, or hash threw a NullPointerException. changelog_1220_li=Foreign key\: don't add a single column index if column is leading key of existing index. changelog_1221_li=Pull request \#4\: Creating and removing temporary tables was getting slower and slower over time, because an internal object id was allocated but never de-allocated. changelog_1222_li=Issue 609\: the spatial index did not support NULL with update and delete operations. changelog_1223_li=Pull request \#2\: Add external metadata type support (table type "external") changelog_1224_li=MS SQL Server\: the CONVERT method did not work in views and derived tables. changelog_1225_li=Java 8 compatibility for "regexp_replace". changelog_1226_li=When in cluster mode, and one of the nodes goes down, we need to log the problem with priority "error", not "debug" changelog_1227_h2=Version 1.4.187 Beta (2015-04-10) changelog_1228_li=MVStore\: concurrent changes to the same row could result in the exception "The transaction log might be corrupt for key ...". This could only be reproduced with 3 or more threads. changelog_1229_li=Results with CLOB or BLOB data are no longer reused. changelog_1230_li=References to BLOB and CLOB objects now have a timeout. The configuration setting is LOB_TIMEOUT (default 5 minutes). This should avoid growing the database file if there are many queries that return BLOB or CLOB objects, and the database is not closed for a longer time. changelog_1231_li=MVStore\: when committing a session that removed LOB values, changes were flushed unnecessarily. changelog_1232_li=Issue 610\: possible integer overflow in WriteBuffer.grow(). changelog_1233_li=Issue 609\: the spatial index did not support NULL (ClassCastException). changelog_1234_li=MVStore\: in some cases, CLOB/BLOB data blocks were removed incorrectly when opening a database. 
changelog_1235_li=MVStore\: updates that affected many rows were slow in some cases if there was a secondary index. changelog_1236_li=Using "runscript" with autocommit disabled could result in a lock timeout on the internal table "SYS". changelog_1237_li=Issue 603\: there was a memory leak when using H2 in a web application. Apache Tomcat logged an error message\: "The web application ... created a ThreadLocal with key of type [org.h2.util.DateTimeUtils$1]". changelog_1238_li=When using the MVStore, running a SQL script generated by the Recover tool from a PageStore file failed with a strange error message (NullPointerException); now a clear error message is shown. changelog_1239_li=Issue 605\: with version 1.4.186, opening a database could result in an endless loop in LobStorageMap.init. changelog_1240_li=Queries that use the same table alias multiple times now work. Before, the select expression list was expanded incorrectly. Example\: "select * from a as x, b as x". changelog_1241_li=The MySQL compatibility feature "insert ... on duplicate key update" did not work with a non-default schema. changelog_1242_li=Issue 599\: the condition "in(x, y)" could not be used in the select list when using "group by". changelog_1243_li=The LIRS cache could grow larger than the allocated memory. changelog_1244_li=A new file system implementation that re-opens the file if it was closed due to the application calling Thread.interrupt(). File name prefix "retry\:". Please note it is strongly recommended to avoid calling Thread.interrupt; this is a problem for various libraries, including Apache Lucene. changelog_1245_li=MVStore\: use RandomAccessFile file system if the file name starts with "file\:". changelog_1246_li=Allow DATEADD to take a long value for count when manipulating milliseconds. changelog_1247_li=When using MV_STORE\=TRUE and the SET CACHE_SIZE setting, the cache size was incorrectly set, so that it was effectively 1024 times smaller than it should be. changelog_1248_li=Concurrent CREATE TABLE... IF NOT EXISTS in the presence of MULTI_THREAD\=TRUE could throw an exception. changelog_1249_li=Fix bug in MVStore when creating lots of temporary tables, where we could run out of transaction IDs. changelog_1250_li=Add support for PostgreSQL STRING_AGG function. Patch by Fred Aquiles. changelog_1251_li=Fix bug in "jdbc\:h2\:nioMemFS" isRoot() function. Also, the page size was increased to 64 KB. changelog_1252_h2=Version 1.4.186 Beta (2015-03-02) changelog_1253_li=The Servlet API 3.0.1 is now used, instead of 2.4. changelog_1254_li=MVStore\: old chunks no longer removed in append-only mode. changelog_1255_li=MVStore\: the cache for page references could grow far too big, resulting in out of memory in some cases. changelog_1256_li=MVStore\: orphaned lob objects were not correctly removed in some cases, making the database grow unnecessarily. changelog_1257_li=MVStore\: the maximum cache size was artificially limited to 2 GB (due to an integer overflow). changelog_1258_li=MVStore / TransactionStore\: concurrent updates could result in a "Too many open transactions" exception. changelog_1259_li=StringUtils.toUpperEnglish now has a small cache. This should speed up reading from a ResultSet when using the column name. changelog_1260_li=MVStore\: up to 65535 open transactions are now supported. Previously, the limit was at most 65535 transactions between the oldest open and the newest open transaction (which was quite a strange limit).
changelog_1261_li=The default limit for in-place LOB objects was changed from 128 to 256 bytes. This is because each read creates a reference to a LOB, and maintaining the references is a big overhead. With the higher limit, less references are needed. changelog_1262_li=Tables without columns didn't work. (The use case for such tables is testing.) changelog_1263_li=The LIRS cache now resizes the table automatically in all cases and no longer needs the averageMemory configuration. changelog_1264_li=Creating a linked table from an MVStore database to a non-MVStore database created a second (non-MVStore) database file. changelog_1265_li=In version 1.4.184, a bug was introduced that broke queries that have both joins and wildcards, for example\: select * from dual join(select x from dual) on 1\=1 changelog_1266_li=Issue 598\: parser fails on timestamp "24\:00\:00.1234" - prevent the creation of out-of-range time values. changelog_1267_li=Allow declaring triggers as source code (like functions). Patch by Sylvain Cuaz. changelog_1268_li=Make the planner use indexes for sorting when doing a GROUP BY where all of the GROUP BY columns are not mentioned in the select. Patch by Frederico (zepfred). changelog_1269_li=PostgreSQL compatibility\: generate_series (as an alias for system_range). Patch by litailang. changelog_1270_li=Fix missing "column" type in right-hand parameter in ConditionIn. Patch by Arnaud Thimel. changelog_1271_h2=Version 1.4.185 Beta (2015-01-16) changelog_1272_li=In version 1.4.184, "group by" ignored the table name, and could pick a select column by mistake. Example\: select 0 as x from system_range(1, 2) d group by d.x; changelog_1273_li=New connection setting "REUSE_SPACE" (default\: true). If disabled, all changes are appended to the database file, and existing content is never overwritten. This allows to rollback to a previous state of the database by truncating the database file. changelog_1274_li=Issue 587\: MVStore\: concurrent compaction and store operations could result in an IllegalStateException. changelog_1275_li=Issue 594\: Profiler.copyInThread does not work properly. changelog_1276_li=Script tool\: Now, SCRIPT ... TO is always used (for higher speed and lower disk space usage). changelog_1277_li=Script tool\: Fix parsing of BLOCKSIZE parameter, original patch by Ken Jorissen. changelog_1278_li=Fix bug in PageStore\#commit method - when the ignoreBigLog flag was set, the logic that cleared the flag could never be reached, resulting in performance degradation. Reported by Alexander Nesterov. changelog_1279_li=Issue 552\: Implement BIT_AND and BIT_OR aggregate functions. changelog_1280_h2=Version 1.4.184 Beta (2014-12-19) changelog_1281_li=In version 1.3.183, indexes were not used if the table contains columns with a default value generated by a sequence. This includes tables with identity and auto-increment columns. This bug was introduced by supporting "rownum" in views and derived tables. changelog_1282_li=MVStore\: imported BLOB and CLOB data sometimes disappeared. This was caused by a bug in the ObjectDataType comparison. changelog_1283_li=Reading from a StreamStore now throws an IOException if the underlying data doesn't exist. changelog_1284_li=MVStore\: if there is an exception while saving, the store is now in all cases immediately closed. changelog_1285_li=MVStore\: the dump tool could go into an endless loop for some files. changelog_1286_li=MVStore\: recovery for a database with many CLOB or BLOB entries is now much faster. 
changelog_1287_li=Group by with a quoted select column name alias didn't work. Example\: select 1 "a" from dual group by "a" changelog_1288_li=Auto-server mode\: the host name is now stored in the .lock.db file. changelog_1289_h2=Version 1.4.183 Beta (2014-12-13) changelog_1290_li=MVStore\: the default auto-commit buffer size is now about twice as big. This should reduce the database file size after inserting a lot of data. changelog_1291_li=The built-in functions "power" and "radians" now always return a double. changelog_1292_li=Using "row_number" or "rownum" in views or derived tables had unexpected results if the outer query contained constraints for the given view. Example\: select b.nr, b.id from (select row_number() over() as nr, a.id as id from (select id from test order by name) as a) as b where b.id \= 1 changelog_1293_li=MVStore\: the Recover tool can now deal with more types of corruption in the file. changelog_1294_li=MVStore\: the TransactionStore now first needs to be initialized before it can be used. changelog_1295_li=Views and derived tables with equality and range conditions on the same columns did not work properly. Example\: select x from (select x from (select 1 as x) where x > 0 and x < 2) where x \= 1 changelog_1296_li=The database URL setting PAGE_SIZE is now also used for the MVStore. changelog_1297_li=MVStore\: the default page split size for persistent stores is now 4096 (it was 16 KB so far). This should reduce the database file size for most situations (in some cases, less than half the size of the previous version). changelog_1298_li=With query literals disabled, auto-analyze of a table with CLOB or BLOB did not work. changelog_1299_li=MVStore\: use a mark and sweep GC algorithm instead of reference counting, to ensure used chunks are never overwritten, even if the reference counting algorithm does not work properly. changelog_1300_li=In the multi-threaded mode, updating the column selectivity ("analyze") in the background sometimes did not work. changelog_1301_li=In the multi-threaded mode, database metadata operations sometimes did not work if the schema was changed at the same time (for example, if tables were dropped). changelog_1302_li=Some CLOB and BLOB values could no longer be read when the original row was removed (even when using the MVCC mode). changelog_1303_li=The MVStoreTool could throw an IllegalArgumentException. changelog_1304_li=Improved performance for some date / time / timestamp conversion operations. Thanks to Sergey Evdokimov for reporting the problem. changelog_1305_li=H2 Console\: the built-in web server did not work properly if an unknown file was requested. changelog_1306_li=MVStore\: the jar file is renamed to "h2-mvstore-*.jar" and is deployed to Maven separately. changelog_1307_li=MVStore\: support for concurrent reads and writes is now enabled by default. changelog_1308_li=Server mode\: the transfer buffer size has been changed from 16 KB to 64 KB, after it was found that this improves performance on Linux quite a lot. changelog_1309_li=H2 Console and server mode\: SSL is now disabled and TLS is used to protect against the Poodle SSLv3 vulnerability. The system property to disable secure anonymous connections is now "h2.enableAnonymousTLS". The default certificate is still self-signed, so you need to manually install another one if you want to avoid man-in-the-middle attacks. changelog_1310_li=MVStore\: the R-tree did not correctly measure the memory usage.
changelog_1311_li=MVStore\: compacting a store with an R-tree did not always work. changelog_1312_li=Issue 581\: When running in LOCK_MODE\=0, JdbcDatabaseMetaData\#supportsTransactionIsolationLevel(TRANSACTION_READ_UNCOMMITTED) should return false changelog_1313_li=Fix bug which could generate deadlocks when multiple connections accessed the same table. changelog_1314_li=Some places in the code were not respecting the value set in the "SET MAX_MEMORY_ROWS x" command changelog_1315_li=Fix bug which could generate a NegativeArraySizeException when performing large (>40M) row union operations changelog_1316_li=Fix "USE schema" command for MySQL compatibility, patch by mfulton changelog_1317_li=Parse and ignore the ROW_FORMAT\=DYNAMIC MySQL syntax, patch by mfulton changelog_1318_h2=Version 1.4.182 Beta (2014-10-17) changelog_1319_li=MVStore\: improved error messages and logging; improved behavior if there is an error when serializing objects. changelog_1320_li=OSGi\: the MVStore packages are now exported. changelog_1321_li=With the MVStore option, when using multiple threads that concurrently create indexes or tables, it was relatively easy to get a lock timeout on the "SYS" table. changelog_1322_li=When using the multi-threaded option, the exception "Unexpected code path" could be thrown, especially if the option "analyze_auto" was set to a low value. changelog_1323_li=In the server mode, when reading from a CLOB or BLOB, if the connection was closed, a NullPointerException could be thrown instead of an exception saying the connection is closed. changelog_1324_li=DatabaseMetaData.getProcedures and getProcedureColumns could throw an exception if a user defined class is not available. changelog_1325_li=Issue 584\: the error message for a wrong sequence definition was wrong. changelog_1326_li=CSV tool\: the rowSeparator option is no longer supported, as the same can be achieved with the lineSeparator. changelog_1327_li=Descending indexes on MVStore tables did not work properly. changelog_1328_li=Issue 579\: Conditions on the "_rowid_" pseudo-column didn't use an index when using the MVStore. changelog_1329_li=Fixed documentation that "offset" and "fetch" are also keywords since version 1.4.x. changelog_1330_li=The Long.MIN_VALUE could not be parsed for auto-increment (identity) columns. changelog_1331_li=Issue 573\: Add implementation for Methods "isWrapperFor()" and "unwrap()" in other JDBC classes. changelog_1332_li=Issue 572\: MySQL compatibility for "order by" in update statements. changelog_1333_li=The change in JDBC escape processing in version 1.4.181 affects both the parser (which is running on the server) and the JDBC API (which is running on the client). If you (or a tool you use) use the syntax "{t 'time'}", or "{ts 'timestamp'}", or "{d 'date'}", then both the client and the server need to be upgraded to version 1.4.181 or later. changelog_1334_h2=Version 1.4.181 Beta (2014-08-06) changelog_1335_li=Improved MySQL compatibility by supporting "use schema". Thanks a lot to Karl Pietrzak for the patch\! changelog_1336_li=Writing to the trace file is now faster, especially with the debug level. changelog_1337_li=The database option "defrag_always\=true" did not work with the MVStore. changelog_1338_li=The JDBC escape syntax {ts 'value'} did not interpret the value as a timestamp. The same for {d 'value'} (for date) and {t 'value'} (for time). Thanks to Lukas Eder for reporting the issue.
The following problem was detected after version 1.4.181 was released\: The change in JDBC escape processing affects both the parser (which is running on the server) and the JDBC API (which is running on the client). If you (or a tool you use) use the syntax {t 'time'}, or {ts 'timestamp'}, or {d 'date'}, then both the client and the server need to be upgraded to version 1.4.181 or later. changelog_1339_li=File system abstraction\: support replacing existing files using move (currently not for Windows). changelog_1340_li=The statement "shutdown defrag" now compresses the database (with the MVStore). This command can greatly reduce the file size, and is relatively fast, but is not incremental. changelog_1341_li=The MVStore now automatically compacts the store in the background if there is no read or write activity, which should (after some time; sometimes about one minute) reduce the file size. This is still work in progress, feedback is welcome\! changelog_1342_li=Change default value of PAGE_SIZE from 2048 to 4096 to more closely match most file systems block size (PageStore only; the MVStore already used 4096). changelog_1343_li=Auto-scale MAX_MEMORY_ROWS and CACHE_SIZE settings by the amount of available RAM. Gives a better out of box experience for people with more powerful machines. changelog_1344_li=Handle tabs like 4 spaces in web console, patch by Martin Grajcar. changelog_1345_li=Issue 573\: Add implementation for Methods "isWrapperFor()" and "unwrap()" in JdbcConnection.java, patch by BigMichi1. changelog_1346_h2=Version 1.4.180 Beta (2014-07-13) changelog_1347_li=MVStore\: the store is now auto-compacted automatically up to some point, to avoid very large file sizes. This area is still work in progress. changelog_1348_li=Sequences of temporary tables (auto-increment or identity columns) were persisted unnecessarily in the database file, and were not removed when re-opening the database. changelog_1349_li=MVStore\: an IndexOutOfBoundsException could sometimes occur MVMap.openVersion when concurrently accessing the store. changelog_1350_li=The LIRS cache now re-sizes the internal hash map if needed. changelog_1351_li=Optionally persist session history in the H2 console. (patch from Martin Grajcar) changelog_1352_li=Add client-info property to get the number of servers currently in the cluster and which servers that are available. (patch from Nikolaj Fogh) changelog_1353_li=Fix bug in changing encrypted DB password that kept the file handle open when the wrong password was supplied. (test case from Jens Hohmuth). changelog_1354_li=Issue 567\: H2 hangs for a long time then (sometimes) recovers. Introduce a queue when doing table locking to prevent session starvation. cheatSheet_1000_h1=H2 Database Engine Cheat Sheet cheatSheet_1001_h2=Using H2 cheatSheet_1002_a=H2 cheatSheet_1003_li=\ is <a href\="https\://github.com/h2database/h2database">open source</a>, <a href\="license.html">free to use and distribute</a>. cheatSheet_1004_a=Download cheatSheet_1005_li=\: <a href\="http\://repo1.maven.org/maven2/com/h2database/h2/1.4.196/h2-1.4.196.jar" class\="link">jar</a>, <a href\="http\://www.h2database.com/h2-setup-2017-06-10.exe" class\="link">installer (Windows)</a>, <a href\="http\://www.h2database.com/h2-2017-06-10.zip" class\="link">zip</a>. cheatSheet_1006_li=To start the <a href\="quickstart.html\#h2_console">H2 Console tool</a>, double click the jar file, or run <code>java -jar h2*.jar</code>, <code>h2.bat</code>, or <code>h2.sh</code>. 
cheatSheet_1007_a=A new database is automatically created cheatSheet_1008_a=by default cheatSheet_1009_li=. cheatSheet_1010_a=Closing the last connection closes the database cheatSheet_1011_li=. cheatSheet_1012_h2=Documentation cheatSheet_1013_p=\ Reference\: <a href\="grammar.html" class\="link">SQL grammar</a>, <a href\="functions.html" class\="link">functions</a>, <a href\="datatypes.html" class\="link">data types</a>, <a href\="tutorial.html\#command_line_tools" class\="link">tools</a>, <a href\="../javadoc/index.html" class\="link">API</a> cheatSheet_1014_a=Features cheatSheet_1015_p=\: <a href\="tutorial.html\#fulltext" class\="link">fulltext search</a>, <a href\="features.html\#file_encryption" class\="link">encryption</a>, <a href\="features.html\#read_only" class\="link">read-only</a> <a href\="features.html\#database_in_zip" class\="link">(zip/jar)</a>, <a href\="tutorial.html\#csv" class\="link">CSV</a>, <a href\="features.html\#auto_reconnect" class\="link">auto-reconnect</a>, <a href\="features.html\#triggers" class\="link">triggers</a>, <a href\="features.html\#user_defined_functions" class\="link">user functions</a> cheatSheet_1016_a=Database URLs cheatSheet_1017_a=Embedded cheatSheet_1018_code=jdbc\:h2\:~/test cheatSheet_1019_p=\ 'test' in the user home directory cheatSheet_1020_code=jdbc\:h2\:/data/test cheatSheet_1021_p=\ 'test' in the directory /data cheatSheet_1022_code=jdbc\:h2\:test cheatSheet_1023_p=\ in the current(\!) working directory cheatSheet_1024_a=In-Memory cheatSheet_1025_code=jdbc\:h2\:mem\:test cheatSheet_1026_p=\ multiple connections in one process cheatSheet_1027_code=jdbc\:h2\:mem\: cheatSheet_1028_p=\ unnamed private; one connection cheatSheet_1029_a=Server Mode cheatSheet_1030_code=jdbc\:h2\:tcp\://localhost/~/test cheatSheet_1031_p=\ user home dir cheatSheet_1032_code=jdbc\:h2\:tcp\://localhost//data/test cheatSheet_1033_p=\ absolute dir cheatSheet_1034_a=Server start cheatSheet_1035_p=\:<code>java -cp *.jar org.h2.tools.Server</code> cheatSheet_1036_a=Settings cheatSheet_1037_code=jdbc\:h2\:..;MODE\=MySQL cheatSheet_1038_a=compatibility (or HSQLDB,...) cheatSheet_1039_code=jdbc\:h2\:..;TRACE_LEVEL_FILE\=3 cheatSheet_1040_a=log to *.trace.db cheatSheet_1041_a=Using the JDBC API cheatSheet_1042_a=Connection Pool cheatSheet_1043_a=Maven 2 cheatSheet_1044_a=Hibernate cheatSheet_1045_p=\ hibernate.cfg.xml (or use the HSQLDialect)\: cheatSheet_1046_a=TopLink and Glassfish cheatSheet_1047_p=\ Datasource class\: <code>org.h2.jdbcx.JdbcDataSource</code> cheatSheet_1048_code=oracle.toplink.essentials.platform. cheatSheet_1049_code=database.H2Platform download_1000_h1=Downloads download_1001_h3=Version 1.4.196 (2017-06-10) download_1002_a=Windows Installer download_1003_a=Platform-Independent Zip download_1004_h3=Version 1.4.195 (2017-04-23), Last Stable download_1005_a=Windows Installer download_1006_a=Platform-Independent Zip download_1007_h3=Old Versions download_1008_a=Platform-Independent Zip download_1009_h3=Jar File download_1010_a=Maven.org download_1011_a=Sourceforge.net download_1012_h3=Maven (Binary, Javadoc, and Source) download_1013_a=Binary download_1014_a=Javadoc download_1015_a=Sources download_1016_h3=Database Upgrade Helper File download_1017_a=Upgrade database from 1.1 to the current version download_1018_h3=Git Source Repository download_1019_a=Github download_1020_p=\ For details about changes, see the <a href\="changelog.html">Change Log</a>. 
download_1021_h3=News and Project Information download_1022_a=Atom Feed download_1023_a=RSS Feed download_1024_a=DOAP File download_1025_p=\ (<a href\="http\://en.wikipedia.org/wiki/DOAP">what is this</a>) faq_1000_h1=Frequently Asked Questions faq_1001_a=\ I Have a Problem or Feature Request faq_1002_a=\ Are there Known Bugs? When is the Next Release? faq_1003_a=\ Is this Database Engine Open Source? faq_1004_a=\ Is Commercial Support Available? faq_1005_a=\ How to Create a New Database? faq_1006_a=\ How to Connect to a Database? faq_1007_a=\ Where are the Database Files Stored? faq_1008_a=\ What is the Size Limit (Maximum Size) of a Database? faq_1009_a=\ Is it Reliable? faq_1010_a=\ Why is Opening my Database Slow? faq_1011_a=\ My Query is Slow faq_1012_a=\ H2 is Very Slow faq_1013_a=\ Column Names are Incorrect? faq_1014_a=\ Float is Double? faq_1015_a=\ Is the GCJ Version Stable? Faster? faq_1016_a=\ How to Translate this Project? faq_1017_a=\ How to Contribute to this Project? faq_1018_h3=I Have a Problem or Feature Request faq_1019_p=\ Please read the <a href\="build.html\#support">support checklist</a>. faq_1020_h3=Are there Known Bugs? When is the Next Release? faq_1021_p=\ Usually, bugs get fixed as they are found. There is a release every few weeks. Here is the list of known and confirmed issues\: faq_1022_li=When opening a database file in a timezone that has different daylight saving rules\: the time part of dates where the daylight saving doesn't match will differ. This is not a problem within regions that use the same rules (such as within the USA, or within Europe), even if the timezone itself is different. As a workaround, export the database to a SQL script using the old timezone, and create a new database in the new timezone. faq_1023_li=Apache Harmony\: there seems to be a bug in Harmony that affects H2. See <a href\="http\://issues.apache.org/jira/browse/HARMONY-6505">HARMONY-6505</a>. faq_1024_li=Tomcat and Glassfish 3 set most static fields (final or non-final) to <code>null</code> when unloading a web application. This can cause a <code>NullPointerException</code> in H2 versions 1.1.107 and older, and may still not work in newer versions. Please report it if you run into this issue. In Tomcat >\= 6.0 this behavior can be disabled by setting the system property <code>org.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES\=false</code>, however Tomcat may then run out of memory. A known workaround is to put the <code>h2*.jar</code> file in a shared <code>lib</code> directory (<code>common/lib</code>). faq_1025_li=Some problems have been found with right outer join. Internally, it is converted to left outer join, which does not always produce the same results as other databases when used in combination with other joins. This problem is fixed in H2 version 1.3. faq_1026_li=When using Install4j before 4.1.4 on Linux and enabling <code>pack200</code>, the <code>h2*.jar</code> becomes corrupted by the install process, causing application failure. A workaround is to add an empty file <code>h2*.jar.nopack</code> next to the <code>h2*.jar</code> file. This problem is solved in Install4j 4.1.4. faq_1027_p=\ For a complete list, see <a href\="https\://github.com/h2database/h2database/issues">Open Issues</a>. faq_1028_h3=Is this Database Engine Open Source? faq_1029_p=\ Yes. It is free to use and distribute, and the source code is included. See also under license. faq_1030_h3=Is Commercial Support Available?
faq_1031_p=\ No, currently commercial support is not available. faq_1032_h3=How to Create a New Database? faq_1033_p=\ By default, a new database is automatically created if it does not yet exist. See <a href\="tutorial.html\#creating_new_databases">Creating New Databases</a>. faq_1034_h3=How to Connect to a Database? faq_1035_p=\ The database driver is <code>org.h2.Driver</code>, and the database URL starts with <code>jdbc\:h2\:</code>. To connect to a database using JDBC, use code like the example sketched at the end of this FAQ section. faq_1036_h3=Where are the Database Files Stored? faq_1037_p=\ When using database URLs like <code>jdbc\:h2\:~/test</code>, the database is stored in the user directory. For Windows, this is usually <code>C\:\\Documents and Settings\\<userName></code> or <code>C\:\\Users\\<userName></code>. If the base directory is not set (as in <code>jdbc\:h2\:./test</code>), the database files are stored in the directory where the application is started (the current working directory). When using the H2 Console application from the start menu, this is <code><Installation Directory>/bin</code>. The base directory can be set in the database URL. A fixed or relative path can be used. When using the URL <code>jdbc\:h2\:file\:./data/sample</code>, the database is stored in the directory <code>data</code> (relative to the current working directory). The directory is created automatically if it does not yet exist. It is also possible to use the fully qualified directory name (and for Windows, drive name). Example\: <code>jdbc\:h2\:file\:C\:/data/test</code> faq_1038_h3=What is the Size Limit (Maximum Size) of a Database? faq_1039_p=\ See <a href\="advanced.html\#limits_limitations">Limits and Limitations</a>. faq_1040_h3=Is it Reliable? faq_1041_p=\ That is not easy to say. It is still quite a new product. A lot of tests have been written, and the code coverage of these tests is higher than 80% for each package. Randomized stress tests are run regularly. But there are probably still bugs that have not yet been found (as with most software). Some features are known to be dangerous; they are only supported for situations where performance is more important than reliability. Those dangerous features are\: faq_1042_li=Disabling the transaction log or FileDescriptor.sync() using LOG\=0 or LOG\=1. faq_1043_li=Using the transaction isolation level <code>READ_UNCOMMITTED</code> (<code>LOCK_MODE 0</code>) while at the same time using multiple connections. faq_1044_li=Disabling database file protection (setting <code>FILE_LOCK</code> to <code>NO</code> in the database URL). faq_1045_li=Disabling referential integrity using <code>SET REFERENTIAL_INTEGRITY FALSE</code>. faq_1046_p=\ In addition to that, running out of memory should be avoided. In older versions, OutOfMemory errors while using the database could corrupt the database. faq_1047_p=\ This database is well tested using automated test cases. The tests run every night and run for more than one hour. But not all areas of this database are equally well tested. When using one of the following features for production, please ensure your use case is well tested (if possible with automated test cases). The areas that are not well tested are\: faq_1048_li=Platforms other than Windows, Linux, Mac OS X, or JVMs other than Oracle 1.6, 1.7, 1.8. faq_1049_li=The features <code>AUTO_SERVER</code> and <code>AUTO_RECONNECT</code>. faq_1050_li=Cluster mode, 2-phase commit, savepoints. faq_1051_li=Fulltext search. faq_1052_li=Operations on LOBs over 2 GB.
faq_1053_li=The optimizer may not always select the best plan. faq_1054_li=Using the ICU4J collator. faq_1055_p=\ Areas considered experimental are\: faq_1056_li=The PostgreSQL server faq_1057_li=Clustering (there are cases where transaction isolation can be broken due to timing issues, for example one session overtaking another session). faq_1058_li=Multi-threading within the engine using <code>SET MULTI_THREADED\=1</code>. faq_1059_li=Compatibility modes for other databases (only some features are implemented). faq_1060_li=The soft reference cache (<code>CACHE_TYPE\=SOFT_LRU</code>). It might not improve performance, and out of memory issues have been reported. faq_1061_p=\ Some users have reported that after a power failure, the database sometimes cannot be opened. In this case, use a backup of the database or the Recover tool. Please report such problems. The plan is that the database automatically recovers in all situations. faq_1062_h3=Why is Opening my Database Slow? faq_1063_p=\ To find out what the problem is, use the H2 Console and click on "Test Connection" instead of "Login". After the "Login Successful" message appears, click on it (it's a link). This will list the top stack traces. Then either analyze this yourself, or post those stack traces in the Google Group. faq_1064_p=\ Other possible reasons are\: the database is very big (many GB), or contains linked tables that are slow to open. faq_1065_h3=My Query is Slow faq_1066_p=\ Slow <code>SELECT</code> (or <code>DELETE, UPDATE, MERGE</code>) statements can have multiple reasons. Follow this checklist\: faq_1067_li=Run <code>ANALYZE</code> (see documentation for details). faq_1068_li=Run the query with <code>EXPLAIN</code> and check if indexes are used (see documentation for details). faq_1069_li=If required, create additional indexes and try again using <code>ANALYZE</code> and <code>EXPLAIN</code>. faq_1070_li=If it doesn't help, please report the problem. faq_1071_h3=H2 is Very Slow faq_1072_p=\ By default, H2 closes the database when the last connection is closed. If your application closes the only connection after each operation, the database is opened and closed a lot, which is quite slow. There are multiple ways to solve this problem, see <a href\="performance.html\#database_performance_tuning">Database Performance Tuning</a>. faq_1073_h3=Column Names are Incorrect? faq_1074_p=\ For the query <code>SELECT ID AS X FROM TEST</code> the method <code>ResultSetMetaData.getColumnName()</code> returns <code>ID</code>, but I expect it to return <code>X</code>. What's wrong? faq_1075_p=\ This is not a bug. According to the JDBC specification, the method <code>ResultSetMetaData.getColumnName()</code> should return the name of the column and not the alias name. If you need the alias name, use <a href\="http\://java.sun.com/javase/6/docs/api/java/sql/ResultSetMetaData.html\#getColumnLabel(int)"><code>ResultSetMetaData.getColumnLabel()</code></a>. Some other databases don't work like this yet (they don't follow the JDBC specification). If you need compatibility with those databases, use the <a href\="features.html\#compatibility">Compatibility Mode</a>, or append <a href\="../javadoc/org/h2/engine/DbSettings.html\#ALIAS_COLUMN_NAME"><code>;ALIAS_COLUMN_NAME\=TRUE</code></a> to the database URL. faq_1076_p=\ This also applies to DatabaseMetaData calls that return a result set. The columns in the JDBC API are column labels, not column names. faq_1077_h3=Float is Double?
faq_1078_p=\ For a table defined as <code>CREATE TABLE TEST(X FLOAT)</code> the method <code>ResultSet.getObject()</code> returns a <code>java.lang.Double</code>, but I expect it to return a <code>java.lang.Float</code>. What's wrong? faq_1079_p=\ This is not a bug. According to the JDBC specification, the JDBC data type <code>FLOAT</code> is equivalent to <code>DOUBLE</code>, and both are mapped to <code>java.lang.Double</code>. See also <a href\="http\://java.sun.com/j2se/1.5.0/docs/guide/jdbc/getstart/mapping.html\#1055162"> Mapping SQL and Java Types - 8.3.10 FLOAT</a>. faq_1080_h3=Is the GCJ Version Stable? Faster? faq_1081_p=\ The GCJ version is not as stable as the Java version. When running the regression test with the GCJ version, sometimes the application just stops at what seems to be a random point without an error message. Currently, the GCJ version is also slower than when using the Sun VM. However, the startup of the GCJ version is faster than when using a VM. faq_1082_h3=How to Translate this Project? faq_1083_p=\ For more information, see <a href\="build.html\#translating">Build/Translating</a>. faq_1084_h3=How to Contribute to this Project? faq_1085_p=\ There are various ways to help develop an open source project like H2. The first step could be to <a href\="build.html\#translating">translate</a> the error messages and the GUI to your native language. Then, you could <a href\="build.html\#providing_patches">provide patches</a>. Please start with small patches. That could be adding a test case to improve the <a href\="build.html\#automated">code coverage</a> (the target code coverage for this project is 90%, higher is better). You will have to <a href\="build.html">develop, build and run the tests</a>. Once you are familiar with the code, you could implement missing features from the <a href\="roadmap.html">feature request list</a>. I suggest starting with very small features that are easy to implement. Keep in mind to provide test cases as well.
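The FAQ entry "How to Connect to a Database?" above refers to a connection example. Here is a minimal sketch, assuming the default user name "sa" with an empty password and a file database named "test" in the user home directory; the class name H2ConnectExample is only illustrative.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class H2ConnectExample {
        public static void main(String[] args) throws Exception {
            // The H2 driver (org.h2.Driver) registers itself automatically with JDBC 4.0 and later.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
                 Statement stat = conn.createStatement();
                 ResultSet rs = stat.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
            }
        }
    }

On older JVMs without automatic driver loading, calling Class.forName("org.h2.Driver") before getConnection may be required.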
features_1000_h1=Features features_1001_a=\ Feature List features_1002_a=\ Comparison to Other Database Engines features_1003_a=\ H2 in Use features_1004_a=\ Connection Modes features_1005_a=\ Database URL Overview features_1006_a=\ Connecting to an Embedded (Local) Database features_1007_a=\ In-Memory Databases features_1008_a=\ Database Files Encryption features_1009_a=\ Database File Locking features_1010_a=\ Opening a Database Only if it Already Exists features_1011_a=\ Closing a Database features_1012_a=\ Ignore Unknown Settings features_1013_a=\ Changing Other Settings when Opening a Connection features_1014_a=\ Custom File Access Mode features_1015_a=\ Multiple Connections features_1016_a=\ Database File Layout features_1017_a=\ Logging and Recovery features_1018_a=\ Compatibility features_1019_a=\ Auto-Reconnect features_1020_a=\ Automatic Mixed Mode features_1021_a=\ Page Size features_1022_a=\ Using the Trace Options features_1023_a=\ Using Other Logging APIs features_1024_a=\ Read Only Databases features_1025_a=\ Read Only Databases in Zip or Jar File features_1026_a=\ Computed Columns / Function Based Index features_1027_a=\ Multi-Dimensional Indexes features_1028_a=\ User-Defined Functions and Stored Procedures features_1029_a=\ Pluggable or User-Defined Tables features_1030_a=\ Triggers features_1031_a=\ Compacting a Database features_1032_a=\ Cache Settings features_1033_h2=Feature List features_1034_h3=Main Features features_1035_li=Very fast database engine features_1036_li=Open source features_1037_li=Written in Java features_1038_li=Supports standard SQL, JDBC API features_1039_li=Embedded and Server mode, Clustering support features_1040_li=Strong security features features_1041_li=The PostgreSQL ODBC driver can be used features_1042_li=Multi version concurrency features_1043_h3=Additional Features features_1044_li=Disk based or in-memory databases and tables, read-only database support, temporary tables features_1045_li=Transaction support (read committed), 2-phase-commit features_1046_li=Multiple connections, table level locking features_1047_li=Cost based optimizer, using a genetic algorithm for complex queries, zero-administration features_1048_li=Scrollable and updatable result set support, large result set, external result sorting, functions can return a result set features_1049_li=Encrypted database (AES), SHA-256 password encryption, encryption functions, SSL features_1050_h3=SQL Support features_1051_li=Support for multiple schemas, information schema features_1052_li=Referential integrity / foreign key constraints with cascade, check constraints features_1053_li=Inner and outer joins, subqueries, read only views and inline views features_1054_li=Triggers and Java functions / stored procedures features_1055_li=Many built-in functions, including XML and lossless data compression features_1056_li=Wide range of data types including large objects (BLOB/CLOB) and arrays features_1057_li=Sequence and autoincrement columns, computed columns (can be used for function based indexes) features_1058_code=ORDER BY, GROUP BY, HAVING, UNION, LIMIT, TOP features_1059_li=Collation support, including support for the ICU4J library features_1060_li=Support for users and roles features_1061_li=Compatibility modes for IBM DB2, Apache Derby, HSQLDB, MS SQL Server, MySQL, Oracle, and PostgreSQL. 
features_1062_h3=Security Features features_1063_li=Includes a solution for the SQL injection problem features_1064_li=User password authentication uses SHA-256 and salt features_1065_li=For server mode connections, user passwords are never transmitted in plain text over the network (even when using insecure connections; this only applies to the TCP server and not to the H2 Console however; it also doesn't apply if you set the password in the database URL) features_1066_li=All database files (including script files that can be used to backup data) can be encrypted using the AES-128 encryption algorithm features_1067_li=The remote JDBC driver supports TCP/IP connections over TLS features_1068_li=The built-in web server supports connections over TLS features_1069_li=Passwords can be sent to the database using char arrays instead of Strings features_1070_h3=Other Features and Tools features_1071_li=Small footprint (smaller than 1.5 MB), low memory requirements features_1072_li=Multiple index types (b-tree, tree, hash) features_1073_li=Support for multi-dimensional indexes features_1074_li=CSV (comma separated values) file support features_1075_li=Support for linked tables, and a built-in virtual 'range' table features_1076_li=Supports the <code>EXPLAIN PLAN</code> statement; sophisticated trace options features_1077_li=Database closing can be delayed or disabled to improve the performance features_1078_li=Web-based Console application (translated to many languages) with autocomplete features_1079_li=The database can generate SQL script files features_1080_li=Contains a recovery tool that can dump the contents of the database features_1081_li=Support for variables (for example to calculate running totals) features_1082_li=Automatic re-compilation of prepared statements features_1083_li=Uses a small number of database files features_1084_li=Uses a checksum for each record and log entry for data integrity features_1085_li=Well tested (high code coverage, randomized stress tests) features_1086_h2=Comparison to Other Database Engines features_1087_p=\ This comparison is based on H2 1.3, <a href\="http\://db.apache.org/derby">Apache Derby version 10.8</a>, <a href\="http\://hsqldb.org">HSQLDB 2.2</a>, <a href\="http\://mysql.com">MySQL 5.5</a>, <a href\="http\://www.postgresql.org">PostgreSQL 9.0</a>. 
features_1088_th=Feature features_1089_th=H2 features_1090_th=Derby features_1091_th=HSQLDB features_1092_th=MySQL features_1093_th=PostgreSQL features_1094_td=Pure Java features_1095_td=Yes features_1096_td=Yes features_1097_td=Yes features_1098_td=No features_1099_td=No features_1100_td=Embedded Mode (Java) features_1101_td=Yes features_1102_td=Yes features_1103_td=Yes features_1104_td=No features_1105_td=No features_1106_td=In-Memory Mode features_1107_td=Yes features_1108_td=Yes features_1109_td=Yes features_1110_td=No features_1111_td=No features_1112_td=Explain Plan features_1113_td=Yes features_1114_td=Yes *12 features_1115_td=Yes features_1116_td=Yes features_1117_td=Yes features_1118_td=Built-in Clustering / Replication features_1119_td=Yes features_1120_td=Yes features_1121_td=No features_1122_td=Yes features_1123_td=Yes features_1124_td=Encrypted Database features_1125_td=Yes features_1126_td=Yes *10 features_1127_td=Yes *10 features_1128_td=No features_1129_td=No features_1130_td=Linked Tables features_1131_td=Yes features_1132_td=No features_1133_td=Partially *1 features_1134_td=Partially *2 features_1135_td=Yes features_1136_td=ODBC Driver features_1137_td=Yes features_1138_td=No features_1139_td=No features_1140_td=Yes features_1141_td=Yes features_1142_td=Fulltext Search features_1143_td=Yes features_1144_td=Yes features_1145_td=No features_1146_td=Yes features_1147_td=Yes features_1148_td=Domains (User-Defined Types) features_1149_td=Yes features_1150_td=No features_1151_td=Yes features_1152_td=Yes features_1153_td=Yes features_1154_td=Files per Database features_1155_td=Few features_1156_td=Many features_1157_td=Few features_1158_td=Many features_1159_td=Many features_1160_td=Row Level Locking features_1161_td=Yes *9 features_1162_td=Yes features_1163_td=Yes *9 features_1164_td=Yes features_1165_td=Yes features_1166_td=Multi Version Concurrency features_1167_td=Yes features_1168_td=No features_1169_td=Yes features_1170_td=Yes features_1171_td=Yes features_1172_td=Multi-Threaded Processing features_1173_td=No *11 features_1174_td=Yes features_1175_td=Yes features_1176_td=Yes features_1177_td=Yes features_1178_td=Role Based Security features_1179_td=Yes features_1180_td=Yes *3 features_1181_td=Yes features_1182_td=Yes features_1183_td=Yes features_1184_td=Updatable Result Sets features_1185_td=Yes features_1186_td=Yes *7 features_1187_td=Yes features_1188_td=Yes features_1189_td=Yes features_1190_td=Sequences features_1191_td=Yes features_1192_td=Yes features_1193_td=Yes features_1194_td=No features_1195_td=Yes features_1196_td=Limit and Offset features_1197_td=Yes features_1198_td=Yes *13 features_1199_td=Yes features_1200_td=Yes features_1201_td=Yes features_1202_td=Window Functions features_1203_td=No *15 features_1204_td=No *15 features_1205_td=No features_1206_td=No features_1207_td=Yes features_1208_td=Temporary Tables features_1209_td=Yes features_1210_td=Yes *4 features_1211_td=Yes features_1212_td=Yes features_1213_td=Yes features_1214_td=Information Schema features_1215_td=Yes features_1216_td=No *8 features_1217_td=Yes features_1218_td=Yes features_1219_td=Yes features_1220_td=Computed Columns features_1221_td=Yes features_1222_td=Yes features_1223_td=Yes features_1224_td=Yes features_1225_td=Yes *6 features_1226_td=Case Insensitive Columns features_1227_td=Yes features_1228_td=Yes *14 features_1229_td=Yes features_1230_td=Yes features_1231_td=Yes *6 features_1232_td=Custom Aggregate Functions features_1233_td=Yes features_1234_td=No features_1235_td=Yes 
features_1236_td=No features_1237_td=Yes features_1238_td=CLOB/BLOB Compression features_1239_td=Yes features_1240_td=No features_1241_td=No features_1242_td=No features_1243_td=Yes features_1244_td=Footprint (jar/dll size) features_1245_td=~1.5 MB *5 features_1246_td=~3 MB features_1247_td=~1.5 MB features_1248_td=~4 MB features_1249_td=~6 MB features_1250_p=\ *1 HSQLDB supports text tables. features_1251_p=\ *2 MySQL supports linked MySQL tables under the name 'federated tables'. features_1252_p=\ *3 Derby supports role-based security and password checking as an option. features_1253_p=\ *4 Derby only supports global temporary tables. features_1254_p=\ *5 The default H2 jar file contains debug information; jar files for other databases do not. features_1255_p=\ *6 PostgreSQL supports functional indexes. features_1256_p=\ *7 Derby only supports updatable result sets if the query is not sorted. features_1257_p=\ *8 Derby doesn't support standard compliant information schema tables. features_1258_p=\ *9 When using MVCC (multi version concurrency). features_1259_p=\ *10 Derby and HSQLDB <a href\="http\://en.wikipedia.org/wiki/Block_cipher_modes_of_operation\#Electronic_codebook_.28ECB.29">don't hide data patterns well</a>. features_1260_p=\ *11 The MULTI_THREADED option is not enabled by default, and with version 1.3.x not supported when using MVCC. features_1261_p=\ *12 Derby doesn't support the <code>EXPLAIN</code> statement, but it supports runtime statistics and retrieving statement execution plans. features_1262_p=\ *13 Derby doesn't support the syntax <code>LIMIT .. [OFFSET ..]</code>, however it supports <code>FETCH FIRST .. ROW[S] ONLY</code>. features_1263_p=\ *14 Using collations. *15 Derby and H2 support <code>ROW_NUMBER() OVER()</code>. features_1264_h3=DaffodilDb and One$Db features_1265_p=\ It looks like the development of this database has stopped. The last release was February 2006. features_1266_h3=McKoi features_1267_p=\ It looks like the development of this database has stopped. The last release was August 2004. features_1268_h2=H2 in Use features_1269_p=\ For a list of applications that work with or use H2, see\: <a href\="links.html">Links</a>. features_1270_h2=Connection Modes features_1271_p=\ The following connection modes are supported\: features_1272_li=Embedded mode (local connections using JDBC) features_1273_li=Server mode (remote connections using JDBC or ODBC over TCP/IP) features_1274_li=Mixed mode (local and remote connections at the same time) features_1275_h3=Embedded Mode features_1276_p=\ In embedded mode, an application opens a database from within the same JVM using JDBC. This is the fastest and easiest connection mode. The disadvantage is that a database may only be open in one virtual machine (and class loader) at any time. As in all modes, both persistent and in-memory databases are supported. There is no limit on the number of databases open concurrently, or on the number of open connections. features_1277_h3=Server Mode features_1278_p=\ When using the server mode (sometimes called remote mode or client/server mode), an application opens a database remotely using the JDBC or ODBC API. A server needs to be started within the same or another virtual machine, or on another computer. Many applications can connect to the same database at the same time, by connecting to this server. Internally, the server process opens the database(s) in embedded mode.
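As a companion to the server mode description above, here is a minimal sketch of starting and stopping a TCP server from within an application using the org.h2.tools.Server API; the port 9092 and the -tcpAllowOthers flag are illustrative choices, and the class name TcpServerExample is only an example.

    import org.h2.tools.Server;

    public class TcpServerExample {
        public static void main(String[] args) throws Exception {
            // Start a TCP server so that other processes or machines can connect remotely;
            // -tcpPort 9092 and -tcpAllowOthers are example options, not requirements.
            Server server = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start();
            // Clients can now use URLs such as jdbc:h2:tcp://localhost:9092/~/test
            System.out.println("TCP server running at " + server.getURL());
            // ... keep the server running while the application needs it ...
            server.stop();
        }
    }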
features_1279_p=\ The server mode is slower than the embedded mode, because all data is transferred over TCP/IP. As in all modes, both persistent and in-memory databases are supported. There is no limit on the number of database open concurrently per server, or on the number of open connections. features_1280_h3=Mixed Mode features_1281_p=\ The mixed mode is a combination of the embedded and the server mode. The first application that connects to a database does that in embedded mode, but also starts a server so that other applications (running in different processes or virtual machines) can concurrently access the same data. The local connections are as fast as if the database is used in just the embedded mode, while the remote connections are a bit slower. features_1282_p=\ The server can be started and stopped from within the application (using the server API), or automatically (automatic mixed mode). When using the <a href\="\#auto_mixed_mode">automatic mixed mode</a>, all clients that want to connect to the database (no matter if it's an local or remote connection) can do so using the exact same database URL. features_1283_h2=Database URL Overview features_1284_p=\ This database supports multiple connection modes and connection settings. This is achieved using different database URLs. Settings in the URLs are not case sensitive. features_1285_th=Topic features_1286_th=URL Format and Examples features_1287_a=Embedded (local) connection features_1288_td=\ jdbc\:h2\:[file\:][<path>]<databaseName> features_1289_td=\ jdbc\:h2\:~/test features_1290_td=\ jdbc\:h2\:file\:/data/sample features_1291_td=\ jdbc\:h2\:file\:C\:/data/sample (Windows only) features_1292_a=In-memory (private) features_1293_td=jdbc\:h2\:mem\: features_1294_a=In-memory (named) features_1295_td=\ jdbc\:h2\:mem\:<databaseName> features_1296_td=\ jdbc\:h2\:mem\:test_mem features_1297_a=Server mode (remote connections) features_1298_a=\ using TCP/IP features_1299_td=\ jdbc\:h2\:tcp\://<server>[\:<port>]/[<path>]<databaseName> features_1300_td=\ jdbc\:h2\:tcp\://localhost/~/test features_1301_td=\ jdbc\:h2\:tcp\://dbserv\:8084/~/sample features_1302_td=\ jdbc\:h2\:tcp\://localhost/mem\:test features_1303_a=Server mode (remote connections) features_1304_a=\ using TLS features_1305_td=\ jdbc\:h2\:ssl\://<server>[\:<port>]/<databaseName> features_1306_td=\ jdbc\:h2\:ssl\://localhost\:8085/~/sample; features_1307_a=Using encrypted files features_1308_td=\ jdbc\:h2\:<url>;CIPHER\=AES features_1309_td=\ jdbc\:h2\:ssl\://localhost/~/test;CIPHER\=AES features_1310_td=\ jdbc\:h2\:file\:~/secure;CIPHER\=AES features_1311_a=File locking methods features_1312_td=\ jdbc\:h2\:<url>;FILE_LOCK\={FILE|SOCKET|NO} features_1313_td=\ jdbc\:h2\:file\:~/private;CIPHER\=AES;FILE_LOCK\=SOCKET features_1314_a=Only open if it already exists features_1315_td=\ jdbc\:h2\:<url>;IFEXISTS\=TRUE features_1316_td=\ jdbc\:h2\:file\:~/sample;IFEXISTS\=TRUE features_1317_a=Don't close the database when the VM exits features_1318_td=\ jdbc\:h2\:<url>;DB_CLOSE_ON_EXIT\=FALSE features_1319_a=Execute SQL on connection features_1320_td=\ jdbc\:h2\:<url>;INIT\=RUNSCRIPT FROM '~/create.sql' features_1321_td=\ jdbc\:h2\:file\:~/sample;INIT\=RUNSCRIPT FROM '~/create.sql'\\;RUNSCRIPT FROM '~/populate.sql' features_1322_a=User name and/or password features_1323_td=\ jdbc\:h2\:<url>[;USER\=<username>][;PASSWORD\=<value>] features_1324_td=\ jdbc\:h2\:file\:~/sample;USER\=sa;PASSWORD\=123 features_1325_a=Debug trace settings features_1326_td=\ 
jdbc\:h2\:<url>;TRACE_LEVEL_FILE\=<level 0..3> features_1327_td=\ jdbc\:h2\:file\:~/sample;TRACE_LEVEL_FILE\=3 features_1328_a=Ignore unknown settings features_1329_td=\ jdbc\:h2\:<url>;IGNORE_UNKNOWN_SETTINGS\=TRUE features_1330_a=Custom file access mode features_1331_td=\ jdbc\:h2\:<url>;ACCESS_MODE_DATA\=rws features_1332_a=Database in a zip file features_1333_td=\ jdbc\:h2\:zip\:<zipFileName>\!/<databaseName> features_1334_td=\ jdbc\:h2\:zip\:~/db.zip\!/test features_1335_a=Compatibility mode features_1336_td=\ jdbc\:h2\:<url>;MODE\=<databaseType> features_1337_td=\ jdbc\:h2\:~/test;MODE\=MYSQL features_1338_a=Auto-reconnect features_1339_td=\ jdbc\:h2\:<url>;AUTO_RECONNECT\=TRUE features_1340_td=\ jdbc\:h2\:tcp\://localhost/~/test;AUTO_RECONNECT\=TRUE features_1341_a=Automatic mixed mode features_1342_td=\ jdbc\:h2\:<url>;AUTO_SERVER\=TRUE features_1343_td=\ jdbc\:h2\:~/test;AUTO_SERVER\=TRUE features_1344_a=Page size features_1345_td=\ jdbc\:h2\:<url>;PAGE_SIZE\=512 features_1346_a=Changing other settings features_1347_td=\ jdbc\:h2\:<url>;<setting>\=<value>[;<setting>\=<value>...] features_1348_td=\ jdbc\:h2\:file\:~/sample;TRACE_LEVEL_SYSTEM_OUT\=3 features_1349_h2=Connecting to an Embedded (Local) Database features_1350_p=\ The database URL for connecting to a local database is <code>jdbc\:h2\:[file\:][<path>]<databaseName></code>. The prefix <code>file\:</code> is optional. If no or only a relative path is used, then the current working directory is used as a starting point. The case sensitivity of the path and database name depend on the operating system, however it is recommended to use lowercase letters only. The database name must be at least three characters long (a limitation of <code>File.createTempFile</code>). The database name must not contain a semicolon. To point to the user home directory, use <code>~/</code>, as in\: <code>jdbc\:h2\:~/test</code>. features_1351_h2=In-Memory Databases features_1352_p=\ For certain use cases (for example\: rapid prototyping, testing, high performance operations, read-only databases), it may not be required to persist data, or persist changes to the data. This database supports the in-memory mode, where the data is not persisted. features_1353_p=\ In some cases, only one connection to a in-memory database is required. This means the database to be opened is private. In this case, the database URL is <code>jdbc\:h2\:mem\:</code> Opening two connections within the same virtual machine means opening two different (private) databases. features_1354_p=\ Sometimes multiple connections to the same in-memory database are required. In this case, the database URL must include a name. Example\: <code>jdbc\:h2\:mem\:db1</code>. Accessing the same database using this URL only works within the same virtual machine and class loader environment. features_1355_p=\ To access an in-memory database from another process or from another computer, you need to start a TCP server in the same process as the in-memory database was created. The other processes then need to access the database over TCP/IP or TLS, using a database URL such as\: <code>jdbc\:h2\:tcp\://localhost/mem\:db1</code>. features_1356_p=\ By default, closing the last connection to a database closes the database. For an in-memory database, this means the content is lost. To keep the database open, add <code>;DB_CLOSE_DELAY\=-1</code> to the database URL. 
To keep the content of an in-memory database as long as the virtual machine is alive, use <code>jdbc\:h2\:mem\:test;DB_CLOSE_DELAY\=-1</code>. features_1357_h2=Database Files Encryption features_1358_p=\ The database files can be encrypted. Three encryption algorithms are supported\: features_1359_li="AES" - also known as Rijndael, only AES-128 is implemented. features_1360_li="XTEA" - the 32 round version. features_1361_li="FOG" - pseudo-encryption only useful for hiding data from a text editor. features_1362_p=\ To use file encryption, you need to specify the encryption algorithm (the 'cipher') and the file password (in addition to the user password) when connecting to the database. features_1363_h3=Creating a New Database with File Encryption features_1364_p=\ By default, a new database is automatically created if it does not exist yet. To create an encrypted database, connect to it as if it already existed. features_1365_h3=Connecting to an Encrypted Database features_1366_p=\ The encryption algorithm is set in the database URL, and the file password is specified in the password field, before the user password. A single space separates the file password and the user password; the file password itself may not contain spaces. File passwords and user passwords are case sensitive. Here is an example to connect to a password-encrypted database\: features_1367_h3=Encrypting or Decrypting a Database features_1368_p=\ To encrypt an existing database, use the <code>ChangeFileEncryption</code> tool. This tool can also decrypt an encrypted database, or change the file encryption key. The tool is available from within the H2 Console in the tools section, or you can run it from the command line. The following command line will encrypt the database <code>test</code> in the user home directory with the file password <code>filepwd</code> and the encryption algorithm AES\: features_1369_h2=Database File Locking features_1370_p=\ Whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database terminates, this lock file is deleted. features_1371_p=\ The following file locking methods are implemented\: features_1372_li=The default method is <code>FILE</code> and uses a watchdog thread to protect the database file. The watchdog reads the lock file each second. features_1373_li=The second method is <code>SOCKET</code> and opens a server socket. The socket method does not require reading the lock file every second. The socket method should only be used if the database files are only accessed by one (and always the same) computer. features_1374_li=The third method is <code>FS</code>. This will use native file locking using <code>FileChannel.lock</code>. features_1375_li=It is also possible to open the database without file locking; in this case it is up to the application to protect the database files. Failing to do so will result in a corrupted database. Using the method <code>NO</code> forces the database to not create a lock file at all. Please note that this is unsafe as another process is able to open the same database, possibly leading to data corruption. features_1376_p=\ To open the database with a different file locking method, use the parameter <code>FILE_LOCK</code>. A code sketch at the end of this section opens the database with the 'socket' locking method. features_1377_p=\ For more information about the algorithms, see <a href\="advanced.html\#file_locking_protocols">Advanced / File Locking Protocols</a>.
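The "Database File Locking" section above refers to code that opens the database with the 'socket' locking method. A minimal sketch, assuming a database named "test" in the user home directory and the default "sa" user; the class name SocketLockExample is only illustrative.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SocketLockExample {
        public static void main(String[] args) throws Exception {
            // FILE_LOCK=SOCKET selects the socket-based file locking method described above.
            String url = "jdbc:h2:~/test;FILE_LOCK=SOCKET";
            try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
                System.out.println("Database opened using the socket file locking method");
            }
        }
    }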
features_1378_h2=Opening a Database Only if it Already Exists features_1379_p=\ By default, when an application calls <code>DriverManager.getConnection(url, ...)</code> and the database specified in the URL does not yet exist, a new (empty) database is created. In some situations, it is better to restrict creating new databases, and only allow to open existing databases. To do this, add <code>;IFEXISTS\=TRUE</code> to the database URL. In this case, if the database does not already exist, an exception is thrown when trying to connect. The connection only succeeds when the database already exists. The complete URL may look like this\: features_1380_h2=Closing a Database features_1381_h3=Delayed Database Closing features_1382_p=\ Usually, a database is closed when the last connection to it is closed. In some situations this slows down the application, for example when it is not possible to keep at least one connection open. The automatic closing of a database can be delayed or disabled with the SQL statement <code>SET DB_CLOSE_DELAY <seconds></code>. The parameter <seconds> specifies the number of seconds to keep a database open after the last connection to it was closed. The following statement will keep a database open for 10 seconds after the last connection was closed\: features_1383_p=\ The value -1 means the database is not closed automatically. The value 0 is the default and means the database is closed when the last connection is closed. This setting is persistent and can be set by an administrator only. It is possible to set the value in the database URL\: <code>jdbc\:h2\:~/test;DB_CLOSE_DELAY\=10</code>. features_1384_h3=Don't Close a Database when the VM Exits features_1385_p=\ By default, a database is closed when the last connection is closed. However, if it is never closed, the database is closed when the virtual machine exits normally, using a shutdown hook. In some situations, the database should not be closed in this case, for example because the database is still used at virtual machine shutdown (to store the shutdown process in the database for example). For those cases, the automatic closing of the database can be disabled in the database URL. The first connection (the one that is opening the database) needs to set the option in the database URL (it is not possible to change the setting afterwards). The database URL to disable database closing on exit is\: features_1386_h2=Execute SQL on Connection features_1387_p=\ Sometimes, particularly for in-memory databases, it is useful to be able to execute DDL or DML commands automatically when a client connects to a database. This functionality is enabled via the INIT property. Note that multiple commands may be passed to INIT, but the semicolon delimiter must be escaped, as in the example below. features_1388_p=\ Please note the double backslash is only required in a Java or properties file. In a GUI, or in an XML file, only one backslash is required\: features_1389_p=\ Backslashes within the init script (for example within a runscript statement, to specify the folder names in Windows) need to be escaped as well (using a second backslash). It might be simpler to avoid backslashes in folder names for this reason; use forward slashes instead. features_1390_h2=Ignore Unknown Settings features_1391_p=\ Some applications (for example OpenOffice.org Base) pass some additional parameters when connecting to the database. Why those parameters are passed is unknown. 
The parameters <code>PREFERDOSLIKELINEENDS</code> and <code>IGNOREDRIVERPRIVILEGES</code> are such examples; they are simply ignored to improve the compatibility with OpenOffice.org. If an application passes other parameters when connecting to the database, usually the database throws an exception saying the parameter is not supported. It is possible to ignore such parameters by adding <code>;IGNORE_UNKNOWN_SETTINGS\=TRUE</code> to the database URL. features_1392_h2=Changing Other Settings when Opening a Connection features_1393_p=\ In addition to the settings already described, other database settings can be passed in the database URL. Adding <code>;setting\=value</code> at the end of a database URL is the same as executing the statement <code>SET setting value</code> just after connecting. For a list of supported settings, see <a href\="grammar.html">SQL Grammar</a> or the <a href\="../javadoc/org/h2/engine/DbSettings.html">DbSettings</a> javadoc. features_1394_h2=Custom File Access Mode features_1395_p=\ Usually, the database opens the database file with the access mode <code>rw</code>, meaning read-write (except for read only databases, where the mode <code>r</code> is used). To open a database in read-only mode if the database file is not read-only, use <code>ACCESS_MODE_DATA\=r</code>. Also supported are <code>rws</code> and <code>rwd</code>. This setting must be specified in the database URL\: features_1396_p=\ For more information see <a href\="advanced.html\#durability_problems">Durability Problems</a>. On many operating systems the access mode <code>rws</code> does not guarantee that the data is written to the disk. features_1397_h2=Multiple Connections features_1398_h3=Opening Multiple Databases at the Same Time features_1399_p=\ An application can open multiple databases at the same time, including multiple connections to the same database. The number of open databases is only limited by the available memory. features_1400_h3=Multiple Connections to the Same Database\: Client/Server features_1401_p=\ If you want to access the same database at the same time from different processes or computers, you need to use the client / server mode. In this case, one process acts as the server, and the other processes (that could reside on other computers as well) connect to the server via TCP/IP (or TLS over TCP/IP for improved security). features_1402_h3=Multithreading Support features_1403_p=\ This database is multithreading-safe. If an application is multi-threaded, it does not need to worry about synchronizing access to the database. An application should normally use one connection per thread. This database synchronizes access to the same connection, but other databases may not do this. To get higher concurrency, you need to use multiple connections. features_1404_p=\ By default, requests to the same database are synchronized. That means an application can use multiple threads that access the same database at the same time, however, if one thread executes a long running query, the other threads need to wait. To enable concurrent database usage, see the setting <code>MULTI_THREADED</code>. features_1405_h3=Locking, Lock-Timeout, Deadlocks features_1406_p=\ Please note MVCC is enabled in version 1.4.x by default, when using the MVStore. In this case, table level locking is not used. If <a href\="advanced.html\#mvcc">multi-version concurrency</a> is not used, the database uses table level locks to give each connection a consistent state of the data.
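A short sketch of the custom access mode described above; the database name and credentials are placeholders. An existing database can be opened read-only even though its file is writable:

import java.sql.Connection;
import java.sql.DriverManager;

public class OpenDataReadOnly {
    public static void main(String[] args) throws Exception {
        // ACCESS_MODE_DATA=r opens the data file read-only;
        // the database then behaves like a read-only database.
        String url = "jdbc:h2:~/test;ACCESS_MODE_DATA=r";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            System.out.println("read-only: " + conn.isReadOnly());
        }
    }
}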
There are two kinds of locks\: read locks (shared locks) and write locks (exclusive locks). All locks are released when the transaction commits or rolls back. When using the default transaction isolation level 'read committed', read locks are already released after each statement. features_1407_p=\ If a connection wants to read from a table, and there is no write lock on the table, then a read lock is added to the table. If there is a write lock, then this connection waits for the other connection to release the lock. If a connection cannot get a lock for a specified time, then a lock timeout exception is thrown. features_1408_p=\ Usually, <code>SELECT</code> statements will generate read locks. This includes subqueries. Statements that modify data use write locks. It is also possible to lock a table exclusively without modifying data, using the statement <code>SELECT ... FOR UPDATE</code>. The statements <code>COMMIT</code> and <code>ROLLBACK</code> release all open locks. The commands <code>SAVEPOINT</code> and <code>ROLLBACK TO SAVEPOINT</code> don't affect locks. The locks are also released when the autocommit mode changes, and for connections with autocommit set to true (this is the default), locks are released after each statement. The following statements generate locks\: features_1409_th=Type of Lock features_1410_th=SQL Statement features_1411_td=Read features_1412_td=SELECT * FROM TEST; features_1413_td=\ CALL SELECT MAX(ID) FROM TEST; features_1414_td=\ SCRIPT; features_1415_td=Write features_1416_td=SELECT * FROM TEST WHERE 1\=0 FOR UPDATE; features_1417_td=Write features_1418_td=INSERT INTO TEST VALUES(1, 'Hello'); features_1419_td=\ INSERT INTO TEST SELECT * FROM TEST; features_1420_td=\ UPDATE TEST SET NAME\='Hi'; features_1421_td=\ DELETE FROM TEST; features_1422_td=Write features_1423_td=ALTER TABLE TEST ...; features_1424_td=\ CREATE INDEX ... ON TEST ...; features_1425_td=\ DROP INDEX ...; features_1426_p=\ The number of milliseconds until a lock timeout exception is thrown can be set separately for each connection using the SQL command <code>SET LOCK_TIMEOUT <milliseconds></code>. The initial lock timeout (that is the timeout used for new connections) can be set using the SQL command <code>SET DEFAULT_LOCK_TIMEOUT <milliseconds></code>. The default lock timeout is persistent. features_1427_h3=Avoiding Deadlocks features_1428_p=\ To avoid deadlocks, ensure that all transactions lock the tables in the same order (for example in alphabetical order), and avoid upgrading read locks to write locks. Both can be achieved by explicitly locking tables using <code>SELECT ... FOR UPDATE</code>.
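The following sketch (assuming a table TEST already exists) shows how a connection could shorten its lock timeout and take an exclusive lock without modifying data:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExplicitLocking {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
                Statement stat = conn.createStatement()) {
            // wait at most 2 seconds for locks held by other connections
            stat.execute("SET LOCK_TIMEOUT 2000");
            conn.setAutoCommit(false);
            // write-lock the table without changing any rows
            stat.executeQuery("SELECT * FROM TEST WHERE 1=0 FOR UPDATE");
            // ... work with the locked table ...
            conn.commit(); // releases the lock
        }
    }
}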
features_1447_td=\ Format\: <code><database>.trace.db</code> features_1448_td=\ Renamed to <code><database>.trace.db.old</code> if it gets too big. features_1449_td=\ 0 or 1 per database features_1450_td=\ test.lobs.db/* features_1451_td=\ Directory containing one file for each features_1452_td=\ BLOB or CLOB value larger than a certain size. features_1453_td=\ Format\: <code><id>.t<tableId>.lob.db</code> features_1454_td=\ 1 per large object features_1455_td=\ test.123.temp.db features_1456_td=\ Temporary file. features_1457_td=\ Contains a temporary blob or a large result set. features_1458_td=\ Format\: <code><database>.<id>.temp.db</code> features_1459_td=\ 1 per object features_1460_h3=Moving and Renaming Database Files features_1461_p=\ Database name and location are not stored inside the database files. features_1462_p=\ While a database is closed, the files can be moved to another directory, and they can be renamed as well (as long as all files of the same database start with the same name and the respective extensions are unchanged). features_1463_p=\ As there is no platform specific data in the files, they can be moved to other operating systems without problems. features_1464_h3=Backup features_1465_p=\ When the database is closed, it is possible to back up the database files. features_1466_p=\ To back up data while the database is running, the SQL commands <code>SCRIPT</code> and <code>BACKUP</code> can be used. features_1467_h2=Logging and Recovery features_1468_p=\ Whenever data is modified in the database and those changes are committed, the changes are written to the transaction log (except for in-memory objects). The changes to the main data area itself are usually written later on, to optimize disk access. If there is a power failure, the main data area is not up-to-date, but because the changes are in the transaction log, the next time the database is opened, the changes are re-applied automatically. features_1469_h2=Compatibility features_1470_p=\ All database engines behave a little bit differently. Where possible, H2 supports the ANSI SQL standard, and tries to be compatible with other databases. There are still a few differences however\: features_1471_p=\ In MySQL text columns are case insensitive by default, while in H2 they are case sensitive. However, H2 supports case insensitive columns as well. To create tables with case insensitive texts, append <code>IGNORECASE\=TRUE</code> to the database URL (example\: <code>jdbc\:h2\:~/test;IGNORECASE\=TRUE</code>). features_1472_h3=Compatibility Modes features_1473_p=\ For certain features, this database can emulate the behavior of specific databases. However, only a small subset of the differences between databases is implemented in this way. Here is the list of currently supported modes and the differences to the regular mode\: features_1474_h3=DB2 Compatibility Mode features_1475_p=\ To use the IBM DB2 mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=DB2</code> or the SQL statement <code>SET MODE DB2</code>. features_1476_li=For aliased columns, <code>ResultSetMetaData.getColumnName()</code> returns the alias name and <code>getTableName()</code> returns <code>null</code>. features_1477_li=Support for the syntax <code>[OFFSET .. ROW] [FETCH ... ONLY]</code> as an alternative for <code>LIMIT .. OFFSET</code>. features_1478_li=Concatenating <code>NULL</code> with another value results in the other value. features_1479_li=Support the pseudo-table SYSIBM.SYSDUMMY1.
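A minimal sketch of an online backup while the database is running (file names are placeholders); SCRIPT writes a SQL script, BACKUP creates a zip file of the database files:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OnlineBackup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
                Statement stat = conn.createStatement()) {
            // logical backup: a SQL script that re-creates the database
            stat.execute("SCRIPT TO 'backup.sql'");
            // physical backup: a zip file containing the database files
            stat.execute("BACKUP TO 'backup.zip'");
        }
    }
}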
features_1480_h3=Derby Compatibility Mode features_1481_p=\ To use the Apache Derby mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=Derby</code> or the SQL statement <code>SET MODE Derby</code>. features_1482_li=For aliased columns, <code>ResultSetMetaData.getColumnName()</code> returns the alias name and <code>getTableName()</code> returns <code>null</code>. features_1483_li=For unique indexes, <code>NULL</code> is distinct. That means only one row with <code>NULL</code> in one of the columns is allowed. features_1484_li=Concatenating <code>NULL</code> with another value results in the other value. features_1485_li=Support the pseudo-table SYSIBM.SYSDUMMY1. features_1486_h3=HSQLDB Compatibility Mode features_1487_p=\ To use the HSQLDB mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=HSQLDB</code> or the SQL statement <code>SET MODE HSQLDB</code>. features_1488_li=For aliased columns, <code>ResultSetMetaData.getColumnName()</code> returns the alias name and <code>getTableName()</code> returns <code>null</code>. features_1489_li=When converting the scale of decimal data, the number is only converted if the new scale is smaller than the current scale. Usually, the scale is converted and 0s are added if required. features_1490_li=For unique indexes, <code>NULL</code> is distinct. That means only one row with <code>NULL</code> in one of the columns is allowed. features_1491_li=Text can be concatenated using '+'. features_1492_h3=MS SQL Server Compatibility Mode features_1493_p=\ To use the MS SQL Server mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=MSSQLServer</code> or the SQL statement <code>SET MODE MSSQLServer</code>. features_1494_li=For aliased columns, <code>ResultSetMetaData.getColumnName()</code> returns the alias name and <code>getTableName()</code> returns <code>null</code>. features_1495_li=Identifiers may be quoted using square brackets as in <code>[Test]</code>. features_1496_li=For unique indexes, <code>NULL</code> is distinct. That means only one row with <code>NULL</code> in one of the columns is allowed. features_1497_li=Concatenating <code>NULL</code> with another value results in the other value. features_1498_li=Text can be concatenated using '+'. features_1499_h3=MySQL Compatibility Mode features_1500_p=\ To use the MySQL mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=MySQL</code> or the SQL statement <code>SET MODE MySQL</code>. features_1501_li=When inserting data, if a column is defined to be <code>NOT NULL</code> and <code>NULL</code> is inserted, then a 0 (or empty string, or the current timestamp for timestamp columns) value is used. Usually, this operation is not allowed and an exception is thrown. features_1502_li=Creating indexes in the <code>CREATE TABLE</code> statement is allowed using <code>INDEX(..)</code> or <code>KEY(..)</code>. Example\: <code>create table test(id int primary key, name varchar(255), key idx_name(name));</code> features_1503_li=Meta data calls return identifiers in lower case. features_1504_li=When converting a floating point number to an integer, the fractional digits are not truncated, but the value is rounded. features_1505_li=Concatenating <code>NULL</code> with another value results in the other value. features_1506_p=\ Text comparison in MySQL is case insensitive by default, while in H2 it is case sensitive (as in most other databases). H2 does support case insensitive text comparison, but it needs to be set separately, using <code>SET IGNORECASE TRUE</code>. 
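As a short sketch of the MySQL compatibility mode listed above (the table is just an example), the MySQL-style KEY(..) clause inside CREATE TABLE is accepted when MODE=MySQL is set in the URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MySqlModeExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:h2:mem:test;MODE=MySQL";
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
                Statement stat = conn.createStatement()) {
            // creating an index within CREATE TABLE is allowed in this mode
            stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, "
                    + "NAME VARCHAR(255), KEY IDX_NAME(NAME))");
        }
    }
}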
This affects comparison using <code>\=, LIKE, REGEXP</code>. features_1507_h3=Oracle Compatibility Mode features_1508_p=\ To use the Oracle mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=Oracle</code> or the SQL statement <code>SET MODE Oracle</code>. features_1509_li=For aliased columns, <code>ResultSetMetaData.getColumnName()</code> returns the alias name and <code>getTableName()</code> returns <code>null</code>. features_1510_li=When using unique indexes, multiple rows with <code>NULL</code> in all columns are allowed, however it is not allowed to have multiple rows with the same values otherwise. features_1511_li=Concatenating <code>NULL</code> with another value results in the other value. features_1512_li=Empty strings are treated like <code>NULL</code> values. features_1513_h3=PostgreSQL Compatibility Mode features_1514_p=\ To use the PostgreSQL mode, use the database URL <code>jdbc\:h2\:~/test;MODE\=PostgreSQL</code> or the SQL statement <code>SET MODE PostgreSQL</code>. features_1515_li=For aliased columns, <code>ResultSetMetaData.getColumnName()</code> returns the alias name and <code>getTableName()</code> returns <code>null</code>. features_1516_li=When converting a floating point number to an integer, the fractional digits are not truncated, but the value is rounded. features_1517_li=The system columns <code>CTID</code> and <code>OID</code> are supported. features_1518_li=LOG(x) is base 10 in this mode. features_1519_h2=Auto-Reconnect features_1520_p=\ The auto-reconnect feature causes the JDBC driver to reconnect to the database if the connection is lost. The automatic re-connect only occurs when auto-commit is enabled; if auto-commit is disabled, an exception is thrown. To enable this mode, append <code>;AUTO_RECONNECT\=TRUE</code> to the database URL. features_1521_p=\ Re-connecting will open a new session. After an automatic re-connect, variables and local temporary table definitions (excluding data) are re-created. The system table <code>INFORMATION_SCHEMA.SESSION_STATE</code> contains all client side state that is re-created. features_1522_p=\ If another connection uses the database in exclusive mode (enabled using <code>SET EXCLUSIVE 1</code> or <code>SET EXCLUSIVE 2</code>), then this connection will try to re-connect until the exclusive mode ends. features_1523_h2=Automatic Mixed Mode features_1524_p=\ Multiple processes can access the same database without having to start the server manually. To do that, append <code>;AUTO_SERVER\=TRUE</code> to the database URL. You can use the same database URL regardless of whether the database is already open or not. This feature doesn't work with in-memory databases. Example database URL\: features_1525_p=\ Use the same URL for all connections to this database. Internally, when using this mode, the first connection to the database is made in embedded mode, and additionally a server is started internally (as a daemon thread). If the database is already open in another process, the server mode is used automatically. The IP address and port of the server are stored in the file <code>.lock.db</code>; that's why in-memory databases can't be supported. features_1526_p=\ The application that opens the first connection to the database uses the embedded mode, which is faster than the server mode. Therefore the main application should open the database first if possible. The first connection automatically starts a server on a random port.
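The following is a sketch of the automatic mixed mode described above; the file path and credentials are placeholders, and all participating applications simply use the identical URL:

import java.sql.Connection;
import java.sql.DriverManager;

public class AutoMixedMode {
    public static void main(String[] args) throws Exception {
        // The first process to connect opens the database in embedded mode and
        // starts a server; later processes connect to that server automatically.
        String url = "jdbc:h2:/data/test;AUTO_SERVER=TRUE";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            // ... use the database ...
        }
    }
}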
This server allows remote connections, however only to this database (to ensure that, the client reads the <code>.lock.db</code> file and sends the random key that is stored there to the server). When the first connection is closed, the server stops. If other (remote) connections are still open, one of them will then start a server (auto-reconnect is enabled automatically). features_1527_p=\ All processes need to have access to the database files. If the first connection is closed (the connection that started the server), open transactions of other connections will be rolled back (this may not be a problem if you don't disable autocommit). Explicit client/server connections (using <code>jdbc\:h2\:tcp\://</code> or <code>ssl\://</code>) are not supported. This mode is not supported for in-memory databases. features_1528_p=\ Here is an example of how to use this mode. Application 1 and 2 are not necessarily started on the same computer, but they need to have access to the database files. Application 1 and 2 are typically two different processes (however they could run within the same process). features_1529_p=\ When using this feature, by default the server uses any free TCP port. The port can be set manually using <code>AUTO_SERVER_PORT\=9090</code>. features_1530_h2=Page Size features_1531_p=\ The page size for new databases is 2 KB (2048 bytes), unless the page size is set explicitly in the database URL using <code>PAGE_SIZE\=</code> when the database is created. The page size of existing databases can not be changed, so this property needs to be set when the database is created. features_1532_h2=Using the Trace Options features_1533_p=\ To find problems in an application, it is sometimes helpful to see which database operations were executed. This database offers the following trace features\: features_1534_li=Trace to <code>System.out</code> and/or to a file features_1535_li=Support for trace levels <code>OFF, ERROR, INFO, DEBUG</code> features_1536_li=The maximum size of the trace file can be set features_1537_li=It is possible to generate Java source code from the trace file features_1538_li=Trace can be enabled at runtime by manually creating a file features_1539_h3=Trace Options features_1540_p=\ The simplest way to enable the trace option is setting it in the database URL. There are two settings, one for <code>System.out</code> (<code>TRACE_LEVEL_SYSTEM_OUT</code>) tracing, and one for file tracing (<code>TRACE_LEVEL_FILE</code>). The trace levels are 0 for <code>OFF</code>, 1 for <code>ERROR</code> (the default), 2 for <code>INFO</code>, and 3 for <code>DEBUG</code>. A database URL with both levels set to <code>DEBUG</code> is\: features_1541_p=\ The trace level can be changed at runtime by executing the SQL command <code>SET TRACE_LEVEL_SYSTEM_OUT level</code> (for <code>System.out</code> tracing) or <code>SET TRACE_LEVEL_FILE level</code> (for file tracing). Example\: features_1542_h3=Setting the Maximum Size of the Trace File features_1543_p=\ When using a high trace level, the trace file can get very big quickly. The default size limit is 16 MB; if the trace file exceeds this limit, it is renamed to <code>.old</code> and a new file is created. If another such file exists, it is deleted. To limit the size to a certain number of megabytes, use <code>SET TRACE_MAX_FILE_SIZE mb</code>. Example\: features_1544_h3=Java Code Generation features_1545_p=\ When setting the trace level to <code>INFO</code> or <code>DEBUG</code>, Java source code is generated as well.
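A brief sketch of the trace settings above (the database name is a placeholder): tracing can be enabled in the URL and adjusted at runtime.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TraceSettings {
    public static void main(String[] args) throws Exception {
        // 3 = DEBUG for both the trace file and System.out
        String url = "jdbc:h2:~/test;TRACE_LEVEL_FILE=3;TRACE_LEVEL_SYSTEM_OUT=3";
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
                Statement stat = conn.createStatement()) {
            stat.execute("SET TRACE_LEVEL_FILE 2");    // lower the file level to INFO
            stat.execute("SET TRACE_MAX_FILE_SIZE 4"); // limit the trace file to 4 MB
        }
    }
}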
This simplifies reproducing problems. The trace file looks like this\: features_1546_p=\ To filter the Java source code, use the <code>ConvertTraceFile</code> tool as follows\: features_1547_p=\ The generated file <code>Test.java</code> will contain the Java source code. The generated source code may be too large to compile (the size of a Java method is limited). If this is the case, the source code needs to be split into multiple methods. The password is not listed in the trace file and therefore not included in the source code. features_1548_h2=Using Other Logging APIs features_1549_p=\ By default, this database uses its own native 'trace' facility. This facility is called 'trace' and not 'log' within this database to avoid confusion with the transaction log. Trace messages can be written to both file and <code>System.out</code>. In most cases, this is sufficient, however sometimes it is better to use the same facility as the application, for example Log4j. To do that, this database supports SLF4J. features_1550_a=SLF4J features_1551_p=\ is a simple facade for various logging APIs and allows plugging in the desired implementation at deployment time. SLF4J supports implementations such as Logback, Log4j, Jakarta Commons Logging (JCL), Java logging, x4juli, and Simple Log. features_1552_p=\ To enable SLF4J, set the file trace level to 4 in the database URL\: features_1553_p=\ Changing the log mechanism is not possible after the database is open; that means executing the SQL statement <code>SET TRACE_LEVEL_FILE 4</code> when the database is already open will not have the desired effect. To use SLF4J, all required jar files need to be in the classpath. The logger name is <code>h2database</code>. If it does not work, check the file <code><database>.trace.db</code> for error messages. features_1554_h2=Read Only Databases features_1555_p=\ If the database files are read-only, then the database is read-only as well. It is not possible to create new tables, or to add or modify data in this database. Only <code>SELECT</code> and <code>CALL</code> statements are allowed. To create a read-only database, close the database. Then, make the database file read-only. When you open the database now, it is read-only. There are two ways an application can find out whether a database is read-only\: by calling <code>Connection.isReadOnly()</code> or by executing the SQL statement <code>CALL READONLY()</code>. features_1556_p=\ Using the <a href\="\#custom_access_mode">Custom Access Mode</a> <code>r</code>, the database can also be opened in read-only mode, even if the database file is not read only. features_1557_h2=Read Only Databases in Zip or Jar File features_1558_p=\ To create a read-only database in a zip file, first create a regular persistent database, and then create a backup. The database must not have pending changes; that means you need to close all connections to the database first. To speed up opening the read-only database and running queries, the database should be closed using <code>SHUTDOWN DEFRAG</code>. If you are using a database named <code>test</code>, an easy way to create a zip file is using the <code>Backup</code> tool. You can start the tool from the command line, or from within the H2 Console (Tools - Backup). Please note that the database must be closed when the backup is created. Therefore, the SQL statement <code>BACKUP TO</code> can not be used.
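A small sketch of the two ways to detect a read-only database mentioned above (URL and credentials are placeholders; the printed values depend on how the database was opened):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadOnlyCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
                Statement stat = conn.createStatement()) {
            // via JDBC
            System.out.println("JDBC: " + conn.isReadOnly());
            // via SQL
            try (ResultSet rs = stat.executeQuery("CALL READONLY()")) {
                rs.next();
                System.out.println("SQL: " + rs.getBoolean(1));
            }
        }
    }
}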
features_1559_p=\ When the zip file is created, you can open the database in the zip file using the following database URL\: features_1560_p=\ Databases in zip files are read-only. The performance for some queries will be slower than when using a regular database, because random access in zip files is not supported (only streaming). How much this affects the performance depends on the queries and the data. The database is not read in memory; therefore large databases are supported as well. The same indexes are used as when using a regular database. features_1561_p=\ If the database is larger than a few megabytes, performance is much better if the database file is split into multiple smaller files, because random access in compressed files is not possible. See also the sample application <a href\="https\://github.com/h2database/h2database/tree/master/h2/src/test/org/h2/samples/ReadOnlyDatabaseInZip.java">ReadOnlyDatabaseInZip</a>. features_1562_h3=Opening a Corrupted Database features_1563_p=\ If a database cannot be opened because the boot info (the SQL script that is run at startup) is corrupted, then the database can be opened by specifying a database event listener. The exceptions are logged, but opening the database will continue. features_1564_h2=Computed Columns / Function Based Index features_1565_p=\ A computed column is a column whose value is calculated before storing. The formula is evaluated when the row is inserted, and re-evaluated every time the row is updated. One use case is to automatically update the last-modification time\: features_1566_p=\ Function indexes are not directly supported by this database, but they can be emulated by using computed columns. For example, if an index on the upper-case version of a column is required, create a computed column with the upper-case version of the original column, and create an index for this column\: features_1567_p=\ When inserting data, it is not required (and not allowed) to specify a value for the upper-case version of the column, because the value is generated. But you can use the column when querying the table\: features_1568_h2=Multi-Dimensional Indexes features_1569_p=\ A tool is provided to execute efficient multi-dimension (spatial) range queries. This database does not support a specialized spatial index (R-Tree or similar). Instead, the B-Tree index is used. For each record, the multi-dimensional key is converted (mapped) to a single dimensional (scalar) value. This value specifies the location on a space-filling curve. features_1570_p=\ Currently, Z-order (also called N-order or Morton-order) is used; Hilbert curve could also be used, but the implementation is more complex. The algorithm to convert the multi-dimensional value is called bit-interleaving. The scalar value is indexed using a B-Tree index (usually using a computed column). features_1571_p=\ The method can result in a drastic performance improvement over just using an index on the first column. Depending on the data and number of dimensions, the improvement is usually higher than factor 5. The tool generates a SQL query from a specified multi-dimensional range. The method used is not database dependent, and the tool can easily be ported to other databases. For an example how to use the tool, please have a look at the sample code provided in <code>TestMultiDimension.java</code>. features_1572_h2=User-Defined Functions and Stored Procedures features_1573_p=\ In addition to the built-in functions, this database supports user-defined Java functions. 
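The emulation of a function based index described above could look like the following sketch (the table and column names are examples only):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ComputedColumnIndex {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
                Statement stat = conn.createStatement()) {
            // the computed column stores the upper-case version of NAME
            stat.execute("CREATE TABLE ADDRESS(ID INT PRIMARY KEY, "
                    + "NAME VARCHAR, UPPER_NAME VARCHAR AS UPPER(NAME))");
            stat.execute("CREATE INDEX IDX_UPPER_NAME ON ADDRESS(UPPER_NAME)");
            stat.execute("INSERT INTO ADDRESS(ID, NAME) VALUES(1, 'Miller')");
            // query the computed column so the index can be used
            stat.executeQuery("SELECT * FROM ADDRESS WHERE UPPER_NAME = 'MILLER'");
        }
    }
}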
In this database, Java functions can be used as stored procedures as well. A function must be declared (registered) before it can be used. A function can be defined using source code, or as a reference to a compiled class that is available in the classpath. By default, the function aliases are stored in the current schema. features_1574_h3=Referencing a Compiled Method features_1575_p=\ When referencing a method, the class must already be compiled and included in the classpath where the database is running. Only static Java methods are supported; both the class and the method must be public. Example Java class\: features_1576_p=\ The Java function must be registered in the database by calling <code>CREATE ALIAS ... FOR</code>\: features_1577_p=\ For a complete sample application, see <code>src/test/org/h2/samples/Function.java</code>. features_1578_h3=Declaring Functions as Source Code features_1579_p=\ When defining a function alias with source code, the database tries to compile the source code using the Sun Java compiler (the class <code>com.sun.tools.javac.Main</code>) if the <code>tools.jar</code> is in the classpath. If not, <code>javac</code> is run as a separate process. Only the source code is stored in the database; the class is compiled each time the database is re-opened. Source code is usually passed as dollar quoted text to avoid escaping problems, however single quotes can be used as well. Example\: features_1580_p=\ By default, the three packages <code>java.util, java.math, java.sql</code> are imported. The method name (<code>nextPrime</code> in the example above) is ignored. Method overloading is not supported when declaring functions as source code, that means only one method may be declared for an alias. If different import statements are required, they must be declared at the beginning and separated with the tag <code>@CODE</code>\: features_1581_p=\ The following template is used to create a complete Java class\: features_1582_h3=Method Overloading features_1583_p=\ Multiple methods may be bound to a SQL function if the class is already compiled and included in the classpath. Each Java method must have a different number of arguments. Method overloading is not supported when declaring functions as source code. features_1584_h3=Function Data Type Mapping features_1585_p=\ Functions that accept non-nullable parameters such as <code>int</code> will not be called if one of those parameters is <code>NULL</code>. Instead, the result of the function is <code>NULL</code>. If the function should be called if a parameter is <code>NULL</code>, you need to use <code>java.lang.Integer</code> instead. features_1586_p=\ SQL types are mapped to Java classes and vice-versa as in the JDBC API. For details, see <a href\="datatypes.html">Data Types</a>. There are a few special cases\: <code>java.lang.Object</code> is mapped to <code>OTHER</code> (a serialized object). Therefore, <code>java.lang.Object</code> can not be used to match all SQL types (matching all SQL types is not supported). The second special case is <code>Object[]</code>\: arrays of any class are mapped to <code>ARRAY</code>. Objects of type <code>org.h2.value.Value</code> (the internal value class) are passed through without conversion. features_1587_h3=Functions That Require a Connection features_1588_p=\ If the first parameter of a Java function is a <code>java.sql.Connection</code>, then the connection to database is provided. This connection does not need to be closed before returning. 
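A compact sketch of registering and calling a compiled static method (the class and method names are examples; the class must be on the classpath of the database engine):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FunctionAliasExample {
    // the referenced method must be public and static
    public static int isPrime(int value) {
        return new java.math.BigInteger(String.valueOf(value)).isProbablePrime(100) ? 1 : 0;
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
                Statement stat = conn.createStatement()) {
            stat.execute("CREATE ALIAS IS_PRIME FOR \"FunctionAliasExample.isPrime\"");
            try (ResultSet rs = stat.executeQuery("CALL IS_PRIME(13)")) {
                rs.next();
                System.out.println(rs.getInt(1)); // prints 1
            }
        }
    }
}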
When calling the method from within the SQL statement, this connection parameter does not need to be (can not be) specified. features_1589_h3=Functions Throwing an Exception features_1590_p=\ If a function throws an exception, then the current statement is rolled back and the exception is thrown to the application. SQLExceptions are directly re-thrown to the calling application; all other exceptions are first converted to a SQLException. features_1591_h3=Functions Returning a Result Set features_1592_p=\ Functions may return a result set. Such a function can be called with the <code>CALL</code> statement\: features_1593_h3=Using SimpleResultSet features_1594_p=\ A function can create a result set using the <code>SimpleResultSet</code> tool\: features_1595_h3=Using a Function as a Table features_1596_p=\ A function that returns a result set can be used like a table. However, in this case the function is called at least twice\: first while parsing the statement to collect the column names (with parameters set to <code>null</code> where not known at compile time), and then while executing the statement to get the data (possibly multiple times if this is a join). If the function is called just to get the column list, the URL of the connection passed to the function is <code>jdbc\:columnlist\:connection</code>. Otherwise, the URL of the connection is <code>jdbc\:default\:connection</code>. features_1597_h2=Pluggable or User-Defined Tables features_1598_p=\ For situations where you need to expose other data-sources to the SQL engine as a table, there are "pluggable tables". For some examples, have a look at the code in <code>org.h2.test.db.TestTableEngines</code>. features_1599_p=\ In order to create your own TableEngine, you need to implement the <code>org.h2.api.TableEngine</code> interface, e.g. something like this\: features_1600_p=\ and then create the table from SQL like this\: features_1601_p=\ It is also possible to pass in parameters to the table engine, like so\: features_1602_p=\ In that case, the parameters are passed down in the <code>tableEngineParams</code> field of the <code>CreateTableData</code> object. features_1603_p=\ It is also possible to specify default table engine params on schema creation\: features_1604_p=\ Params from the schema are used when a <code>CREATE TABLE</code> statement issued on this schema does not specify its own engine params. features_1605_h2=Triggers features_1606_p=\ This database supports Java triggers that are called before or after a row is updated, inserted or deleted. Triggers can be used for complex consistency checks, or to update related data in the database. It is also possible to use triggers to simulate materialized views. For a complete sample application, see <code>src/test/org/h2/samples/TriggerSample.java</code>. A Java trigger must implement the interface <code>org.h2.api.Trigger</code>. The trigger class must be available in the classpath of the database engine (when using the server mode, it must be in the classpath of the server). features_1607_p=\ The connection can be used to query or update data in other tables. The trigger then needs to be defined in the database\: features_1608_p=\ The trigger can be used to veto a change by throwing a <code>SQLException</code>. features_1609_p=\ As an alternative to implementing the <code>Trigger</code> interface, an application can extend the abstract class <code>org.h2.tools.TriggerAdapter</code>. This allows using the <code>ResultSet</code> interface within trigger implementations.
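The SimpleResultSet tool mentioned above could be used roughly as follows (names are examples); the function can then be called with CALL or queried like a table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Types;
import org.h2.tools.SimpleResultSet;

public class ResultSetFunction {
    // a function returning a result set
    public static ResultSet colors() {
        SimpleResultSet rs = new SimpleResultSet();
        rs.addColumn("ID", Types.INTEGER, 10, 0);
        rs.addColumn("NAME", Types.VARCHAR, 255, 0);
        rs.addRow(1, "Red");
        rs.addRow(2, "Green");
        return rs;
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
                Statement stat = conn.createStatement()) {
            stat.execute("CREATE ALIAS COLORS FOR \"ResultSetFunction.colors\"");
            // use the function like a table
            try (ResultSet rs = stat.executeQuery("SELECT * FROM COLORS()")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("ID") + " " + rs.getString("NAME"));
                }
            }
        }
    }
}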
In this case, only the <code>fire</code> method needs to be implemented\: features_1610_h2=Compacting a Database features_1611_p=\ Empty space in the database file is re-used automatically. When closing the database, the database is automatically compacted for up to 200 milliseconds by default. To compact more, use the SQL statement <code>SHUTDOWN COMPACT</code>. However, re-creating the database may further reduce the database size, because this will re-build the indexes. Here is a sample function to do this\: features_1612_p=\ See also the sample application <code>org.h2.samples.Compact</code>. The commands <code>SCRIPT / RUNSCRIPT</code> can be used as well to create a backup of a database and re-build the database from the script. features_1613_h2=Cache Settings features_1614_p=\ The database keeps most frequently used data in the main memory. The amount of memory used for caching can be changed using the setting <code>CACHE_SIZE</code>. This setting can be set in the database connection URL (<code>jdbc\:h2\:~/test;CACHE_SIZE\=131072</code>), or it can be changed at runtime using <code>SET CACHE_SIZE size</code>. The size of the cache, as represented by <code>CACHE_SIZE</code>, is measured in KB, with each KB being 1024 bytes. This setting has no effect for in-memory databases. For persistent databases, the setting is stored in the database and re-used when the database is opened the next time. However, when opening an existing database, the cache size is set to at most half the amount of memory available for the virtual machine (<code>Runtime.getRuntime().maxMemory()</code>), even if the cache size setting stored in the database is larger; however the setting stored in the database is kept. Setting the cache size in the database URL or explicitly using <code>SET CACHE_SIZE</code> overrides this value (even if larger than the physical memory). To get the currently used maximum cache size, use the query <code>SELECT * FROM INFORMATION_SCHEMA.SETTINGS WHERE NAME \= 'info.CACHE_MAX_SIZE'</code>. features_1615_p=\ An experimental scan-resistant cache algorithm "Two Queue" (2Q) is available. To enable it, append <code>;CACHE_TYPE\=TQ</code> to the database URL. The cache might not actually improve performance. If you plan to use it, please run your own test cases first. features_1616_p=\ Also included is an experimental second level soft reference cache. Rows in this cache are only garbage collected on low memory. By default the second level cache is disabled. To enable it, use the prefix <code>SOFT_</code>. Example\: <code>jdbc\:h2\:~/test;CACHE_TYPE\=SOFT_LRU</code>. The cache might not actually improve performance. If you plan to use it, please run your own test cases first. features_1617_p=\ To get information about page reads and writes, and the current caching algorithm in use, call <code>SELECT * FROM INFORMATION_SCHEMA.SETTINGS</code>. The number of pages read / written is listed.
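A hedged sketch of the cache and compaction settings described above; the URL and the 64 MB value are arbitrary examples:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CacheAndCompact {
    public static void main(String[] args) throws Exception {
        // CACHE_SIZE is measured in KB; 65536 KB = 64 MB
        String url = "jdbc:h2:~/test;CACHE_SIZE=65536";
        try (Connection conn = DriverManager.getConnection(url, "sa", "");
                Statement stat = conn.createStatement()) {
            stat.execute("SET CACHE_SIZE 65536"); // can also be changed at runtime
            // fully compact the database while closing it
            stat.execute("SHUTDOWN COMPACT");
        }
    }
}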
fragments_1000_div=\ <span id \= "goTop" onclick\="window.scrollTo(0,0)" style\="color\: \#fff; position\:fixed; font-size\: 20px; cursor\: pointer;">&\#x25b2;</span> fragments_1001_label=Search\: fragments_1002_label=Highlight keyword(s) fragments_1003_a=Home fragments_1004_a=Download fragments_1005_a=Cheat Sheet fragments_1006_b=Documentation fragments_1007_a=Quickstart fragments_1008_a=Installation fragments_1009_a=Tutorial fragments_1010_a=Features fragments_1011_a=Performance fragments_1012_a=Advanced fragments_1013_b=Reference fragments_1014_a=SQL Grammar fragments_1015_a=Functions fragments_1016_a=Data Types fragments_1017_a=Javadoc fragments_1018_a=PDF (1 MB) fragments_1019_b=Support fragments_1020_a=FAQ fragments_1021_a=Error Analyzer fragments_1022_a=Google Group (English) fragments_1023_a=Google Group (Japanese) fragments_1024_a=Google Group (Chinese) fragments_1025_b=Appendix fragments_1026_a=History & Roadmap fragments_1027_a=License fragments_1028_a=Build fragments_1029_a=Links fragments_1030_a=JaQu fragments_1031_a=MVStore fragments_1032_a=Architecture fragments_1033_td= frame_1000_h1=H2 Database Engine frame_1001_p=\ Welcome to H2, the free SQL database. The main features of H2 are\: frame_1002_li=It is free to use for everybody, source code is included frame_1003_li=Written in Java, but also available as native executable frame_1004_li=JDBC and (partial) ODBC API frame_1005_li=Embedded and client/server modes frame_1006_li=Clustering is supported frame_1007_li=A web client is included frame_1008_h2=No Javascript frame_1009_p=\ If you are not automatically redirected to the main page, then Javascript is currently disabled or your browser does not support Javascript. Some features (for example the integrated search) require Javascript. frame_1010_p=\ Please enable Javascript, or go ahead without it\: <a href\="main.html" style\="font-size\: 16px; font-weight\: bold">H2 Database Engine</a> history_1000_h1=History and Roadmap history_1001_a=\ Change Log history_1002_a=\ Roadmap history_1003_a=\ History of this Database Engine history_1004_a=\ Why Java history_1005_a=\ Supporters history_1006_h2=Change Log history_1007_p=\ The up-to-date change log is available at <a href\="http\://www.h2database.com/html/changelog.html"> http\://www.h2database.com/html/changelog.html </a> history_1008_h2=Roadmap history_1009_p=\ The current roadmap is available at <a href\="http\://www.h2database.com/html/roadmap.html"> http\://www.h2database.com/html/roadmap.html </a> history_1010_h2=History of this Database Engine history_1011_p=\ The development of H2 was started in May 2004, but it was first published on December 14th 2005. The original author of H2, Thomas Mueller, is also the original developer of Hypersonic SQL. In 2001, he joined PointBase Inc., where he wrote PointBase Micro, a commercial Java SQL database. At that point, he had to discontinue Hypersonic SQL. The HSQLDB Group was formed to continue working on the Hypersonic SQL codebase. The name H2 stands for Hypersonic 2, however H2 does not share code with Hypersonic SQL or HSQLDB. H2 is built from scratch.
history_1012_h2=Why Java history_1013_p=\ The main reasons to use a Java database are\: history_1014_li=Very simple to integrate in Java applications history_1015_li=Support for many different platforms history_1016_li=More secure than native applications (no buffer overflows) history_1017_li=User defined functions (or triggers) run very fast history_1018_li=Unicode support history_1019_p=\ Some think Java is too slow for low level operations, but this is no longer true. Garbage collection for example is now faster than manual memory management. history_1020_p=\ Developing Java code is faster than developing C or C++ code. When using Java, most time can be spent on improving the algorithms instead of porting the code to different platforms or doing memory management. Features such as Unicode and network libraries are already built-in. In Java, writing secure code is easier because buffer overflows can not occur. Features such as reflection can be used for randomized testing. history_1021_p=\ Java is future proof\: a lot of companies support Java. Java is now open source. history_1022_p=\ To increase the portability and ease of use, this software depends on very few libraries. Features that are not available in open source Java implementations (such as Swing) are not used, or only used for optional features. history_1023_h2=Supporters history_1024_p=\ Many thanks for those who reported bugs, gave valuable feedback, spread the word, and translated this project. history_1025_p=\ Also many thanks to the donors. To become a donor, use PayPal (at the very bottom of the main web page). Donators are\: history_1026_li=Martin Wildam, Austria history_1027_a=tagtraum industries incorporated, USA history_1028_a=TimeWriter, Netherlands history_1029_a=Cognitect, USA history_1030_a=Code 42 Software, Inc., Minneapolis history_1031_a=Code Lutin, France history_1032_a=NetSuxxess GmbH, Germany history_1033_a=Poker Copilot, Steve McLeod, Germany history_1034_a=SkyCash, Poland history_1035_a=Lumber-mill, Inc., Japan history_1036_a=StockMarketEye, USA history_1037_a=Eckenfelder GmbH & Co.KG, Germany history_1038_li=Jun Iyama, Japan history_1039_li=Steven Branda, USA history_1040_li=Anthony Goubard, Netherlands history_1041_li=Richard Hickey, USA history_1042_li=Alessio Jacopo D'Adamo, Italy history_1043_li=Ashwin Jayaprakash, USA history_1044_li=Donald Bleyl, USA history_1045_li=Frank Berger, Germany history_1046_li=Florent Ramiere, France history_1047_li=Antonio Casqueiro, Portugal history_1048_li=Oliver Computing LLC, USA history_1049_li=Harpal Grover Consulting Inc., USA history_1050_li=Elisabetta Berlini, Italy history_1051_li=William Gilbert, USA history_1052_li=Antonio Dieguez Rojas, Chile history_1053_a=Ontology Works, USA history_1054_li=Pete Haidinyak, USA history_1055_li=William Osmond, USA history_1056_li=Joachim Ansorg, Germany history_1057_li=Oliver Soerensen, Germany history_1058_li=Christos Vasilakis, Greece history_1059_li=Fyodor Kupolov, Denmark history_1060_li=Jakob Jenkov, Denmark history_1061_li=Stéphane Chartrand, Switzerland history_1062_li=Glenn Kidd, USA history_1063_li=Gustav Trede, Sweden history_1064_li=Joonas Pulakka, Finland history_1065_li=Bjorn Darri Sigurdsson, Iceland history_1066_li=Gray Watson, USA history_1067_li=Erik Dick, Germany history_1068_li=Pengxiang Shao, China history_1069_li=Bilingual Marketing Group, USA history_1070_li=Philippe Marschall, Switzerland history_1071_li=Knut Staring, Norway history_1072_li=Theis Borg, Denmark history_1073_li=Mark De Mendonca Duske, USA 
history_1074_li=Joel A. Garringer, USA history_1075_li=Olivier Chafik, France history_1076_li=Rene Schwietzke, Germany history_1077_li=Jalpesh Patadia, USA history_1078_li=Takanori Kawashima, Japan history_1079_li=Terrence JC Huang, China history_1080_a=JiaDong Huang, Australia history_1081_li=Laurent van Roy, Belgium history_1082_li=Qian Chen, China history_1083_li=Clinton Hyde, USA history_1084_li=Kritchai Phromros, Thailand history_1085_li=Alan Thompson, USA history_1086_li=Ladislav Jech, Czech Republic history_1087_li=Dimitrijs Fedotovs, Latvia history_1088_li=Richard Manley-Reeve, United Kingdom history_1089_li=Daniel Cyr, ThirdHalf.com, LLC, USA history_1090_li=Peter Jünger, Germany history_1091_li=Dan Keegan, USA history_1092_li=Rafel Israels, Germany history_1093_li=Fabien Todescato, France history_1094_li=Cristan Meijer, Netherlands history_1095_li=Adam McMahon, USA history_1096_li=Fábio Gomes Lisboa Gomes, Brasil history_1097_li=Lyderic Landry, England history_1098_li=Mederp, Morocco history_1099_li=Joaquim Golay, Switzerland history_1100_li=Clemens Quoss, Germany history_1101_li=Kervin Pierre, USA history_1102_li=Jake Bellotti, Australia history_1103_li=Arun Chittanoor, USA installation_1000_h1=Installation installation_1001_a=\ Requirements installation_1002_a=\ Supported Platforms installation_1003_a=\ Installing the Software installation_1004_a=\ Directory Structure installation_1005_h2=Requirements installation_1006_p=\ To run this database, the following software stack is known to work. Other software most likely also works, but is not tested as much. installation_1007_h3=Database Engine installation_1008_li=Windows XP or Vista, Mac OS X, or Linux installation_1009_li=Oracle Java 7 or newer installation_1010_li=Recommended Windows file system\: NTFS (FAT32 only supports files up to 4 GB) installation_1011_h3=H2 Console installation_1012_li=Mozilla Firefox installation_1013_h2=Supported Platforms installation_1014_p=\ As this database is written in Java, it can run on many different platforms. It is tested with Java 7. Currently, the database is developed and tested on Windows 8 and Mac OS X using Java 7, but it also works in many other operating systems and using other Java runtime environments. All major operating systems (Windows XP, Windows Vista, Windows 7, Mac OS, Ubuntu,...) are supported. installation_1015_h2=Installing the Software installation_1016_p=\ To install the software, run the installer or unzip it to a directory of your choice. 
installation_1017_h2=Directory Structure installation_1018_p=\ After installing, you should get the following directory structure\: installation_1019_th=Directory installation_1020_th=Contents installation_1021_td=bin installation_1022_td=JAR and batch files installation_1023_td=docs installation_1024_td=Documentation installation_1025_td=docs/html installation_1026_td=HTML pages installation_1027_td=docs/javadoc installation_1028_td=Javadoc files installation_1029_td=ext installation_1030_td=External dependencies (downloaded when building) installation_1031_td=service installation_1032_td=Tools to run the database as a Windows Service installation_1033_td=src installation_1034_td=Source files installation_1035_td=src/docsrc installation_1036_td=Documentation sources installation_1037_td=src/installer installation_1038_td=Installer, shell, and release build script installation_1039_td=src/main installation_1040_td=Database engine source code installation_1041_td=src/test installation_1042_td=Test source code installation_1043_td=src/tools installation_1044_td=Tools and database adapters source code jaqu_1000_h1=JaQu jaqu_1001_a=\ What is JaQu jaqu_1002_a=\ Differences to Other Data Access Tools jaqu_1003_a=\ Current State jaqu_1004_a=\ Building the JaQu Library jaqu_1005_a=\ Requirements jaqu_1006_a=\ Example Code jaqu_1007_a=\ Configuration jaqu_1008_a=\ Natural Syntax jaqu_1009_a=\ Other Ideas jaqu_1010_a=\ Similar Projects jaqu_1011_h2=What is JaQu jaqu_1012_p=\ Note\: This project is currently in maintenance mode. A friendly fork of JaQu is <a href\="http\://iciql.com">available under the name iciql</a>. jaqu_1013_p=\ JaQu stands for Java Query and allows to access databases using pure Java. JaQu provides a fluent interface (or internal DSL). JaQu is something like LINQ for Java (LINQ stands for "language integrated query" and is a Microsoft .NET technology). The following JaQu code\: jaqu_1014_p=\ stands for the SQL statement\: jaqu_1015_h2=Differences to Other Data Access Tools jaqu_1016_p=\ Unlike SQL, JaQu can be easily integrated in Java applications. Because JaQu is pure Java, auto-complete in the IDE is supported. Type checking is performed by the compiler. JaQu fully protects against SQL injection. jaqu_1017_p=\ JaQu is meant as replacement for JDBC and SQL and not as much as a replacement for tools like Hibernate. With JaQu, you don't write SQL statements as strings. JaQu is much smaller and simpler than other persistence frameworks such as Hibernate, but it also does not provide all the features of those. Unlike iBatis and Hibernate, no XML or annotation based configuration is required; instead the configuration (if required at all) is done in pure Java, within the application. jaqu_1018_p=\ JaQu does not require or contain any data caching mechanism. Like JDBC and iBatis, JaQu provides full control over when and what SQL statements are executed (but without having to write SQL statements as strings). jaqu_1019_h3=Restrictions jaqu_1020_p=\ Primitive types (eg. <code>boolean, int, long, double</code>) are not supported. Use <code>java.lang.Boolean, Integer, Long, Double</code> instead. jaqu_1021_h3=Why in Java? jaqu_1022_p=\ Most applications are written in Java. Mixing Java and another language (for example Scala or Groovy) in the same application is complicated\: you would need to split the application and database code, and write adapter / wrapper code. jaqu_1023_h2=Current State jaqu_1024_p=\ Currently, JaQu is only tested with the H2 database. 
The API may change in future versions. JaQu is not part of the h2 jar file; however, the source code is included in H2, under\: jaqu_1025_code=src/test/org/h2/test/jaqu/* jaqu_1026_li=\ (samples and tests) jaqu_1027_code=src/tools/org/h2/jaqu/* jaqu_1028_li=\ (framework) jaqu_1029_h2=Building the JaQu Library jaqu_1030_p=\ To create the JaQu jar file, run\: <code>build jarJaqu</code>. This will create the file <code>bin/h2jaqu.jar</code>. jaqu_1031_h2=Requirements jaqu_1032_p=\ JaQu requires Java 6. Annotations are not needed. Currently, JaQu is only tested with the H2 database engine, however in theory it should work with any database that supports the JDBC API. jaqu_1033_h2=Example Code jaqu_1034_h2=Configuration jaqu_1035_p=\ JaQu does not require any configuration when using the default field to column mapping. To define table indices, or if you want to map a class to a table with a different name, or a field to a column with another name, create a function called <code>define</code> in the data class. Example\: jaqu_1036_p=\ The method <code>define()</code> contains the mapping definition. It is called once when the class is used for the first time. Like annotations, the mapping is defined in the class itself. Unlike when using annotations, the compiler can check the syntax even for multi-column objects (multi-column indexes, multi-column primary keys and so on). Because the definition is written in Java, the configuration can be set at runtime, which is not possible using annotations. Unlike XML mapping configuration, the configuration is integrated in the class itself. jaqu_1037_h2=Natural Syntax jaqu_1038_p=The plan is to support more natural (pure Java) syntax in conditions. To do that, the condition class is de-compiled to a SQL condition. A proof of concept decompiler is included (but it doesn't fully work yet; patches are welcome). The planned syntax is\: jaqu_1039_h2=Other Ideas jaqu_1040_p=\ This project has just been started, and nothing is fixed yet. Some ideas are\: jaqu_1041_li=Support queries on collections (instead of using a database). jaqu_1042_li=Provide API level compatibility with JPA (so that JaQu can be used as an extension of JPA). jaqu_1043_li=Internally use a JPA implementation (for example Hibernate) instead of SQL directly. jaqu_1044_li=Use PreparedStatements and cache them. jaqu_1045_h2=Similar Projects jaqu_1046_a=iciql (a friendly fork of JaQu) jaqu_1047_a=Cement Framework jaqu_1048_a=Dreamsource ORM jaqu_1049_a=Empire-db jaqu_1050_a=JEQUEL\: Java Embedded QUEry Language jaqu_1051_a=Joist jaqu_1052_a=jOOQ jaqu_1053_a=JoSQL jaqu_1054_a=LIQUidFORM jaqu_1055_a=Quaere (Alias implementation) jaqu_1056_a=Quaere jaqu_1057_a=Querydsl jaqu_1058_a=Squill license_1000_h1=License license_1001_a=\ Summary and License FAQ license_1002_a=\ Mozilla Public License Version 2.0 license_1003_a=\ Eclipse Public License - Version 1.0 license_1004_a=\ Export Control Classification Number (ECCN) license_1005_h2=Summary and License FAQ license_1006_p=\ H2 is dual licensed and available under the MPL 2.0 (<a href\="http\://www.mozilla.org/MPL/2.0">Mozilla Public License Version 2.0</a>) or under the EPL 1.0 (<a href\="http\://opensource.org/licenses/eclipse-1.0.php">Eclipse Public License</a>). There is a license FAQ for both the MPL and the EPL. license_1007_li=You can use H2 for free. license_1008_li=You can integrate it into your applications (including in commercial applications) and distribute it.
license_1009_li=Files containing only your code are not covered by this license (it is 'commercial friendly'). license_1010_li=Modifications to the H2 source code must be published. license_1011_li=You don't need to provide the source code of H2 if you did not modify anything. license_1012_li=If you distribute a binary that includes H2, you need to add a disclaimer of liability - see the example below. license_1013_p=\ However, nobody is allowed to rename H2, modify it a little, and sell it as a database engine without telling the customers it is in fact H2. This happened to HSQLDB\: a company called 'bungisoft' copied HSQLDB, renamed it to 'RedBase', and tried to sell it, hiding the fact that it was in fact just HSQLDB. It seems 'bungisoft' does not exist any more, but you can use the <a href\="http\://www.archive.org">Wayback Machine</a> and visit old web pages of <code>http\://www.bungisoft.com</code>. license_1014_p=\ About porting the source code to another language (for example C\# or C++)\: converted source code (even if done manually) stays under the same copyright and license as the original code. The copyright of the ported source code does not (automatically) go to the person who ported the code. license_1015_p=\ If you distribute a binary that includes H2, you need to add the license and a disclaimer of liability (as you should do for your own code). You should add a disclaimer for each open source library you use. For example, add a file <code>3rdparty_license.txt</code> in the directory where the jar files are, and list all open source libraries, each one with its license and disclaimer. For H2, a simple solution is to copy the following text below. You may also include a copy of the complete license. license_1016_h2=Mozilla Public License Version 2.0 license_1017_h3=1. Definitions license_1018_p=1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. license_1019_p=1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. license_1020_p=1.3. "Contribution" means Covered Software of a particular Contributor. license_1021_p=1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. license_1022_p=1.5. "Incompatible With Secondary Licenses" means license_1023_p=a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or license_1024_p=b. that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. license_1025_p=1.6. "Executable Form" means any form of the work other than Source Code Form. license_1026_p=1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. license_1027_p=1.8. "License" means this document. license_1028_p=1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. license_1029_p=1.10. "Modifications" means any of the following\: license_1030_p=a. 
any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or license_1031_p=b. any new file in Source Code Form that contains any Covered Software. license_1032_p=1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. license_1033_p=1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. license_1034_p=1.13. "Source Code Form" means the form of the work preferred for making modifications. license_1035_p=1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. license_1036_h3=2. License Grants and Conditions license_1037_h4=2.1. Grants license_1038_p=Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license\: license_1039_p=under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and license_1040_p=under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. license_1041_h4=2.2. Effective Date license_1042_p=The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. license_1043_h4=2.3. Limitations on Grant Scope license_1044_p=The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor\: license_1045_p=for any code that a Contributor has removed from Covered Software; or license_1046_p=for infringements caused by\: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or license_1047_p=under Patent Claims infringed by Covered Software in the absence of its Contributions. license_1048_p=This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). license_1049_h4=2.4. 
Subsequent Licenses license_1050_p=No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). license_1051_h4=2.5. Representation license_1052_p=Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. license_1053_h4=2.6. Fair Use license_1054_p=This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. license_1055_h4=2.7. Conditions license_1056_p=Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. license_1057_h3=3. Responsibilities license_1058_h4=3.1. Distribution of Source Form license_1059_p=All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. license_1060_h4=3.2. Distribution of Executable Form license_1061_p=If You distribute Covered Software in Executable Form then\: license_1062_p=such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and license_1063_p=You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. license_1064_h4=3.3. Distribution of a Larger Work license_1065_p=You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). license_1066_h4=3.4. Notices license_1067_p=You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. license_1068_h4=3.5. Application of Additional Terms license_1069_p=You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. 
You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. license_1070_h3=4. Inability to Comply Due to Statute or Regulation license_1071_p=If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must\: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. license_1072_h3=5. Termination license_1073_p=5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. license_1074_p=5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. license_1075_p=5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. license_1076_h3=6. Disclaimer of Warranty license_1077_p=Covered Software is provided under this License on an "as is" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. license_1078_h3=7. 
Limitation of Liability license_1079_p=Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. license_1080_h3=8. Litigation license_1081_p=Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. license_1082_h3=9. Miscellaneous license_1083_p=This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. license_1084_h3=10. Versions of the License license_1085_h4=10.1. New Versions license_1086_p=Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. license_1087_h4=10.2. Effect of New Versions license_1088_p=You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. license_1089_h4=10.3. Modified Versions license_1090_p=If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). license_1091_h4=10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses license_1092_p=If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. license_1093_h3=Exhibit A - Source Code Form License Notice license_1094_p=If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. license_1095_p=You may add additional accurate notices of copyright ownership. 
license_1096_h3=Exhibit B - "Incompatible With Secondary Licenses" Notice license_1097_h2=Eclipse Public License - Version 1.0 license_1098_p=\ THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. license_1099_h3=1. DEFINITIONS license_1100_p=\ "Contribution" means\: license_1101_p=\ a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and license_1102_p=\ b) in the case of each subsequent Contributor\: license_1103_p=\ i) changes to the Program, and license_1104_p=\ ii) additions to the Program; license_1105_p=\ where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which\: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program. license_1106_p=\ "Contributor" means any person or entity that distributes the Program. license_1107_p=\ "Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. license_1108_p=\ "Program" means the Contributions distributed in accordance with this Agreement. license_1109_p=\ "Recipient" means anyone who receives the Program under this Agreement, including all Contributors. license_1110_h3=2. GRANT OF RIGHTS license_1111_p=\ a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form. license_1112_p=\ b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. license_1113_p=\ c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. 
For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. license_1114_p=\ d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. license_1115_h3=3. REQUIREMENTS license_1116_p=\ A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that\: license_1117_p=\ a) it complies with the terms and conditions of this Agreement; and license_1118_p=\ b) its license agreement\: license_1119_p=\ i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; license_1120_p=\ ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; license_1121_p=\ iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and license_1122_p=\ iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange. license_1123_p=\ When the Program is made available in source code form\: license_1124_p=\ a) it must be made available under this Agreement; and license_1125_p=\ b) a copy of this Agreement must be included with each copy of the Program. license_1126_p=\ Contributors may not remove or alter any copyright notices contained within the Program. license_1127_p=\ Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution. license_1128_h3=4. COMMERCIAL DISTRIBUTION license_1129_p=\ Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must\: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. 
The Indemnified Contributor may participate in any such claim at its own expense. license_1130_p=\ For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. license_1131_h3=5. NO WARRANTY license_1132_p=\ EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. license_1133_h3=6. DISCLAIMER OF LIABILITY license_1134_p=\ EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. license_1135_h3=7. GENERAL license_1136_p=\ If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. license_1137_p=\ If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. license_1138_p=\ All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. 
license_1139_p=\ Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. license_1140_p=\ This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation. license_1141_h2=Export Control Classification Number (ECCN) license_1142_p=\ As far as we know, the <a href\="http\://www.bis.doc.gov/licensing/exportingbasics.htm">U.S. Export Control Classification Number (ECCN)</a> for this software is <code>5D002</code>. However, for legal reasons, we can make no warranty that this information is correct. For details, see also the <a href\="http\://www.apache.org/licenses/exports/">Apache Software Foundation Export Classifications page</a>. links_1000_h1=Links links_1001_p=\ If you want to add a link, please send it to the support email address or post it to the group. links_1002_a=\ Quotes links_1003_a=\ Books links_1004_a=\ Extensions links_1005_a=\ Blog Articles, Videos links_1006_a=\ Database Frontends / Tools links_1007_a=\ Products and Projects links_1008_h2=Quotes links_1009_a=\ Quote links_1010_p=\: "This is by far the easiest and fastest database that I have ever used. Originally the web application that I am working on is using SQL server. But, in less than 15 minutes I had H2 up and working with little recoding of the SQL. Thanks..... " links_1011_h2=Books links_1012_a=\ Seam In Action links_1013_h2=Extensions links_1014_a=\ Grails H2 Database Plugin links_1015_a=\ h2osgi\: OSGi for the H2 Database links_1016_a=\ H2Sharp\: ADO.NET interface for the H2 database engine links_1017_a=\ A spatial extension of the H2 database. 
links_1018_h2=Blog Articles, Videos links_1019_a=\ Youtube\: Minecraft 1.7.3 / How to install Bukkit Server with xAuth and H2 links_1020_a=\ Analyzing CSVs with H2 in under 10 minutes (2009-12-07) links_1021_a=\ Efficient sorting and iteration on large databases (2009-06-15) links_1022_a=\ Porting Flexive to the H2 Database (2008-12-05) links_1023_a=\ H2 Database with GlassFish (2008-11-24) links_1024_a=\ H2 Database - Performance Tracing (2008-04-30) links_1025_a=\ Open Source Databases Comparison (2007-09-11) links_1026_a=\ The Codist\: The Open Source Frameworks I Use (2007-07-23) links_1027_a=\ The Codist\: SQL Injections\: How Not To Get Stuck (2007-05-08) links_1028_a=\ David Coldrick's Weblog\: New Version of H2 Database Released (2007-01-06) links_1029_a=\ The Codist\: Write Your Own Database, Again (2006-11-13) links_1030_h2=Project Pages links_1031_a=\ Ohloh links_1032_a=\ Freshmeat Project Page links_1033_a=\ Wikipedia links_1034_a=\ Java Source Net links_1035_a=\ Linux Package Manager links_1036_h2=Database Frontends / Tools links_1037_a=\ Dataflyer links_1038_p=\ A tool to browse databases and export data. links_1039_a=\ DB Solo links_1040_p=\ SQL query tool. links_1041_a=\ DbVisualizer links_1042_p=\ Database tool. links_1043_a=\ Execute Query links_1044_p=\ Database utility written in Java. links_1045_a=\ Flyway links_1046_p=\ The agile database migration framework for Java. links_1047_a=\ [fleXive] links_1048_p=\ JavaEE 5 open source framework for the development of complex and evolving (web-)applications. links_1049_a=\ JDBC Console links_1050_p=\ This small webapp gives you the ability to execute SQL against data sources bound in the container's JNDI. Based on H2 Console. links_1051_a=\ HenPlus links_1052_p=\ HenPlus is a SQL shell written in Java. links_1053_a=\ JDBC lint links_1054_p=\ Helps write correct and efficient code when using the JDBC API. links_1055_a=\ OpenOffice links_1056_p=\ Base is OpenOffice.org's database application. It provides access to relational data sources. links_1057_a=\ RazorSQL links_1058_p=\ An SQL query tool, database browser, SQL editor, and database administration tool. links_1059_a=\ SQL Developer links_1060_p=\ Universal Database Frontend. links_1061_a=\ SQL Workbench/J links_1062_p=\ Free DBMS-independent SQL tool. links_1063_a=\ SQuirreL SQL Client links_1064_p=\ Graphical tool to view the structure of a database, browse the data, issue SQL commands, etc. links_1065_a=\ SQuirreL DB Copy Plugin links_1066_p=\ Tool to copy data from one database to another. links_1067_h2=Products and Projects links_1068_a=\ AccuProcess links_1069_p=\ Visual business process modeling and simulation software for business users. links_1070_a=\ Adeptia BPM links_1071_p=\ A Business Process Management (BPM) suite to quickly and easily automate business processes and workflows. links_1072_a=\ Adeptia Integration links_1073_p=\ Process-centric, services-based application integration suite. links_1074_a=\ Aejaks links_1075_p=\ A server-side scripting environment to build AJAX enabled web applications. links_1076_a=\ Axiom Stack links_1077_p=\ A web framework that lets you write dynamic web applications with Zen-like simplicity. links_1078_a=\ Apache Cayenne links_1079_p=\ Open source persistence framework providing object-relational mapping (ORM) and remoting services. links_1080_a=\ Apache Jackrabbit links_1081_p=\ Open source implementation of the Java Content Repository API (JCR).
links_1082_a=\ Apache OpenJPA links_1083_p=\ Open source implementation of the Java Persistence API (JPA). links_1084_a=\ AppFuse links_1085_p=\ Helps build web applications. links_1086_a=\ BGBlitz links_1087_p=\ The Swiss army knife of Backgammon. links_1088_a=\ Bonita links_1089_p=\ Open source workflow solution for handling long-running, user-oriented processes, providing out-of-the-box workflow and business process management features. links_1090_a=\ Bookmarks Portlet links_1091_p=\ JSR 168 compliant bookmarks management portlet application. links_1092_a=\ Claros inTouch links_1093_p=\ Ajax communication suite with mail, addresses, notes, IM, and RSS reader. links_1094_a=\ CrashPlan PRO Server links_1095_p=\ Easy and cross platform backup solution for business and service providers. links_1096_a=\ DataNucleus links_1097_p=\ Java persistent objects. links_1098_a=\ DbUnit links_1099_p=\ A JUnit extension (also usable with Ant) targeted for database-driven projects. links_1100_a=\ DiffKit links_1101_p=\ DiffKit is a tool for comparing two tables of data, field-by-field. DiffKit is like the Unix diff utility, but for tables instead of lines of text. links_1102_a=\ Dinamica Framework links_1103_p=\ Ajax/J2EE framework for RAD development (mainly oriented toward Hispanic markets). links_1104_a=\ District Health Information Software 2 (DHIS) links_1105_p=\ The DHIS 2 is a tool for collection, validation, analysis, and presentation of aggregate statistical data, tailored (but not limited) to integrated health information management activities. links_1106_a=\ Ebean ORM Persistence Layer links_1107_p=\ Open source Java Object Relational Mapping tool. links_1108_a=\ Eclipse CDO links_1109_p=\ The CDO (Connected Data Objects) Model Repository is a distributed shared model framework for EMF models, and a fast server-based O/R mapping solution. links_1110_a=\ Fabric3 links_1111_p=\ Fabric3 is a project implementing a federated service network based on the Service Component Architecture specification (http\://www.osoa.org). links_1112_a=\ FIT4Data links_1113_p=\ A testing framework for data management applications built on the Java implementation of FIT. links_1114_a=\ Flux links_1115_p=\ Java job scheduler, file transfer, workflow, and BPM. links_1116_a=\ GeoServer links_1117_p=\ GeoServer is a Java-based software server that allows users to view and edit geospatial data. Using open standards set forth by the Open Geospatial Consortium (OGC), GeoServer allows for great flexibility in map creation and data sharing. links_1118_a=\ GBIF Integrated Publishing Toolkit (IPT) links_1119_p=\ The GBIF IPT is an open source, Java based web application that connects and serves three types of biodiversity data\: taxon primary occurrence data, taxon checklists and general resource metadata. links_1120_a=\ GNU Gluco Control links_1121_p=\ Helps you manage your diabetes. links_1122_a=\ Golden T Studios links_1123_p=\ Fun-to-play games with a simple interface. links_1124_a=\ GridGain links_1125_p=\ GridGain is an easy-to-use Cloud Application Platform that enables development of highly scalable distributed Java and Scala applications that auto-scale on any grid or cloud infrastructure. links_1126_a=\ Group Session links_1127_p=\ Open source web groupware. links_1128_a=\ HA-JDBC links_1129_p=\ High-Availability JDBC\: A JDBC proxy that provides light-weight, transparent, fault tolerant clustering capability to any underlying JDBC driver.
links_1130_a=\ Hibernate links_1131_p=\ Relational persistence for idiomatic Java (O-R mapping tool). links_1132_a=\ Hibicius links_1133_p=\ Online Banking Client for the HBCI protocol. links_1134_a=\ ImageMapper links_1135_p=\ ImageMapper frees users from having to use file browsers to view their images. They get fast access to images and easy cataloguing of them via a user friendly interface. links_1136_a=\ JAMWiki links_1137_p=\ Java-based Wiki engine. links_1138_a=\ Jaspa links_1139_p=\ Java Spatial. Jaspa potentially brings around 200 spatial functions. links_1140_a=\ Java Simon links_1141_p=\ Simple Monitoring API. links_1142_a=\ JBoss jBPM links_1143_p=\ A platform for executable process languages ranging from business process management (BPM) over workflow to service orchestration. links_1144_a=\ JBoss Jopr links_1145_p=\ An enterprise management solution for JBoss middleware projects and other application technologies. links_1146_a=\ JGeocoder links_1147_p=\ Free Java geocoder. Geocoding is the process of estimating a latitude and longitude for a given location. links_1148_a=\ JGrass links_1149_p=\ Java Geographic Resources Analysis Support System. Free, multi platform, open source GIS based on the GIS framework of uDig. links_1150_a=\ Jena links_1151_p=\ Java framework for building Semantic Web applications. links_1152_a=\ JMatter links_1153_p=\ Framework for constructing workgroup business applications based on the Naked Objects Architectural Pattern. links_1154_a=\ jOOQ (Java Object Oriented Querying) links_1155_p=\ jOOQ is a fluent API for typesafe SQL query construction and execution links_1156_a=\ Liftweb links_1157_p=\ A Scala-based, secure, developer friendly web framework. links_1158_a=\ LiquiBase links_1159_p=\ A tool to manage database changes and refactorings. links_1160_a=\ Luntbuild links_1161_p=\ Build automation and management tool. links_1162_a=\ localdb links_1163_p=\ A tool that locates the full file path of the folder containing the database files. links_1164_a=\ Magnolia links_1165_p=\ Microarray Data Management and Export System for PFGRC (Pathogen Functional Genomics Resource Center) Microarrays. links_1166_a=\ MiniConnectionPoolManager links_1167_p=\ A lightweight standalone JDBC connection pool manager. links_1168_a=\ Mr. Persister links_1169_p=\ Simple, small and fast object relational mapping. links_1170_a=\ Myna Application Server links_1171_p=\ Java web app that provides dynamic web content and Java libraries access from JavaScript. links_1172_a=\ MyTunesRss links_1173_p=\ MyTunesRSS lets you listen to your music wherever you are. links_1174_a=\ NCGC CurveFit links_1175_p=\ From\: NIH Chemical Genomics Center, National Institutes of Health, USA. An open source application in the life sciences research field. This application handles chemical structures and biological responses of thousands of compounds with the potential to handle million+ compounds. It utilizes an embedded H2 database to enable flexible query/retrieval of all data including advanced chemical substructure and similarity searching. The application highlights an automated curve fitting and classification algorithm that outperforms commercial packages in the field. Commercial alternatives are typically small desktop software that handle a few dose response curves at a time. 
A couple of commercial packages that do handle several thousand curves are very expensive tools (>60k USD) that require manual curation of analysis by the user; require a license to Oracle; lack advanced query/retrieval; and lack the ability to handle chemical structures. links_1176_a=\ Nuxeo links_1177_p=\ Standards-based, open source platform for building ECM applications. links_1178_a=\ nWire links_1179_p=\ Eclipse plug-in which expedites Java development. Its main purpose is to help developers find code more quickly and easily understand how it relates to the rest of the application, and thus understand the application structure. links_1180_a=\ Ontology Works links_1181_p=\ This company provides semantic technologies including deductive information repositories (the Ontology Works Knowledge Servers), semantic information fusion and semantic federation of legacy databases, ontology-based domain modeling, and management of the distributed enterprise. links_1182_a=\ Ontoprise OntoBroker links_1183_p=\ SemanticWeb-Middleware. It supports all W3C Semantic Web recommendations\: OWL, RDF, RDFS, SPARQL, and F-Logic. links_1184_a=\ Open Anzo links_1185_p=\ Semantic Application Server. links_1186_a=\ OpenGroove links_1187_p=\ OpenGroove is a groupware program that allows users to synchronize data. links_1188_a=\ OpenSocial Development Environment (OSDE) links_1189_p=\ Development tool for OpenSocial applications. links_1190_a=\ Orion links_1191_p=\ J2EE Application Server. links_1192_a=\ P5H2 links_1193_p=\ A library for the <a href\="http\://www.processing.org">Processing</a> programming language and environment. links_1194_a=\ Phase-6 links_1195_p=\ Computer-based learning software. links_1196_a=\ Pickle links_1197_p=\ Pickle is a Java library containing classes for persistence, concurrency, and logging. links_1198_a=\ Piman links_1199_p=\ Water treatment projects data management. links_1200_a=\ PolePosition links_1201_p=\ Open source database benchmark. links_1202_a=\ Poormans links_1203_p=\ Very basic CMS running as a SWT application and generating static HTML pages. links_1204_a=\ Railo links_1205_p=\ Railo is an alternative engine for the Cold Fusion Markup Language that compiles code programmed in CFML into Java bytecode and executes it on a servlet engine. links_1206_a=\ Razuna links_1207_p=\ Open source Digital Asset Management System with integrated Web Content Management. links_1208_a=\ RIFE links_1209_p=\ A full-stack web application framework with tools and APIs to implement most common web features. links_1210_a=\ Sava links_1211_p=\ Open-source web-based content management system. links_1212_a=\ Scriptella links_1213_p=\ ETL (Extract-Transform-Load) and script execution tool. links_1214_a=\ Sesar links_1215_p=\ Dependency Injection Container with Aspect Oriented Programming. links_1216_a=\ SemmleCode links_1217_p=\ Eclipse plugin to help you improve software quality. links_1218_a=\ SeQuaLite links_1219_p=\ A free, light-weight Java data access framework. links_1220_a=\ ShapeLogic links_1221_p=\ Toolkit for declarative programming, image processing and computer vision. links_1222_a=\ Shellbook links_1223_p=\ Desktop publishing application. links_1224_a=\ Signsoft intelliBO links_1225_p=\ Persistence middleware supporting the JDO specification. links_1226_a=\ SimpleORM links_1227_p=\ Simple Java Object Relational Mapping. links_1228_a=\ SymmetricDS links_1229_p=\ Web-enabled, database-independent data synchronization/replication software.
links_1230_a=\ SmartFoxServer links_1231_p=\ Platform for developing multiuser applications and games with Macromedia Flash. links_1232_a=\ Social Bookmarks Friend Finder links_1233_p=\ A GUI application that allows you to find users with similar bookmarks to the user specified (for delicious.com). links_1234_a=\ sormula links_1235_p=\ Simple object relational mapping. links_1236_a=\ Springfuse links_1237_p=\ Code generation for Spring, Spring MVC & Hibernate. links_1238_a=\ SQLOrm links_1239_p=\ Java Object Relation Mapping. links_1240_a=\ StelsCSV and StelsXML links_1241_p=\ StelsCSV is a CSV JDBC type 4 driver that allows you to perform SQL queries and other JDBC operations on text files. StelsXML is an XML JDBC type 4 driver that allows you to perform SQL queries and other JDBC operations on XML files. Both use H2 as the SQL engine. links_1242_a=\ StorYBook links_1243_p=\ A summary-based tool for novelists and script writers. It helps you keep an overview of the various threads of a story. links_1244_a=\ StreamCruncher links_1245_p=\ Event (stream) processing kernel. links_1246_a=\ SUSE Manager, part of Linux Enterprise Server 11 links_1247_p=\ The SUSE Manager <a href\="http\://www.suse.com/blogs/suse-manager-eases-the-buden-of-compliance"> eases the burden of compliance</a> with regulatory requirements and corporate policies. links_1248_a=\ Tune Backup links_1249_p=\ Easy-to-use backup solution for your iTunes library. links_1250_a=\ TimeWriter links_1251_p=\ TimeWriter is a very flexible program for time administration / time tracking. The older versions used dBase tables. The new version 5 is completely rewritten, now using the H2 database. TimeWriter is delivered in Dutch and English. links_1252_a=\ weblica links_1253_p=\ Desktop CMS. links_1254_a=\ Web of Web links_1255_p=\ Collaborative and realtime interactive media platform for the web. links_1256_a=\ Werkzeugkasten links_1257_p=\ Minimum Java Toolset. links_1258_a=\ VPDA links_1259_p=\ View Providers Driven Applications is a Java-based application framework for building applications composed of server components (view providers). links_1260_a=\ Volunteer database links_1261_p=\ A database front end to register volunteers, partnerships, and donations for a non-profit organization. mainWeb_1000_h1=H2 Database Engine mainWeb_1001_p=\ Welcome to H2, the Java SQL database.
The main features of H2 are\: mainWeb_1002_li=Very fast, open source, JDBC API mainWeb_1003_li=Embedded and server modes; in-memory databases mainWeb_1004_li=Browser based Console application mainWeb_1005_li=Small footprint\: around 1.5 MB jar file size mainWeb_1006_h2=Download mainWeb_1007_td=\ Version 1.4.196 (2017-06-10) mainWeb_1008_a=Windows Installer (5 MB) mainWeb_1009_a=All Platforms (zip, 8 MB) mainWeb_1010_a=All Downloads mainWeb_1011_td= mainWeb_1012_h2=Support mainWeb_1013_a=Stack Overflow (tag H2) mainWeb_1014_a=Google Group English mainWeb_1015_p=, <a href\="http\://groups.google.co.jp/group/h2-database-jp">Japanese</a> mainWeb_1016_p=\ For non-technical issues, use\: mainWeb_1017_h2=Features mainWeb_1018_th=H2 mainWeb_1019_a=Derby mainWeb_1020_a=HSQLDB mainWeb_1021_a=MySQL mainWeb_1022_a=PostgreSQL mainWeb_1023_td=Pure Java mainWeb_1024_td=Yes mainWeb_1025_td=Yes mainWeb_1026_td=Yes mainWeb_1027_td=No mainWeb_1028_td=No mainWeb_1029_td=Memory Mode mainWeb_1030_td=Yes mainWeb_1031_td=Yes mainWeb_1032_td=Yes mainWeb_1033_td=No mainWeb_1034_td=No mainWeb_1035_td=Encrypted Database mainWeb_1036_td=Yes mainWeb_1037_td=Yes mainWeb_1038_td=Yes mainWeb_1039_td=No mainWeb_1040_td=No mainWeb_1041_td=ODBC Driver mainWeb_1042_td=Yes mainWeb_1043_td=No mainWeb_1044_td=No mainWeb_1045_td=Yes mainWeb_1046_td=Yes mainWeb_1047_td=Fulltext Search mainWeb_1048_td=Yes mainWeb_1049_td=No mainWeb_1050_td=No mainWeb_1051_td=Yes mainWeb_1052_td=Yes mainWeb_1053_td=Multi Version Concurrency mainWeb_1054_td=Yes mainWeb_1055_td=No mainWeb_1056_td=Yes mainWeb_1057_td=Yes mainWeb_1058_td=Yes mainWeb_1059_td=Footprint (jar/dll size) mainWeb_1060_td=~1 MB mainWeb_1061_td=~2 MB mainWeb_1062_td=~1 MB mainWeb_1063_td=~4 MB mainWeb_1064_td=~6 MB mainWeb_1065_p=\ See also the <a href\="features.html\#comparison">detailed comparison</a>. mainWeb_1066_h2=News mainWeb_1067_b=Newsfeeds\: mainWeb_1068_a=Full text (Atom) mainWeb_1069_p=\ or <a href\="http\://www.h2database.com/html/newsfeed-rss.xml">Header only (RSS)</a>. mainWeb_1070_b=Email Newsletter\: mainWeb_1071_p=\ Subscribe to <a href\="http\://groups.google.com/group/h2database-news/subscribe"> H2 Database News (Google account required)</a> to get informed about new releases. Your email address is only used in this context. mainWeb_1072_td= mainWeb_1073_h2=Contribute mainWeb_1074_p=\ You can contribute to the development of H2 by sending feedback and bug reports, or translate the H2 Console application (for details, start the H2 Console and select Options / Translate). To donate money, click on the PayPal button below. You will be listed as a <a href\="http\://h2database.com/html/history.html\#supporters">supporter</a>\: main_1000_h1=H2 Database Engine main_1001_p=\ Welcome to H2, the free Java SQL database engine. main_1002_a=Quickstart main_1003_p=\ Get a fast overview. main_1004_a=Tutorial main_1005_p=\ Go through the samples. main_1006_a=Features main_1007_p=\ See what this database can do and how to use these features. 
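The feature list above mentions the JDBC API as well as the embedded, server, and in-memory modes; the following minimal sketch shows how an application might connect to H2 over JDBC. The database URLs are regular H2 URLs, while the table name, user, and password are placeholders used for illustration only.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class H2QuickstartSketch {
        public static void main(String... args) throws Exception {
            // embedded mode: the database file is stored in the user home directory;
            // use "jdbc:h2:mem:test" for a purely in-memory database,
            // or "jdbc:h2:tcp://localhost/~/test" for the server mode
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
                    Statement stat = conn.createStatement()) {
                stat.execute("CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY, NAME VARCHAR)");
                stat.execute("MERGE INTO TEST VALUES(1, 'Hello')");
                try (ResultSet rs = stat.executeQuery("SELECT NAME FROM TEST WHERE ID = 1")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("NAME"));
                    }
                }
            }
        }
    }

The H2 jar only needs to be on the classpath; the JDBC driver registers itself automatically.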
mvstore_1000_h1=MVStore mvstore_1001_a=\ Overview mvstore_1002_a=\ Example Code mvstore_1003_a=\ Store Builder mvstore_1004_a=\ R-Tree mvstore_1005_a=\ Features mvstore_1006_a=- Maps mvstore_1007_a=- Versions mvstore_1008_a=- Transactions mvstore_1009_a=- In-Memory Performance and Usage mvstore_1010_a=- Pluggable Data Types mvstore_1011_a=- BLOB Support mvstore_1012_a=- R-Tree and Pluggable Map Implementations mvstore_1013_a=- Concurrent Operations and Caching mvstore_1014_a=- Log Structured Storage mvstore_1015_a=- Off-Heap and Pluggable Storage mvstore_1016_a=- File System Abstraction, File Locking and Online Backup mvstore_1017_a=- Encrypted Files mvstore_1018_a=- Tools mvstore_1019_a=- Exception Handling mvstore_1020_a=- Storage Engine for H2 mvstore_1021_a=\ File Format mvstore_1022_a=\ Similar Projects and Differences to Other Storage Engines mvstore_1023_a=\ Current State mvstore_1024_a=\ Requirements mvstore_1025_h2=Overview mvstore_1026_p=\ The MVStore is a persistent, log structured key-value store. It is planned to be the next storage subsystem of H2, but it can also be used directly within an application, without using JDBC or SQL. mvstore_1027_li=MVStore stands for "multi-version store". mvstore_1028_li=Each store contains a number of maps that can be accessed using the <code>java.util.Map</code> interface. mvstore_1029_li=Both file-based persistence and in-memory operation are supported. mvstore_1030_li=It is intended to be fast, simple to use, and small. mvstore_1031_li=Concurrent read and write operations are supported. mvstore_1032_li=Transactions are supported (including concurrent transactions and 2-phase commit). mvstore_1033_li=The tool is very modular. It supports pluggable data types and serialization, pluggable storage (to a file, to off-heap memory), pluggable map implementations (B-tree, R-tree, concurrent B-tree currently), BLOB storage, and a file system abstraction to support encrypted files and zip files. mvstore_1034_h2=Example Code mvstore_1035_p=\ The following sample code shows how to use the tool\: mvstore_1036_h2=Store Builder mvstore_1037_p=\ The <code>MVStore.Builder</code> provides a fluid interface to build a store if configuration options are needed. Example usage\: mvstore_1038_p=\ The list of available options is\: mvstore_1039_li=autoCommitBufferSize\: the size of the write buffer. mvstore_1040_li=autoCommitDisabled\: to disable auto-commit. mvstore_1041_li=backgroundExceptionHandler\: a handler for exceptions that could occur while writing in the background. mvstore_1042_li=cacheSize\: the cache size in MB. mvstore_1043_li=compress\: compress the data when storing using a fast algorithm (LZF). mvstore_1044_li=compressHigh\: compress the data when storing using a slower algorithm (Deflate). mvstore_1045_li=encryptionKey\: the key for file encryption. mvstore_1046_li=fileName\: the name of the file, for file based stores. mvstore_1047_li=fileStore\: the storage implementation to use. mvstore_1048_li=pageSplitSize\: the point where pages are split. mvstore_1049_li=readOnly\: open the file in read-only mode. mvstore_1050_h2=R-Tree mvstore_1051_p=\ The <code>MVRTreeMap</code> is an R-tree implementation that supports fast spatial queries. It can be used as follows\: mvstore_1052_p=\ The default number of dimensions is 2. To use a different number of dimensions, call <code>new MVRTreeMap.Builder<String>().dimensions(3)</code>. The minimum number of dimensions is 1, the maximum is 32. 
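The sample code referenced in the Example Code, Store Builder, and R-Tree sections above is kept outside of this text, so the following condensed sketch is provided here for orientation only. It is based on the org.h2.mvstore API of H2 1.4.x; the file name data.h2store and the map names are placeholders, and any builder option can be omitted to use its default.

    import org.h2.mvstore.MVMap;
    import org.h2.mvstore.MVStore;
    import org.h2.mvstore.rtree.MVRTreeMap;
    import org.h2.mvstore.rtree.SpatialKey;

    public class MVStoreUsageSketch {
        public static void main(String... args) {
            // open the store (a new file is created if it does not exist);
            // the builder options correspond to the option list above
            MVStore s = new MVStore.Builder().
                    fileName("data.h2store").
                    cacheSize(16).          // cache size in MB
                    compress().             // LZF compression when storing
                    open();

            // a named, sorted map; reads and writes use java.util.Map methods
            MVMap<Integer, String> map = s.openMap("data");
            map.put(1, "Hello");
            map.put(2, "World");
            System.out.println(map.get(1));

            // an R-tree map for spatial data; a SpatialKey holds a unique id
            // followed by min/max values for each dimension (2 by default)
            MVRTreeMap<String> r = s.openMap("spatial",
                    new MVRTreeMap.Builder<String>());
            r.add(new SpatialKey(0, -3f, -2f, 2f, 3f), "left");
            r.add(new SpatialKey(1, 3f, 4f, 4f, 5f), "right");

            // persist the changes and close the store
            s.commit();
            s.close();
        }
    }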
mvstore_1053_h2=Features mvstore_1054_h3=Maps mvstore_1055_p=\ Each store contains a set of named maps. A map is sorted by key, and supports the common lookup operations, including access to the first and last key, iterating over some or all keys, and so on. mvstore_1056_p=\ Also supported, and very uncommon for maps, is fast index lookup\: the entries of the map can be efficiently accessed like a random-access list (get the entry at the given index), and the index of a key can be calculated efficiently. That also means getting the median of two keys is very fast, and a range of keys can be counted very quickly. The iterator supports fast skipping. This is possible because internally, each map is organized in the form of a counted B+-tree. mvstore_1057_p=\ In database terms, a map can be used like a table, where the key of the map is the primary key of the table, and the value is the row. A map can also represent an index, where the key of the map is the key of the index, and the value of the map is the primary key of the table (for non-unique indexes, the key of the map must also contain the primary key). mvstore_1058_h3=Versions mvstore_1059_p=\ A version is a snapshot of all the data of all maps at a given point in time. Creating a snapshot is fast\: only those pages that are changed after a snapshot are copied. This behavior is also called COW (copy on write). Old versions are readable. Rollback to an old version is supported. mvstore_1060_p=\ The following sample code shows how to create a store, open a map, add some data, and access the current and an old version\: mvstore_1061_h3=Transactions mvstore_1062_p=\ To support multiple concurrent open transactions, a transaction utility is included, the <code>TransactionStore</code>. The tool supports PostgreSQL style "read committed" transaction isolation with savepoints, two-phase commit, and other features typically available in a database. There is no limit on the size of a transaction (the log is written to disk for large or long running transactions). mvstore_1063_p=\ Internally, this utility stores the old versions of changed entries in a separate map, similar to a transaction log, except that entries of a closed transaction are removed, and the log is usually not stored for short transactions. For common use cases, the storage overhead of this utility is very small compared to the overhead of a regular transaction log. mvstore_1064_h3=In-Memory Performance and Usage mvstore_1065_p=\ In-memory operations are about 50% slower than <code>java.util.TreeMap</code>. mvstore_1066_p=\ The memory overhead for large maps is slightly better than for the regular map implementations, but there is a higher overhead per map. For maps with fewer than about 25 entries, the regular map implementations need less memory. mvstore_1067_p=\ If no file name is specified, the store operates purely in memory. Except for persisting data, all features are supported in this mode (multi-versioning, index lookup, R-tree and so on). If a file name is specified, all operations occur in memory (with the same performance characteristics) until data is persisted. mvstore_1068_p=\ As in all map implementations, keys need to be immutable; that means changing the key object after an entry has been added is not allowed. If a file name is specified, the value may also not be changed after adding an entry, because it might be serialized (which could happen at any time when autocommit is enabled).
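The Versions section above refers to sample code that is not included in this text. The following sketch, assuming the MVMap.openVersion method of H2 1.4.x, shows how an old snapshot can still be read after the map has been modified.

    import org.h2.mvstore.MVMap;
    import org.h2.mvstore.MVStore;

    public class MVStoreVersionSketch {
        public static void main(String... args) {
            // an in-memory store (no file name)
            MVStore s = MVStore.open(null);
            MVMap<Integer, String> map = s.openMap("data");
            map.put(1, "Hello");
            map.put(2, "World");

            // remember the current version; committing makes it read-only
            long oldVersion = s.getCurrentVersion();
            s.commit();

            // further changes only go into the newest version
            map.put(1, "Hi");
            map.remove(2);

            // read the old snapshot, concurrently with new modifications
            MVMap<Integer, String> oldMap = map.openVersion(oldVersion);
            System.out.println(oldMap.get(1)); // Hello
            System.out.println(oldMap.get(2)); // World
            System.out.println(map.get(1));    // Hi

            s.close();
        }
    }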
mvstore_1069_h3=Pluggable Data Types mvstore_1070_p=\ Serialization is pluggable. The default serialization currently supports many common data types, and uses Java serialization for other objects. The following classes are currently directly supported\: <code>Boolean, Byte, Short, Character, Integer, Long, Float, Double, BigInteger, BigDecimal, String, UUID, Date</code> and arrays (both primitive arrays and object arrays). For serialized objects, the size estimate is adjusted using an exponential moving average. mvstore_1071_p=\ Parameterized data types are supported (for example, one could build a string data type that limits the length). mvstore_1072_p=\ The storage engine itself does not have any length limits, so that keys, values, pages, and chunks can be very big (as big as fits in memory). Also, there is no inherent limit to the number of maps and chunks. Due to the log structured storage, there is no special case handling for large keys or pages. mvstore_1073_h3=BLOB Support mvstore_1074_p=\ There is a mechanism that stores large binary objects by splitting them into smaller blocks. This makes it possible to store objects that don't fit in memory. Streaming as well as random access reads on such objects are supported. This tool is written on top of the store, using only the map interface. mvstore_1075_h3=R-Tree and Pluggable Map Implementations mvstore_1076_p=\ The map implementation is pluggable. In addition to the default <code>MVMap</code> (multi-version map), there is a multi-version R-tree map implementation for spatial operations. mvstore_1077_h3=Concurrent Operations and Caching mvstore_1078_p=\ Concurrent reads and writes are supported. All such read operations can occur in parallel. Concurrent reads from the page cache, as well as concurrent reads from the file system are supported. Write operations first read the relevant pages from disk to memory (this can happen concurrently), and only then modify the data. The in-memory parts of write operations are synchronized. Writing changes to the file can occur concurrently with modifying the data, as writing operates on a snapshot. mvstore_1079_p=\ Caching is done on the page level. The page cache is a concurrent LIRS cache, which should be resistant against scan operations. mvstore_1080_p=\ For fully scalable concurrent write operations to a map (in-memory and to disk), the map could be split into multiple maps in different stores ('sharding'). The plan is to add such a mechanism later when needed. mvstore_1081_h3=Log Structured Storage mvstore_1082_p=\ Internally, changes are buffered in memory, and once enough changes have accumulated, they are written in one continuous disk write operation. Compared to traditional database storage engines, this should improve write performance for file systems and storage systems that do not efficiently support small random writes, such as Btrfs, as well as SSDs. (According to a test, write throughput of a common SSD increases with write block size, up to a block size of 2 MB, and then does not increase further.) By default, changes are automatically written when more than a number of pages are modified, and once every second in a background thread, even if only a little data was changed. Changes can also be written explicitly by calling <code>commit()</code>. mvstore_1083_p=\ When storing, all changed pages are serialized, optionally compressed using the LZF algorithm, and written sequentially to a free area of the file. Each such change set is called a chunk.
All parent pages of the changed B-trees are stored in this chunk as well, so that each chunk also contains the root of each changed map (which is the entry point for reading this version of the data). There is no separate index\: all data is stored as a list of pages. Per store, there is one additional map that contains the metadata (the list of maps, where the root page of each map is stored, and the list of chunks). mvstore_1084_p=\ There are usually two write operations per chunk\: one to store the chunk data (the pages), and one to update the file header (so it points to the latest chunk). If the chunk is appended at the end of the file, the file header is only written at the end of the chunk. There is no transaction log, no undo log, and there are no in-place updates (however, unused chunks are overwritten by default). mvstore_1085_p=\ Old data is kept for at least 45 seconds (configurable), so that there are no explicit sync operations required to guarantee data consistency. An application can also sync explicitly when needed. To reuse disk space, the chunks with the lowest amount of live data are compacted (the live data is stored again in the next chunk). To improve data locality and disk space usage, the plan is to automatically defragment and compact data. mvstore_1086_p=\ Compared to traditional storage engines (that use a transaction log, undo log, and main storage area), the log structured storage is simpler, more flexible, and typically needs fewer disk operations per change, as data is only written once instead of twice or three times, and because the B-tree pages are always full (they are stored next to each other) and can be easily compressed. But temporarily, disk space usage might actually be a bit higher than for a regular database, as disk space is not immediately re-used (there are no in-place updates). mvstore_1087_h3=Off-Heap and Pluggable Storage mvstore_1088_p=\ Storage is pluggable. Unless pure in-memory operation is used, the default storage is to a single file. mvstore_1089_p=\ An off-heap storage implementation is available. This storage keeps the data in the off-heap memory, meaning outside of the regular garbage collected heap. This makes it possible to use very large in-memory stores without having to increase the JVM heap, which would increase Java garbage collection pauses a lot. Memory is allocated using <code>ByteBuffer.allocateDirect</code>. One chunk is allocated at a time (each chunk is usually a few MB large), so that allocation cost is low. To use the off-heap storage, call\: mvstore_1090_h3=File System Abstraction, File Locking and Online Backup mvstore_1091_p=\ The file system is pluggable. The same file system abstraction is used as in H2. The file can be encrypted using an encrypting file system wrapper. Other file system implementations support reading from a compressed zip or jar file. The file system abstraction closely matches the Java 7 file system API. mvstore_1092_p=\ Each store may only be opened once within a JVM. When opening a store, the file is locked in exclusive mode, so that the file can only be changed from within one process. Files can be opened in read-only mode, in which case a shared lock is used. mvstore_1093_p=\ The persisted data can be backed up at any time, even during write operations (online backup). To do that, automatic disk space reuse first needs to be disabled, so that new data is always appended at the end of the file. Then, the file can be copied. The file handle is available to the application. 
It is recommended to use the utility class <code>FileChannelInputStream</code> to do this. For encrypted databases, both the encrypted (raw) file content, as well as the clear text content, can be backed up. mvstore_1094_h3=Encrypted Files mvstore_1095_p=\ File encryption ensures the data can only be read with the correct password. Data can be encrypted as follows\: mvstore_1096_p=\ The following algorithms and settings are used\: mvstore_1097_li=The password char array is cleared after use, to reduce the risk that the password is stolen even if the attacker has access to the main memory. mvstore_1098_li=The password is hashed according to the PBKDF2 standard, using the SHA-256 hash algorithm. mvstore_1099_li=The length of the salt is 64 bits, so that an attacker cannot use a pre-calculated password hash table (rainbow table). It is generated using a cryptographically secure random number generator. mvstore_1100_li=To speed up opening encrypted stores on Android, the number of PBKDF2 iterations is 10. The higher the value, the better the protection against brute-force password cracking attacks, but the slower opening a file becomes. mvstore_1101_li=The file itself is encrypted using the standardized disk encryption mode XTS-AES. Only a little more than one AES-128 round per block is needed. mvstore_1102_h3=Tools mvstore_1103_p=\ There is a tool, the <code>MVStoreTool</code>, to dump the contents of a file. mvstore_1104_h3=Exception Handling mvstore_1105_p=\ This tool does not throw checked exceptions. Instead, unchecked exceptions are thrown if needed. The error message always contains the version of the tool. The following exceptions can occur\: mvstore_1106_code=IllegalStateException mvstore_1107_li=\ if a map was already closed or an IO exception occurred, for example if the file was locked, is already closed, could not be opened or closed, if reading or writing failed, if the file is corrupt, or if there is an internal error in the tool. For such exceptions, an error code is added so that the application can distinguish between different error cases. mvstore_1108_code=IllegalArgumentException mvstore_1109_li=\ if a method was called with an illegal argument. mvstore_1110_code=UnsupportedOperationException mvstore_1111_li=\ if a method was called that is not supported, for example trying to modify a read-only map. mvstore_1112_code=ConcurrentModificationException mvstore_1113_li=\ if a map is modified concurrently. mvstore_1114_h3=Storage Engine for H2 mvstore_1115_p=\ For H2 version 1.4 and newer, the MVStore is the default storage engine (supporting SQL, JDBC, transactions, MVCC, and so on). For older versions, append <code>;MV_STORE\=TRUE</code> to the database URL. Even though it can be used with the default table level locking, by default the MVCC mode is enabled when using the MVStore. mvstore_1116_h2=File Format mvstore_1117_p=\ The data is stored in one file. The file contains two file headers (for safety), and a number of chunks. The file headers are one block each; a block is 4096 bytes. Each chunk is at least one block, but typically 200 blocks or more. Data is stored in the chunks in the form of a <a href\="http\://en.wikipedia.org/wiki/Log-structured_file_system">log structured storage</a>. There is one chunk for every version. mvstore_1118_p=\ Each chunk contains a number of B-tree pages. 
As an example, the following code\: mvstore_1119_p=\ will result in the following two chunks (excluding metadata)\: mvstore_1120_b=Chunk 1\: mvstore_1121_p=\ - Page 1\: (root) node with 2 entries pointing to pages 2 and 3 mvstore_1122_p=\ - Page 2\: leaf with 140 entries (keys 0 - 139) mvstore_1123_p=\ - Page 3\: leaf with 260 entries (keys 140 - 399) mvstore_1124_b=Chunk 2\: mvstore_1125_p=\ - Page 4\: (root) node with 2 entries pointing to pages 5 and 3 mvstore_1126_p=\ - Page 5\: leaf with 140 entries (keys 0 - 139) mvstore_1127_p=\ That means each chunk contains the changes of one version\: the new version of the changed pages and the parent pages, recursively, up to the root page. Pages in subsequent chunks refer to pages in earlier chunks. mvstore_1128_h3=File Header mvstore_1129_p=\ There are two file headers, which normally contain the exact same data. But once in a while, the file headers are updated, and writing could partially fail, which could corrupt a header. That's why there is a second header. Only the file headers are updated in this way (called "in-place update"). The headers contain the following data\: mvstore_1130_p=\ The data is stored in the form of key-value pairs. Each value is stored as a hexadecimal number. The entries are\: mvstore_1131_li=H\: The entry "H\:2" stands for the H2 database. mvstore_1132_li=block\: The block number where one of the newest chunks starts (but not necessarily the newest). mvstore_1133_li=blockSize\: The block size of the file; currently always hex 1000, which is decimal 4096, to match the <a href\="http\://en.wikipedia.org/wiki/Disk_sector">disk sector</a> length of modern hard disks. mvstore_1134_li=chunk\: The chunk id, which is normally the same value as the version; however, the chunk id might roll over to 0, while the version doesn't. mvstore_1135_li=created\: The number of milliseconds since 1970 when the file was created. mvstore_1136_li=format\: The file format number. Currently 1. mvstore_1137_li=version\: The version number of the chunk. mvstore_1138_li=fletcher\: The <a href\="http\://en.wikipedia.org/wiki/Fletcher's_checksum"> Fletcher-32 checksum</a> of the header. mvstore_1139_p=\ When opening the file, both headers are read and the checksum is verified. If both headers are valid, the one with the newer version is used. The chunk with the latest version is then detected (see below for details), and the rest of the metadata is read from there. If the chunk id, block and version are not stored in the file header, then the latest chunk lookup starts with the last chunk in the file. mvstore_1140_h3=Chunk Format mvstore_1141_p=\ There is one chunk per version. Each chunk consists of a header, the pages that were modified in this version, and a footer. The pages contain the actual data of the maps. The pages inside a chunk are stored right after the header, next to each other (unaligned). The size of a chunk is a multiple of the block size. The footer is stored in the last 128 bytes of the chunk. mvstore_1142_p=\ The footer makes it possible to verify that the chunk was completely written (a chunk is written as one write operation), and to find the start position of the very last chunk in the file. The chunk header and footer contain the following data\: mvstore_1143_p=\ The fields of the chunk header and footer are\: mvstore_1144_li=chunk\: The chunk id. mvstore_1145_li=block\: The first block of the chunk (multiply by the block size to get the position in the file). 
mvstore_1146_li=len\: The size of the chunk in number of blocks. mvstore_1147_li=map\: The id of the newest map; incremented when a new map is created. mvstore_1148_li=max\: The sum of all maximum page sizes (see page format). mvstore_1149_li=next\: The predicted start block of the next chunk. mvstore_1150_li=pages\: The number of pages in the chunk. mvstore_1151_li=root\: The position of the metadata root page (see page format). mvstore_1152_li=time\: The time the chunk was written, in milliseconds after the file was created. mvstore_1153_li=version\: The version this chunk represents. mvstore_1154_li=fletcher\: The checksum of the footer. mvstore_1155_p=\ Chunks are never updated in-place. Each chunk contains the pages that were changed in that version (there is one chunk per version, see above), plus all the parent nodes of those pages, recursively, up to the root page. If an entry in a map is changed, removed, or added, then the respective page is copied, modified, and stored in the next chunk, and the number of live pages in the old chunk is decremented. This mechanism is called copy-on-write, and is similar to how the <a href\="http\://en.wikipedia.org/wiki/Btrfs">Btrfs</a> file system works. Chunks without live pages are marked as free, so the space can be re-used by more recent chunks. Because not all chunks are of the same size, there can be a number of free blocks in front of a chunk for some time (until a small chunk is written or the chunks are compacted). There is a <a href\="http\://stackoverflow.com/questions/13650134/after-how-many-seconds-are-file-system-write-buffers-typically-flushed"> delay of 45 seconds</a> (by default) before a free chunk is overwritten, to ensure new versions are persisted first. mvstore_1156_p=\ How the newest chunk is located when opening a store\: The file header contains the position of a recent chunk, but not always the newest one. This is to reduce the number of file header updates. After opening the file, the file headers and the chunk footer of the very last chunk (at the end of the file) are read. From those candidates, the header of the most recent chunk is read. If it contains a "next" pointer (see above), that chunk's header and footer are read as well. If it turns out to be a newer valid chunk, this is repeated until the newest chunk is found. Before writing a chunk, the position of the next chunk is predicted based on the assumption that the next chunk will be of the same size as the current one. When the next chunk is written and the previous prediction turns out to be incorrect, the file header is updated as well. In any case, the file header is updated if the next chain gets longer than 20 hops. mvstore_1157_h3=Page Format mvstore_1158_p=\ Each map is a <a href\="http\://en.wikipedia.org/wiki/B-tree">B-tree</a>, and the map data is stored in (B-tree-) pages. There are leaf pages that contain the key-value pairs of the map, and internal nodes, which only contain keys and pointers to leaf pages. The root of a tree is either a leaf or an internal node. Unlike the file header and the chunk header and footer, the page data is not human-readable. Instead, it is stored as byte arrays, with long (8 bytes), int (4 bytes), short (2 bytes), and <a href\="http\://en.wikipedia.org/wiki/Variable-length_quantity">variable size int and long</a> (1 to 5 / 10 bytes). The page format is\: mvstore_1159_li=length (int)\: Length of the page in bytes. mvstore_1160_li=checksum (short)\: Checksum (chunk id xor offset within the chunk xor page length). 
mvstore_1161_li=mapId (variable size int)\: The id of the map this page belongs to. mvstore_1162_li=len (variable size int)\: The number of keys in the page. mvstore_1163_li=type (byte)\: The page type (0 for leaf page, 1 for internal node; plus 2 if the keys and values are compressed with the LZF algorithm, or plus 6 if the keys and values are compressed with the Deflate algorithm). mvstore_1164_li=children (array of long; internal nodes only)\: The position of the children. mvstore_1165_li=childCounts (array of variable size long; internal nodes only)\: The total number of entries for the given child page. mvstore_1166_li=keys (byte array)\: All keys, stored depending on the data type. mvstore_1167_li=values (byte array; leaf pages only)\: All values, stored depending on the data type. mvstore_1168_p=\ Even though this is not required by the file format, pages are stored in the following order\: For each map, the root page is stored first, then the internal nodes (if there are any), and then the leaf pages. This should speed up reads for media where sequential reads are faster than random access reads. The metadata map is stored at the end of a chunk. mvstore_1169_p=\ Pointers to pages are stored as a long, using a special format\: 26 bits for the chunk id, 32 bits for the offset within the chunk, 5 bits for the length code, 1 bit for the page type (leaf or internal node). The page type is encoded so that when clearing or removing a map, leaf pages don't have to be read (internal nodes do have to be read in order to know where all the pages are; but in a typical B-tree the vast majority of the pages are leaf pages). The absolute file position is not included so that chunks can be moved within the file without having to change page pointers; only the chunk metadata needs to be changed. The length code is a number from 0 to 31, where 0 means the maximum length of the page is 32 bytes, 1 means 48 bytes, 2\: 64, 3\: 96, 4\: 128, 5\: 192, and so on, up to 31, which means longer than 1 MB. That way, reading a page only requires one read operation (except for very large pages). The sum of the maximum length of all pages is stored in the chunk metadata (field "max"), and when a page is marked as removed, the live maximum length is adjusted. This makes it possible to estimate the amount of free space within a block, in addition to the number of free pages. mvstore_1170_p=\ The total number of entries in child pages is kept to allow efficient range counting, lookup by index, and skip operations. The pages form a <a href\="http\://www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtree.html">counted B-tree</a>. mvstore_1171_p=\ Data compression\: The data after the page type is optionally compressed using the LZF algorithm. mvstore_1172_h3=Metadata Map mvstore_1173_p=\ In addition to the user maps, there is one metadata map that contains names and positions of user maps, and chunk metadata. The very last page of a chunk contains the root page of that metadata map. The exact position of this root page is stored in the chunk header. This page (directly or indirectly) points to the root pages of all other maps. The metadata map of a store with a map named "data", and one chunk, contains the following entries\: mvstore_1174_li=chunk.1\: The metadata of chunk 1. This is the same data as the chunk header, plus the number of live pages, and the maximum live length. mvstore_1175_li=map.1\: The metadata of map 1. The entries are\: name, createVersion, and type. 
mvstore_1176_li=name.data\: The map id of the map named "data". The value is "1". mvstore_1177_li=root.1\: The root position of map 1. mvstore_1178_li=setting.storeVersion\: The store version (a user-defined value). mvstore_1179_h2=Similar Projects and Differences to Other Storage Engines mvstore_1180_p=\ Unlike similar storage engines such as LevelDB and Kyoto Cabinet, the MVStore is written in Java and can easily be embedded in Java and Android applications. mvstore_1181_p=\ The MVStore is somewhat similar to the Berkeley DB Java Edition because it is also written in Java, and is also a log structured storage, but the H2 license is more liberal. mvstore_1182_p=\ Like SQLite 3, the MVStore keeps all data in one file. Unlike SQLite 3, the MVStore uses a log structured storage. The plan is to make the MVStore both easier to use and faster than SQLite 3. In a recent (very simple) test, the MVStore was about twice as fast as SQLite 3 on Android. mvstore_1183_p=\ The API of the MVStore is similar to MapDB (previously known as JDBM) from Jan Kotek, and some code is shared between MVStore and MapDB. However, unlike MapDB, the MVStore uses a log structured storage. The MVStore does not have a record size limit. mvstore_1184_h2=Current State mvstore_1185_p=\ The code is still experimental at this stage. The API as well as the behavior may partially change. Features may be added and removed (even though the main features will stay). mvstore_1186_h2=Requirements mvstore_1187_p=\ The MVStore is included in the latest H2 jar file. mvstore_1188_p=\ There are no special requirements to use it. The MVStore should run on any JVM as well as on Android. mvstore_1189_p=\ To build just the MVStore (without the database engine), run\: mvstore_1190_p=\ This will create the file <code>bin/h2mvstore-1.4.196.jar</code> (about 200 KB). performance_1000_h1=Performance performance_1001_a=\ Performance Comparison performance_1002_a=\ PolePosition Benchmark performance_1003_a=\ Database Performance Tuning performance_1004_a=\ Using the Built-In Profiler performance_1005_a=\ Application Profiling performance_1006_a=\ Database Profiling performance_1007_a=\ Statement Execution Plans performance_1008_a=\ How Data is Stored and How Indexes Work performance_1009_a=\ Fast Database Import performance_1010_h2=Performance Comparison performance_1011_p=\ In many cases, H2 is faster than other (open source and not open source) database engines. Please note this is mostly a single-connection benchmark run on one computer, with many very simple operations running against the database. This benchmark does not include very complex queries. The embedded mode of H2 is faster than the client-server mode because the per-statement overhead is greatly reduced. 
performance_1012_h3=Embedded performance_1013_th=Test Case performance_1014_th=Unit performance_1015_th=H2 performance_1016_th=HSQLDB performance_1017_th=Derby performance_1018_td=Simple\: Init performance_1019_td=ms performance_1020_td=1019 performance_1021_td=1907 performance_1022_td=8280 performance_1023_td=Simple\: Query (random) performance_1024_td=ms performance_1025_td=1304 performance_1026_td=873 performance_1027_td=1912 performance_1028_td=Simple\: Query (sequential) performance_1029_td=ms performance_1030_td=835 performance_1031_td=1839 performance_1032_td=5415 performance_1033_td=Simple\: Update (sequential) performance_1034_td=ms performance_1035_td=961 performance_1036_td=2333 performance_1037_td=21759 performance_1038_td=Simple\: Delete (sequential) performance_1039_td=ms performance_1040_td=950 performance_1041_td=1922 performance_1042_td=32016 performance_1043_td=Simple\: Memory Usage performance_1044_td=MB performance_1045_td=21 performance_1046_td=10 performance_1047_td=8 performance_1048_td=BenchA\: Init performance_1049_td=ms performance_1050_td=919 performance_1051_td=2133 performance_1052_td=7528 performance_1053_td=BenchA\: Transactions performance_1054_td=ms performance_1055_td=1219 performance_1056_td=2297 performance_1057_td=8541 performance_1058_td=BenchA\: Memory Usage performance_1059_td=MB performance_1060_td=12 performance_1061_td=15 performance_1062_td=7 performance_1063_td=BenchB\: Init performance_1064_td=ms performance_1065_td=905 performance_1066_td=1993 performance_1067_td=8049 performance_1068_td=BenchB\: Transactions performance_1069_td=ms performance_1070_td=1091 performance_1071_td=583 performance_1072_td=1165 performance_1073_td=BenchB\: Memory Usage performance_1074_td=MB performance_1075_td=17 performance_1076_td=11 performance_1077_td=8 performance_1078_td=BenchC\: Init performance_1079_td=ms performance_1080_td=2491 performance_1081_td=4003 performance_1082_td=8064 performance_1083_td=BenchC\: Transactions performance_1084_td=ms performance_1085_td=1979 performance_1086_td=803 performance_1087_td=2840 performance_1088_td=BenchC\: Memory Usage performance_1089_td=MB performance_1090_td=19 performance_1091_td=22 performance_1092_td=9 performance_1093_td=Executed statements performance_1094_td=\# performance_1095_td=1930995 performance_1096_td=1930995 performance_1097_td=1930995 performance_1098_td=Total time performance_1099_td=ms performance_1100_td=13673 performance_1101_td=20686 performance_1102_td=105569 performance_1103_td=Statements per second performance_1104_td=\# performance_1105_td=141226 performance_1106_td=93347 performance_1107_td=18291 performance_1108_h3=Client-Server performance_1109_th=Test Case performance_1110_th=Unit performance_1111_th=H2 (Server) performance_1112_th=HSQLDB performance_1113_th=Derby performance_1114_th=PostgreSQL performance_1115_th=MySQL performance_1116_td=Simple\: Init performance_1117_td=ms performance_1118_td=16338 performance_1119_td=17198 performance_1120_td=27860 performance_1121_td=30156 performance_1122_td=29409 performance_1123_td=Simple\: Query (random) performance_1124_td=ms performance_1125_td=3399 performance_1126_td=2582 performance_1127_td=6190 performance_1128_td=3315 performance_1129_td=3342 performance_1130_td=Simple\: Query (sequential) performance_1131_td=ms performance_1132_td=21841 performance_1133_td=18699 performance_1134_td=42347 performance_1135_td=30774 performance_1136_td=32611 performance_1137_td=Simple\: Update (sequential) performance_1138_td=ms performance_1139_td=6913 
performance_1140_td=7745 performance_1141_td=28576 performance_1142_td=32698 performance_1143_td=11350 performance_1144_td=Simple\: Delete (sequential) performance_1145_td=ms performance_1146_td=8051 performance_1147_td=9751 performance_1148_td=42202 performance_1149_td=44480 performance_1150_td=16555 performance_1151_td=Simple\: Memory Usage performance_1152_td=MB performance_1153_td=22 performance_1154_td=11 performance_1155_td=9 performance_1156_td=0 performance_1157_td=1 performance_1158_td=BenchA\: Init performance_1159_td=ms performance_1160_td=12996 performance_1161_td=14720 performance_1162_td=24722 performance_1163_td=26375 performance_1164_td=26060 performance_1165_td=BenchA\: Transactions performance_1166_td=ms performance_1167_td=10134 performance_1168_td=10250 performance_1169_td=18452 performance_1170_td=21453 performance_1171_td=15877 performance_1172_td=BenchA\: Memory Usage performance_1173_td=MB performance_1174_td=13 performance_1175_td=15 performance_1176_td=9 performance_1177_td=0 performance_1178_td=1 performance_1179_td=BenchB\: Init performance_1180_td=ms performance_1181_td=15264 performance_1182_td=16889 performance_1183_td=28546 performance_1184_td=31610 performance_1185_td=29747 performance_1186_td=BenchB\: Transactions performance_1187_td=ms performance_1188_td=3017 performance_1189_td=3376 performance_1190_td=1842 performance_1191_td=2771 performance_1192_td=1433 performance_1193_td=BenchB\: Memory Usage performance_1194_td=MB performance_1195_td=17 performance_1196_td=12 performance_1197_td=11 performance_1198_td=1 performance_1199_td=1 performance_1200_td=BenchC\: Init performance_1201_td=ms performance_1202_td=14020 performance_1203_td=10407 performance_1204_td=17655 performance_1205_td=19520 performance_1206_td=17532 performance_1207_td=BenchC\: Transactions performance_1208_td=ms performance_1209_td=5076 performance_1210_td=3160 performance_1211_td=6411 performance_1212_td=6063 performance_1213_td=4530 performance_1214_td=BenchC\: Memory Usage performance_1215_td=MB performance_1216_td=19 performance_1217_td=21 performance_1218_td=11 performance_1219_td=1 performance_1220_td=1 performance_1221_td=Executed statements performance_1222_td=\# performance_1223_td=1930995 performance_1224_td=1930995 performance_1225_td=1930995 performance_1226_td=1930995 performance_1227_td=1930995 performance_1228_td=Total time performance_1229_td=ms performance_1230_td=117049 performance_1231_td=114777 performance_1232_td=244803 performance_1233_td=249215 performance_1234_td=188446 performance_1235_td=Statements per second performance_1236_td=\# performance_1237_td=16497 performance_1238_td=16823 performance_1239_td=7887 performance_1240_td=7748 performance_1241_td=10246 performance_1242_h3=Benchmark Results and Comments performance_1243_h4=H2 performance_1244_p=\ Version 1.4.177 (2014-04-12) was used for the test. For most operations, the performance of H2 is about the same as for HSQLDB. One situation where H2 is slow is large result sets, because they are buffered to disk if more than a certain number of records are returned. The advantage of buffering is\: there is no limit on the result set size. performance_1245_h4=HSQLDB performance_1246_p=\ Version 2.3.2 was used for the test. Cached tables are used in this test (<code>hsqldb.default_table_type\=cached</code>), and the write delay is 1 second (<code>SET WRITE_DELAY 1</code>). performance_1247_h4=Derby performance_1248_p=\ Version 10.10.1.1 was used for the test. 
Derby is clearly the slowest embedded database in this test. This seems to be a structural problem, because all operations are really slow. It will be hard for the developers of Derby to improve the performance to a reasonable level. A few problems have been identified\: leaving autocommit on is a problem for Derby. If it is switched off during the whole test, the results are about 20% better for Derby. Derby calls <code>FileChannel.force(false)</code>, but only twice per log file (not on each commit). Disabling this call improves performance for Derby by about 2%. Unlike H2, Derby does not call <code>FileDescriptor.sync()</code> on each checkpoint. Derby supports a testing mode (system property <code>derby.system.durability\=test</code>) where durability is disabled. According to the documentation, this setting should be used for testing only, as the database may not recover after a crash. Enabling this setting improves performance by a factor of 2.6 (embedded mode) or 1.4 (server mode). Even if enabled, Derby is still less than half as fast as H2 in default mode. performance_1249_h4=PostgreSQL performance_1250_p=\ Version 9.1.5 was used for the test. The following options were changed in <code>postgresql.conf\: fsync \= off, commit_delay \= 1000</code>. PostgreSQL is run in server mode. The memory usage number is incorrect, because only the memory usage of the JDBC driver is measured. performance_1251_h4=MySQL performance_1252_p=\ Version 5.1.65-log was used for the test. MySQL was run with the InnoDB backend. The setting <code>innodb_flush_log_at_trx_commit</code> (found in the <code>my.ini / my.cnf</code> file) was set to 0. Otherwise (and by default), MySQL is slow (around 140 statements per second in this test) because it tries to flush the data to disk for each commit. For small transactions (when autocommit is on) this is really slow. But many use cases use small or relatively small transactions. Unfortunately, this setting is not listed in the configuration wizard, and it is always overwritten when using the wizard. You need to change this setting manually in the file <code>my.ini / my.cnf</code>, and then restart the service. The memory usage number is incorrect, because only the memory usage of the JDBC driver is measured. performance_1253_h4=Firebird performance_1254_p=\ Firebird 1.5 (default installation) was tested, but the results are not published currently. It is possible to run the performance test with the Firebird database, and any information on how to configure Firebird for higher performance is welcome. performance_1255_h4=Why Oracle / MS SQL Server / DB2 are Not Listed performance_1256_p=\ The license of these databases does not allow publishing benchmark results. This doesn't mean that they are fast. They are in fact quite slow, and need a lot of memory. But you will need to test this yourself. SQLite was not tested because the JDBC driver doesn't support transactions. performance_1257_h3=About this Benchmark performance_1258_h4=How to Run performance_1259_p=\ The test was executed as follows\: performance_1260_h4=Separate Process per Database performance_1261_p=\ For each database, a new process is started, to ensure the previous test does not impact the current test. performance_1262_h4=Number of Connections performance_1263_p=\ This is mostly a single-connection benchmark. BenchB uses multiple connections; the other tests use one connection. performance_1264_h4=Real-World Tests performance_1265_p=\ Good benchmarks emulate real-world use cases. 
This benchmark includes 4 test cases\: BenchSimple uses one table and many small updates / deletes. BenchA is similar to the TPC-A test, but single connection / single threaded (see also\: www.tpc.org). BenchB is similar to the TPC-B test, using multiple connections (one thread per connection). BenchC is similar to the TPC-C test, but single connection / single threaded. performance_1266_h4=Comparing Embedded with Server Databases performance_1267_p=\ This is mainly a benchmark for embedded databases (where the application runs in the same virtual machine as the database engine). However, MySQL and PostgreSQL are not Java databases and cannot be embedded into a Java application. For the Java databases, both embedded and server modes are tested. performance_1268_h4=Test Platform performance_1269_p=\ This test is run on Mac OS X 10.6. No virus scanner was used, and disk indexing was disabled. The JVM used is Sun JDK 1.6. performance_1270_h4=Multiple Runs performance_1271_p=\ When a Java benchmark is run first, the code is not fully compiled and therefore runs slower than when running multiple times. A benchmark should always run the same test multiple times and ignore the first run(s). This benchmark runs three times, but only the last run is measured. performance_1272_h4=Memory Usage performance_1273_p=\ It is not enough to measure the time taken; the memory usage is important as well. Performance can be improved by using a bigger cache, but the amount of memory is limited. HSQLDB tables are kept fully in memory by default; this benchmark uses 'disk based' tables for all databases. Unfortunately, it is not so easy to calculate the memory usage of PostgreSQL and MySQL, because they run in a different process than the test. This benchmark currently does not print memory usage of those databases. performance_1274_h4=Delayed Operations performance_1275_p=\ Some databases delay some operations (for example flushing the buffers) until after the benchmark is run. This benchmark waits between each database tested, and each database runs in a different process (sequentially). performance_1276_h4=Transaction Commit / Durability performance_1277_p=\ Durability means a transaction committed to the database will not be lost. Some databases (for example MySQL) try to enforce this by default by calling <code>fsync()</code> to flush the buffers, but most hard drives don't actually flush all data. Calling the method slows down transaction commit a lot, but doesn't always make data durable. When comparing the results, it is important to keep this effect in mind. Many databases suggest 'batching' operations when possible. This benchmark switches off autocommit when loading the data, and calls commit after each 1000 inserts. However, many applications need 'short' transactions at runtime (a commit after each update). This benchmark commits after each update / delete in the simple benchmark, and after each business transaction in the other benchmarks. For databases that support delayed commits, a delay of one second is used. performance_1278_h4=Using Prepared Statements performance_1279_p=\ Wherever possible, the test cases use prepared statements. performance_1280_h4=Currently Not Tested\: Startup Time performance_1281_p=\ The startup time of a database engine is important as well for embedded use. This time is currently not measured. The time needed to create a new database or open an existing database is also not tested. 
Here, one (wrapper) connection is opened at the start, and for each step a new connection is opened and then closed. performance_1282_h2=PolePosition Benchmark performance_1283_p=\ The PolePosition is an open source benchmark. The algorithms are all quite simple. It was developed / sponsored by db4o. This test was not run for a longer time, so please be aware that the results below are for older database versions (H2 version 1.1, HSQLDB 1.8, Java 1.4). performance_1284_th=Test Case performance_1285_th=Unit performance_1286_th=H2 performance_1287_th=HSQLDB performance_1288_th=MySQL performance_1289_td=Melbourne write performance_1290_td=ms performance_1291_td=369 performance_1292_td=249 performance_1293_td=2022 performance_1294_td=Melbourne read performance_1295_td=ms performance_1296_td=47 performance_1297_td=49 performance_1298_td=93 performance_1299_td=Melbourne read_hot performance_1300_td=ms performance_1301_td=24 performance_1302_td=43 performance_1303_td=95 performance_1304_td=Melbourne delete performance_1305_td=ms performance_1306_td=147 performance_1307_td=133 performance_1308_td=176 performance_1309_td=Sepang write performance_1310_td=ms performance_1311_td=965 performance_1312_td=1201 performance_1313_td=3213 performance_1314_td=Sepang read performance_1315_td=ms performance_1316_td=765 performance_1317_td=948 performance_1318_td=3455 performance_1319_td=Sepang read_hot performance_1320_td=ms performance_1321_td=789 performance_1322_td=859 performance_1323_td=3563 performance_1324_td=Sepang delete performance_1325_td=ms performance_1326_td=1384 performance_1327_td=1596 performance_1328_td=6214 performance_1329_td=Bahrain write performance_1330_td=ms performance_1331_td=1186 performance_1332_td=1387 performance_1333_td=6904 performance_1334_td=Bahrain query_indexed_string performance_1335_td=ms performance_1336_td=336 performance_1337_td=170 performance_1338_td=693 performance_1339_td=Bahrain query_string performance_1340_td=ms performance_1341_td=18064 performance_1342_td=39703 performance_1343_td=41243 performance_1344_td=Bahrain query_indexed_int performance_1345_td=ms performance_1346_td=104 performance_1347_td=134 performance_1348_td=678 performance_1349_td=Bahrain update performance_1350_td=ms performance_1351_td=191 performance_1352_td=87 performance_1353_td=159 performance_1354_td=Bahrain delete performance_1355_td=ms performance_1356_td=1215 performance_1357_td=729 performance_1358_td=6812 performance_1359_td=Imola retrieve performance_1360_td=ms performance_1361_td=198 performance_1362_td=194 performance_1363_td=4036 performance_1364_td=Barcelona write performance_1365_td=ms performance_1366_td=413 performance_1367_td=832 performance_1368_td=3191 performance_1369_td=Barcelona read performance_1370_td=ms performance_1371_td=119 performance_1372_td=160 performance_1373_td=1177 performance_1374_td=Barcelona query performance_1375_td=ms performance_1376_td=20 performance_1377_td=5169 performance_1378_td=101 performance_1379_td=Barcelona delete performance_1380_td=ms performance_1381_td=388 performance_1382_td=319 performance_1383_td=3287 performance_1384_td=Total performance_1385_td=ms performance_1386_td=26724 performance_1387_td=53962 performance_1388_td=87112 performance_1389_p=\ There are a few problems with the PolePosition test\: performance_1390_li=\ HSQLDB uses in-memory tables by default while H2 uses persistent tables. 
The HSQLDB version included in PolePosition does not support changing this, so you need to replace <code>poleposition-0.20/lib/hsqldb.jar</code> with a newer version (for example <code>hsqldb-1.8.0.7.jar</code>), and then use the setting <code>hsqldb.connecturl\=jdbc\:hsqldb\:file\:data/hsqldb/dbbench2;hsqldb.default_table_type\=cached;sql.enforce_size\=true</code> in the file <code>Jdbc.properties</code>. performance_1391_li=HSQLDB keeps the database open between tests, while H2 closes the database (losing all the cache). To change that, use the database URL <code>jdbc\:h2\:file\:data/h2/dbbench;DB_CLOSE_DELAY\=-1</code>. performance_1392_li=The amount of cache memory is quite important, especially for the PolePosition test. Unfortunately, the PolePosition test does not take this into account. performance_1393_h2=Database Performance Tuning performance_1394_h3=Keep Connections Open or Use a Connection Pool performance_1395_p=\ If your application opens and closes connections a lot (for example, for each request), you should consider using a <a href\="tutorial.html\#connection_pool">connection pool</a>. Opening a connection using <code>DriverManager.getConnection</code> is especially slow if the database is closed. By default, the database is closed when the last connection is closed. performance_1396_p=\ If you open and close connections a lot but don't want to use a connection pool, consider keeping a 'sentinel' connection open for as long as the application runs, or use delayed database closing. See also <a href\="features.html\#closing_a_database">Closing a database</a>. performance_1397_h3=Use a Modern JVM performance_1398_p=\ Newer JVMs are faster. Upgrading to the latest version of your JVM can provide a "free" boost to performance. Switching from the default Client JVM to the Server JVM using the <code>-server</code> command-line option improves performance at the cost of a slight increase in start-up time. performance_1399_h3=Virus Scanners performance_1400_p=\ Some virus scanners scan files every time they are accessed. It is very important for performance that database files are not scanned for viruses. The database engine never interprets the data stored in the files as programs; that means even if somebody stored a virus in a database file, it would be harmless (as long as the virus is not executed, it cannot spread). Some virus scanners allow excluding files by suffix. Ensure files ending with <code>.db</code> are not scanned. performance_1401_h3=Using the Trace Options performance_1402_p=\ If the performance hot spots are in the database engine, in many cases the performance can be optimized by creating additional indexes, or changing the schema. Sometimes the application does not directly generate the SQL statements, for example if an O/R mapping tool is used. To view the SQL statements and JDBC API calls, you can use the trace options. For more information, see <a href\="features.html\#trace_options">Using the Trace Options</a>. performance_1403_h3=Index Usage performance_1404_p=\ This database uses indexes to improve the performance of <code>SELECT, UPDATE, DELETE</code>. If a column is used in the <code>WHERE</code> clause of a query, and if an index exists on this column, then the index can be used. Multi-column indexes are used if all or the first columns of the index are used. Both equality lookup and range scans are supported. Indexes are used to order result sets, but only if the condition uses the same index or no index at all. 
The results are sorted in memory if required. Indexes are created automatically for primary key and unique constraints. Indexes are also created for foreign key constraints, if required. For other columns, indexes need to be created manually using the <code>CREATE INDEX</code> statement. performance_1405_h3=Index Hints performance_1406_p=\ If you have determined that H2 is not using the optimal index for your query, you can use index hints to force H2 to use specific indexes. performance_1407_p=Only indexes in the list will be used when choosing an index to use on the given table. There is no significance to order in this list. performance_1408_p=\ It is possible that no index in the list is chosen, in which case a full table scan will be used. performance_1409_p=An empty list of index names forces a full table scan to be performed. performance_1410_p=Each index in the list must exist. performance_1411_h3=How Data is Stored Internally performance_1412_p=\ For persistent databases, if a table is created with a single column primary key of type <code>BIGINT, INT, SMALLINT, TINYINT</code>, then the data of the table is organized by this key, as described below. This is sometimes also called a "clustered index" or "index organized table". performance_1413_p=\ H2 internally stores table data and indexes in the form of b-trees. Each b-tree stores entries as a list of unique keys (one or more columns) and data (zero or more columns). The table data is always organized in the form of a "data b-tree" with a single column key of type <code>long</code>. If a single column primary key of type <code>BIGINT, INT, SMALLINT, TINYINT</code> is specified when creating the table (or just after creating the table, but before inserting any rows), then this column is used as the key of the data b-tree. If no primary key has been specified, if the primary key column is of another data type, or if the primary key contains more than one column, then a hidden auto-increment column of type <code>BIGINT</code> is added to the table, which is used as the key for the data b-tree. All other columns of the table are stored within the data area of this data b-tree (except for large <code>BLOB, CLOB</code> columns, which are stored externally). performance_1414_p=\ For each additional index, one new "index b-tree" is created. The key of this b-tree consists of the indexed columns, plus the key of the data b-tree. If a primary key is created after the table has been created, or if the primary key contains multiple columns, or if the primary key is not of the data types listed above, then the primary key is stored in a new index b-tree. performance_1415_h3=Optimizer performance_1416_p=\ This database uses a cost-based optimizer. For simple queries and queries of medium complexity (fewer than 7 tables in the join), the expected cost (running time) of all possible plans is calculated, and the plan with the lowest cost is used. For more complex queries, the algorithm first tries all possible combinations for the first few tables, and the remaining tables are added using a greedy algorithm (this works well for most joins). Afterwards, a genetic algorithm is used to test at most 2000 distinct plans. Only left-deep plans are evaluated. performance_1417_h3=Expression Optimization performance_1418_p=\ After the statement is parsed, all expressions are simplified automatically if possible. Operations are evaluated only once if all parameters are constant. 
Functions are also optimized, but only if the function is constant (always returns the same result for the same parameter values). If the <code>WHERE</code> clause is always false, then the table is not accessed at all. performance_1419_h3=COUNT(*) Optimization performance_1420_p=\ If the query only counts all rows of a table, then the data is not accessed. However, this is only possible if no <code>WHERE</code> clause is used, that means it only works for queries of the form <code>SELECT COUNT(*) FROM table</code>. performance_1421_h3=Updating Optimizer Statistics / Column Selectivity performance_1422_p=\ When executing a query, at most one index per join can be used. If the same table is joined multiple times, for each join only one index is used (the same index could be used for both joins, or each join could use a different index). Example\: for the query <code>SELECT * FROM TEST T1, TEST T2 WHERE T1.NAME\='A' AND T2.ID\=T1.ID</code>, two indexes can be used\: in this case, the index on NAME for T1 and the index on ID for T2. performance_1423_p=\ If a table has multiple indexes, sometimes more than one index could be used. Example\: if there is a table <code>TEST(ID, NAME, FIRSTNAME)</code> and an index on each column, then two indexes could be used for the query <code>SELECT * FROM TEST WHERE NAME\='A' AND FIRSTNAME\='B'</code>\: the index on NAME or the index on FIRSTNAME. It is not possible to use both indexes at the same time. Which index is used depends on the selectivity of the column. The selectivity describes the 'uniqueness' of values in a column. A selectivity of 100 means each value appears only once, and a selectivity of 1 means the same value appears in many or most rows. For the query above, the index on NAME should be used if the table contains more distinct names than first names. performance_1424_p=\ The SQL statement <code>ANALYZE</code> can be used to automatically estimate the selectivity of the columns in the tables. This command should be run from time to time to improve the query plans generated by the optimizer. performance_1425_h3=In-Memory (Hash) Indexes performance_1426_p=\ Using in-memory indexes, especially in-memory hash indexes, can speed up queries and data manipulation. performance_1427_p=In-memory indexes are automatically used for in-memory databases, but can also be created for persistent databases using <code>CREATE MEMORY TABLE</code>. In many cases, the rows themselves will also be kept in memory. Please note this may cause memory problems for large tables. performance_1428_p=\ In-memory hash indexes are backed by a hash table and are usually faster than regular indexes. However, hash indexes only support direct lookups (<code>WHERE ID \= ?</code>) but not range scans (<code>WHERE ID < ?</code>). To use hash indexes, use HASH as in\: <code>CREATE UNIQUE HASH INDEX</code> and <code>CREATE TABLE ...(ID INT PRIMARY KEY HASH,...)</code>. performance_1429_h3=Use Prepared Statements performance_1430_p=\ If possible, use prepared statements with parameters. performance_1431_h3=Prepared Statements and IN(...) performance_1432_p=\ Avoid generating SQL statements with a variable size IN(...) list. Instead, use a prepared statement with an array parameter, as in the example sketched below. performance_1433_h3=Optimization Examples performance_1434_p=\ See <code>src/test/org/h2/samples/optimizations.sql</code> for a few examples of queries that benefit from special optimizations built into the database. 
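A sketch of the IN(...) replacement mentioned above, using H2's TABLE(...) function so that the whole value list is bound as a single array parameter of one prepared statement. The table TEST(ID INT PRIMARY KEY, NAME VARCHAR) and the database URL are assumptions for illustration; the exact syntax may differ between H2 versions:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class InListExample {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:h2:~/test");
            // one prepared statement for any number of values:
            // the IN(...) list is replaced by a join against a table function
            String sql = "SELECT * FROM TABLE(X INT=?) T "
                    + "INNER JOIN TEST ON T.X = TEST.ID";
            PreparedStatement prep = conn.prepareStatement(sql);
            // bind the whole list as one array parameter
            prep.setObject(1, new Object[] { 1, 2, 3 });
            try (ResultSet rs = prep.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("ID"));
                }
            }
            conn.close();
        }
    }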
performance_1435_h3=Cache Size and Type performance_1436_p=\ By default the cache size of H2 is quite small. Consider using a larger cache size, or enable the second level soft reference cache. See also <a href\="features.html\#cache_settings">Cache Settings</a>. performance_1437_h3=Data Types performance_1438_p=\ Each data type has different storage and performance characteristics\: performance_1439_li=The <code>DECIMAL/NUMERIC</code> type is slower and requires more storage than the <code>REAL</code> and <code>DOUBLE</code> types. performance_1440_li=Text types are slower to read, write, and compare than numeric types and generally require more storage. performance_1441_li=See <a href\="advanced.html\#large_objects">Large Objects</a> for information on <code>BINARY</code> vs. <code>BLOB</code> and <code>VARCHAR</code> vs. <code>CLOB</code> performance. performance_1442_li=Parsing and formatting takes longer for the <code>TIME</code>, <code>DATE</code>, and <code>TIMESTAMP</code> types than the numeric types. performance_1443_code=SMALLINT/TINYINT/BOOLEAN performance_1444_li=\ are not significantly smaller or faster to work with than <code>INTEGER</code> in most modes. performance_1445_h3=Sorted Insert Optimization performance_1446_p=\ To reduce disk space usage and speed up table creation, an optimization for sorted inserts is available. When used, b-tree pages are split at the insertion point. To use this optimization, add <code>SORTED</code> before the <code>SELECT</code> statement\: performance_1447_h2=Using the Built-In Profiler performance_1448_p=\ A very simple Java profiler is built-in. To use it, use the following template\: performance_1449_h2=Application Profiling performance_1450_h3=Analyze First performance_1451_p=\ Before trying to optimize performance, it is important to understand where the problem is (what part of the application is slow). Blind optimization or optimization based on guesses should be avoided, because usually it is not an efficient strategy. There are various ways to analyze an application. Sometimes two implementations can be compared using <code>System.currentTimeMillis()</code>. But this does not work for complex applications with many modules, and for memory problems. performance_1452_p=\ A simple way to profile an application is to use the built-in profiling tool of java. Example\: performance_1453_p=\ Unfortunately, it is only possible to profile the application from start to end. Another solution is to create a number of full thread dumps. To do that, first run <code>jps -l</code> to get the process id, and then run <code>jstack <pid></code> or <code>kill -QUIT <pid></code> (Linux) or press Ctrl+C (Windows). performance_1454_p=\ A simple profiling tool is included in H2. To use it, the application needs to be changed slightly. Example\: performance_1455_p=\ The profiler is built into the H2 Console tool, to analyze databases that open slowly. To use it, run the H2 Console, and then click on 'Test Connection'. Afterwards, click on "Test successful" and you get the most common stack traces, which helps to find out why it took so long to connect. You will only get the stack traces if opening the database took more than a few seconds. performance_1456_h2=Database Profiling performance_1457_p=\ The <code>ConvertTraceFile</code> tool generates SQL statement statistics at the end of the SQL script file. The format used is similar to the profiling data generated when using <code>java -Xrunhprof</code>. 
For this to work, the trace level needs to be 2 or higher (<code>TRACE_LEVEL_FILE\=2</code>). The easiest way to set the trace level is to append the setting to the database URL, for example\: <code>jdbc\:h2\:~/test;TRACE_LEVEL_FILE\=2</code> or <code>jdbc\:h2\:tcp\://localhost/~/test;TRACE_LEVEL_FILE\=2</code>. As an example, execute the following script using the H2 Console\: performance_1458_p=\ After running the test case, convert the <code>.trace.db</code> file using the <code>ConvertTraceFile</code> tool. The trace file is located in the same directory as the database file. performance_1459_p=\ The generated file <code>test.sql</code> will contain the SQL statements as well as the following profiling data (results vary)\: performance_1460_h2=Statement Execution Plans performance_1461_p=\ The SQL statement <code>EXPLAIN</code> displays the indexes and optimizations the database uses for a statement. The following statements support <code>EXPLAIN</code>\: <code>SELECT, UPDATE, DELETE, MERGE, INSERT</code>. The following query shows that the database uses the primary key index to search for rows\: performance_1462_p=\ For joins, the tables in the execution plan are sorted in the order they are processed. The following query shows the database first processes the table <code>INVOICE</code> (using the primary key). For each row, it will additionally check that the value of the column <code>AMOUNT</code> is larger than zero, and for those rows the database will search in the table <code>CUSTOMER</code> (using the primary key). The query plan contains some redundancy so that it is a valid statement. performance_1463_h3=Displaying the Scan Count performance_1464_code=EXPLAIN ANALYZE performance_1465_p=\ additionally shows the scanned rows per table and pages read from disk per table or index. This will actually execute the query, unlike <code>EXPLAIN</code> which only prepares it. The following query scanned 1000 rows, and to do that had to read 85 pages from the data area of the table. Running the query twice will not list the pages read from disk, because they are now in the cache. The <code>tableScan</code> entry means this query doesn't use an index. performance_1466_p=\ The cache prevents the pages from being read twice. H2 reads all columns of the row unless only the columns in the index are read. The exception is large CLOB and BLOB values, which are not stored in the table. performance_1467_h3=Special Optimizations performance_1468_p=\ For certain queries, the database doesn't need to read all rows, or doesn't need to sort the result even if <code>ORDER BY</code> is used. performance_1469_p=\ For queries of the form <code>SELECT COUNT(*), MIN(ID), MAX(ID) FROM TEST</code>, the query plan includes the line <code>/* direct lookup */</code> if the data can be read from an index. performance_1470_p=\ For queries of the form <code>SELECT DISTINCT CUSTOMER_ID FROM INVOICE</code>, the query plan includes the line <code>/* distinct */</code> if there is a non-unique or multi-column index on this column, and if this column has a low selectivity. performance_1471_p=\ For queries of the form <code>SELECT * FROM TEST ORDER BY ID</code>, the query plan includes the line <code>/* index sorted */</code> to indicate there is no separate sorting required. performance_1472_p=\ For queries of the form <code>SELECT * FROM TEST GROUP BY ID ORDER BY ID</code>, the query plan includes the line <code>/* group sorted */</code> to indicate there is no separate sorting required. 
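As a small illustration of reading an execution plan from application code (a minimal sketch; the table TEST with a primary key on ID and the database URL are assumptions, and the exact plan text varies by H2 version):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExplainExample {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection("jdbc:h2:~/test");
            try (Statement stat = conn.createStatement()) {
                // EXPLAIN only prepares the statement and returns the plan;
                // EXPLAIN ANALYZE would also execute it and report scan counts
                ResultSet rs = stat.executeQuery(
                        "EXPLAIN SELECT * FROM TEST WHERE ID = 1");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
            conn.close();
        }
    }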
performance_1473_h2=How Data is Stored and How Indexes Work performance_1474_p=\ Internally, each row in a table is identified by a unique number, the row id. The rows of a table are stored with the row id as the key. The row id is a number of type long. If a table has a single column primary key of type <code>INT</code> or <code>BIGINT</code>, then the value of this column is the row id; otherwise, the database generates the row id automatically. There is a (non-standard) way to access the row id\: using the <code>_ROWID_</code> pseudo-column\: performance_1475_p=\ The data is stored in the database as follows\: performance_1476_th=_ROWID_ performance_1477_th=FIRST_NAME performance_1478_th=NAME performance_1479_th=CITY performance_1480_th=PHONE performance_1481_td=1 performance_1482_td=John performance_1483_td=Miller performance_1484_td=Berne performance_1485_td=123 456 789 performance_1486_td=2 performance_1487_td=Philip performance_1488_td=Jones performance_1489_td=Berne performance_1490_td=123 012 345 performance_1491_p=\ Access by row id is fast because the data is sorted by this key. Please note the row id is not available until after the row has been added (that means it cannot be used in computed columns or constraints). If the query condition does not contain the row id (and if no other index can be used), then all rows of the table are scanned. A table scan iterates over all rows in the table, in the order of the row id. To find out what strategy the database uses to retrieve the data, use <code>EXPLAIN SELECT</code>\: performance_1492_h3=Indexes performance_1493_p=\ Internally, an index is basically just a table that contains the indexed column(s), plus the row id\: performance_1494_p=\ In the index, the data is sorted by the indexed columns. So this index contains the following data\: performance_1495_th=CITY performance_1496_th=NAME performance_1497_th=FIRST_NAME performance_1498_th=_ROWID_ performance_1499_td=Berne performance_1500_td=Jones performance_1501_td=Philip performance_1502_td=2 performance_1503_td=Berne performance_1504_td=Miller performance_1505_td=John performance_1506_td=1 performance_1507_p=\ When the database uses an index to query the data, it searches the index for the given data, and (if required) reads the remaining columns in the main data table (retrieved using the row id). An index on city, name, and first name (multi-column index) makes it possible to quickly search for rows when the city, name, and first name are known. If only the city and name, or only the city is known, then this index is also used (so creating an additional index on just the city is not needed). This index is also used when reading all rows, sorted by the indexed columns. However, if only the first name is known, then this index is not used\: performance_1508_p=\ If your application often queries the table for a phone number, then it makes sense to create an additional index on it\: performance_1509_p=\ This index contains the phone number, and the row id\: performance_1510_th=PHONE performance_1511_th=_ROWID_ performance_1512_td=123 012 345 performance_1513_td=2 performance_1514_td=123 456 789 performance_1515_td=1 performance_1516_h3=Using Multiple Indexes performance_1517_p=\ Within a query, only one index per logical table is used. Using the condition <code>PHONE \= '123 567 789' OR CITY \= 'Berne'</code> would use a table scan instead of first using the index on the phone number and then the index on the city. It makes sense to write two queries and combine them using <code>UNION</code>. 
In this case, each individual query uses a different index\: performance_1518_h2=Fast Database Import performance_1519_p=\ To speed up large imports, consider using the following options temporarily\: performance_1520_code=SET LOG 0 performance_1521_li=\ (disable the transaction log) performance_1522_code=SET CACHE_SIZE performance_1523_li=\ (a large cache is faster) performance_1524_code=SET LOCK_MODE 0 performance_1525_li=\ (disable locking) performance_1526_code=SET UNDO_LOG 0 performance_1527_li=\ (disable the session undo log) performance_1528_p=\ These options can be set in the database URL\: <code>jdbc\:h2\:~/test;LOG\=0;CACHE_SIZE\=65536;LOCK_MODE\=0;UNDO_LOG\=0</code>. Most of those options are not recommended for regular use; that means you need to reset them after use. performance_1529_p=\ If you have to import a lot of rows, use a PreparedStatement or use CSV import. Please note that <code>CREATE TABLE(...) ... AS SELECT ...</code> is faster than <code>CREATE TABLE(...); INSERT INTO ... SELECT ...</code>. quickstart_1000_h1=Quickstart quickstart_1001_a=\ Embedding H2 in an Application quickstart_1002_a=\ The H2 Console Application quickstart_1003_h2=Embedding H2 in an Application quickstart_1004_p=\ This database can be used in embedded mode, or in server mode. To use it in embedded mode, you need to\: quickstart_1005_li=Add the <code>h2*.jar</code> to the classpath (H2 does not have any dependencies) quickstart_1006_li=Use the JDBC driver class\: <code>org.h2.Driver</code> quickstart_1007_li=The database URL <code>jdbc\:h2\:~/test</code> opens the database <code>test</code> in your user home directory quickstart_1008_li=A new database is automatically created (a minimal code sketch follows at the end of this Quickstart) quickstart_1009_h2=The H2 Console Application quickstart_1010_p=\ The Console lets you access a SQL database using a browser interface. quickstart_1011_p=\ If you don't have Windows XP, or if something does not work as expected, please see the detailed description in the <a href\="tutorial.html">Tutorial</a>. quickstart_1012_h3=Step-by-Step quickstart_1013_h4=Installation quickstart_1014_p=\ Install the software using the Windows Installer (if you have not done so yet). quickstart_1015_h4=Start the Console quickstart_1016_p=\ Click [Start], [All Programs], [H2], and [H2 Console (Command Line)]\: quickstart_1017_p=\ A new console window appears\: quickstart_1018_p=\ Also, a new browser page should open with the URL <a href\="http\://localhost\:8082" class\="notranslate">http\://localhost\:8082</a>. You may get a security warning from the firewall. If you don't want other computers in the network to access the database on your machine, you can let the firewall block these connections. Only local connections are required at this time. quickstart_1019_h4=Login quickstart_1020_p=\ Select [Generic H2] and click [Connect]\: quickstart_1021_p=\ You are now logged in. quickstart_1022_h4=Sample quickstart_1023_p=\ Click on the [Sample SQL Script]\: quickstart_1024_p=\ The SQL commands appear in the command area. quickstart_1025_h4=Execute quickstart_1026_p=\ Click [Run] quickstart_1027_p=\ On the left side, a new entry TEST is added below the database icon. The operations and results of the statements are shown below the script. quickstart_1028_h4=Disconnect quickstart_1029_p=\ Click on [Disconnect]\: quickstart_1030_p=\ to close the connection. quickstart_1031_h4=End quickstart_1032_p=\ Close the console window. For more information, see the <a href\="tutorial.html">Tutorial</a>.
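To complement the embedded-mode steps listed at the start of this Quickstart, the following is a minimal sketch of opening the database <code>jdbc\:h2\:~/test</code> from plain JDBC. The table name GREETING and the credentials (user <code>sa</code> with an empty password) are only illustrative; the only assumption is that the <code>h2*.jar</code> is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EmbeddedExample {
        public static void main(String... args) throws Exception {
            // Opens (or creates) the database 'test' in the user home directory
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
                    Statement stat = conn.createStatement()) {
                stat.execute("CREATE TABLE IF NOT EXISTS GREETING(ID INT PRIMARY KEY, MESSAGE VARCHAR(255))");
                // MERGE updates the row if a row with this primary key already exists
                stat.execute("MERGE INTO GREETING VALUES(1, 'Hello World')");
                try (ResultSet rs = stat.executeQuery("SELECT MESSAGE FROM GREETING")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("MESSAGE"));
                    }
                }
            }
        }
    }

With JDBC 4 and later the driver is normally loaded automatically through the service provider mechanism, so an explicit <code>Class.forName("org.h2.Driver")</code> call is usually not required.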
roadmap_1000_h1=Roadmap roadmap_1001_p=\ New (feature) requests will usually be added at the very end of the list. The priority is increased for important and popular requests. Of course, patches are always welcome, but are not always applied as is. See also <a href\="build.html\#providing_patches">Providing Patches</a>. roadmap_1002_h2=Version 1.5.x\: Planned Changes roadmap_1003_li=Replace file password hash with file encryption key; validate encryption key when connecting. roadmap_1004_li=Remove "set binary collation" feature. roadmap_1005_li=Remove the encryption algorithm XTEA. roadmap_1006_li=Disallow referencing other tables in a table (via constraints for example). roadmap_1007_li=Remove PageStore features like compress_lob. roadmap_1008_h2=Version 1.4.x\: Planned Changes roadmap_1009_li=Change license to MPL 2.0. roadmap_1010_li=Automatic migration from 1.3 databases to 1.4. roadmap_1011_li=Option to disable the file name suffix somehow (issue 447). roadmap_1012_h2=Priority 1 roadmap_1013_li=Bugfixes. roadmap_1014_li=More tests with MULTI_THREADED\=1 (and MULTI_THREADED with MVCC)\: Online backup (using the 'backup' statement). roadmap_1015_li=Server side cursors. roadmap_1016_h2=Priority 2 roadmap_1017_li=Support hints for the optimizer (which index to use, enforce the join order). roadmap_1018_li=Full outer joins. roadmap_1019_li=Access rights\: remember the owner of an object. Create, alter and drop privileges. COMMENT\: allow owner of object to change it. Issue 208\: Access rights for schemas. roadmap_1020_li=Test multi-threaded in-memory db access. roadmap_1021_li=MySQL, MS SQL Server compatibility\: support case sensitive (mixed case) identifiers without quotes. roadmap_1022_li=Support GRANT SELECT, UPDATE ON [schemaName.] *. roadmap_1023_li=Migrate database tool (also from other database engines). For Oracle, maybe use DBMS_METADATA.GET_DDL / GET_DEPENDENT_DDL. roadmap_1024_li=Clustering\: support mixed clustering mode (one embedded, others in server mode). roadmap_1025_li=Clustering\: reads should be randomly distributed (optional) or to a designated database on RAM (parameter\: READ_FROM\=3). roadmap_1026_li=Window functions\: RANK() and DENSE_RANK(), partition using OVER(). select *, count(*) over() as fullCount from ... limit 4; roadmap_1027_li=PostgreSQL catalog\: use BEFORE SELECT triggers instead of views over metadata tables. roadmap_1028_li=Compatibility\: automatically load functions from a script depending on the mode - see FunctionsMySQL.java, issue 211. roadmap_1029_li=Test very large databases and LOBs (up to 256 GB). roadmap_1030_li=Store all temp files in the temp directory. roadmap_1031_li=Don't use temp files, specially not deleteOnExit (bug 4513817\: File.deleteOnExit consumes memory). Also to allow opening client / server (remote) connections when using LOBs. roadmap_1032_li=Make DDL (Data Definition) operations transactional. roadmap_1033_li=Deferred integrity checking (DEFERRABLE INITIALLY DEFERRED). roadmap_1034_li=Groovy Stored Procedures\: http\://groovy.codehaus.org/GSQL roadmap_1035_li=Add a migration guide (list differences between databases). roadmap_1036_li=Optimization\: automatic index creation suggestion using the trace file? roadmap_1037_li=Fulltext search Lucene\: analyzer configuration, mergeFactor. roadmap_1038_li=Compression performance\: don't allocate buffers, compress / expand in to out buffer. roadmap_1039_li=Rebuild index functionality to shrink index size and improve performance. 
roadmap_1040_li=Console\: add accesskey to most important commands (A, AREA, BUTTON, INPUT, LABEL, LEGEND, TEXTAREA). roadmap_1041_li=Test performance again with SQL Server, Oracle, DB2. roadmap_1042_li=Test with Spatial DB in a box / JTS\: http\://www.opengeospatial.org/standards/sfs - OpenGIS Implementation Specification. roadmap_1043_li=Write more tests and documentation for MVCC (Multi Version Concurrency Control). roadmap_1044_li=Find a tool to view large text file (larger than 100 MB), with find, page up and down (like less), truncate before / after. roadmap_1045_li=Implement, test, document XAConnection and so on. roadmap_1046_li=Pluggable data type (for streaming, hashing, compression, validation, conversion, encryption). roadmap_1047_li=CHECK\: find out what makes CHECK\=TRUE slow, move to CHECK2. roadmap_1048_li=Drop with invalidate views (so that source code is not lost). Check what other databases do exactly. roadmap_1049_li=Index usage for (ID, NAME)\=(1, 'Hi'); document. roadmap_1050_li=Set a connection read only (Connection.setReadOnly) or using a connection parameter. roadmap_1051_li=Access rights\: finer grained access control (grant access for specific functions). roadmap_1052_li=ROW_NUMBER() OVER([PARTITION BY columnName][ORDER BY columnName]). roadmap_1053_li=Version check\: docs / web console (using Javascript), and maybe in the library (using TCP/IP). roadmap_1054_li=Web server classloader\: override findResource / getResourceFrom. roadmap_1055_li=Cost for embedded temporary view is calculated wrong, if result is constant. roadmap_1056_li=Count index range query (count(*) where id between 10 and 20). roadmap_1057_li=Performance\: update in-place. roadmap_1058_li=Clustering\: when a database is back alive, automatically synchronize with the master (requires readable transaction log). roadmap_1059_li=Database file name suffix\: a way to use no or a different suffix (for example using a slash). roadmap_1060_li=Eclipse plugin. roadmap_1061_li=Asynchronous queries to support publish/subscribe\: SELECT ... FOR READ WAIT [maxMillisToWait]. See also MS SQL Server "Query Notification". roadmap_1062_li=Fulltext search (native)\: reader / tokenizer / filter. roadmap_1063_li=Linked schema using CSV files\: one schema for a directory of files; support indexes for CSV files. roadmap_1064_li=iReport to support H2. roadmap_1065_li=Include SMTP (mail) client (alert on cluster failure, low disk space,...). roadmap_1066_li=Option for SCRIPT to only process one or a set of schemas or tables, and append to a file. roadmap_1067_li=JSON parser and functions. roadmap_1068_li=Copy database\: tool with config GUI and batch mode, extensible (example\: compare). roadmap_1069_li=Document, implement tool for long running transactions using user-defined compensation statements. roadmap_1070_li=Support SET TABLE DUAL READONLY. roadmap_1071_li=GCJ\: what is the state now? roadmap_1072_li=Events for\: database Startup, Connections, Login attempts, Disconnections, Prepare (after parsing), Web Server. See http\://docs.openlinksw.com/virtuoso/fn_dbev_startup.html roadmap_1073_li=Optimization\: simpler log compression. 
roadmap_1074_li=Support standard INFORMATION_SCHEMA tables, as defined in http\://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt - specially KEY_COLUMN_USAGE\: http\://dev.mysql.com/doc/refman/5.0/en/information-schema.html, http\://www.xcdsql.org/Misc/INFORMATION_SCHEMA%20With%20Rolenames.gif roadmap_1075_li=Compatibility\: in MySQL, HSQLDB, /0.0 is NULL; in PostgreSQL, Derby\: division by zero. HSQLDB\: 0.0e1 / 0.0e1 is NaN. roadmap_1076_li=Functional tables should accept parameters from other tables (see FunctionMultiReturn) SELECT * FROM TEST T, P2C(T.A, T.R). roadmap_1077_li=Custom class loader to reload functions on demand. roadmap_1078_li=Test http\://mysql-je.sourceforge.net/ roadmap_1079_li=H2 Console\: the webclient could support more features like phpMyAdmin. roadmap_1080_li=Support Oracle functions\: TO_NUMBER. roadmap_1081_li=Work on the Java to C converter. roadmap_1082_li=The HELP information schema can be directly exposed in the Console. roadmap_1083_li=Maybe use the 0x1234 notation for binary fields, see MS SQL Server. roadmap_1084_li=Support Oracle CONNECT BY in some way\: http\://www.adp-gmbh.ch/ora/sql/connect_by.html http\://philip.greenspun.com/sql/trees.html roadmap_1085_li=SQL Server 2005, Oracle\: support COUNT(*) OVER(). See http\://www.orafusion.com/art_anlytc.htm roadmap_1086_li=SQL 2003\: http\://www.wiscorp.com/sql_2003_standard.zip roadmap_1087_li=Version column (number/sequence and timestamp based). roadmap_1088_li=Optimize getGeneratedKey\: send last identity after each execute (server). roadmap_1089_li=Test and document UPDATE TEST SET (ID, NAME) \= (SELECT ID*10, NAME || '\!' FROM TEST T WHERE T.ID\=TEST.ID). roadmap_1090_li=Max memory rows / max undo log size\: use block count / row size not row count. roadmap_1091_li=Implement point-in-time recovery. roadmap_1092_li=Support PL/SQL (programming language / control flow statements). roadmap_1093_li=LIKE\: improved version for larger texts (currently using naive search). roadmap_1094_li=Throw an exception when the application calls getInt on a Long (optional). roadmap_1095_li=Default date format for input and output (local date constants). roadmap_1096_li=Document ROWNUM usage for reports\: SELECT ROWNUM, * FROM (subquery). roadmap_1097_li=File system that writes to two file systems (replication, replicating file system). roadmap_1098_li=Standalone tool to get relevant system properties and add it to the trace output. roadmap_1099_li=Support 'call proc(1\=value)' (PostgreSQL, Oracle). roadmap_1100_li=Console\: improve editing data (Tab, Shift-Tab, Enter, Up, Down, Shift+Del?). roadmap_1101_li=Console\: autocomplete Ctrl+Space inserts template. roadmap_1102_li=Option to encrypt .trace.db file. roadmap_1103_li=Auto-Update feature for database, .jar file. roadmap_1104_li=ResultSet SimpleResultSet.readFromURL(String url)\: id varchar, state varchar, released timestamp. roadmap_1105_li=Partial indexing (see PostgreSQL). roadmap_1106_li=Add GUI to build a custom version (embedded, fulltext,...) using build flags. roadmap_1107_li=http\://rubyforge.org/projects/hypersonic/ roadmap_1108_li=Add a sample application that runs the H2 unit test and writes the result to a file (so it can be included in the user app). roadmap_1109_li=Table order\: ALTER TABLE TEST ORDER BY NAME DESC (MySQL compatibility). roadmap_1110_li=Backup tool should work with other databases as well. roadmap_1111_li=Console\: -ifExists doesn't work for the console. Add a flag to disable other dbs. 
roadmap_1112_li=Check if 'FSUTIL behavior set disablelastaccess 1' improves the performance (fsutil behavior query disablelastaccess). roadmap_1113_li=Java static code analysis\: http\://pmd.sourceforge.net/ roadmap_1114_li=Java static code analysis\: http\://www.eclipse.org/tptp/ roadmap_1115_li=Compatibility for CREATE SCHEMA AUTHORIZATION. roadmap_1116_li=Implement Clob / Blob truncate and the remaining functionality. roadmap_1117_li=Add multiple columns at the same time with ALTER TABLE .. ADD .. ADD ... roadmap_1118_li=File locking\: writing a system property to detect concurrent access from the same VM (different classloaders). roadmap_1119_li=Pure SQL triggers (example\: update parent table if the child table is changed). roadmap_1120_li=Add H2 to Gem (Ruby install system). roadmap_1121_li=Support linked JCR tables. roadmap_1122_li=Native fulltext search\: min word length; store word positions. roadmap_1123_li=Add an option to the SCRIPT command to generate only portable / standard SQL. roadmap_1124_li=Updatable views\: create 'instead of' triggers automatically if possible (simple cases first). roadmap_1125_li=Improve create index performance. roadmap_1126_li=Compact databases without having to close the database (vacuum). roadmap_1127_li=Implement more JDBC 4.0 features. roadmap_1128_li=Support TRANSFORM / PIVOT as in MS Access. roadmap_1129_li=SELECT * FROM (VALUES (...), (...), ....) AS alias(f1, ...). roadmap_1130_li=Support updatable views with join on primary keys (to extend a table). roadmap_1131_li=Public interface for functions (not public static). roadmap_1132_li=Support reading the transaction log. roadmap_1133_li=Feature matrix as in <a href\="http\://www.inetsoftware.de/products/jdbc/mssql/features/default.asp">i-net software</a>. roadmap_1134_li=Updatable result set on table without primary key or unique index. roadmap_1135_li=Compatibility with Derby and PostgreSQL\: VALUES(1), (2); SELECT * FROM (VALUES (1), (2)) AS myTable(c1). Issue 221. roadmap_1136_li=Allow execution time prepare for SELECT * FROM CSVREAD(?, 'columnNameString') roadmap_1137_li=Support data type INTERVAL roadmap_1138_li=Support nested transactions (possibly using savepoints internally). roadmap_1139_li=Add a benchmark for bigger databases, and one for many users. roadmap_1140_li=Compression in the result set over TCP/IP. roadmap_1141_li=Support curtimestamp (like curtime, curdate). roadmap_1142_li=Support ANALYZE {TABLE|INDEX} tableName COMPUTE|ESTIMATE|DELETE STATISTICS ptnOption options. roadmap_1143_li=Release locks (shared or exclusive) on demand roadmap_1144_li=Support OUTER UNION roadmap_1145_li=Support parameterized views (similar to CSVREAD, but using just SQL for the definition) roadmap_1146_li=A way (JDBC driver) to map an URL (jdbc\:h2map\:c1) to a connection object roadmap_1147_li=Support dynamic linked schema (automatically adding/updating/removing tables) roadmap_1148_li=Clustering\: adding a node should be very fast and without interrupting clients (very short lock) roadmap_1149_li=Compatibility\: \# is the start of a single line comment (MySQL) but date quote (Access). Mode specific roadmap_1150_li=Run benchmarks with Android, Java 7, java -server roadmap_1151_li=Optimizations\: faster hash function for strings. 
roadmap_1152_li=DatabaseEventListener\: callback for all operations (including expected time, RUNSCRIPT) and cancel functionality roadmap_1153_li=Benchmark\: add a graph to show how databases scale (performance/database size) roadmap_1154_li=Implement a SQLData interface to map your data over to a custom object roadmap_1155_li=In the MySQL and PostgreSQL mode, use lower case identifiers by default (DatabaseMetaData.storesLowerCaseIdentifiers \= true) roadmap_1156_li=Support multiple directories (on different hard drives) for the same database roadmap_1157_li=Server protocol\: use challenge response authentication, but client sends hash(user+password) encrypted with response roadmap_1158_li=Support EXEC[UTE] (doesn't return a result set, compatible to MS SQL Server) roadmap_1159_li=Support native XML data type - see http\://en.wikipedia.org/wiki/SQL/XML roadmap_1160_li=Support triggers with a string property or option\: SpringTrigger, OSGITrigger roadmap_1161_li=MySQL compatibility\: update test1 t1, test2 t2 set t1.id \= t2.id where t1.id \= t2.id; roadmap_1162_li=Ability to resize the cache array when resizing the cache roadmap_1163_li=Time based cache writing (one second after writing the log) roadmap_1164_li=Check state of H2 driver for DDLUtils\: http\://issues.apache.org/jira/browse/DDLUTILS-185 roadmap_1165_li=Index usage for REGEXP LIKE. roadmap_1166_li=Compatibility\: add a role DBA (like ADMIN). roadmap_1167_li=Better support multiple processors for in-memory databases. roadmap_1168_li=Support N'text' roadmap_1169_li=Support compatibility for jdbc\:hsqldb\:res\: roadmap_1170_li=HSQLDB compatibility\: automatically convert to the next 'higher' data type. Example\: cast(2000000000 as int) + cast(2000000000 as int); (HSQLDB\: long; PostgreSQL\: integer out of range) roadmap_1171_li=Provide an Java SQL builder with standard and H2 syntax roadmap_1172_li=Trace\: write OS, file system, JVM,... when opening the database roadmap_1173_li=Support indexes for views (probably requires materialized views) roadmap_1174_li=Document SET SEARCH_PATH, BEGIN, EXECUTE, parameters roadmap_1175_li=Server\: use one listener (detect if the request comes from an PG or TCP client) roadmap_1176_li=Optimize SELECT MIN(ID), MAX(ID), COUNT(*) FROM TEST WHERE ID BETWEEN 100 AND 200 roadmap_1177_li=Sequence\: PostgreSQL compatibility (rename, create) http\://www.postgresql.org/docs/8.2/static/sql-altersequence.html roadmap_1178_li=DISTINCT\: support large result sets by sorting on all columns (additionally) and then removing duplicates. roadmap_1179_li=Support a special trigger on all tables to allow building a transaction log reader. roadmap_1180_li=File system with a background writer thread; test if this is faster roadmap_1181_li=Better document the source code (high level documentation). roadmap_1182_li=Support select * from dual a left join dual b on b.x\=(select max(x) from dual) roadmap_1183_li=Optimization\: don't lock when the database is read-only roadmap_1184_li=Issue 146\: Support merge join. roadmap_1185_li=Integrate spatial functions from http\://geosysin.iict.ch/irstv-trac/wiki/H2spatial/Download roadmap_1186_li=Cluster\: hot deploy (adding a node at runtime). roadmap_1187_li=Support DatabaseMetaData.insertsAreDetected\: updatable result sets should detect inserts. roadmap_1188_li=Oracle\: support DECODE method (convert to CASE WHEN). 
roadmap_1189_li=Native search\: support "phrase search", wildcard search (* and ?), case-insensitive search, boolean operators, and grouping roadmap_1190_li=Improve documentation of access rights. roadmap_1191_li=Support opening a database that is in the classpath, maybe using a new file system. Workaround\: detect jar file using getClass().getProtectionDomain().getCodeSource().getLocation(). roadmap_1192_li=Support ENUM data type (see MySQL, PostgreSQL, MS SQL Server, maybe others). roadmap_1193_li=Remember the user defined data type (domain) of a column. roadmap_1194_li=MVCC\: support multi-threaded kernel with multi-version concurrency. roadmap_1195_li=Auto-server\: add option to define the port range or list. roadmap_1196_li=Support Jackcess (MS Access databases) roadmap_1197_li=Built-in methods to write large objects (BLOB and CLOB)\: FILE_WRITE('test.txt', 'Hello World') roadmap_1198_li=Improve time to open large databases (see mail 'init time for distributed setup') roadmap_1199_li=Move Maven 2 repository from hsql.sf.net to h2database.sf.net roadmap_1200_li=Java 1.5 tool\: JdbcUtils.closeSilently(s1, s2,...) roadmap_1201_li=Optimize A\=? OR B\=? to UNION if the cost is lower. roadmap_1202_li=Javadoc\: document design patterns used roadmap_1203_li=Support custom collators, for example for natural sort (for text that contains numbers). roadmap_1204_li=Write an article about SQLInjection (h2/src/docsrc/html/images/SQLInjection.txt) roadmap_1205_li=Convert SQL-injection-2.txt to html document, include SQLInjection.java sample roadmap_1206_li=Support OUT parameters in user-defined procedures. roadmap_1207_li=Web site design\: http\://www.igniterealtime.org/projects/openfire/index.jsp roadmap_1208_li=HSQLDB compatibility\: Openfire server uses\: CREATE SCHEMA PUBLIC AUTHORIZATION DBA; CREATE USER SA PASSWORD ""; GRANT DBA TO SA; SET SCHEMA PUBLIC roadmap_1209_li=Translation\: use ?? in help.csv roadmap_1210_li=Translated .pdf roadmap_1211_li=Recovery tool\: bad blocks should be converted to INSERT INTO SYSTEM_ERRORS(...), and things should go into the .trace.db file roadmap_1212_li=Issue 357\: support getGeneratedKeys to return multiple rows when used with batch updates. This is supported by MySQL, but not Derby. Both PostgreSQL and HSQLDB don't support getGeneratedKeys. Also support it when using INSERT ... SELECT. roadmap_1213_li=RECOVER\=2 to backup the database, run recovery, open the database roadmap_1214_li=Recovery should work with encrypted databases roadmap_1215_li=Corruption\: new error code, add help roadmap_1216_li=Space reuse\: after init, scan all storages and free those that don't belong to a live database object roadmap_1217_li=Access rights\: add missing features (users should be 'owner' of objects; missing rights for sequences; dropping objects) roadmap_1218_li=Support NOCACHE table option (Oracle). roadmap_1219_li=Support table partitioning. roadmap_1220_li=Add regular javadocs (using the default doclet, but another css) to the homepage. roadmap_1221_li=The database should be kept open for a longer time when using the server mode. roadmap_1222_li=Javadocs\: for each tool, add a copy & paste sample in the class level. roadmap_1223_li=Javadocs\: add @author tags. roadmap_1224_li=Fluent API for tools\: Server.createTcpServer().setPort(9081).setPassword(password).start(); roadmap_1225_li=MySQL compatibility\: real SQL statement for DESCRIBE TEST roadmap_1226_li=Use a default delay of 1 second before closing a database. 
roadmap_1227_li=Write (log) to system table before adding to internal data structures. roadmap_1228_li=Support direct lookup for MIN and MAX when using WHERE (see todo.txt / Direct Lookup). roadmap_1229_li=Support other array types (String[], double[]) in PreparedStatement.setObject(int, Object) (with test case). roadmap_1230_li=MVCC should not be memory bound (uncommitted data is kept in memory in the delta index; maybe using a regular b-tree index solves the problem). roadmap_1231_li=Oracle compatibility\: support NLS_DATE_FORMAT. roadmap_1232_li=Support for Thread.interrupt to cancel running statements. roadmap_1233_li=Cluster\: add feature to make sure cluster nodes can not get out of sync (for example by stopping one process). roadmap_1234_li=H2 Console\: support CLOB/BLOB download using a link. roadmap_1235_li=Support flashback queries as in Oracle. roadmap_1236_li=Import / Export of fixed with text files. roadmap_1237_li=HSQLDB compatibility\: automatic data type for SUM if value is the value is too big (by default use the same type as the data). roadmap_1238_li=Improve the optimizer to select the right index for special cases\: where id between 2 and 4 and booleanColumn roadmap_1239_li=Linked tables\: make hidden columns available (Oracle\: rowid and ora_rowscn columns). roadmap_1240_li=H2 Console\: in-place autocomplete. roadmap_1241_li=Support large databases\: split database files to multiple directories / disks (similar to tablespaces). roadmap_1242_li=H2 Console\: support configuration option for fixed width (monospace) font. roadmap_1243_li=Native fulltext search\: support analyzers (specially for Chinese, Japanese). roadmap_1244_li=Automatically compact databases from time to time (as a background process). roadmap_1245_li=Test Eclipse DTP. roadmap_1246_li=H2 Console\: autocomplete\: keep the previous setting roadmap_1247_li=executeBatch\: option to stop at the first failed statement. roadmap_1248_li=Implement OLAP features as described here\: http\://www.devx.com/getHelpOn/10MinuteSolution/16573/0/page/5 roadmap_1249_li=Support Oracle ROWID (unique identifier for each row). roadmap_1250_li=MySQL compatibility\: alter table add index i(c), add constraint c foreign key(c) references t(c); roadmap_1251_li=Server mode\: improve performance for batch updates. roadmap_1252_li=Applets\: support read-only databases in a zip file (accessed as a resource). roadmap_1253_li=Long running queries / errors / trace system table. roadmap_1254_li=H2 Console should support JaQu directly. roadmap_1255_li=Better document FTL_SEARCH, FTL_SEARCH_DATA. roadmap_1256_li=Sequences\: CURRVAL should be session specific. Compatibility with PostgreSQL. roadmap_1257_li=Index creation using deterministic functions. roadmap_1258_li=ANALYZE\: for unique indexes that allow null, count the number of null. roadmap_1259_li=MySQL compatibility\: multi-table delete\: DELETE .. FROM .. [,...] USING - See http\://dev.mysql.com/doc/refman/5.0/en/delete.html roadmap_1260_li=AUTO_SERVER\: support changing IP addresses (disable a network while the database is open). roadmap_1261_li=Avoid using java.util.Calendar internally because it's slow, complicated, and buggy. roadmap_1262_li=Support TRUNCATE .. CASCADE like PostgreSQL. roadmap_1263_li=Fulltext search\: lazy result generation using SimpleRowSource. roadmap_1264_li=Fulltext search\: support alternative syntax\: WHERE FTL_CONTAINS(name, 'hello'). 
roadmap_1265_li=MySQL compatibility\: support REPLACE, see http\://dev.mysql.com/doc/refman/6.0/en/replace.html and issue 73. roadmap_1266_li=MySQL compatibility\: support INSERT INTO table SET column1 \= value1, column2 \= value2 roadmap_1267_li=Docs\: add a one line description for each functions and SQL statements at the top (in the link section). roadmap_1268_li=Javadoc search\: weight for titles should be higher ('random' should list Functions as the best match). roadmap_1269_li=Replace information_schema tables with regular tables that are automatically re-built when needed. Use indexes. roadmap_1270_li=Issue 50\: Oracle compatibility\: support calling 0-parameters functions without parenthesis. Make constants obsolete. roadmap_1271_li=MySQL, HSQLDB compatibility\: support where 'a'\=1 (not supported by Derby, PostgreSQL) roadmap_1272_li=Finer granularity for SLF4J trace - See http\://code.google.com/p/h2database/issues/detail?id\=62 roadmap_1273_li=Add database creation date and time to the database. roadmap_1274_li=Support ASSERTION. roadmap_1275_li=MySQL compatibility\: support comparing 1\='a' roadmap_1276_li=Support PostgreSQL lock modes\: http\://www.postgresql.org/docs/8.3/static/explicit-locking.html roadmap_1277_li=PostgreSQL compatibility\: test DbVisualizer and Squirrel SQL using a new PostgreSQL JDBC driver. roadmap_1278_li=RunScript should be able to read from system in (or quite mode for Shell). roadmap_1279_li=Natural join\: support select x from dual natural join dual. roadmap_1280_li=Support using system properties in database URLs (may be a security problem). roadmap_1281_li=Natural join\: somehow support this\: select a.x, b.x, x from dual a natural join dual b roadmap_1282_li=Use the Java service provider mechanism to register file systems and function libraries. roadmap_1283_li=MySQL compatibility\: for auto_increment columns, convert 0 to next value (as when inserting NULL). roadmap_1284_li=Optimization for multi-column IN\: use an index if possible. Example\: (A, B) IN((1, 2), (2, 3)). roadmap_1285_li=Optimization for EXISTS\: convert to inner join or IN(..) if possible. roadmap_1286_li=Functions\: support hashcode(value); cryptographic and fast roadmap_1287_li=Serialized file lock\: support long running queries. roadmap_1288_li=Network\: use 127.0.0.1 if other addresses don't work. roadmap_1289_li=Pluggable network protocol (currently Socket/ServerSocket over TCP/IP) - see also TransportServer with master slave replication. roadmap_1290_li=Support reading JCR data\: one table per node type; query table; cache option roadmap_1291_li=OSGi\: create a sample application, test, document. roadmap_1292_li=help.csv\: use complete examples for functions; run as test case. roadmap_1293_li=Functions to calculate the memory and disk space usage of a table, a row, or a value. roadmap_1294_li=Re-implement PooledConnection; use a lightweight connection object. roadmap_1295_li=Doclet\: convert tests in javadocs to a java class. roadmap_1296_li=Doclet\: format fields like methods, but support sorting by name and value. roadmap_1297_li=Doclet\: shrink the html files. roadmap_1298_li=MySQL compatibility\: support SET NAMES 'latin1' - See also http\://code.google.com/p/h2database/issues/detail?id\=56 roadmap_1299_li=Allow to scan index backwards starting with a value (to better support ORDER BY DESC). roadmap_1300_li=Java Service Wrapper\: try http\://yajsw.sourceforge.net/ roadmap_1301_li=Batch parameter for INSERT, UPDATE, and DELETE, and commit after each batch. 
See also MySQL DELETE. roadmap_1302_li=Use a lazy and auto-close input stream (open resource when reading, close on eof). roadmap_1303_li=Connection pool\: 'reset session' command (delete temp tables, rollback, auto-commit true). roadmap_1304_li=Improve SQL documentation, see http\://www.w3schools.com/sql/ roadmap_1305_li=MySQL compatibility\: DatabaseMetaData.stores*() methods should return the same values. Test with SquirrelSQL. roadmap_1306_li=MS SQL Server compatibility\: support DATEPART syntax. roadmap_1307_li=Sybase/DB2/Oracle compatibility\: support out parameters in stored procedures - See http\://code.google.com/p/h2database/issues/detail?id\=83 roadmap_1308_li=Support INTERVAL data type (see Oracle and others). roadmap_1309_li=Combine Server and Console tool (only keep Server). roadmap_1310_li=Store the Lucene index in the database itself. roadmap_1311_li=Support standard MERGE statement\: http\://en.wikipedia.org/wiki/Merge_%28SQL%29 roadmap_1312_li=Oracle compatibility\: support DECODE(x, ...). roadmap_1313_li=MVCC\: compare concurrent update behavior with PostgreSQL and Oracle. roadmap_1314_li=HSQLDB compatibility\: CREATE FUNCTION (maybe using a Function interface). roadmap_1315_li=HSQLDB compatibility\: support CALL "java.lang.Math.sqrt"(2.0) roadmap_1316_li=Support comma as the decimal separator in the CSV tool. roadmap_1317_li=Compatibility\: Java functions with SQLJ Part1 http\://www.acm.org/sigmod/record/issues/9912/standards.pdf.gz roadmap_1318_li=Compatibility\: Java functions with SQL/PSM (Persistent Stored Modules) - need to find the documentation. roadmap_1319_li=CACHE_SIZE\: automatically use a fraction of Runtime.maxMemory - maybe automatically the second level cache. roadmap_1320_li=Support date/time/timestamp as documented in http\://en.wikipedia.org/wiki/ISO_8601 roadmap_1321_li=PostgreSQL compatibility\: when in PG mode, treat BYTEA data like PG. roadmap_1322_li=Support \=ANY(array) as in PostgreSQL. See also http\://www.postgresql.org/docs/8.0/interactive/arrays.html roadmap_1323_li=IBM DB2 compatibility\: support PREVIOUS VALUE FOR sequence. roadmap_1324_li=Compatibility\: use different LIKE ESCAPE characters depending on the mode (disable for Derby, HSQLDB, DB2, Oracle, MSSQLServer). roadmap_1325_li=Oracle compatibility\: support CREATE SYNONYM table FOR schema.table. roadmap_1326_li=FTP\: document the server, including -ftpTask option to execute / kill remote processes roadmap_1327_li=FTP\: problems with multithreading? roadmap_1328_li=FTP\: implement SFTP / FTPS roadmap_1329_li=FTP\: access to a database (.csv for a table, a directory for a schema, a file for a lob, a script.sql file). roadmap_1330_li=More secure default configuration if remote access is enabled. roadmap_1331_li=Improve database file locking (maybe use native file locking). The current approach seems to be problematic if the file system is on a remote share (see Google Group 'Lock file modification time is in the future'). roadmap_1332_li=Document internal features such as BELONGS_TO_TABLE, NULL_TO_DEFAULT, SEQUENCE. roadmap_1333_li=Issue 107\: Prefer using the ORDER BY index if LIMIT is used. roadmap_1334_li=An index on (id, name) should be used for a query\: select * from t where s\=? order by i roadmap_1335_li=Support reading sequences using DatabaseMetaData.getTables(null, null, null, new String[]{"SEQUENCE"}). See PostgreSQL. roadmap_1336_li=Add option to enable TCP_NODELAY using Socket.setTcpNoDelay(true). 
roadmap_1337_li=Maybe disallow \= within database names (jdbc\:h2\:mem\:MODE\=DB2 means database name MODE\=DB2). roadmap_1338_li=Fast alter table add column. roadmap_1339_li=Improve concurrency for in-memory database operations. roadmap_1340_li=Issue 122\: Support for connection aliases for remote tcp connections. roadmap_1341_li=Fast scrambling (strong encryption doesn't help if the password is included in the application). roadmap_1342_li=H2 Console\: support -webPassword to require a password to access preferences or shutdown. roadmap_1343_li=Issue 126\: The index name should be "IDX_" plus the constraint name unless there is a conflict, in which case append a number. roadmap_1344_li=Issue 127\: Support activation/deactivation of triggers roadmap_1345_li=Issue 130\: Custom log event listeners roadmap_1346_li=Issue 131\: IBM DB2 compatibility\: sysibm.sysdummy1 roadmap_1347_li=Issue 132\: Use Java enum trigger type. roadmap_1348_li=Issue 134\: IBM DB2 compatibility\: session global variables. roadmap_1349_li=Cluster\: support load balance with values for each server / auto detect. roadmap_1350_li=FTL_SET_OPTION(keyString, valueString) with key stopWords at first. roadmap_1351_li=Pluggable access control mechanism. roadmap_1352_li=Fulltext search (Lucene)\: support streaming CLOB data. roadmap_1353_li=Document/example how to create and read an encrypted script file. roadmap_1354_li=Check state of http\://issues.apache.org/jira/browse/OPENJPA-1367 (H2 does support cross joins). roadmap_1355_li=Fulltext search (Lucene)\: only prefix column names with _ if they already start with _. Instead of DATA / QUERY / modified use _DATA, _QUERY, _MODIFIED if possible. roadmap_1356_li=Support a way to create or read compressed encrypted script files using an API. roadmap_1357_li=Scripting language support (Javascript). roadmap_1358_li=The network client should better detect if the server is not an H2 server and fail early. roadmap_1359_li=H2 Console\: support CLOB/BLOB upload. roadmap_1360_li=Database file lock\: detect hibernate / standby / very slow threads (compare system time). roadmap_1361_li=Automatic detection of redundant indexes. roadmap_1362_li=Maybe reject join without "on" (except natural join). roadmap_1363_li=Implement GiST (Generalized Search Tree for Secondary Storage). roadmap_1364_li=Function to read a number of bytes/characters from an BLOB or CLOB. roadmap_1365_li=Issue 156\: Support SELECT ? UNION SELECT ?. roadmap_1366_li=Automatic mixed mode\: support a port range list (to avoid firewall problems). roadmap_1367_li=Support the pseudo column rowid, oid, _rowid_. roadmap_1368_li=H2 Console / large result sets\: stream early instead of keeping a whole result in-memory roadmap_1369_li=Support TRUNCATE for linked tables. roadmap_1370_li=UNION\: evaluate INTERSECT before UNION (like most other database except Oracle). roadmap_1371_li=Delay creating the information schema, and share metadata columns. roadmap_1372_li=TCP Server\: use a nonce (number used once) to protect unencrypted channels against replay attacks. roadmap_1373_li=Simplify running scripts and recovery\: CREATE FORCE USER (overwrites an existing user). roadmap_1374_li=Support CREATE DATABASE LINK (a custom JDBC driver is already supported). roadmap_1375_li=Support large GROUP BY operations. Issue 216. roadmap_1376_li=Issue 163\: Allow to create foreign keys on metadata types. roadmap_1377_li=Logback\: write a native DBAppender. roadmap_1378_li=Cache size\: don't use more cache than what is available. 
roadmap_1379_li=Allow to defragment at runtime (similar to SHUTDOWN DEFRAG) in a background thread. roadmap_1380_li=Tree index\: Instead of an AVL tree, use a general balanced trees or a scapegoat tree. roadmap_1381_li=User defined functions\: allow to store the bytecode (of just the class, or the jar file of the extension) in the database. roadmap_1382_li=Compatibility\: ResultSet.getObject() on a CLOB (TEXT) should return String for PostgreSQL and MySQL. roadmap_1383_li=Optimizer\: WHERE X\=? AND Y IN(?), it always uses the index on Y. Should be cost based. roadmap_1384_li=Common Table Expression (CTE) / recursive queries\: support parameters. Issue 314. roadmap_1385_li=Oracle compatibility\: support INSERT ALL. roadmap_1386_li=Issue 178\: Optimizer\: index usage when both ascending and descending indexes are available. roadmap_1387_li=Issue 179\: Related subqueries in HAVING clause. roadmap_1388_li=IBM DB2 compatibility\: NOT NULL WITH DEFAULT. Similar to MySQL Mode.convertInsertNullToZero. roadmap_1389_li=Creating primary key\: always create a constraint. roadmap_1390_li=Maybe use a different page layout\: keep the data at the head of the page, and ignore the tail (don't store / read it). This may increase write / read performance depending on the file system. roadmap_1391_li=Indexes of temporary tables are currently kept in-memory. Is this how it should be? roadmap_1392_li=The Shell tool should support the same built-in commands as the H2 Console. roadmap_1393_li=Maybe use PhantomReference instead of finalize. roadmap_1394_li=Database file name suffix\: should only have one dot by default. Example\: .h2db roadmap_1395_li=Issue 196\: Function based indexes roadmap_1396_li=ALTER TABLE ... ADD COLUMN IF NOT EXISTS columnName. roadmap_1397_li=Fix the disk space leak (killing the process at the exact right moment will increase the disk space usage; this space is not re-used). See TestDiskSpaceLeak.java roadmap_1398_li=ROWNUM\: Oracle compatibility when used within a subquery. Issue 198. roadmap_1399_li=Allow to access the database over HTTP (possibly using port 80) and a servlet in a REST way. roadmap_1400_li=ODBC\: encrypted databases are not supported because the ;CIPHER\= can not be set. roadmap_1401_li=Support CLOB and BLOB update, specially conn.createBlob().setBinaryStream(1); roadmap_1402_li=Optimizer\: index usage when both ascending and descending indexes are available. Issue 178. roadmap_1403_li=Issue 306\: Support schema specific domains. roadmap_1404_li=Triggers\: support user defined execution order. Oracle\: CREATE OR REPLACE TRIGGER TEST_2 BEFORE INSERT ON TEST FOR EACH ROW FOLLOWS TEST_1. SQL specifies that multiple triggers should be fired in time-of-creation order. PostgreSQL uses name order, which was judged to be more convenient. Derby\: triggers are fired in the order in which they were created. roadmap_1405_li=PostgreSQL compatibility\: combine "users" and "roles". See\: http\://www.postgresql.org/docs/8.1/interactive/user-manag.html roadmap_1406_li=Improve documentation of system properties\: only list the property names, default values, and description. roadmap_1407_li=Support running totals / cumulative sum using SUM(..) OVER(..). roadmap_1408_li=Improve object memory size calculation. Use constants for known VMs, or use reflection to call java.lang.instrument.Instrumentation.getObjectSize(Object objectToSize) roadmap_1409_li=Triggers\: NOT NULL checks should be done after running triggers (Oracle behavior, maybe others). 
roadmap_1410_li=Common Table Expression (CTE) / recursive queries\: support INSERT INTO ... SELECT ... Issue 219. roadmap_1411_li=Common Table Expression (CTE) / recursive queries\: support non-recursive queries. Issue 217. roadmap_1412_li=Common Table Expression (CTE) / recursive queries\: avoid endless loop. Issue 218. roadmap_1413_li=Common Table Expression (CTE) / recursive queries\: support multiple named queries. Issue 220. roadmap_1414_li=Common Table Expression (CTE) / recursive queries\: identifier scope may be incorrect. Issue 222. roadmap_1415_li=Log long running transactions (similar to long running statements). roadmap_1416_li=Parameter data type is data type of other operand. Issue 205. roadmap_1417_li=Some combinations of nested join with right outer join are not supported. roadmap_1418_li=DatabaseEventListener.openConnection(id) and closeConnection(id). roadmap_1419_li=Listener or authentication module for new connections, or a way to restrict the number of different connections to a tcp server, or to prevent to login with the same username and password from different IPs. Possibly using the DatabaseEventListener API, or a new API. roadmap_1420_li=Compatibility for data type CHAR (Derby, HSQLDB). Issue 212. roadmap_1421_li=Compatibility with MySQL TIMESTAMPDIFF. Issue 209. roadmap_1422_li=Optimizer\: use a histogram of the data, specially for non-normal distributions. roadmap_1423_li=Trigger\: allow declaring as source code (like functions). roadmap_1424_li=User defined aggregate\: allow declaring as source code (like functions). roadmap_1425_li=The error "table not found" is sometimes caused by using the wrong database. Add "(this database is empty)" to the exception message if applicable. roadmap_1426_li=MySQL + PostgreSQL compatibility\: support string literal escape with \\n. roadmap_1427_li=PostgreSQL compatibility\: support string literal escape with double \\\\. roadmap_1428_li=Document the TCP server "management_db". Maybe include the IP address of the client. roadmap_1429_li=Use javax.tools.JavaCompilerTool instead of com.sun.tools.javac.Main roadmap_1430_li=If a database object was not found in the current schema, but one with the same name existed in another schema, included that in the error message. roadmap_1431_li=Optimization to use an index for OR when using multiple keys\: where (key1 \= ? and key2 \= ?) OR (key1 \= ? and key2 \= ?) roadmap_1432_li=Issue 302\: Support optimizing queries with both inner and outer joins, as in\: select * from test a inner join test b on a.id\=b.id inner join o on o.id\=a.id where b.x\=1 (the optimizer should swap a and b here). See also TestNestedJoins, tag "swapInnerJoinTables". roadmap_1433_li=JaQu should support a DataSource and a way to create a Db object using a Connection (for multi-threaded usage with a connection pool). roadmap_1434_li=Move table to a different schema (rename table to a different schema), possibly using ALTER TABLE ... SET SCHEMA ...; roadmap_1435_li=nioMapped file system\: automatically fall back to regular (non mapped) IO if there is a problem (out of memory exception for example). roadmap_1436_li=Column as parameter of function table. Issue 228. roadmap_1437_li=Connection pool\: detect ;AUTOCOMMIT\=FALSE in the database URL, and if set, disable autocommit for all connections. roadmap_1438_li=Compatibility with MS Access\: support "&" to concatenate text. roadmap_1439_li=The BACKUP statement should not synchronize on the database, and therefore should not block other users. 
roadmap_1440_li=Document the database file format. roadmap_1441_li=Support reading LOBs. roadmap_1442_li=Require appending DANGEROUS\=TRUE when using certain dangerous settings such as LOG\=0, LOG\=1, LOCK_MODE\=0, disabling FILE_LOCK,... roadmap_1443_li=Support UDT (user defined types) similar to how Apache Derby supports it\: check constraint, allow to use it in Java functions as parameters (return values already seem to work). roadmap_1444_li=Encrypted file system (use cipher text stealing so file length doesn't need to decrypt; 4 KB header per file, optional compatibility with current encrypted database files). roadmap_1445_li=Issue 229\: SELECT with simple OR tests uses tableScan when it could use indexes. roadmap_1446_li=GROUP BY queries should use a temporary table if there are too many rows. roadmap_1447_li=BLOB\: support random access when reading. roadmap_1448_li=CLOB\: support random access when reading (this is harder than for BLOB as data is stored in UTF-8 form). roadmap_1449_li=Compatibility\: support SELECT INTO (as an alias for CREATE TABLE ... AS SELECT ...). roadmap_1450_li=Compatibility with MySQL\: support SELECT INTO OUTFILE (cannot be an existing file) as an alias for CSVWRITE(...). roadmap_1451_li=Compatibility with MySQL\: support non-strict mode (sql_mode \= "") any data that is too large for the column will just be truncated or set to the default value. roadmap_1452_li=The full condition should be sent to the linked table, not just the indexed condition. Example\: TestLinkedTableFullCondition roadmap_1453_li=Compatibility with IBM DB2\: CREATE PROCEDURE. roadmap_1454_li=Compatibility with IBM DB2\: SQL cursors. roadmap_1455_li=Single-column primary key values are always stored explicitly. This is not required. roadmap_1456_li=Compatibility with MySQL\: support CREATE TABLE TEST(NAME VARCHAR(255) CHARACTER SET UTF8). roadmap_1457_li=CALL is incompatible with other databases because it returns a result set, so that CallableStatement.execute() returns true. roadmap_1458_li=Optimization for large lists for column IN(1, 2, 3, 4,...) - currently an list is used, could potentially use a hash set (maybe only for a part of the values - the ones that can be evaluated). roadmap_1459_li=Compatibility for ARRAY data type (Oracle\: VARRAY(n) of VARCHAR(m); HSQLDB\: VARCHAR(n) ARRAY; Postgres\: VARCHAR(n)[]). roadmap_1460_li=PostgreSQL compatible array literal syntax\: ARRAY[['a', 'b'], ['c', 'd']] roadmap_1461_li=PostgreSQL compatibility\: UPDATE with FROM. roadmap_1462_li=Issue 297\: Oracle compatibility for "at time zone". roadmap_1463_li=IBM DB2 compatibility\: IDENTITY_VAL_LOCAL(). roadmap_1464_li=Support SQL/XML. roadmap_1465_li=Support concurrent opening of databases. roadmap_1466_li=Improved error message and diagnostics in case of network configuration problems. roadmap_1467_li=TRUNCATE should reset the identity columns as in MySQL and MS SQL Server (and possibly other databases). roadmap_1468_li=Adding a primary key should make the columns 'not null' unless if there is a row with null (compatibility with MySQL, PostgreSQL, HSQLDB; not Derby). roadmap_1469_li=ARRAY data type\: support Integer[] and so on in Java functions (currently only Object[] is supported). 
roadmap_1470_li=MySQL compatibility\: LOCK TABLES a READ, b READ - see also http\://dev.mysql.com/doc/refman/5.0/en/lock-tables.html roadmap_1471_li=The HTML to PDF converter should use http\://code.google.com/p/wkhtmltopdf/ roadmap_1472_li=Issue 303\: automatically convert "X NOT IN(SELECT...)" to "NOT EXISTS(...)". roadmap_1473_li=MySQL compatibility\: update test1 t1, test2 t2 set t1.name\=t2.name where t1.id\=t2.id. roadmap_1474_li=Issue 283\: Improve performance of H2 on Android. roadmap_1475_li=Support INSERT INTO / UPDATE / MERGE ... RETURNING to retrieve the generated key(s). roadmap_1476_li=Column compression option - see http\://groups.google.com/group/h2-database/browse_thread/thread/3e223504e52671fa/243da82244343f5d roadmap_1477_li=PostgreSQL compatibility\: ALTER TABLE ADD combined with adding a foreign key constraint, as in ALTER TABLE FOO ADD COLUMN PARENT BIGINT REFERENCES FOO(ID). roadmap_1478_li=MS SQL Server compatibility\: support @@ROWCOUNT. roadmap_1479_li=PostgreSQL compatibility\: LOG(x) is LOG10(x) and not LN(x). roadmap_1480_li=Issue 311\: Serialized lock mode\: executeQuery of write operations fails. roadmap_1481_li=PostgreSQL compatibility\: support PgAdmin III (specially the function current_setting). roadmap_1482_li=MySQL compatibility\: support TIMESTAMPADD. roadmap_1483_li=Support SELECT ... FOR UPDATE with joins (supported by PostgreSQL, MySQL, and HSQLDB; but not Derby). roadmap_1484_li=Support SELECT ... FOR UPDATE OF [field-list] (supported by PostgreSQL, MySQL, and HSQLDB; but not Derby). roadmap_1485_li=Support SELECT ... FOR UPDATE OF [table-list] (supported by PostgreSQL, HSQLDB, Sybase). roadmap_1486_li=TRANSACTION_ID() for in-memory databases. roadmap_1487_li=TRANSACTION_ID() should be long (same as HSQLDB and PostgreSQL). roadmap_1488_li=Support [INNER | OUTER] JOIN USING(column [,...]). roadmap_1489_li=Support NATURAL [ { LEFT | RIGHT } [ OUTER ] | INNER ] JOIN (Derby, Oracle) roadmap_1490_li=GROUP BY columnNumber (similar to ORDER BY columnNumber) (MySQL, PostgreSQL, SQLite; not by HSQLDB and Derby). roadmap_1491_li=Sybase / MS SQL Server compatibility\: CONVERT(..) parameters are swapped. roadmap_1492_li=Index conditions\: WHERE AGE>1 should not scan through all rows with AGE\=1. roadmap_1493_li=PHP support\: H2 should support PDO, or test with PostgreSQL PDO. roadmap_1494_li=Outer joins\: if no column of the outer join table is referenced, the outer join table could be removed from the query. roadmap_1495_li=Cluster\: allow using auto-increment and identity columns by ensuring executed in lock-step. roadmap_1496_li=MySQL compatibility\: index names only need to be unique for the given table. roadmap_1497_li=Issue 352\: constraints\: distinguish between 'no action' and 'restrict'. Currently, only restrict is supported, and 'no action' is internally mapped to 'restrict'. The database meta data returns 'restrict' in all cases. roadmap_1498_li=Oracle compatibility\: support MEDIAN aggregate function. roadmap_1499_li=Issue 348\: Oracle compatibility\: division should return a decimal result. roadmap_1500_li=Read rows on demand\: instead of reading the whole row, only read up to that column that is requested. Keep an pointer to the data area and the column id that is already read. roadmap_1501_li=Long running transactions\: log session id when detected. roadmap_1502_li=Optimization\: "select id from test" should use the index on id even without "order by". roadmap_1503_li=Issue 362\: LIMIT support for UPDATE statements (MySQL compatibility). 
roadmap_1504_li=Sybase SQL Anywhere compatibility\: SELECT TOP ... START AT ... roadmap_1505_li=Use Java 6 SQLException subclasses. roadmap_1506_li=Issue 390\: RUNSCRIPT FROM '...' CONTINUE_ON_ERROR roadmap_1507_li=Use Java 6 exceptions\: SQLDataException, SQLSyntaxErrorException, SQLTimeoutException,.. roadmap_1508_h2=Not Planned roadmap_1509_li=HSQLDB (did) support this\: select id i from test where i<0 (other databases don't). Supporting it may break compatibility. roadmap_1510_li=String.intern (so that Strings can be compared with \=\=) will not be used because some VMs have problems when used extensively. roadmap_1511_li=In prepared statements, identifier names (table names and so on) can not be parameterized. Adding such a feature would complicate the source code without providing reasonable speedup, and would slow down regular prepared statements. sourceError_1000_h1=Error Analyzer sourceError_1001_a=Home sourceError_1002_a=Input sourceError_1003_h2= <a href\="javascript\:select('details')" id \= "detailsTab">Details</a> <a href\="javascript\:select('source')" id \= "sourceTab">Source Code</a> sourceError_1004_p=Paste the error message and stack trace below and click on 'Details' or 'Source Code'\: sourceError_1005_b=Error Code\: sourceError_1006_b=Product Version\: sourceError_1007_b=Message\: sourceError_1008_b=More Information\: sourceError_1009_b=Stack Trace\: sourceError_1010_b=Source File\: sourceError_1011_p=\ Inline tutorial_1000_h1=Tutorial tutorial_1001_a=\ Starting and Using the H2 Console tutorial_1002_a=\ Special H2 Console Syntax tutorial_1003_a=\ Settings of the H2 Console tutorial_1004_a=\ Connecting to a Database using JDBC tutorial_1005_a=\ Creating New Databases tutorial_1006_a=\ Using the Server tutorial_1007_a=\ Using Hibernate tutorial_1008_a=\ Using TopLink and Glassfish tutorial_1009_a=\ Using EclipseLink tutorial_1010_a=\ Using Apache ActiveMQ tutorial_1011_a=\ Using H2 within NetBeans tutorial_1012_a=\ Using H2 with jOOQ tutorial_1013_a=\ Using Databases in Web Applications tutorial_1014_a=\ Android tutorial_1015_a=\ CSV (Comma Separated Values) Support tutorial_1016_a=\ Upgrade, Backup, and Restore tutorial_1017_a=\ Command Line Tools tutorial_1018_a=\ The Shell Tool tutorial_1019_a=\ Using OpenOffice Base tutorial_1020_a=\ Java Web Start / JNLP tutorial_1021_a=\ Using a Connection Pool tutorial_1022_a=\ Fulltext Search tutorial_1023_a=\ User-Defined Variables tutorial_1024_a=\ Date and Time tutorial_1025_a=\ Using Spring tutorial_1026_a=\ OSGi tutorial_1027_a=\ Java Management Extension (JMX) tutorial_1028_h2=Starting and Using the H2 Console tutorial_1029_p=\ The H2 Console application lets you access a database using a browser. This can be a H2 database, or another database that supports the JDBC API. tutorial_1030_p=\ This is a client/server application, so both a server and a client (a browser) are required to run it. tutorial_1031_p=\ Depending on your platform and environment, there are multiple ways to start the H2 Console\: tutorial_1032_th=OS tutorial_1033_th=Start tutorial_1034_td=Windows tutorial_1035_td=\ Click [Start], [All Programs], [H2], and [H2 Console (Command Line)] tutorial_1036_td=\ An icon will be added to the system tray\: tutorial_1037_td=\ If you don't get the window and the system tray icon, then maybe Java is not installed correctly (in this case, try another way to start the application). A browser window should open and point to the login page at <code>http\://localhost\:8082</code>. 
tutorial_1038_td=Windows tutorial_1039_td=\ Open a file browser, navigate to <code>h2/bin</code>, and double click on <code>h2.bat</code>. tutorial_1040_td=\ A console window appears. If there is a problem, you will see an error message in this window. A browser window will open and point to the login page (URL\: <code>http\://localhost\:8082</code>). tutorial_1041_td=Any tutorial_1042_td=\ Double click on the <code>h2*.jar</code> file. This only works if the <code>.jar</code> suffix is associated with Java. tutorial_1043_td=Any tutorial_1044_td=\ Open a console window, navigate to the directory <code>h2/bin</code>, and type\: tutorial_1045_h3=Firewall tutorial_1046_p=\ If you start the server, you may get a security warning from the firewall (if you have installed one). If you don't want other computers in the network to access the application on your machine, you can let the firewall block those connections. The connection from the local machine will still work. You only need to allow remote connections in the firewall if you want other computers to access the database on this computer. tutorial_1047_p=\ It has been reported that when using Kaspersky 7.0 with firewall, the H2 Console is very slow when connecting over the IP address. A workaround is to connect using 'localhost'. tutorial_1048_p=\ A small firewall is already built into the server\: other computers may not connect to the server by default. To change this, go to 'Preferences' and select 'Allow connections from other computers'. tutorial_1049_h3=Testing Java tutorial_1050_p=\ To find out which version of Java is installed, open a command prompt and type\: tutorial_1051_p=\ If you get an error message, you may need to add the Java binary directory to the path environment variable. tutorial_1052_h3=Error Message 'Port may be in use' tutorial_1053_p=\ You can only start one instance of the H2 Console; otherwise you will get the following error message\: "The Web server could not be started. Possible cause\: another server is already running...". It is possible to start multiple console applications on the same computer (using different ports), but this is usually not required as the console supports multiple concurrent connections. tutorial_1054_h3=Using another Port tutorial_1055_p=\ If the default port of the H2 Console is already in use by another application, then a different port needs to be configured. The settings are stored in a properties file. For details, see <a href\="\#console_settings">Settings of the H2 Console</a>. The relevant entry is <code>webPort</code>. tutorial_1056_p=\ If no port is specified for the TCP and PG servers, each service will try to listen on its default port. If the default port is already in use, a random port is used. tutorial_1057_h3=Connecting to the Server using a Browser tutorial_1058_p=\ If the server started successfully, you can connect to it using a web browser. Javascript needs to be enabled. If you started the server on the same computer as the browser, open the URL <code>http\://localhost\:8082</code>. If you want to connect to the application from another computer, you need to provide the IP address of the server, for example\: <code>http\://192.168.0.2\:8082</code>. If you enabled TLS on the server side, the URL needs to start with <code>https\://</code>. tutorial_1059_h3=Multiple Concurrent Sessions tutorial_1060_p=\ Multiple concurrent browser sessions are supported.
As the database objects reside on the server, the amount of concurrent work is limited by the memory available to the server application. tutorial_1061_h3=Login tutorial_1062_p=\ At the login page, you need to provide connection information to connect to a database. Set the JDBC driver class of your database, the JDBC URL, user name, and password. If you are done, click [Connect]. tutorial_1063_p=\ You can save and reuse previously saved settings. The settings are stored in a properties file (see <a href\="\#console_settings">Settings of the H2 Console</a>). tutorial_1064_h3=Error Messages tutorial_1065_p=\ Error messages are shown in red. You can show/hide the stack trace of the exception by clicking on the message. tutorial_1066_h3=Adding Database Drivers tutorial_1067_p=\ To register additional JDBC drivers (MySQL, PostgreSQL, HSQLDB,...), add the jar file names to the environment variables <code>H2DRIVERS</code> or <code>CLASSPATH</code>. Example (Windows)\: to add the HSQLDB JDBC driver <code>C\:\\Programs\\hsqldb\\lib\\hsqldb.jar</code>, set the environment variable <code>H2DRIVERS</code> to <code>C\:\\Programs\\hsqldb\\lib\\hsqldb.jar</code>. tutorial_1068_p=\ Multiple drivers can be set; entries need to be separated by <code>;</code> (Windows) or <code>\:</code> (other operating systems). Spaces in the path names are supported. The settings must not be quoted. tutorial_1069_h3=Using the H2 Console tutorial_1070_p=\ The H2 Console application has three main panels\: the toolbar on top, the tree on the left, and the query/result panel on the right. The database objects (for example, tables) are listed on the left. Type a SQL command in the query panel and click [Run]. The result appears just below the command. tutorial_1071_h3=Inserting Table Names or Column Names tutorial_1072_p=\ To insert table and column names into the script, click on the item in the tree. If you click on a table while the query is empty, then <code>SELECT * FROM ...</code> is added. While typing a query, the table that was used is expanded in the tree. For example, if you type <code>SELECT * FROM TEST T WHERE T.</code> then the table TEST is expanded. tutorial_1073_h3=Disconnecting and Stopping the Application tutorial_1074_p=\ To log out of the database, click [Disconnect] in the toolbar panel. However, the server is still running and ready to accept new sessions. tutorial_1075_p=\ To stop the server, right click on the system tray icon and select [Exit]. If you don't have the system tray icon, navigate to [Preferences] and click [Shutdown], press [Ctrl]+[C] in the console where the server was started (Windows), or close the console window. tutorial_1076_h2=Special H2 Console Syntax tutorial_1077_p=\ The H2 Console supports a few built-in commands. Those are interpreted within the H2 Console, so they work with any database. Built-in commands need to be at the beginning of a statement (before any remarks), otherwise they are not parsed correctly. If in doubt, add <code>;</code> before the command. tutorial_1078_th=Command(s) tutorial_1079_th=Description tutorial_1080_td=\ @autocommit_true; tutorial_1081_td=\ @autocommit_false; tutorial_1082_td=\ Enable or disable autocommit. tutorial_1083_td=\ @cancel; tutorial_1084_td=\ Cancel the currently running statement. tutorial_1085_td=\ @columns null null TEST; tutorial_1086_td=\ @index_info null null TEST; tutorial_1087_td=\ @tables; tutorial_1088_td=\ @tables null null TEST; tutorial_1089_td=\ Call the corresponding <code>DatabaseMetaData.get</code> method. 
Patterns are case sensitive (usually identifiers are uppercase). For information about the parameters, see the Javadoc documentation. Missing parameters at the end of the line are set to null. The complete list of metadata commands is\: <code> @attributes, @best_row_identifier, @catalogs, @columns, @column_privileges, @cross_references, @exported_keys, @imported_keys, @index_info, @primary_keys, @procedures, @procedure_columns, @schemas, @super_tables, @super_types, @tables, @table_privileges, @table_types, @type_info, @udts, @version_columns </code> tutorial_1090_td=\ @edit select * from test; tutorial_1091_td=\ Use an updatable result set. tutorial_1092_td=\ @generated insert into test() values(); tutorial_1093_td=\ Show the result of <code>Statement.getGeneratedKeys()</code>. tutorial_1094_td=\ @history; tutorial_1095_td=\ List the command history. tutorial_1096_td=\ @info; tutorial_1097_td=\ Display the result of various <code>Connection</code> and <code>DatabaseMetaData</code> methods. tutorial_1098_td=\ @list select * from test; tutorial_1099_td=\ Show the result set in list format (each column on its own line, with row numbers). tutorial_1100_td=\ @loop 1000 select ?, ?/*rnd*/; tutorial_1101_td=\ @loop 1000 @statement select ?; tutorial_1102_td=\ Run the statement this many times. Parameters (<code>?</code>) are set using a loop from 0 up to x - 1. Random values are used for each <code>?/*rnd*/</code>. A Statement object is used instead of a PreparedStatement if <code>@statement</code> is used. Result sets are read until <code>ResultSet.next()</code> returns <code>false</code>. Timing information is printed. tutorial_1103_td=\ @maxrows 20; tutorial_1104_td=\ Set the maximum number of rows to display. tutorial_1105_td=\ @memory; tutorial_1106_td=\ Show the used and free memory. This will call <code>System.gc()</code>. tutorial_1107_td=\ @meta select 1; tutorial_1108_td=\ List the <code>ResultSetMetaData</code> after running the query. tutorial_1109_td=\ @parameter_meta select ?; tutorial_1110_td=\ Show the result of the <code>PreparedStatement.getParameterMetaData()</code> calls. The statement is not executed. tutorial_1111_td=\ @prof_start; tutorial_1112_td=\ call hash('SHA256', '', 1000000); tutorial_1113_td=\ @prof_stop; tutorial_1114_td=\ Start/stop the built-in profiling tool. The top 3 stack traces of the statement(s) between start and stop are listed (if there are 3). tutorial_1115_td=\ @prof_start; tutorial_1116_td=\ @sleep 10; tutorial_1117_td=\ @prof_stop; tutorial_1118_td=\ Sleep for a number of seconds. Used to profile a long-running query or operation that is running in another session (but in the same process). tutorial_1119_td=\ @transaction_isolation; tutorial_1120_td=\ @transaction_isolation 2; tutorial_1121_td=\ Display (without parameters) or change (with parameters 1, 2, 4, 8) the transaction isolation level. tutorial_1122_h2=Settings of the H2 Console tutorial_1123_p=\ The settings of the H2 Console are stored in a configuration file called <code>.h2.server.properties</code> in your user home directory. For Windows installations, the user home directory is usually <code>C\:\\Documents and Settings\\[username]</code> or <code>C\:\\Users\\[username]</code>. The configuration file contains the settings of the application and is automatically created when the H2 Console is first started. Supported settings are\: tutorial_1124_code=webAllowOthers tutorial_1125_li=\: allow other computers to connect. 
tutorial_1126_code=webPort tutorial_1127_li=\: the port of the H2 Console tutorial_1128_code=webSSL tutorial_1129_li=\: use encrypted TLS (HTTPS) connections. tutorial_1130_p=\ In addition to those settings, the properties of recently used connections are listed in the form <code><number>\=<name>|<driver>|<url>|<user></code> using the escape character <code>\\</code>. Example\: <code>1\=Generic H2 (Embedded)|org.h2.Driver|jdbc\\\:h2\\\:~/test|sa</code> tutorial_1131_h2=Connecting to a Database using JDBC tutorial_1132_p=\ To connect to a database, a Java application first needs to load the database driver, and then get a connection. A simple way to do that is using the following code\: tutorial_1133_p=\ This code first loads the driver (<code>Class.forName(...)</code>) and then opens a connection (using <code>DriverManager.getConnection()</code>). The driver name is <code>"org.h2.Driver"</code>. The database URL always needs to start with <code>jdbc\:h2\:</code> to be recognized by this database. The second parameter in the <code>getConnection()</code> call is the user name (<code>sa</code> for System Administrator in this example). The third parameter is the password. In this database, user names are not case sensitive, but passwords are. tutorial_1134_h2=Creating New Databases tutorial_1135_p=\ By default, if the database specified in the URL does not yet exist, a new (empty) database is created automatically. The user that created the database automatically becomes the administrator of this database. tutorial_1136_p=\ Auto-creating new databases can be disabled; see <a href\="features.html\#database_only_if_exists">Opening a Database Only if it Already Exists</a>. tutorial_1137_h2=Using the Server tutorial_1138_p=\ H2 currently supports three servers\: a web server (for the H2 Console), a TCP server (for client/server connections), and a PG server (for PostgreSQL clients). Please note that only the web server supports browser connections. The servers can be started in different ways; one is using the <code>Server</code> tool. Starting the server doesn't open a database - databases are opened as soon as a client connects. tutorial_1139_h3=Starting the Server Tool from Command Line tutorial_1140_p=\ To start the <code>Server</code> tool from the command line with the default settings, run\: tutorial_1141_p=\ This will start the tool with the default options. To get the list of options and default values, run\: tutorial_1142_p=\ There are options available to use other ports, and to start or not start individual servers. tutorial_1143_h3=Connecting to the TCP Server tutorial_1144_p=\ To remotely connect to a database using the TCP server, use the following driver and database URL\: tutorial_1145_li=JDBC driver class\: <code>org.h2.Driver</code> tutorial_1146_li=Database URL\: <code>jdbc\:h2\:tcp\://localhost/~/test</code> tutorial_1147_p=\ For details about the database URL, see also the Features documentation. Please note that you can't connect to this URL with a web browser. You can only connect using a H2 client (over JDBC). tutorial_1148_h3=Starting the TCP Server within an Application tutorial_1149_p=\ Servers can also be started and stopped from within an application. Sample code\: tutorial_1150_h3=Stopping a TCP Server from Another Process tutorial_1151_p=\ The TCP server can be stopped from another process. To stop the server from the command line, run\: tutorial_1152_p=\ To stop the server from a user application, use the following code\: tutorial_1153_p=\ This function will only stop the TCP server. 
If other servers were started in the same process, they will continue to run. To avoid recovery when the databases are opened the next time, all connections to the databases should be closed before calling this method. To stop a remote server, remote connections must be enabled on the server. Shutting down a TCP server can be protected using the option <code>-tcpPassword</code> (the same password must be used to start and stop the TCP server). tutorial_1154_h2=Using Hibernate tutorial_1155_p=\ This database supports Hibernate version 3.1 and newer. You can use the HSQLDB Dialect, or the native H2 Dialect. Unfortunately, the H2 Dialect included in some old versions of Hibernate was buggy. A <a href\="http\://opensource.atlassian.com/projects/hibernate/browse/HHH-3401">patch for Hibernate</a> has been submitted and is now applied. You can rename it to <code>H2Dialect.java</code> and include this as a patch in your application, or upgrade to a version of Hibernate where this is fixed. tutorial_1156_p=\ When using Hibernate, try to use the <code>H2Dialect</code> if possible. When using the <code>H2Dialect</code>, compatibility modes such as <code>MODE\=MySQL</code> are not supported. When using such a compatibility mode, use the Hibernate dialect for the corresponding database instead of the <code>H2Dialect</code>; but please note H2 does not support all features of all databases. tutorial_1157_h2=Using TopLink and Glassfish tutorial_1158_p=\ To use H2 with Glassfish (or Sun AS), set the Datasource Classname to <code>org.h2.jdbcx.JdbcDataSource</code>. You can set this in the GUI at Application Server - Resources - JDBC - Connection Pools, or by editing the file <code>sun-resources.xml</code>\: at element <code>jdbc-connection-pool</code>, set the attribute <code>datasource-classname</code> to <code>org.h2.jdbcx.JdbcDataSource</code>. tutorial_1159_p=\ The H2 database is compatible with HSQLDB and PostgreSQL. To take advantage of H2 specific features, use the <code>H2Platform</code>. The source code of this platform is included in H2 at <code>src/tools/oracle/toplink/essentials/platform/database/DatabasePlatform.java.txt</code>. You will need to copy this file to your application, and rename it to .java. To enable it, change the following setting in persistence.xml\: tutorial_1160_p=\ In old versions of Glassfish, the property name is <code>toplink.platform.class.name</code>. tutorial_1161_p=\ To use H2 within Glassfish, copy the h2*.jar to the directory <code>glassfish/glassfish/lib</code>. tutorial_1162_h2=Using EclipseLink tutorial_1163_p=\ To use H2 in EclipseLink, use the platform class <code>org.eclipse.persistence.platform.database.H2Platform</code>. If this platform is not available in your version of EclipseLink, you can use the OraclePlatform instead in many cases. See also <a href\="http\://wiki.eclipse.org/EclipseLink/Development/Incubator/Extensions/H2Platform">H2Platform</a>. tutorial_1164_h2=Using Apache ActiveMQ tutorial_1165_p=\ When using H2 as the backend database for Apache ActiveMQ, please use the <code>TransactDatabaseLocker</code> instead of the default locking mechanism. Otherwise the database file will grow without bounds. The problem is that the default locking mechanism uses an uncommitted <code>UPDATE</code> transaction, which keeps the transaction log from shrinking (causes the database file to grow). Instead of using an <code>UPDATE</code> statement, the <code>TransactDatabaseLocker</code> uses <code>SELECT ... FOR UPDATE</code>, which is not problematic. 
To use it, set the property <code>databaseLocker\="org.apache.activemq.store.jdbc.adapter.TransactDatabaseLocker"</code> on the ActiveMQ configuration element <code><jdbcPersistenceAdapter></code>. However, using the MVCC mode will again result in the same problem. Therefore, please do not use the MVCC mode in this case. Another (more dangerous) solution is to set <code>useDatabaseLock</code> to false. tutorial_1166_h2=Using H2 within NetBeans tutorial_1167_p=\ The project <a href\="http\://kenai.com/projects/nbh2">H2 Database Engine Support For NetBeans</a> allows you to start and stop the H2 server from within the IDE. tutorial_1168_p=\ There is a known issue when using the NetBeans SQL Execution Window\: before executing a query, another query in the form <code>SELECT COUNT(*) FROM <query></code> is run. This is a problem for queries that modify state, such as <code>SELECT SEQ.NEXTVAL</code>. In this case, two sequence values are allocated instead of just one. tutorial_1169_h2=Using H2 with jOOQ tutorial_1170_p=\ jOOQ adds a thin layer on top of JDBC, allowing for type-safe SQL construction, including advanced SQL, stored procedures and advanced data types. jOOQ takes your database schema as a base for code generation. If this is your example schema\: tutorial_1171_p=\ then run the jOOQ code generator on the command line using this command\: tutorial_1172_p=\ ...where <code>codegen.xml</code> is on the classpath and contains this information\: tutorial_1173_p=\ Using the generated source, you can query the database as follows\: tutorial_1174_p=\ See more details on the <a href\="http\://www.jooq.org">jOOQ Homepage</a> and in the <a href\="http\://www.jooq.org/tutorial.php">jOOQ Tutorial</a>. tutorial_1175_h2=Using Databases in Web Applications tutorial_1176_p=\ There are multiple ways to access a database from within web applications. Here are some examples if you use Tomcat or JBoss. tutorial_1177_h3=Embedded Mode tutorial_1178_p=\ The (currently) simplest solution is to use the database in the embedded mode, that means opening a connection in your application when it starts (a good solution is using a Servlet Listener, see below), or when a session starts. A database can be accessed from multiple sessions and applications at the same time, as long as they run in the same process. Most Servlet Containers (for example Tomcat) use just one process, so this is not a problem (unless you run Tomcat in clustered mode). Tomcat uses multiple threads and multiple classloaders. If multiple applications access the same database at the same time, you need to put the database jar in the <code>shared/lib</code> or <code>server/lib</code> directory. It is a good idea to open the database when the web application starts, and close it when the web application stops. If using multiple applications, only one (any) of them needs to do that. In the application, one approach is to use one connection per session, or even one connection per request (action). Those connections should be closed after use if possible (but it's not that bad if they don't get closed). tutorial_1179_h3=Server Mode tutorial_1180_p=\ The server mode is similar, but it allows you to run the server in another process. 
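From the application's point of view, the main difference between the embedded mode and the server mode is the database URL. A minimal sketch (the URLs, the user name <code>sa</code>, and the empty password are placeholders; a TCP server must already be running for the second connection to work):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ConnectionModes {
        public static void main(String... args) throws Exception {
            // Embedded mode: the database runs in the same process as the application
            Connection embedded = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
            embedded.close();

            // Server mode: the database runs in a separate process and is reached over TCP
            Connection remote = DriverManager.getConnection(
                    "jdbc:h2:tcp://localhost/~/test", "sa", "");
            remote.close();
        }
    }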
tutorial_1181_h3=Using a Servlet Listener to Start and Stop a Database tutorial_1182_p=\ Add the h2*.jar file to your web application, and add the following snippet to your web.xml file (between the <code>context-param</code> and the <code>filter</code> section)\: tutorial_1183_p=\ For details on how to access the database, see the file <code>DbStarter.java</code>. By default this tool opens an embedded connection using the database URL <code>jdbc\:h2\:~/test</code>, user name <code>sa</code>, and password <code>sa</code>. If you want to use this connection within your servlet, you can access it as follows\: tutorial_1184_code=DbStarter tutorial_1185_p=\ can also start the TCP server; however, this is disabled by default. To enable it, use the parameter <code>db.tcpServer</code> in the file <code>web.xml</code>. Here is the complete list of options. These options need to be placed between the <code>description</code> tag and the <code>listener</code> / <code>filter</code> tags\: tutorial_1186_p=\ When the web application is stopped, the database connection will be closed automatically. If the TCP server is started within the <code>DbStarter</code>, it will also be stopped automatically. tutorial_1187_h3=Using the H2 Console Servlet tutorial_1188_p=\ The H2 Console is a standalone application and includes its own web server, but it can be used as a servlet as well. To do that, include the <code>h2*.jar</code> file in your application, and add the following configuration to your <code>web.xml</code>\: tutorial_1189_p=\ For details, see also <code>src/tools/WEB-INF/web.xml</code>. tutorial_1190_p=\ To create a web application with just the H2 Console, run the following command\: tutorial_1191_h2=Android tutorial_1192_p=\ You can use this database on an Android device (using the Dalvik VM) instead of or in addition to SQLite. So far, only very few tests and benchmarks were run, but it seems that performance is similar to SQLite, except for opening and closing a database, which is not yet optimized in H2 (H2 takes about 0.2 seconds, and SQLite about 0.02 seconds). Read operations seem to be a bit faster than SQLite, and write operations seem to be slower. Everything tested so far seems to work as expected. Fulltext search was not yet tested; however, the native fulltext search should work. tutorial_1193_p=\ Reasons to use H2 instead of SQLite are\: tutorial_1194_li=Full Unicode support including UPPER() and LOWER(). tutorial_1195_li=Streaming API for BLOB and CLOB data. tutorial_1196_li=Fulltext search. tutorial_1197_li=Multiple connections. tutorial_1198_li=User defined functions and triggers. tutorial_1199_li=Database file encryption. tutorial_1200_li=Reading and writing CSV files (this feature can be used outside the database as well). tutorial_1201_li=Referential integrity and check constraints. tutorial_1202_li=Better data type and SQL support. tutorial_1203_li=In-memory databases, read-only databases, linked tables. tutorial_1204_li=Better compatibility with other databases, which simplifies porting applications. tutorial_1205_li=Possibly better performance (so far for read operations). tutorial_1206_li=Server mode (accessing a database on a different machine over TCP/IP). tutorial_1207_p=\ Currently only the JDBC API is supported (it is planned to support the Android database API in future releases). Both the regular H2 jar file and the smaller <code>h2small-*.jar</code> can be used. 
To create the smaller jar file, run the command <code>./build.sh jarSmall</code> (Linux / Mac OS) or <code>build.bat jarSmall</code> (Windows). tutorial_1208_p=\ The database files need to be stored in a place that is accessible to the application. Example\: tutorial_1209_p=\ Limitations\: Using a connection pool is currently not supported, because the required <code>javax.sql.</code> classes are not available on Android. tutorial_1210_h2=CSV (Comma Separated Values) Support tutorial_1211_p=\ The CSV file support can be used inside the database using the functions <code>CSVREAD</code> and <code>CSVWRITE</code>, or it can be used outside the database as a standalone tool. tutorial_1212_h3=Reading a CSV File from Within a Database tutorial_1213_p=\ A CSV file can be read using the function <code>CSVREAD</code>. Example\: tutorial_1214_p=\ Please note that for performance reasons, <code>CSVREAD</code> should not be used inside a join. Instead, import the data first (possibly into a temporary table), create the required indexes if necessary, and then query this table. tutorial_1215_h3=Importing Data from a CSV File tutorial_1216_p=\ A fast way to load or import data (sometimes called 'bulk load') from a CSV file is to combine table creation with import. Optionally, the column names and data types can be set when creating the table. Another option is to use <code>INSERT INTO ... SELECT</code>. tutorial_1217_h3=Writing a CSV File from Within a Database tutorial_1218_p=\ The built-in function <code>CSVWRITE</code> can be used to create a CSV file from a query. Example\: tutorial_1219_h3=Writing a CSV File from a Java Application tutorial_1220_p=\ The <code>Csv</code> tool can be used in a Java application even when not using a database at all. Example\: tutorial_1221_h3=Reading a CSV File from a Java Application tutorial_1222_p=\ It is possible to read a CSV file without opening a database. Example\: tutorial_1223_h2=Upgrade, Backup, and Restore tutorial_1224_h3=Database Upgrade tutorial_1225_p=\ The recommended way to upgrade from one version of the database engine to the next version is to create a backup of the database (in the form of a SQL script) using the old engine, and then execute the SQL script using the new engine. tutorial_1226_h3=Backup using the Script Tool tutorial_1227_p=\ The recommended way to back up a database is to create a compressed SQL script file. This will result in a small, human readable, and database version independent backup. Creating the script will also verify the checksums of the database file. The <code>Script</code> tool is run as follows\: tutorial_1228_p=\ It is also possible to use the SQL command <code>SCRIPT</code> to create the backup of the database. For more information about the options, see the SQL command <code>SCRIPT</code>. The backup can be done remotely; however, the file will be created on the server side. The built-in FTP server could be used to retrieve the file from the server. tutorial_1229_h3=Restore from a Script tutorial_1230_p=\ To restore a database from a SQL script file, you can use the <code>RunScript</code> tool\: tutorial_1231_p=\ For more information about the options, see the SQL command <code>RUNSCRIPT</code>. The restore can be done remotely; however, the file needs to be on the server side. The built-in FTP server could be used to copy the file to the server. It is also possible to use the SQL command <code>RUNSCRIPT</code> to execute a SQL script. 
SQL script files may contain references to other script files, in the form of <code>RUNSCRIPT</code> commands. However, when using the server mode, the referenced script files need to be available on the server side. tutorial_1232_h3=Online Backup tutorial_1233_p=\ The <code>BACKUP</code> SQL statement and the <code>Backup</code> tool both create a zip file with the database file. However, the contents of this file are not human readable. tutorial_1234_p=\ The resulting backup is transactionally consistent, meaning the consistency and atomicity rules apply. tutorial_1235_p=\ The <code>Backup</code> tool (<code>org.h2.tools.Backup</code>) cannot be used to create an online backup; the database must not be in use while running this program. tutorial_1236_p=\ Creating a backup by copying the database files while the database is running is not supported, except if the file systems support creating snapshots. With other file systems, it can't be guaranteed that the data is copied in the right order. tutorial_1237_h2=Command Line Tools tutorial_1238_p=\ This database comes with a number of command line tools. To get more information about a tool, start it with the parameter '-?', for example\: tutorial_1239_p=\ The command line tools are\: tutorial_1240_code=Backup tutorial_1241_li=\ creates a backup of a database. tutorial_1242_code=ChangeFileEncryption tutorial_1243_li=\ allows changing the file encryption password or algorithm of a database. tutorial_1244_code=Console tutorial_1245_li=\ starts the browser based H2 Console. tutorial_1246_code=ConvertTraceFile tutorial_1247_li=\ converts a .trace.db file to a Java application and SQL script. tutorial_1248_code=CreateCluster tutorial_1249_li=\ creates a cluster from a standalone database. tutorial_1250_code=DeleteDbFiles tutorial_1251_li=\ deletes all files belonging to a database. tutorial_1252_code=Recover tutorial_1253_li=\ helps recover a corrupted database. tutorial_1254_code=Restore tutorial_1255_li=\ restores a backup of a database. tutorial_1256_code=RunScript tutorial_1257_li=\ runs a SQL script against a database. tutorial_1258_code=Script tutorial_1259_li=\ allows converting a database to a SQL script for backup or migration. tutorial_1260_code=Server tutorial_1261_li=\ is used in the server mode to start a H2 server. tutorial_1262_code=Shell tutorial_1263_li=\ is a command line database tool. tutorial_1264_p=\ The tools can also be called from an application by calling the main or another public method. For details, see the Javadoc documentation. tutorial_1265_h2=The Shell Tool tutorial_1266_p=\ The Shell tool is a simple interactive command line tool. To start it, type\: tutorial_1267_p=\ You will be asked for a database URL, JDBC driver, user name, and password. The connection settings can also be set as command line parameters. After connecting, you will get the list of options. The built-in commands don't need to end with a semicolon, but SQL statements are only executed if the line ends with a semicolon <code>;</code>. This allows you to enter multi-line statements\: tutorial_1268_p=\ By default, results are printed as a table. For results with many columns, consider using the list mode\: tutorial_1269_h2=Using OpenOffice Base tutorial_1270_p=\ OpenOffice.org Base supports database access over the JDBC API. To connect to a H2 database using OpenOffice Base, you first need to add the JDBC driver to OpenOffice. 
The steps to connect to a H2 database are\: tutorial_1271_li=Start OpenOffice Writer, go to [Tools], [Options] tutorial_1272_li=Make sure you have selected a Java runtime environment in OpenOffice.org / Java tutorial_1273_li=Click [Class Path...], [Add Archive...] tutorial_1274_li=Select your h2 jar file (location is up to you, could be wherever you choose) tutorial_1275_li=Click [OK] (as much as needed), stop OpenOffice (including the Quickstarter) tutorial_1276_li=Start OpenOffice Base tutorial_1277_li=Connect to an existing database; select [JDBC]; [Next] tutorial_1278_li=Example datasource URL\: <code>jdbc\:h2\:~/test</code> tutorial_1279_li=JDBC driver class\: <code>org.h2.Driver</code> tutorial_1280_p=\ Now you can access the database stored in the current user's home directory. tutorial_1281_p=\ To use H2 in NeoOffice (OpenOffice without X11)\: tutorial_1282_li=In NeoOffice, go to [NeoOffice], [Preferences] tutorial_1283_li=Look for the page under [NeoOffice], [Java] tutorial_1284_li=Click [Class Path], [Add Archive...] tutorial_1285_li=Select your h2 jar file (location is up to you, could be wherever you choose) tutorial_1286_li=Click [OK] (as much as needed), restart NeoOffice. tutorial_1287_p=\ Now, when creating a new database using the "Database Wizard"\: tutorial_1288_li=Click [File], [New], [Database]. tutorial_1289_li=Select [Connect to existing database] and then select [JDBC]. Click [Next]. tutorial_1290_li=Example datasource URL\: <code>jdbc\:h2\:~/test</code> tutorial_1291_li=JDBC driver class\: <code>org.h2.Driver</code> tutorial_1292_p=\ Another solution for using H2 in NeoOffice is\: tutorial_1293_li=Package the h2 jar within an extension package tutorial_1294_li=Install it as a Java extension in NeoOffice tutorial_1295_p=\ This can be done by creating the extension using the NetBeans OpenOffice plugin. See also <a href\="http\://wiki.services.openoffice.org/wiki/Extensions_development_java">Extensions Development</a>. tutorial_1296_h2=Java Web Start / JNLP tutorial_1297_p=\ When using Java Web Start / JNLP (Java Network Launch Protocol), permissions tags must be set in the .jnlp file, and the application .jar file must be signed. Otherwise, when trying to write to the file system, the following exception will occur\: <code>java.security.AccessControlException</code>\: access denied (<code>java.io.FilePermission ... read</code>). Example permission tags\: tutorial_1298_h2=Using a Connection Pool tutorial_1299_p=\ For H2, opening a connection is fast if the database is already open. Still, using a connection pool improves performance if you open and close connections a lot. A simple connection pool is included in H2. It is based on the <a href\="http\://www.source-code.biz/snippets/java/8.htm">Mini Connection Pool Manager</a> from Christian d'Heureuse. There are other, more complex, open source connection pools available, for example the <a href\="http\://jakarta.apache.org/commons/dbcp/">Apache Commons DBCP</a>. For H2, it is about twice as fast to get a connection from the built-in connection pool as to get one using <code>DriverManager.getConnection()</code>. The built-in connection pool is used as follows\: tutorial_1300_h2=Fulltext Search tutorial_1301_p=\ H2 includes two fulltext search implementations. One uses Apache Lucene, and the other (the native implementation) stores the index data in special tables in the database. 
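Returning to the built-in connection pool described in the previous section, a minimal sketch of its use (the database URL and credentials are placeholders):

    import java.sql.Connection;
    import org.h2.jdbcx.JdbcConnectionPool;

    public class PoolSketch {
        public static void main(String... args) throws Exception {
            // Create the pool once, for example at application startup
            JdbcConnectionPool pool = JdbcConnectionPool.create(
                    "jdbc:h2:~/test", "sa", "");
            Connection conn = pool.getConnection();
            // ... use the connection ...
            conn.close();   // returns the connection to the pool
            pool.dispose(); // closes unused pooled connections at shutdown
        }
    }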
tutorial_1302_h3=Using the Native Fulltext Search tutorial_1303_p=\ To initialize, call\: tutorial_1304_p=\ You need to initialize it in each database where you want to use it. Afterwards, you can create a fulltext index for a table using\: tutorial_1305_p=\ PUBLIC is the schema name, TEST is the table name. The list of column names (comma separated) is optional; in this case all columns are indexed. The index is updated in real time. To search the index, use the following query\: tutorial_1306_p=\ This will produce a result set that contains the query needed to retrieve the data\: tutorial_1307_p=\ To drop an index on a table\: tutorial_1308_p=\ To get the raw data, use <code>FT_SEARCH_DATA('Hello', 0, 0);</code>. The result contains the columns <code>SCHEMA</code> (the schema name), <code>TABLE</code> (the table name), <code>COLUMNS</code> (an array of column names), and <code>KEYS</code> (an array of objects). To join a table, use a join as in\: <code>SELECT T.* FROM FT_SEARCH_DATA('Hello', 0, 0) FT, TEST T WHERE FT.TABLE\='TEST' AND T.ID\=FT.KEYS[0];</code> tutorial_1309_p=\ You can also call the index from within a Java application\: tutorial_1310_h3=Using the Apache Lucene Fulltext Search tutorial_1311_p=\ To use the Apache Lucene full text search, you need the Lucene library in the classpath. Currently, Apache Lucene 3.6.2 is used for testing. Newer versions may work; however, they are not tested. How to do that depends on the application; if you use the H2 Console, you can add the Lucene jar file to the environment variables <code>H2DRIVERS</code> or <code>CLASSPATH</code>. To initialize the Lucene fulltext search in a database, call\: tutorial_1312_p=\ You need to initialize it in each database where you want to use it. Afterwards, you can create a full text index for a table using\: tutorial_1313_p=\ PUBLIC is the schema name, TEST is the table name. The list of column names (comma separated) is optional; in this case all columns are indexed. The index is updated in real time. To search the index, use the following query\: tutorial_1314_p=\ This will produce a result set that contains the query needed to retrieve the data\: tutorial_1315_p=\ To drop an index on a table (be warned that this will re-index all of the full-text indices for the entire database)\: tutorial_1316_p=\ To get the raw data, use <code>FTL_SEARCH_DATA('Hello', 0, 0);</code>. The result contains the columns <code>SCHEMA</code> (the schema name), <code>TABLE</code> (the table name), <code>COLUMNS</code> (an array of column names), and <code>KEYS</code> (an array of objects). To join a table, use a join as in\: <code>SELECT T.* FROM FTL_SEARCH_DATA('Hello', 0, 0) FT, TEST T WHERE FT.TABLE\='TEST' AND T.ID\=FT.KEYS[0];</code> tutorial_1317_p=\ You can also call the index from within a Java application\: tutorial_1318_p=\ The Lucene fulltext search supports searching in specific columns only. Column names must be uppercase (except if the original columns are double quoted). For column names starting with an underscore (_), another underscore needs to be added. Example\: tutorial_1319_h2=User-Defined Variables tutorial_1320_p=\ This database supports user-defined variables. Variables start with <code>@</code> and can be used wherever expressions or parameters are allowed. Variables are not persisted and are session scoped, that means they are only visible from within the session in which they are defined. A value is usually assigned using the SET command\: tutorial_1321_p=\ The value can also be changed using the SET() method. 
This is useful in queries\: tutorial_1322_p=\ Variables that are not set evaluate to <code>NULL</code>. The data type of a user-defined variable is the data type of the value assigned to it, that means it is not necessary (or possible) to declare variable names before using them. There are no restrictions on the assigned values; large objects (LOBs) are supported as well. Rolling back a transaction does not affect the value of a user-defined variable. tutorial_1323_h2=Date and Time tutorial_1324_p=\ Date, time and timestamp values support ISO 8601 formatting, including time zone\: tutorial_1325_p=\ If the time zone is not set, the value is parsed using the current time zone setting of the system. Date and time information is stored in H2 database files without time zone information. If the database is opened using another system time zone, the date and time will be the same. That means if you store the value '2000-01-01 12\:00\:00' in one time zone, then close the database and open the database again in a different time zone, you will also get '2000-01-01 12\:00\:00'. Please note that changing the time zone after the H2 driver is loaded is not supported. tutorial_1326_h2=Using Spring tutorial_1327_h3=Using the TCP Server tutorial_1328_p=\ Use the following configuration to start and stop the H2 TCP server using the Spring Framework\: tutorial_1329_p=\ The <code>destroy-method</code> will help prevent exceptions on hot-redeployment or when restarting the server. tutorial_1330_h3=Error Code Incompatibility tutorial_1331_p=\ There is an incompatibility with the Spring JdbcTemplate and H2 version 1.3.154 and newer, because of a change in the error code. This will cause the JdbcTemplate to not detect a duplicate key condition, and so a <code>DataIntegrityViolationException</code> is thrown instead of <code>DuplicateKeyException</code>. See also <a href\="http\://jira.spring.io/browse/SPR-8235">the issue SPR-8235</a>. The workaround is to add the following XML file to the root of the classpath\: tutorial_1332_h2=OSGi tutorial_1333_p=\ The standard H2 jar can be dropped in as a bundle in an OSGi container. H2 implements the JDBC Service defined in OSGi Service Platform Release 4 Version 4.2 Enterprise Specification. The H2 Data Source Factory service is registered with the following properties\: <code>OSGI_JDBC_DRIVER_CLASS\=org.h2.Driver</code> and <code>OSGI_JDBC_DRIVER_NAME\=H2 JDBC Driver</code>. The <code>OSGI_JDBC_DRIVER_VERSION</code> property reflects the version of the driver as is. tutorial_1334_p=\ The following standard configuration properties are supported\: <code>JDBC_USER, JDBC_PASSWORD, JDBC_DESCRIPTION, JDBC_DATASOURCE_NAME, JDBC_NETWORK_PROTOCOL, JDBC_URL, JDBC_SERVER_NAME, JDBC_PORT_NUMBER</code>. Any other standard property will be rejected. Non-standard properties will be passed on to H2 in the connection URL. tutorial_1335_h2=Java Management Extension (JMX) tutorial_1336_p=\ Management over JMX is supported, but not enabled by default. To enable JMX, append <code>;JMX\=TRUE</code> to the database URL when opening the database. Various tools support JMX, one such tool is the <code>jconsole</code>. When opening the <code>jconsole</code>, connect to the process where the database is open (when using the server mode, you need to connect to the server process). Then go to the <code>MBeans</code> section. Under <code>org.h2</code> you will find one entry per database. 
The object name of the entry is the database short name, plus the path (each colon is replaced with an underscore character). tutorial_1337_p=\ The following attributes and operations are supported\: tutorial_1338_code=CacheSize tutorial_1339_li=\: the cache size currently in use in KB. tutorial_1340_code=CacheSizeMax tutorial_1341_li=\ (read/write)\: the maximum cache size in KB. tutorial_1342_code=Exclusive tutorial_1343_li=\: whether this database is open in exclusive mode or not. tutorial_1344_code=FileReadCount tutorial_1345_li=\: the number of file read operations since the database was opened. tutorial_1346_code=FileSize tutorial_1347_li=\: the file size in KB. tutorial_1348_code=FileWriteCount tutorial_1349_li=\: the number of file write operations since the database was opened. tutorial_1350_code=FileWriteCountTotal tutorial_1351_li=\: the number of file write operations since the database was created. tutorial_1352_code=LogMode tutorial_1353_li=\ (read/write)\: the current transaction log mode. See <code>SET LOG</code> for details. tutorial_1354_code=Mode tutorial_1355_li=\: the compatibility mode (<code>REGULAR</code> if no compatibility mode is used). tutorial_1356_code=MultiThreaded tutorial_1357_li=\: true if multi-threaded is enabled. tutorial_1358_code=Mvcc tutorial_1359_li=\: true if <code>MVCC</code> is enabled. tutorial_1360_code=ReadOnly tutorial_1361_li=\: true if the database is read-only. tutorial_1362_code=TraceLevel tutorial_1363_li=\ (read/write)\: the file trace level. tutorial_1364_code=Version tutorial_1365_li=\: the database version in use. tutorial_1366_code=listSettings tutorial_1367_li=\: list the database settings. tutorial_1368_code=listSessions tutorial_1369_li=\: list the open sessions, including currently executing statement (if any) and locked tables (if any). tutorial_1370_p=\ To enable JMX, you may need to set the system properties <code>com.sun.management.jmxremote</code> and <code>com.sun.management.jmxremote.port</code> as required by the JVM.
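As a minimal sketch of the JMX setup described above (the database URL and credentials are placeholders; the JVM flags may differ depending on your environment):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JmxSketch {
        public static void main(String... args) throws Exception {
            // Appending ;JMX=TRUE registers one MBean per database under "org.h2"
            Connection conn = DriverManager.getConnection(
                    "jdbc:h2:~/test;JMX=TRUE", "sa", "");
            // Keep the connection open and attach jconsole to this process;
            // for remote access, start the JVM with
            // -Dcom.sun.management.jmxremote and -Dcom.sun.management.jmxremote.port=<port>
            conn.close();
        }
    }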