Commit 5cbc873e authored by Thomas Mueller

--no commit message

Parent 4cde8b24
@@ -20,6 +20,8 @@ Advanced Topics
     Linked Tables</a><br />
 <a href="#transaction_isolation">
     Transaction Isolation</a><br />
+<a href="#mvcc">
+    Multi-Version Concurrency Control (MVCC)</a><br />
 <a href="#clustering">
     Clustering / High Availability</a><br />
 <a href="#two_phase_commit">
@@ -176,6 +178,26 @@ connection will get a lock timeout exception. The lock timeout can be set indivi
 for each connection.
 </p>
+<br /><a name="mvcc"></a>
+<h2>Multi-Version Concurrency Control (MVCC)</h2>
+<p>
+The MVCC feature allows higher concurrency than using (table level or row level) locks.
+When using MVCC in this database, delete, insert and update operations only issue a
+shared lock on the table. Tables are still locked exclusively when adding or removing columns,
+when dropping the table, and when using SELECT ... FOR UPDATE. Connections
+only 'see' committed data and their own changes. This means that if connection A updates
+a row but doesn't commit the change yet, connection B will see the old value.
+Only when the change is committed does the new value become visible to other connections
+(read committed). If multiple connections concurrently try to update the same row, this
+database fails fast: a concurrent update exception is thrown.
+</p>
+<p>
+To use the MVCC feature, append MVCC=TRUE to the database URL:
+</p>
+<pre>
+jdbc:h2:~/test;MVCC=TRUE
+</pre>
 <br /><a name="clustering"></a>
 <h2>Clustering / High Availability</h2>
 <p>
...
@@ -14,1032 +14,1044 @@
 #Transaction Isolation
 @advanced_1005_a
-#Clustering / High Availability
+#Multi-Version Concurrency Control (MVCC)
 @advanced_1006_a
-#Two Phase Commit
+#Clustering / High Availability
 @advanced_1007_a
-#Compatibility
+#Two Phase Commit
 @advanced_1008_a
-#Run as Windows Service
+#Compatibility
 @advanced_1009_a
-#ODBC Driver
+#Run as Windows Service
 @advanced_1010_a
-#ACID
+#ODBC Driver
 @advanced_1011_a
-#Durability Problems
+#ACID
 @advanced_1012_a
-#Using the Recover Tool
+#Durability Problems
 @advanced_1013_a
-#File Locking Protocols
+#Using the Recover Tool
 @advanced_1014_a
-#Protection against SQL Injection
+#File Locking Protocols
 @advanced_1015_a
-#Security Protocols
+#Protection against SQL Injection
 @advanced_1016_a
-#Universally Unique Identifiers (UUID)
+#Security Protocols
 @advanced_1017_a
-#Settings Read from System Properties
+#Universally Unique Identifiers (UUID)
 @advanced_1018_a
+#Settings Read from System Properties
+@advanced_1019_a
 #Glossary and Links
-@advanced_1019_h2
+@advanced_1020_h2
 #Result Sets
-@advanced_1020_h3
+@advanced_1021_h3
 #Limiting the Number of Rows
-@advanced_1021_p
+@advanced_1022_p
 #Before the result is returned to the application, all rows are read by the database. Server side cursors are not supported currently. If only the first few rows are interesting for the application, then the result set size should be limited to improve the performance. This can be done using LIMIT in a query (example: SELECT * FROM TEST LIMIT 100), or by using Statement.setMaxRows(max).
-@advanced_1022_h3
+@advanced_1023_h3
 #Large Result Sets and External Sorting
-@advanced_1023_p
+@advanced_1024_p
 #For result set larger than 1000 rows, the result is buffered to disk. If ORDER BY is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together.
-@advanced_1024_h2
+@advanced_1025_h2
 #Large Objects
-@advanced_1025_h3
+@advanced_1026_h3
 #Storing and Reading Large Objects
-@advanced_1026_p
+@advanced_1027_p
 #If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. If the client/server mode is used, the BLOB and CLOB data is fully read into memory when accessed. In this case, the size of a BLOB or CLOB is limited by the memory.
-@advanced_1027_h2
+@advanced_1028_h2
 #Linked Tables
-@advanced_1028_p
+@advanced_1029_p
 #This database supports linked tables, which means tables that don't exist in the current database but are just links to another database. To create such a link, use the CREATE LINKED TABLE statement:
-@advanced_1029_p
+@advanced_1030_p
 #It is then possible to access the table in the usual way. There is a restriction when inserting data to this table: When inserting or updating rows into the table, NULL and values that are not set in the insert statement are both inserted as NULL. This may not have the desired effect if a default value in the target table is other than NULL.
-@advanced_1030_p
+@advanced_1031_p
 #For each linked table a new connection is opened. This can be a problem for some databases when using many linked tables. For Oracle XE, the maximum number of connection can be increased. Oracle XE needs to be restarted after changing these values:
-@advanced_1031_h2
+@advanced_1032_h2
 #Transaction Isolation
-@advanced_1032_p
+@advanced_1033_p
 #This database supports the following transaction isolation levels:
-@advanced_1033_b
+@advanced_1034_b
 #Read Committed
-@advanced_1034_li
+@advanced_1035_li
 #This is the default level. Read locks are released immediately. Higher concurrency is possible when using this level.
-@advanced_1035_li
+@advanced_1036_li
 #To enable, execute the SQL statement 'SET LOCK_MODE 3'
-@advanced_1036_li
+@advanced_1037_li
 #or append ;LOCK_MODE=3 to the database URL: jdbc:h2:~/test;LOCK_MODE=3
-@advanced_1037_b
+@advanced_1038_b
 #Serializable
-@advanced_1038_li
+@advanced_1039_li
 #To enable, execute the SQL statement 'SET LOCK_MODE 1'
-@advanced_1039_li
+@advanced_1040_li
 #or append ;LOCK_MODE=1 to the database URL: jdbc:h2:~/test;LOCK_MODE=1
-@advanced_1040_b
+@advanced_1041_b
 #Read Uncommitted
-@advanced_1041_li
+@advanced_1042_li
 #This level means that transaction isolation is disabled.
-@advanced_1042_li
+@advanced_1043_li
 #To enable, execute the SQL statement 'SET LOCK_MODE 0'
-@advanced_1043_li
+@advanced_1044_li
 #or append ;LOCK_MODE=0 to the database URL: jdbc:h2:~/test;LOCK_MODE=0
-@advanced_1044_p
+@advanced_1045_p
 #When using the isolation level 'serializable', dirty reads, non-repeatable reads, and phantom reads are prohibited.
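The three LOCK_MODE values listed above can be gathered into a small helper when building connection URLs. This is an illustrative sketch (the class and method names are not part of H2; the numeric values are those documented above):

```java
import java.util.Map;

// Illustrative mapping of the isolation levels described above to H2's
// LOCK_MODE values: 0 = read uncommitted, 1 = serializable, 3 = read committed.
public class LockModeUrl {
    static final Map<String, Integer> LOCK_MODES = Map.of(
            "READ_UNCOMMITTED", 0,
            "SERIALIZABLE", 1,
            "READ_COMMITTED", 3);

    public static String urlFor(String baseUrl, String isolation) {
        // Settings are appended to an H2 URL with ';' separators.
        return baseUrl + ";LOCK_MODE=" + LOCK_MODES.get(isolation);
    }

    public static void main(String[] args) {
        System.out.println(urlFor("jdbc:h2:~/test", "SERIALIZABLE"));
    }
}
```

Alternatively, the same setting can be changed at runtime with the SQL statement `SET LOCK_MODE n`, as noted above.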
-@advanced_1045_b
+@advanced_1046_b
 #Dirty Reads
-@advanced_1046_li
+@advanced_1047_li
 #Means a connection can read uncommitted changes made by another connection.
-@advanced_1047_li
+@advanced_1048_li
 #Possible with: read uncommitted
-@advanced_1048_b
+@advanced_1049_b
 #Non-Repeatable Reads
-@advanced_1049_li
+@advanced_1050_li
 #A connection reads a row, another connection changes a row and commits, and the first connection re-reads the same row and gets the new result.
-@advanced_1050_li
+@advanced_1051_li
 #Possible with: read uncommitted, read committed
-@advanced_1051_b
+@advanced_1052_b
 #Phantom Reads
-@advanced_1052_li
+@advanced_1053_li
 #A connection reads a set of rows using a condition, another connection inserts a row that falls in this condition and commits, then the first connection re-reads using the same condition and gets the new row.
-@advanced_1053_li
+@advanced_1054_li
 #Possible with: read uncommitted, read committed
-@advanced_1054_h3
+@advanced_1055_h3
 #Table Level Locking
-@advanced_1055_p
+@advanced_1056_p
 #The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. It is allowed that other connections also have a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, other connection must not have any locks on the object. After the connection commits, all locks are released. This database keeps all locks in memory.
-@advanced_1056_h3
+@advanced_1057_h3
 #Lock Timeout
-@advanced_1057_p
+@advanced_1058_p
 #If a connection cannot get a lock on an object, the connection waits for some amount of time (the lock timeout). During this time, hopefully the connection holding the lock commits and it is then possible to get the lock. If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection.
-@advanced_1058_h2
+@advanced_1059_h2
+#Multi-Version Concurrency Control (MVCC)
+@advanced_1060_p
+#The MVCC feature allows higher concurrency than using (table level or row level) locks. When using MVCC in this database, delete, insert and update operations only issue a shared lock on the table. Tables are still locked exclusively when adding or removing columns, when dropping the table, and when using SELECT ... FOR UPDATE. Connections only 'see' committed data and their own changes. This means that if connection A updates a row but doesn't commit the change yet, connection B will see the old value. Only when the change is committed does the new value become visible to other connections (read committed). If multiple connections concurrently try to update the same row, this database fails fast: a concurrent update exception is thrown.
+@advanced_1061_p
+#To use the MVCC feature, append MVCC=TRUE to the database URL:
+@advanced_1062_h2
 #Clustering / High Availability
-@advanced_1059_p
+@advanced_1063_p
 #This database supports a simple clustering / high availability mechanism. The architecture is: two database servers run on two different computers, and on both computers is a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed only on one server until the other server is back up.
-@advanced_1060_p
+@advanced_1064_p
 #Clustering can only be used in the server mode (the embedded mode does not support clustering). It is possible to restore the cluster without stopping the server, however it is critical that no other application is changing the data in the first database while the second database is restored, so restoring the cluster is currently a manual process.
-@advanced_1061_p
+@advanced_1065_p
 #To initialize the cluster, use the following steps:
-@advanced_1062_li
+@advanced_1066_li
 #Create a database
-@advanced_1063_li
+@advanced_1067_li
 #Use the CreateCluster tool to copy the database to another location and initialize the clustering. Afterwards, you have two databases containing the same data.
-@advanced_1064_li
+@advanced_1068_li
 #Start two servers (one for each copy of the database)
-@advanced_1065_li
+@advanced_1069_li
 #You are now ready to connect to the databases with the client application(s)
-@advanced_1066_h3
+@advanced_1070_h3
 #Using the CreateCluster Tool
-@advanced_1067_p
+@advanced_1071_p
 #To understand how clustering works, please try out the following example. In this example, the two databases reside on the same computer, but usually, the databases will be on different servers.
-@advanced_1068_li
+@advanced_1072_li
 #Create two directories: server1 and server2. Each directory will simulate a directory on a computer.
-@advanced_1069_li
+@advanced_1073_li
 #Start a TCP server pointing to the first directory. You can do this using the command line:
-@advanced_1070_li
+@advanced_1074_li
 #Start a second TCP server pointing to the second directory. This will simulate a server running on a second (redundant) computer. You can do this using the command line:
-@advanced_1071_li
+@advanced_1075_li
 #Use the CreateCluster tool to initialize clustering. This will automatically create a new, empty database if it does not exist. Run the tool on the command line:
-@advanced_1072_li
+@advanced_1076_li
 #You can now connect to the databases using an application or the H2 Console using the JDBC URL jdbc:h2:tcp://localhost:9101,localhost:9102/test
-@advanced_1073_li
+@advanced_1077_li
 #If you stop a server (by killing the process), you will notice that the other machine continues to work, and therefore the database is still accessible.
-@advanced_1074_li
+@advanced_1078_li
 #To restore the cluster, you first need to delete the database that failed, then restart the server that was stopped, and re-run the CreateCluster tool.
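The cluster JDBC URL used in the steps above simply lists both server addresses separated by a comma. A sketch of building it programmatically (the helper name `clusterUrl` is illustrative, not part of H2):

```java
import java.util.List;

// Illustrative helper: build an H2 cluster JDBC URL of the form
// jdbc:h2:tcp://server1,server2/database, as used in the example above.
public class ClusterUrl {
    public static String clusterUrl(String database, List<String> servers) {
        return "jdbc:h2:tcp://" + String.join(",", servers) + "/" + database;
    }

    public static void main(String[] args) {
        System.out.println(
                clusterUrl("test", List.of("localhost:9101", "localhost:9102")));
    }
}
```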
-@advanced_1075_h3
+@advanced_1079_h3
 #Clustering Algorithm and Limitations
-@advanced_1076_p
+@advanced_1080_p
 #Read-only queries are only executed against the first cluster node, but all other statements are executed against all nodes. There is currently no load balancing made to avoid problems with transactions. The following functions may yield different results on different cluster nodes and must be executed with care: RANDOM_UUID(), SECURE_RAND(), SESSION_ID(), MEMORY_FREE(), MEMORY_USED(), CSVREAD(), CSVWRITE(), RAND() [when not using a seed]. Those functions should not be used directly in modifying statements (for example INSERT, UPDATE, or MERGE). However, they can be used in read-only statements and the result can then be used for modifying statements.
-@advanced_1077_h2
+@advanced_1081_h2
 #Two Phase Commit
-@advanced_1078_p
+@advanced_1082_p
 #The two phase commit protocol is supported. 2-phase-commit works as follows:
-@advanced_1079_li
+@advanced_1083_li
 #Autocommit needs to be switched off
-@advanced_1080_li
+@advanced_1084_li
 #A transaction is started, for example by inserting a row
-@advanced_1081_li
+@advanced_1085_li
 #The transaction is marked 'prepared' by executing the SQL statement <code>PREPARE COMMIT transactionName</code>
-@advanced_1082_li
+@advanced_1086_li
 #The transaction can now be committed or rolled back
-@advanced_1083_li
+@advanced_1087_li
 #If a problem occurs before the transaction was successfully committed or rolled back (for example because a network problem occurred), the transaction is in the state 'in-doubt'
-@advanced_1084_li
+@advanced_1088_li
 #When re-connecting to the database, the in-doubt transactions can be listed with <code>SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT</code>
-@advanced_1085_li
+@advanced_1089_li
 #Each transaction in this list must now be committed or rolled back by executing <code>COMMIT TRANSACTION transactionName</code> or <code>ROLLBACK TRANSACTION transactionName</code>
-@advanced_1086_li
+@advanced_1090_li
 #The database needs to be closed and re-opened to apply the changes
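The two-phase-commit steps above can be sketched as a SQL session. The statements are the ones named in the list; the table and transaction name (`tx_1`) are illustrative:

```sql
-- with autocommit off, start a transaction by doing some work
INSERT INTO TEST VALUES(1, 'Hello');
-- mark the transaction as prepared
PREPARE COMMIT tx_1;
-- ... if a failure occurs here, after re-connecting list the in-doubt transactions:
SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT;
-- then resolve each one, either:
COMMIT TRANSACTION tx_1;
-- or:
-- ROLLBACK TRANSACTION tx_1;
```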
@advanced_1087_h2 @advanced_1091_h2
#Compatibility #Compatibility
@advanced_1088_p @advanced_1092_p
#This database is (up to a certain point) compatible to other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible. #This database is (up to a certain point) compatible to other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible.
@advanced_1089_h3 @advanced_1093_h3
#Transaction Commit when Autocommit is On #Transaction Commit when Autocommit is On
@advanced_1090_p @advanced_1094_p
#At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case when the result set is closed. #At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case when the result set is closed.
@advanced_1091_h3 @advanced_1095_h3
#Keywords / Reserved Words #Keywords / Reserved Words
@advanced_1092_p @advanced_1096_p
#There is a list of keywords that can't be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). The list is currently: #There is a list of keywords that can't be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). The list is currently:
@advanced_1093_p @advanced_1097_p
#CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE, CROSS, DISTINCT, EXCEPT, EXISTS, FROM, FOR, FALSE, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, MINUS, NATURAL, NOT, NULL, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, WHERE #CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE, CROSS, DISTINCT, EXCEPT, EXISTS, FROM, FOR, FALSE, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, MINUS, NATURAL, NOT, NULL, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, WHERE
@advanced_1094_p @advanced_1098_p
#Certain words of this list are keywords because they are functions that can be used without '()' for compatibility, for example CURRENT_TIMESTAMP. #Certain words of this list are keywords because they are functions that can be used without '()' for compatibility, for example CURRENT_TIMESTAMP.
@advanced_1095_h2 @advanced_1099_h2
#Run as Windows Service #Run as Windows Service
@advanced_1096_p @advanced_1100_p
#Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from Tanuki Software, Inc. ( <a href="http://wrapper.tanukisoftware.org">http://wrapper.tanukisoftware.org</a> ) is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory H2/service. #Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from Tanuki Software, Inc. ( <a href="http://wrapper.tanukisoftware.org">http://wrapper.tanukisoftware.org</a> ) is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory H2/service.
@advanced_1097_h3 @advanced_1101_h3
#Install the Service #Install the Service
@advanced_1098_p @advanced_1102_p
#The service needs to be registered as a Windows Service first. To do that, double click on 1_install_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear. #The service needs to be registered as a Windows Service first. To do that, double click on 1_install_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
@advanced_1099_h3 @advanced_1103_h3
#Start the Service #Start the Service
@advanced_1100_p @advanced_1104_p
#You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on 2_start_service.bat. Please note that the batch file does not print an error message if the service is not installed. #You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on 2_start_service.bat. Please note that the batch file does not print an error message if the service is not installed.
@advanced_1101_h3 @advanced_1105_h3
#Connect to the H2 Console #Connect to the H2 Console
@advanced_1102_p @advanced_1106_p
#After installing and starting the service, you can connect to the H2 Console application using a browser. Double clicking on 3_start_browser.bat to do that. The default port (8082) is hard coded in the batch file. #After installing and starting the service, you can connect to the H2 Console application using a browser. Double clicking on 3_start_browser.bat to do that. The default port (8082) is hard coded in the batch file.
@advanced_1103_h3 @advanced_1107_h3
#Stop the Service #Stop the Service
@advanced_1108_p
#To stop the service, double click on 4_stop_service.bat. Please note that the batch file does not print an error message if the service is not installed or started.
@advanced_1109_h3
#Uninstall the Service
@advanced_1110_p
#To uninstall the service, double click on 5_uninstall_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
@advanced_1111_h2
#ODBC Driver
@advanced_1112_p
#This database does not come with its own ODBC driver at this time, but it supports the PostgreSQL network protocol. Therefore, the PostgreSQL ODBC driver can be used. Support for the PostgreSQL network protocol is quite new and should be viewed as experimental. It should not be used for production applications.
@advanced_1113_p
#At this time, the PostgreSQL ODBC driver does not work on 64 bit versions of Windows. For more information, see: <a href="http://svr5.postgresql.org/pgsql-odbc/2005-09/msg00127.php">ODBC Driver on Windows 64 bit</a>
@advanced_1114_h3
#ODBC Installation
@advanced_1115_p
#First, the ODBC driver must be installed. Any recent PostgreSQL ODBC driver should work; however, version 8.2.4 or newer is recommended. The Windows version of the PostgreSQL ODBC driver is available at <a href="http://www.postgresql.org/ftp/odbc/versions/msi">http://www.postgresql.org/ftp/odbc/versions/msi</a>.
@advanced_1116_h3
#Starting the Server
@advanced_1117_p
#After installing the ODBC driver, start the H2 Server using the command line:
@advanced_1118_p
#The PG Server (PG for PostgreSQL protocol) is started as well. By default, databases are stored in the current working directory where the server is started. Use -baseDir to save databases in another directory, for example the user home directory:
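The commands themselves are not part of this translation unit. A typical invocation of the org.h2.tools.Server tool looks like the following (the jar file name is illustrative and may differ in your distribution):

```shell
# Start the H2 Server (including the PG server) from the command line
java -cp h2.jar org.h2.tools.Server

# Store databases in the user home directory instead of the working directory
java -cp h2.jar org.h2.tools.Server -baseDir ~
```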
@advanced_1119_p
#The PG server can be started and stopped from within a Java application as follows:
@advanced_1120_p
#By default, only connections from localhost are allowed. To allow remote connections, use <code>-pgAllowOthers true</code> when starting the server.
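The Java snippet referred to above is likewise not included in this unit. A minimal sketch using the org.h2.tools.Server API (this requires the h2 jar on the classpath; the -baseDir value is illustrative):

```java
import java.sql.SQLException;
import org.h2.tools.Server;

public class StartPgServer {
    public static void main(String[] args) throws SQLException {
        // Start the PG protocol server (default port 5435)
        Server server = Server.createPgServer("-baseDir", "~").start();
        // ... ODBC clients can now connect ...
        server.stop();  // stop the server when the application shuts down
    }
}
```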
@advanced_1121_h3
#ODBC Configuration
@advanced_1122_p
#After installing the driver, a new Data Source must be added. In Windows, run <code>odbcad32.exe</code> to open the Data Source Administrator. Then click on 'Add...' and select the PostgreSQL Unicode driver. Then click 'Finish'. You will be able to change the connection properties:
@advanced_1123_th
#Property
@advanced_1124_th
#Example
@advanced_1125_th
#Remarks
@advanced_1126_td
#Data Source
@advanced_1127_td
#H2 Test
@advanced_1128_td
#The name of the ODBC Data Source
@advanced_1129_td
#Database
@advanced_1130_td
#test
@advanced_1131_td
#The database name. Only simple names are supported at this time;
@advanced_1132_td
#relative or absolute paths are not supported in the database name.
@advanced_1133_td
#By default, the database is stored in the current working directory
@advanced_1134_td
#where the Server is started except when the -baseDir setting is used.
@advanced_1135_td
#The name must be at least 3 characters long.
@advanced_1136_td
#Server
@advanced_1137_td
#localhost
@advanced_1138_td
#The server name or IP address.
@advanced_1139_td
#By default, only local connections are allowed.
@advanced_1140_td
#User Name
@advanced_1141_td
#sa
@advanced_1142_td
#The database user name.
@advanced_1143_td
#SSL Mode
@advanced_1144_td
#disabled
@advanced_1145_td
#At this time, SSL is not supported.
@advanced_1146_td
#Port
@advanced_1147_td
#5435
@advanced_1148_td
#The port where the PG Server is listening.
@advanced_1149_td
#Password
@advanced_1150_td
#sa
@advanced_1151_td
#The database password.
@advanced_1152_p
#Afterwards, you may use this data source.
@advanced_1153_h3
#PG Protocol Support Limitations
@advanced_1154_p
#At this time, only a subset of the PostgreSQL network protocol is implemented. Also, there may be compatibility problems on the SQL level, with the catalog, or with text encoding. Problems are fixed as they are found. Currently, statements cannot be cancelled when using the PG protocol.
@advanced_1155_h3
#Security Considerations
@advanced_1156_p
#Currently, the PG Server does not support challenge-response authentication or encrypted passwords. This may be a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable to the attacker. Also, it is currently not possible to use encrypted SSL connections. Therefore the ODBC driver should not be used where security is important.
@advanced_1157_h2
#ACID
@advanced_1158_p
#In the database world, ACID stands for:
@advanced_1159_li
#Atomicity: Transactions must be atomic, meaning either all tasks are performed or none.
@advanced_1160_li
#Consistency: All operations must comply with the defined constraints.
@advanced_1161_li
#Isolation: Transactions must be isolated from each other.
@advanced_1162_li
#Durability: Committed transactions will not be lost.
@advanced_1163_h3
#Atomicity
@advanced_1164_p
#Transactions in this database are always atomic.
@advanced_1165_h3
#Consistency
@advanced_1166_p
#This database is always in a consistent state. Referential integrity rules are always enforced.
@advanced_1167_h3
#Isolation
@advanced_1168_p
#For H2, as with most other database systems, the default isolation level is 'read committed'. This provides better performance, but also means that transactions are not completely isolated. H2 supports the transaction isolation levels 'serializable', 'read committed', and 'read uncommitted'.
@advanced_1169_h3
#Durability
@advanced_1170_p
#This database does not guarantee that all committed transactions survive a power failure. Tests show that all databases sometimes lose transactions on power failure (for details, see below). Where losing transactions is not acceptable, a laptop or UPS (uninterruptible power supply) should be used. If durability is required for all possible cases of hardware failure, clustering should be used, such as the H2 clustering mode.
@advanced_1171_h2
#Durability Problems
@advanced_1172_p
#Complete durability means all committed transactions survive a power failure. Some databases claim they can guarantee durability, but such claims are wrong. A durability test was run against H2, HSQLDB, PostgreSQL, and Derby. All of those databases sometimes lose committed transactions. The test is included in the H2 download, see org.h2.test.poweroff.Test.
@advanced_1173_h3
#Ways to (Not) Achieve Durability
@advanced_1174_p
#Making sure that committed transactions are not lost is more complicated than it first seems. To guarantee complete durability, a database must ensure that the log record is on the hard drive before the commit call returns. To do that, databases use different methods. One is to use the 'synchronous write' file access mode. In Java, RandomAccessFile supports the modes "rws" and "rwd":
@advanced_1175_li
#rwd: Every update to the file's content is written synchronously to the underlying storage device.
@advanced_1176_li
#rws: In addition to rwd, every update to the metadata is written synchronously.
@advanced_1177_p
#This feature is used by Derby. A test (org.h2.test.poweroff.TestWrite) with one of those modes achieves around 50 thousand write operations per second. Even when the operating system write buffer is disabled, the write rate is around 50 thousand operations per second. This feature does not force changes to disk because it does not flush all buffers. The test updates the same byte in the file again and again. If the hard drive was able to write at this rate, then the disk would need to make at least 50 thousand revolutions per second, or 3 million RPM (revolutions per minute). There are no such hard drives. The hard drive used for the test is about 7200 RPM, or about 120 revolutions per second. There is an overhead, so the maximum write rate must be lower than that.
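The "rwd" mode described above can be tried directly with a short program. This is an illustrative sketch (the class and file names are mine, not from the H2 test suite): each write returns only after the file content has been handed to the storage device synchronously.

```java
import java.io.File;
import java.io.RandomAccessFile;

public class SyncWriteDemo {
    // Write one record in "rwd" mode (synchronous content writes,
    // metadata not synced), then read it back for verification.
    public static String writeAndReadBack(File file, String record) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(file, "rwd")) {
            f.writeUTF(record); // returns after the content update reaches the device
        }
        try (RandomAccessFile f = new RandomAccessFile(file, "r")) {
            return f.readUTF();
        }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("rwd-demo", ".bin");
        file.deleteOnExit();
        System.out.println(writeAndReadBack(file, "COMMIT"));
    }
}
```

As the text notes, this mode alone does not flush the drive's own write cache, so it is not sufficient for durability.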
@advanced_1178_p
#Buffers can be flushed by calling the function fsync. There are two ways to do that in Java:
@advanced_1179_li
#FileDescriptor.sync(). The documentation says that this forces all system buffers to synchronize with the underlying device. Sync is supposed to return after all in-memory modified copies of buffers associated with this FileDescriptor have been written to the physical medium.
@advanced_1180_li
#FileChannel.force() (since JDK 1.4). This method is supposed to force any updates to this channel's file to be written to the storage device that contains it.
@advanced_1181_p
#By default, MySQL calls fsync for each commit. When using one of those methods, only around 60 write operations per second can be achieved, which is consistent with the RPM rate of the hard drive used. Unfortunately, even when calling FileDescriptor.sync() or FileChannel.force(), data is not always persisted to the hard drive, because most hard drives do not obey fsync(): see 'Your Hard Drive Lies to You' at http://hardware.slashdot.org/article.pl?sid=05/05/13/0529252. In Mac OS X, fsync does not flush hard drive buffers: http://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html. So the situation is confusing, and tests prove there is a problem.
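Both fsync variants mentioned above can be sketched in a few lines. The class and method names here are illustrative; the two JDK calls are the real ones from the text.

```java
import java.io.File;
import java.io.FileOutputStream;

public class FsyncDemo {
    // Append a log record, then ask the OS to flush it: once via
    // FileChannel.force(true) and once via FileDescriptor.sync().
    // Returns the file length after the write.
    public static long appendAndForce(File file, byte[] record) throws Exception {
        try (FileOutputStream out = new FileOutputStream(file, true)) {
            out.write(record);
            out.getChannel().force(true); // FileChannel.force, since JDK 1.4
            out.getFD().sync();           // FileDescriptor.sync, the older API
            return file.length();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("fsync-demo", ".log");
        f.deleteOnExit();
        System.out.println(appendAndForce(f, "COMMIT\n".getBytes()));
    }
}
```

Note that, as the text explains, even these calls do not guarantee the data is on the platter if the drive's own cache ignores fsync.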
@advanced_1182_p
#Trying to flush hard drive buffers is hard, and if you do, the performance is very bad. First you need to make sure that the hard drive actually flushes all buffers. Tests show that this cannot be done in a reliable way. Then the maximum number of transactions is around 60 per second. Because of those reasons, the default behavior of H2 is to delay writing committed transactions.
@advanced_1183_p
#In H2, after a power failure, a bit more than one second of committed transactions may be lost. To change the behavior, use SET WRITE_DELAY and CHECKPOINT SYNC. Most other databases support commit delay as well. In the performance comparison, commit delay was used for all databases that support it.
@advanced_1184_h3
#Running the Durability Test
@advanced_1185_p
#To test the durability / non-durability of this and other databases, you can use the test application in the package org.h2.test.poweroff. Two computers with a network connection are required to run this test. One computer just listens, while the test application is run (and power is cut) on the other computer. The computer with the listener application opens a TCP/IP port and listens for an incoming connection. The second computer first connects to the listener, and then creates the databases and starts inserting records. The connection is set to 'autocommit', which means after each inserted record a commit is performed automatically. Afterwards, the test computer notifies the listener that this record was inserted successfully. The listener computer displays the last inserted record number every 10 seconds. Now, switch off the power manually, then restart the computer, and run the application again. You will find out that in most cases, none of the databases contains all the records that the listener computer knows about. For details, please consult the source code of the listener and test application.
@advanced_1186_h2
#Using the Recover Tool
@advanced_1187_p
#The recover tool can be used to extract the contents of a data file, even if the database is corrupted. At this time, it does not extract the content of the log file or large objects (CLOB or BLOB). To run the tool, type on the command line:
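The command line itself is not included in this translation unit. A typical invocation of the Recover tool (the jar file name is illustrative) is:

```shell
# Run the Recover tool against the databases in the current directory
java -cp h2.jar org.h2.tools.Recover
```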
@advanced_1188_p
#For each database in the current directory, a text file will be created. This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file cannot be executed directly, as the raw insert statements don't have the correct table names, so the file needs to be pre-processed manually before executing.
@advanced_1189_h2
#File Locking Protocols
@advanced_1190_p
#Whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database terminates, this lock file is deleted.
@advanced_1191_p
#In special cases (if the process did not terminate normally, for example because there was a power failure), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files. There are two methods (algorithms) implemented to provide both security (that is, the same database files cannot be opened by two processes at the same time) and simplicity (that is, the lock file does not need to be deleted manually by the user). The two methods are the 'file method' and the 'socket method'.
@advanced_1192_h3
#File Locking Method 'File'
@advanced_1193_p
#The default method for database file locking is the 'File Method'. The algorithm is:
@advanced_1194_li
#When the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition where a process deletes the lock file just after another process created it, and a third process creates the file again. It does not occur if there are only two writers.
@advanced_1195_li
#If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (once every second by default) if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not go through undetected even if the system is very busy. However, the watchdog thread uses very little resources (CPU time), because it waits most of the time. Also, the watchdog only reads from the hard disk and does not write to it.
@advanced_1196_li
#If the lock file exists, and it was modified within the last 20 ms, the process waits for some time (repeating this check up to 10 times). If it was still changed, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case and the file is locked.
@advanced_1197_p
#This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. That said, using that many concurrent threads / processes is not the common use case. Generally, an application should throw an error to the user if it cannot open a database, and not retry in a (fast) loop.
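The first steps of the 'file' method can be sketched in a few lines. This is a much simplified illustration, not H2's implementation: the watchdog thread and the retry logic are omitted, and all names are mine.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;

public class FileLockSketch {
    // Create the lock file atomically, write a random challenge,
    // wait 20 ms, and verify nobody else changed the file meanwhile.
    public static boolean tryLock(File lockFile) throws Exception {
        if (!lockFile.createNewFile()) {     // atomic create; fails if it exists
            return false;                    // another process holds the lock
        }
        String challenge = "file:" + Math.random();
        try (FileWriter w = new FileWriter(lockFile)) {
            w.write(challenge);
        }
        Thread.sleep(20);                    // wait a little bit, then re-check
        try (BufferedReader r = new BufferedReader(new FileReader(lockFile))) {
            return challenge.equals(r.readLine()); // unchanged: lock acquired
        }
    }

    public static void main(String[] args) throws Exception {
        File lock = new File(System.getProperty("java.io.tmpdir"), "demo.lock.db");
        lock.delete();
        System.out.println("first=" + tryLock(lock) + " second=" + tryLock(lock));
        lock.delete();
    }
}
```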
@advanced_1198_h3
#File Locking Method 'Socket'
@advanced_1199_p
#There is a second locking mechanism implemented, but disabled by default. The algorithm is:
@advanced_1200_li
#If the lock file does not exist, it is created. Then a server socket is opened on a defined port, and kept open. The port and IP address of the process that opened the database are written into the lock file.
@advanced_1201_li
#If the lock file exists, and the lock method is 'file', then the software switches to the 'file' method.
@advanced_1202_li
#If the lock file exists, and the lock method is 'socket', then the process checks if the port is in use. If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example due to a power failure, or abnormal termination of the virtual machine), then the port was released. The new process deletes the lock file and starts again.
@advanced_1203_p
#This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is that if the file is stored on a network share, two processes (running on different computers) could still open the same database files if they do not have a direct TCP/IP connection.
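The core of the 'socket' method, deciding whether the lock is stale by probing the port, can be sketched as follows. This is an illustration only (the real lock file stores the port and IP address; the names here are mine):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketLockSketch {
    // A challenger probes the port recorded in the lock file: if it can
    // connect, the original process is still alive and holds the lock;
    // if the connection is refused, the lock is stale.
    public static boolean isPortInUse(int port) {
        try (Socket s = new Socket(InetAddress.getLoopbackAddress(), port)) {
            return true;   // connected: database is in use
        } catch (IOException e) {
            return false;  // refused: delete the lock file and start again
        }
    }

    public static void main(String[] args) throws Exception {
        // The lock holder keeps a server socket open on a defined port
        try (ServerSocket holder = new ServerSocket(0)) {
            System.out.println(isPortInUse(holder.getLocalPort()));
        }
    }
}
```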
@advanced_1200_h2 @advanced_1204_h2
#Protection against SQL Injection #Protection against SQL Injection
@advanced_1201_h3 @advanced_1205_h3
#What is SQL Injection #What is SQL Injection
@advanced_1202_p @advanced_1206_p
#This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as: #This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as:
@advanced_1203_p @advanced_1207_p
#If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password: ' OR ''='. In this case the statement becomes: #If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password: ' OR ''='. In this case the statement becomes:
@advanced_1204_p @advanced_1208_p
#Which is always true no matter what the password stored in the database is. For more information about SQL Injection, see Glossary and Links. #Which is always true no matter what the password stored in the database is. For more information about SQL Injection, see Glossary and Links.
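The string concatenation above can be reproduced in a few lines. A minimal sketch in Python; the table and column names are made up for illustration only:

```python
# Sketch of how unfiltered input turns into an always-true predicate.
# The query shape mirrors the example above; names are illustrative.
def build_query(user, password):
    # Vulnerable: user input is embedded directly in the SQL text.
    return ("SELECT * FROM USERS WHERE NAME='" + user
            + "' AND PASSWORD='" + password + "'")

normal = build_query("alice", "secret")
attack = build_query("alice", "' OR ''='")
print(attack)
# The injected password closes the string literal and appends OR ''='',
# a comparison that is true for every row.
```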
@advanced_1205_h3 @advanced_1209_h3
#Disabling Literals #Disabling Literals
@advanced_1206_p @advanced_1210_p
#SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a PreparedStatement: #SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a PreparedStatement:
@advanced_1207_p @advanced_1211_p
#This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement: #This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement:
@advanced_1208_p @advanced_1212_p
#Afterwards, SQL statements with text and number literals are not allowed any more. That means, SQL statements of the form WHERE NAME='abc' or WHERE CustomerId=10 will fail. It is still possible to use PreparedStatements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where number literals are allowed: SET ALLOW_LITERALS NUMBERS. To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator. #Afterwards, SQL statements with text and number literals are not allowed any more. That means, SQL statements of the form WHERE NAME='abc' or WHERE CustomerId=10 will fail. It is still possible to use PreparedStatements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where number literals are allowed: SET ALLOW_LITERALS NUMBERS. To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator.
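Parameter binding keeps user input out of the SQL text entirely. A minimal sketch, using Python's stdlib sqlite3 in place of a JDBC PreparedStatement (the principle is identical; against H2 itself one would use java.sql.PreparedStatement):

```python
import sqlite3

# In-memory database standing in for H2; the schema is illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE USERS(NAME TEXT, PASSWORD TEXT)")
con.execute("INSERT INTO USERS VALUES('alice', 'secret')")

def login(name, password):
    # The ? placeholders are filled in by the driver; the input is
    # treated as data, never as SQL text, so injection has no effect.
    cur = con.execute(
        "SELECT COUNT(*) FROM USERS WHERE NAME=? AND PASSWORD=?",
        (name, password))
    return cur.fetchone()[0] == 1

print(login("alice", "secret"))      # correct password
print(login("alice", "' OR ''='"))   # injection attempt fails
```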
@advanced_1209_h3 @advanced_1213_h3
#Using Constants #Using Constants
@advanced_1210_p @advanced_1214_p
#Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the CREATE CONSTANT command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas: #Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the CREATE CONSTANT command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas:
@advanced_1211_p @advanced_1215_p
#Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, the source code is easier to understand and change. #Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, the source code is easier to understand and change.
@advanced_1212_h3 @advanced_1216_h3
#Using the ZERO() Function #Using the ZERO() Function
@advanced_1213_p @advanced_1217_p
#It is not required to create a constant for the number 0 as there is already a built-in function ZERO(): #It is not required to create a constant for the number 0 as there is already a built-in function ZERO():
@advanced_1214_h2 @advanced_1218_h2
#Security Protocols #Security Protocols
@advanced_1215_p @advanced_1219_p
#The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives. #The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives.
@advanced_1216_h3 @advanced_1220_h3
#User Password Encryption #User Password Encryption
@advanced_1217_p @advanced_1221_p
#When a user tries to connect to a database, the combination of user name, @, and password is hashed using SHA-256, and this hash value is transmitted to the database. This step does not prevent an attacker from re-using the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication' for more information. #When a user tries to connect to a database, the combination of user name, @, and password is hashed using SHA-256, and this hash value is transmitted to the database. This step does not prevent an attacker from re-using the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication' for more information.
@advanced_1218_p @advanced_1222_p
#When a new database or user is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords. #When a new database or user is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords.
@advanced_1219_p @advanced_1223_p
#The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only accessible remotely, then the iteration count is not required at all. #The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only accessible remotely, then the iteration count is not required at all.
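The two hashing steps described above can be sketched as follows. This is a Python sketch for illustration only; the exact byte encoding and concatenation order H2 uses internally are assumptions here, not the real wire format:

```python
import hashlib, os

def user_password_hash(user, password):
    # Step 1 (client side): hash "user@password" with SHA-256.
    # The string encoding is an assumption for this sketch.
    return hashlib.sha256((user + "@" + password).encode()).digest()

def stored_password_hash(uph, salt):
    # Step 2 (server side): hash the step-1 value together with the
    # per-user random salt; the result is what the database stores.
    return hashlib.sha256(uph + salt).digest()

salt = os.urandom(8)  # 64-bit cryptographically random salt
h1 = user_password_hash("sa", "secret")
stored = stored_password_hash(h1, salt)
print(stored.hex())
```

Because only the salted hash is stored, pre-computed hash tables for common passwords are useless against the stored values.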
@advanced_1220_h3 @advanced_1224_h3
#File Encryption #File Encryption
@advanced_1221_p @advanced_1225_p
#The database files can be encrypted using two different algorithms: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is about twice as fast as AES) and to have an alternative algorithm if AES is suddenly broken. #The database files can be encrypted using two different algorithms: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is about twice as fast as AES) and to have an alternative algorithm if AES is suddenly broken.
@advanced_1222_p @advanced_1226_p
#When a user tries to connect to an encrypted database, the combination of the word 'file', @, and the file password is hashed using SHA-256. This hash value is transmitted to the server. #When a user tries to connect to an encrypted database, the combination of the word 'file', @, and the file password is hashed using SHA-256. This hash value is transmitted to the server.
@advanced_1223_p @advanced_1227_p
#When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords. #When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords.
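The iterated key derivation described above can be sketched like this. A Python sketch under stated assumptions: the concatenation layout is illustrative, and H2's internal byte format may differ:

```python
import hashlib, os

def file_encryption_key(file_password, salt, iterations=1024):
    # The word 'file', @, and the file password are hashed first
    # (this is the value the client transmits), then the salt is
    # mixed in and the result is hashed 1024 times.
    h = hashlib.sha256(("file@" + file_password).encode()).digest()
    h = h + salt
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h  # a 128-bit cipher would use a 16-byte portion of this

key = file_encryption_key("secret", os.urandom(8))
print(len(key))  # 32 bytes of derived key material
```

The 1024 iterations make brute-forcing common passwords roughly a thousand times more expensive for an attacker who has the file.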
@advanced_1224_p @advanced_1228_p
#The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks. #The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks.
@advanced_1225_p @advanced_1229_p
#Before saving a block of data (each block is 8 bytes long), the following operations are executed: First, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm. #Before saving a block of data (each block is 8 bytes long), the following operations are executed: First, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm.
@advanced_1226_p @advanced_1230_p
#When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR. #When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR.
@advanced_1227_p @advanced_1231_p
#Therefore, the block cipher mode of operation is CBC (Cipher-block chaining), but each chain is only one block long. The advantage over the ECB (Electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped cipher text bits are not propagated to flipped plain text bits in the next block. #Therefore, the block cipher mode of operation is CBC (Cipher-block chaining), but each chain is only one block long. The advantage over the ECB (Electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped cipher text bits are not propagated to flipped plain text bits in the next block.
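The per-block data flow described above (IV derived from the block number, one-block CBC chain) can be sketched as follows. A toy XOR-pad "cipher" stands in for AES-128/XTEA purely to show the mode; it is invertible but in no way secure, and it is not H2's real cipher:

```python
import hashlib

BLOCK = 8  # each block is 8 bytes long

def toy_cipher(key, block):
    # Stand-in for AES-128/XTEA: XOR with a key-derived pad.
    # Self-inverse and NOT secure; only illustrates the mode.
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def encrypt_block(key, iv_key, block_number, plain):
    # IV = E(iv_key, block number); the IV key is secret, so the
    # attacker cannot predict the IV (watermark-attack protection).
    iv = toy_cipher(iv_key, block_number.to_bytes(BLOCK, "big"))
    mixed = bytes(a ^ b for a, b in zip(plain, iv))
    return toy_cipher(key, mixed)

def decrypt_block(key, iv_key, block_number, cipher):
    # Reverse order: decrypt first, then XOR with the recomputed IV.
    iv = toy_cipher(iv_key, block_number.to_bytes(BLOCK, "big"))
    mixed = toy_cipher(key, cipher)
    return bytes(a ^ b for a, b in zip(mixed, iv))

key, iv_key = b"k" * 16, b"i" * 16
ct = encrypt_block(key, iv_key, 42, b"8 bytes!")
print(decrypt_block(key, iv_key, 42, ct))  # round-trips to b'8 bytes!'
```

Note that the same plaintext stored at two different block numbers encrypts to two different ciphertexts, which is exactly the ECB weakness this mode avoids.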
@advanced_1228_p @advanced_1232_p
#Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to the files while the database is in use. If he has write access, he can for example replace pieces of files with pieces of older versions and manipulate the data in this way. #Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to the files while the database is in use. If he has write access, he can for example replace pieces of files with pieces of older versions and manipulate the data in this way.
@advanced_1229_p @advanced_1233_p
#File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer using AES (embedded mode). #File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer using AES (embedded mode).
@advanced_1230_h3 @advanced_1234_h3
#SSL/TLS Connections #SSL/TLS Connections
@advanced_1231_p @advanced_1235_p
#Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket / SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is <code>SSL_DH_anon_WITH_RC4_128_MD5</code>. #Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket / SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is <code>SSL_DH_anon_WITH_RC4_128_MD5</code>.
@advanced_1232_h3 @advanced_1236_h3
#HTTPS Connections #HTTPS Connections
@advanced_1233_p @advanced_1237_p
#The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-certified certificate to support an easy starting point, but custom certificates are supported as well. #The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-certified certificate to support an easy starting point, but custom certificates are supported as well.
@advanced_1234_h2 @advanced_1238_h2
#Universally Unique Identifiers (UUID) #Universally Unique Identifiers (UUID)
@advanced_1235_p @advanced_1239_p
#This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values: #This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values:
@advanced_1236_p @advanced_1240_p
#Some values are: #Some values are:
@advanced_1237_p @advanced_1241_p
#To help non-mathematicians understand what those numbers mean, here is a comparison: One's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion; that means the probability is about 0.000'000'000'06. #To help non-mathematicians understand what those numbers mean, here is a comparison: One's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion; that means the probability is about 0.000'000'000'06.
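The collision probability referred to above can be approximated with the standard birthday-bound formula p ≈ 1 − exp(−n² / (2 · 2¹²²)) for n generated UUIDs with 122 random bits. A sketch in Python (the documentation's own estimation program is not reproduced here):

```python
import math

RANDOM_BITS = 122  # random bits in a standardized random UUID

def collision_probability(n):
    # Birthday-bound approximation for the chance that at least two
    # of n random UUIDs are identical. expm1 keeps precision for
    # the tiny exponents involved.
    return -math.expm1(-n * n / (2.0 * 2.0 ** RANDOM_BITS))

for n in (2 ** 36, 2 ** 41, 2 ** 46):
    print(n, collision_probability(n))
```

Even after generating 2^46 (about 70 trillion) UUIDs, the collision probability stays below one in a billion, far below the meteorite comparison above.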
@advanced_1238_h2 @advanced_1242_h2
#Settings Read from System Properties #Settings Read from System Properties
@advanced_1239_p @advanced_1243_p
#Some settings of the database can be set on the command line using -DpropertyName=value. It is usually not required to change those settings manually. The settings are case sensitive. Example: #Some settings of the database can be set on the command line using -DpropertyName=value. It is usually not required to change those settings manually. The settings are case sensitive. Example:
@advanced_1240_p @advanced_1244_p
#The current value of the settings can be read in the table INFORMATION_SCHEMA.SETTINGS. #The current value of the settings can be read in the table INFORMATION_SCHEMA.SETTINGS.
@advanced_1241_th @advanced_1245_th
#Setting #Setting
@advanced_1242_th @advanced_1246_th
#Default #Default
@advanced_1243_th @advanced_1247_th
#Description #Description
@advanced_1244_td @advanced_1248_td
#h2.check #h2.check
@advanced_1245_td @advanced_1249_td
#true #true
@advanced_1246_td @advanced_1250_td
#Assertions in the database engine #Assertions in the database engine
@advanced_1247_td @advanced_1251_td
#h2.check2 #h2.check2
@advanced_1248_td @advanced_1252_td
#false #false
@advanced_1249_td @advanced_1253_td
#Additional assertions #Additional assertions
@advanced_1250_td @advanced_1254_td
#h2.clientTraceDirectory #h2.clientTraceDirectory
@advanced_1251_td @advanced_1255_td
#trace.db/ #trace.db/
@advanced_1252_td @advanced_1256_td
#Directory where the trace files of the JDBC client are stored (only for client / server) #Directory where the trace files of the JDBC client are stored (only for client / server)
@advanced_1253_td @advanced_1257_td
#h2.emergencySpaceInitial #h2.emergencySpaceInitial
@advanced_1254_td @advanced_1258_td
#1048576 #1048576
@advanced_1255_td @advanced_1259_td
#Size of 'reserve' file to detect disk full problems early #Size of 'reserve' file to detect disk full problems early
@advanced_1256_td @advanced_1260_td
#h2.emergencySpaceMin #h2.emergencySpaceMin
@advanced_1257_td @advanced_1261_td
#131072 #131072
@advanced_1258_td @advanced_1262_td
#Minimum size of 'reserve' file #Minimum size of 'reserve' file
@advanced_1259_td @advanced_1263_td
#h2.lobCloseBetweenReads #h2.lobCloseBetweenReads
@advanced_1260_td @advanced_1264_td
#false #false
@advanced_1261_td @advanced_1265_td
#Close LOB files between read operations #Close LOB files between read operations
@advanced_1262_td @advanced_1266_td
#h2.lobFilesInDirectories #h2.lobFilesInDirectories
@advanced_1263_td @advanced_1267_td
#false #false
@advanced_1264_td @advanced_1268_td
#Store LOB files in subdirectories #Store LOB files in subdirectories
@advanced_1265_td @advanced_1269_td
#h2.lobFilesPerDirectory #h2.lobFilesPerDirectory
@advanced_1266_td @advanced_1270_td
#256 #256
@advanced_1267_td @advanced_1271_td
#Maximum number of LOB files per directory #Maximum number of LOB files per directory
@advanced_1268_td @advanced_1272_td
#h2.logAllErrors #h2.logAllErrors
@advanced_1269_td @advanced_1273_td
#false #false
@advanced_1270_td @advanced_1274_td
#Write stack traces of any kind of error to a file #Write stack traces of any kind of error to a file
@advanced_1271_td @advanced_1275_td
#h2.logAllErrorsFile #h2.logAllErrorsFile
@advanced_1272_td @advanced_1276_td
#h2errors.txt #h2errors.txt
@advanced_1273_td @advanced_1277_td
#File name to log errors #File name to log errors
@advanced_1274_td @advanced_1278_td
#h2.maxFileRetry #h2.maxFileRetry
@advanced_1275_td @advanced_1279_td
#16 #16
@advanced_1276_td @advanced_1280_td
#Number of times to retry file delete and rename #Number of times to retry file delete and rename
@advanced_1277_td @advanced_1281_td
#h2.multiThreadedKernel #h2.multiThreadedKernel
@advanced_1278_td @advanced_1282_td
#false #false
@advanced_1279_td @advanced_1283_td
#Allow multiple sessions to run concurrently #Allow multiple sessions to run concurrently
@advanced_1280_td @advanced_1284_td
#h2.objectCache #h2.objectCache
@advanced_1281_td @advanced_1285_td
#true #true
@advanced_1282_td @advanced_1286_td
#Cache commonly used objects (integers, strings) #Cache commonly used objects (integers, strings)
@advanced_1283_td @advanced_1287_td
#h2.objectCacheMaxPerElementSize #h2.objectCacheMaxPerElementSize
@advanced_1284_td @advanced_1288_td
#4096 #4096
@advanced_1285_td @advanced_1289_td
#Maximum size of an object in the cache #Maximum size of an object in the cache
@advanced_1286_td @advanced_1290_td
#h2.objectCacheSize #h2.objectCacheSize
@advanced_1287_td @advanced_1291_td
#1024 #1024
@advanced_1288_td @advanced_1292_td
#Size of object cache #Size of object cache
@advanced_1289_td @advanced_1293_td
#h2.optimizeEvaluatableSubqueries #h2.optimizeEvaluatableSubqueries
@advanced_1290_td @advanced_1294_td
#true #true
@advanced_1291_td @advanced_1295_td
#Optimize subqueries that are not dependent on the outer query #Optimize subqueries that are not dependent on the outer query
@advanced_1292_td @advanced_1296_td
#h2.optimizeIn #h2.optimizeIn
@advanced_1293_td @advanced_1297_td
#true #true
@advanced_1294_td @advanced_1298_td
#Optimize IN(...) comparisons #Optimize IN(...) comparisons
@advanced_1295_td @advanced_1299_td
#h2.optimizeMinMax #h2.optimizeMinMax
@advanced_1296_td @advanced_1300_td
#true #true
@advanced_1297_td @advanced_1301_td
#Optimize MIN and MAX aggregate functions #Optimize MIN and MAX aggregate functions
@advanced_1298_td @advanced_1302_td
#h2.optimizeSubqueryCache #h2.optimizeSubqueryCache
@advanced_1299_td @advanced_1303_td
#true #true
@advanced_1300_td @advanced_1304_td
#Cache subquery results #Cache subquery results
@advanced_1301_td @advanced_1305_td
#h2.overflowExceptions #h2.overflowExceptions
@advanced_1302_td @advanced_1306_td
#true #true
@advanced_1303_td @advanced_1307_td
#Throw an exception on integer overflows #Throw an exception on integer overflows
@advanced_1304_td @advanced_1308_td
#h2.recompileAlways #h2.recompileAlways
@advanced_1305_td @advanced_1309_td
#false #false
@advanced_1306_td @advanced_1310_td
#Always recompile prepared statements #Always recompile prepared statements
@advanced_1307_td @advanced_1311_td
#h2.redoBufferSize #h2.redoBufferSize
@advanced_1308_td @advanced_1312_td
#262144 #262144
@advanced_1309_td @advanced_1313_td
#Size of the redo buffer (used at startup when recovering) #Size of the redo buffer (used at startup when recovering)
@advanced_1310_td @advanced_1314_td
#h2.runFinalizers #h2.runFinalizers
@advanced_1311_td @advanced_1315_td
#true #true
@advanced_1312_td @advanced_1316_td
#Run finalizers to detect unclosed connections #Run finalizers to detect unclosed connections
@advanced_1313_td @advanced_1317_td
#h2.scriptDirectory #h2.scriptDirectory
@advanced_1314_td @advanced_1318_td
#Relative or absolute directory where the script files are stored to or read from #Relative or absolute directory where the script files are stored to or read from
@advanced_1315_td @advanced_1319_td
#h2.serverCachedObjects #h2.serverCachedObjects
@advanced_1316_td @advanced_1320_td
#64 #64
@advanced_1317_td @advanced_1321_td
#TCP Server: number of cached objects per session #TCP Server: number of cached objects per session
@advanced_1318_td @advanced_1322_td
#h2.serverSmallResultSetSize #h2.serverSmallResultSetSize
@advanced_1319_td @advanced_1323_td
#100 #100
@advanced_1320_td @advanced_1324_td
#TCP Server: result sets below this size are sent in one block #TCP Server: result sets below this size are sent in one block
@advanced_1321_h2 @advanced_1325_h2
#Glossary and Links #Glossary and Links
@advanced_1322_th @advanced_1326_th
#Term #Term
@advanced_1323_th @advanced_1327_th
#Description #Description
@advanced_1324_td @advanced_1328_td
#AES-128 #AES-128
@advanced_1325_td @advanced_1329_td
#A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia: AES</a> #A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia: AES</a>
@advanced_1326_td @advanced_1330_td
#Birthday Paradox #Birthday Paradox
@advanced_1327_td @advanced_1331_td
#Describes the higher than expected probability that two persons in a room have the same birthday. Also valid for randomly generated UUIDs. See also: <a href="http://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia: Birthday Paradox</a> #Describes the higher than expected probability that two persons in a room have the same birthday. Also valid for randomly generated UUIDs. See also: <a href="http://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia: Birthday Paradox</a>
@advanced_1328_td @advanced_1332_td
#Digest #Digest
@advanced_1329_td @advanced_1333_td
#Protocol to protect a password (but not to protect data). See also: <a href="http://www.faqs.org/rfcs/rfc2617.html">RFC 2617: HTTP Digest Access Authentication</a> #Protocol to protect a password (but not to protect data). See also: <a href="http://www.faqs.org/rfcs/rfc2617.html">RFC 2617: HTTP Digest Access Authentication</a>
@advanced_1330_td @advanced_1334_td
#GCJ #GCJ
@advanced_1331_td @advanced_1335_td
#GNU Compiler for Java. <a href="http://gcc.gnu.org/java/">http://gcc.gnu.org/java/</a> and <a href="http://nativej.mtsystems.ch">http://nativej.mtsystems.ch/ (not free any more)</a> #GNU Compiler for Java. <a href="http://gcc.gnu.org/java/">http://gcc.gnu.org/java/</a> and <a href="http://nativej.mtsystems.ch">http://nativej.mtsystems.ch/ (not free any more)</a>
@advanced_1332_td @advanced_1336_td
#HTTPS #HTTPS
@advanced_1333_td @advanced_1337_td
#A protocol to provide security to HTTP connections. See also: <a href="http://www.ietf.org/rfc/rfc2818.txt">RFC 2818: HTTP Over TLS</a> #A protocol to provide security to HTTP connections. See also: <a href="http://www.ietf.org/rfc/rfc2818.txt">RFC 2818: HTTP Over TLS</a>
@advanced_1334_td @advanced_1338_td
#Modes of Operation #Modes of Operation
@advanced_1335_a @advanced_1339_a
#Wikipedia: Block cipher modes of operation #Wikipedia: Block cipher modes of operation
@advanced_1336_td @advanced_1340_td
#Salt #Salt
@advanced_1337_td @advanced_1341_td
#Random number to increase the security of passwords. See also: <a href="http://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia: Key derivation function</a> #Random number to increase the security of passwords. See also: <a href="http://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia: Key derivation function</a>
@advanced_1338_td @advanced_1342_td
#SHA-256 #SHA-256
@advanced_1339_td @advanced_1343_td
#A cryptographic one-way hash function. See also: <a href="http://en.wikipedia.org/wiki/SHA_family">Wikipedia: SHA hash functions</a> #A cryptographic one-way hash function. See also: <a href="http://en.wikipedia.org/wiki/SHA_family">Wikipedia: SHA hash functions</a>
@advanced_1340_td @advanced_1344_td
#SQL Injection #SQL Injection
@advanced_1341_td @advanced_1345_td
#A security vulnerability where an application generates SQL statements with embedded user input. See also: <a href="http://en.wikipedia.org/wiki/SQL_injection">Wikipedia: SQL Injection</a> #A security vulnerability where an application generates SQL statements with embedded user input. See also: <a href="http://en.wikipedia.org/wiki/SQL_injection">Wikipedia: SQL Injection</a>
@advanced_1342_td @advanced_1346_td
#Watermark Attack #Watermark Attack
@advanced_1343_td @advanced_1347_td
#Security problem of certain encryption programs where the existence of certain data can be proven without decrypting. For more information, search in the internet for 'watermark attack cryptoloop' #Security problem of certain encryption programs where the existence of certain data can be proven without decrypting. For more information, search in the internet for 'watermark attack cryptoloop'
@advanced_1344_td @advanced_1348_td
#SSL/TLS #SSL/TLS
@advanced_1345_td @advanced_1349_td
#Secure Sockets Layer / Transport Layer Security. See also: <a href="http://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a> #Secure Sockets Layer / Transport Layer Security. See also: <a href="http://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a>
@advanced_1346_td @advanced_1350_td
#XTEA #XTEA
@advanced_1347_td @advanced_1351_td
#A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/XTEA">Wikipedia: XTEA</a> #A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/XTEA">Wikipedia: XTEA</a>
@build_1000_h1 @build_1000_h1
......
...@@ -14,1032 +14,1044 @@ Linked Tables ...@@ -14,1032 +14,1044 @@ Linked Tables
Transaction Isolation Transaction Isolation
@advanced_1005_a @advanced_1005_a
Clustering / High Availability Multi-Version Concurrency Control (MVCC)
@advanced_1006_a @advanced_1006_a
Two Phase Commit Clustering / High Availability
@advanced_1007_a @advanced_1007_a
Compatibility Two Phase Commit
@advanced_1008_a @advanced_1008_a
Run as Windows Service Compatibility
@advanced_1009_a @advanced_1009_a
ODBC Driver Run as Windows Service
@advanced_1010_a @advanced_1010_a
ACID ODBC Driver
@advanced_1011_a @advanced_1011_a
Durability Problems ACID
@advanced_1012_a @advanced_1012_a
Using the Recover Tool Durability Problems
@advanced_1013_a @advanced_1013_a
File Locking Protocols Using the Recover Tool
@advanced_1014_a @advanced_1014_a
Protection against SQL Injection File Locking Protocols
@advanced_1015_a @advanced_1015_a
Security Protocols Protection against SQL Injection
@advanced_1016_a @advanced_1016_a
Universally Unique Identifiers (UUID) Security Protocols
@advanced_1017_a @advanced_1017_a
Settings Read from System Properties Universally Unique Identifiers (UUID)
@advanced_1018_a @advanced_1018_a
Settings Read from System Properties
@advanced_1019_a
Glossary and Links Glossary and Links
@advanced_1019_h2 @advanced_1020_h2
Result Sets Result Sets
@advanced_1020_h3 @advanced_1021_h3
Limiting the Number of Rows Limiting the Number of Rows
@advanced_1021_p @advanced_1022_p
Before the result is returned to the application, all rows are read by the database. Server side cursors are not supported currently. If only the first few rows are interesting for the application, then the result set size should be limited to improve the performance. This can be done using LIMIT in a query (example: SELECT * FROM TEST LIMIT 100), or by using Statement.setMaxRows(max). Before the result is returned to the application, all rows are read by the database. Server side cursors are not supported currently. If only the first few rows are interesting for the application, then the result set size should be limited to improve the performance. This can be done using LIMIT in a query (example: SELECT * FROM TEST LIMIT 100), or by using Statement.setMaxRows(max).
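The LIMIT approach above can be sketched with any SQL engine; here Python's stdlib sqlite3 stands in for H2 (against H2 over JDBC, the equivalent of the second option is Statement.setMaxRows(max)):

```python
import sqlite3

# In-memory table with 1000 rows; schema is illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TEST(ID INTEGER)")
con.executemany("INSERT INTO TEST VALUES(?)",
                [(i,) for i in range(1000)])

# Only the first 100 rows are materialized for the application.
rows = con.execute("SELECT * FROM TEST ORDER BY ID LIMIT 100").fetchall()
print(len(rows))  # 100
```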
@advanced_1022_h3 @advanced_1023_h3
Large Result Sets and External Sorting Large Result Sets and External Sorting
@advanced_1023_p @advanced_1024_p
For result sets larger than 1000 rows, the result is buffered to disk. If ORDER BY is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together. For result sets larger than 1000 rows, the result is buffered to disk. If ORDER BY is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together.
@advanced_1024_h2 @advanced_1025_h2
Large Objects Large Objects
@advanced_1025_h3 @advanced_1026_h3
Storing and Reading Large Objects Storing and Reading Large Objects
@advanced_1026_p @advanced_1027_p
If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. If the client/server mode is used, the BLOB and CLOB data is fully read into memory when accessed. In this case, the size of a BLOB or CLOB is limited by the memory. If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. If the client/server mode is used, the BLOB and CLOB data is fully read into memory when accessed. In this case, the size of a BLOB or CLOB is limited by the memory.
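The streaming calls named above can be sketched as follows. The table name and data are placeholders; a small in-memory stream stands in for a large file:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LobStreamExample {
    // Hypothetical database URL; adjust to your setup.
    public static final String URL = "jdbc:h2:~/test";

    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "")) {
            conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS DOC(ID INT PRIMARY KEY, DATA BLOB)");
            // Store: pass a stream instead of a byte array.
            try (PreparedStatement prep =
                    conn.prepareStatement("INSERT INTO DOC VALUES(?, ?)")) {
                InputStream in = new ByteArrayInputStream(new byte[1024]);
                prep.setInt(1, 1);
                prep.setBinaryStream(2, in);
                prep.execute();
            }
            // Read: consume the BLOB as a stream as well.
            try (ResultSet rs = conn.createStatement()
                    .executeQuery("SELECT DATA FROM DOC WHERE ID = 1")) {
                if (rs.next()) {
                    try (InputStream in = rs.getBinaryStream(1)) {
                        while (in.read() >= 0) {
                            // process the bytes one block at a time
                        }
                    }
                }
            }
        } catch (Exception e) {
            // Requires the H2 driver on the classpath.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```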
@advanced_1028_h2
Linked Tables
@advanced_1029_p
This database supports linked tables, which means tables that don't exist in the current database but are just links to another database. To create such a link, use the CREATE LINKED TABLE statement:
@advanced_1030_p
It is then possible to access the table in the usual way. There is a restriction when inserting data into this table: when inserting or updating rows, NULL and values that are not set in the insert statement are both inserted as NULL. This may not have the desired effect if the target table has a default value other than NULL.
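A hedged sketch of creating and querying a linked table over JDBC. The target driver class, URL, credentials and table name are placeholders, not a recommendation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LinkedTableExample {
    // Placeholder target database; driver, URL, user, password, table.
    public static final String CREATE_LINK =
            "CREATE LINKED TABLE LINK_TEST(" +
            "'org.postgresql.Driver', 'jdbc:postgresql:otherdb', " +
            "'sa', 'sa', 'TEST')";

    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
             Statement stat = conn.createStatement()) {
            stat.execute(CREATE_LINK);
            // The linked table can then be queried like a local table.
            try (ResultSet rs = stat.executeQuery("SELECT * FROM LINK_TEST")) {
                while (rs.next()) {
                    // process rows coming from the other database
                }
            }
        } catch (SQLException e) {
            // Requires both drivers on the classpath and a reachable target.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```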
@advanced_1031_p
For each linked table a new connection is opened. This can be a problem for some databases when using many linked tables. For Oracle XE, the maximum number of connections can be increased. Oracle XE needs to be restarted after changing these values:
@advanced_1032_h2
Transaction Isolation
@advanced_1033_p
This database supports the following transaction isolation levels:
@advanced_1034_b
Read Committed
@advanced_1035_li
This is the default level. Read locks are released immediately. Higher concurrency is possible when using this level.
@advanced_1036_li
To enable, execute the SQL statement 'SET LOCK_MODE 3'
@advanced_1037_li
or append ;LOCK_MODE=3 to the database URL: jdbc:h2:~/test;LOCK_MODE=3
@advanced_1038_b
Serializable
@advanced_1039_li
To enable, execute the SQL statement 'SET LOCK_MODE 1'
@advanced_1040_li
or append ;LOCK_MODE=1 to the database URL: jdbc:h2:~/test;LOCK_MODE=1
@advanced_1041_b
Read Uncommitted
@advanced_1042_li
This level means that transaction isolation is disabled.
@advanced_1043_li
To enable, execute the SQL statement 'SET LOCK_MODE 0'
@advanced_1044_li
or append ;LOCK_MODE=0 to the database URL: jdbc:h2:~/test;LOCK_MODE=0
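The ways of selecting an isolation level listed above can be sketched in JDBC (URL and credentials are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class IsolationExample {
    // LOCK_MODE=1 selects the 'serializable' level at connect time.
    public static final String URL = "jdbc:h2:~/test;LOCK_MODE=1";

    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "")) {
            // The level can also be changed later via SQL ...
            conn.createStatement().execute("SET LOCK_MODE 3"); // read committed
            // ... or through the standard JDBC API:
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        } catch (SQLException e) {
            // Requires the H2 driver on the classpath.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```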
@advanced_1045_p
When using the isolation level 'serializable', dirty reads, non-repeatable reads, and phantom reads are prohibited.
@advanced_1046_b
Dirty Reads
@advanced_1047_li
Means a connection can read uncommitted changes made by another connection.
@advanced_1048_li
Possible with: read uncommitted
@advanced_1049_b
Non-Repeatable Reads
@advanced_1050_li
A connection reads a row, another connection changes a row and commits, and the first connection re-reads the same row and gets the new result.
@advanced_1051_li
Possible with: read uncommitted, read committed
@advanced_1052_b
Phantom Reads
@advanced_1053_li
A connection reads a set of rows using a condition, another connection inserts a row that falls within this condition and commits, then the first connection re-reads using the same condition and gets the new row.
@advanced_1054_li
Possible with: read uncommitted, read committed
@advanced_1055_h3
Table Level Locking
@advanced_1056_p
The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. Other connections may also hold a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, other connections must not hold any locks on the object. After the connection commits, all locks are released. This database keeps all locks in memory.
@advanced_1057_h3
Lock Timeout
@advanced_1058_p
If a connection cannot get a lock on an object, the connection waits for a certain amount of time (the lock timeout). During this time, hopefully the connection holding the lock commits and it is then possible to get the lock. If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection.
@advanced_1059_h2
Multi-Version Concurrency Control (MVCC)
@advanced_1060_p
The MVCC feature allows higher concurrency than using (table level or row level) locks. When using MVCC in this database, delete, insert and update operations only issue a shared lock on the table. Tables are still locked exclusively when adding or removing columns, when dropping the table, and when using SELECT ... FOR UPDATE. Connections only 'see' committed data, and their own changes. That means that if connection A updates a row but doesn't commit this change yet, connection B will see the old value. Only when the change is committed is the new value visible to other connections (read committed). If multiple connections concurrently try to update the same row, this database fails fast: a concurrent update exception is thrown.
@advanced_1061_p
To use the MVCC feature, append MVCC=TRUE to the database URL:
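The MVCC behavior described above can be sketched with two connections. This is an illustration only; the URL, credentials and TEST table are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MvccExample {
    // MVCC=TRUE enables multi-version concurrency control.
    public static final String URL = "jdbc:h2:~/test;MVCC=TRUE";

    public static void main(String[] args) {
        try (Connection connA = DriverManager.getConnection(URL, "sa", "");
             Connection connB = DriverManager.getConnection(URL, "sa", "")) {
            connA.setAutoCommit(false);
            connA.createStatement().execute(
                    "UPDATE TEST SET NAME = 'changed' WHERE ID = 1");
            // connB still sees the old value here (read committed).
            try {
                connB.createStatement().execute(
                        "UPDATE TEST SET NAME = 'other' WHERE ID = 1");
            } catch (SQLException concurrentUpdate) {
                // The second concurrent writer fails fast with a
                // concurrent update exception.
            }
            connA.commit(); // now the new value becomes visible to connB
        } catch (SQLException e) {
            // Requires the H2 driver on the classpath and a TEST table.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```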
@advanced_1062_h2
Clustering / High Availability
@advanced_1063_p
This database supports a simple clustering / high availability mechanism. The architecture is: two database servers run on two different computers, and both computers hold a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed only on one server until the other server is back up.
@advanced_1064_p
Clustering can only be used in the server mode (the embedded mode does not support clustering). It is possible to restore the cluster without stopping the server; however, it is critical that no other application is changing the data in the first database while the second database is restored, so restoring the cluster is currently a manual process.
@advanced_1065_p
To initialize the cluster, use the following steps:
@advanced_1066_li
Create a database
@advanced_1067_li
Use the CreateCluster tool to copy the database to another location and initialize the clustering. Afterwards, you have two databases containing the same data.
@advanced_1068_li
Start two servers (one for each copy of the database)
@advanced_1069_li
You are now ready to connect to the databases with the client application(s)
@advanced_1070_h3
Using the CreateCluster Tool
@advanced_1071_p
To understand how clustering works, please try out the following example. In this example, the two databases reside on the same computer, but usually, the databases will be on different servers.
@advanced_1072_li
Create two directories: server1 and server2. Each directory will simulate a directory on a computer.
@advanced_1073_li
Start a TCP server pointing to the first directory. You can do this using the command line:
@advanced_1074_li
Start a second TCP server pointing to the second directory. This will simulate a server running on a second (redundant) computer. You can do this using the command line:
@advanced_1075_li
Use the CreateCluster tool to initialize clustering. This will automatically create a new, empty database if it does not exist. Run the tool on the command line:
@advanced_1076_li
You can now connect to the databases using an application or the H2 Console using the JDBC URL jdbc:h2:tcp://localhost:9101,localhost:9102/test
@advanced_1077_li
If you stop a server (by killing the process), you will notice that the other machine continues to work, and therefore the database is still accessible.
@advanced_1078_li
To restore the cluster, you first need to delete the database that failed, then restart the server that was stopped, and re-run the CreateCluster tool.
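Connecting an application to the cluster set up in the steps above can be sketched as follows; the port numbers match the example and the credentials are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ClusterConnectExample {
    // Both cluster nodes are listed in the URL, separated by a comma.
    public static final String URL =
            "jdbc:h2:tcp://localhost:9101,localhost:9102/test";

    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "")) {
            // Statements issued here are executed on both cluster nodes.
            conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY)");
        } catch (SQLException e) {
            // Requires the H2 driver on the classpath and both servers running.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```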
@advanced_1079_h3
Clustering Algorithm and Limitations
@advanced_1080_p
Read-only queries are only executed against the first cluster node, but all other statements are executed against all nodes. There is currently no load balancing made to avoid problems with transactions. The following functions may yield different results on different cluster nodes and must be executed with care: RANDOM_UUID(), SECURE_RAND(), SESSION_ID(), MEMORY_FREE(), MEMORY_USED(), CSVREAD(), CSVWRITE(), RAND() [when not using a seed]. Those functions should not be used directly in modifying statements (for example INSERT, UPDATE, or MERGE). However, they can be used in read-only statements and the result can then be used for modifying statements.
@advanced_1081_h2
Two Phase Commit
@advanced_1082_p
The two phase commit protocol is supported. 2-phase-commit works as follows:
@advanced_1083_li
Autocommit needs to be switched off
@advanced_1084_li
A transaction is started, for example by inserting a row
@advanced_1085_li
The transaction is marked 'prepared' by executing the SQL statement <code>PREPARE COMMIT transactionName</code>
@advanced_1086_li
The transaction can now be committed or rolled back
@advanced_1087_li
If a problem occurs before the transaction was successfully committed or rolled back (for example because a network problem occurred), the transaction is in the state 'in-doubt'
@advanced_1088_li
When re-connecting to the database, the in-doubt transactions can be listed with <code>SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT</code>
@advanced_1089_li
Each transaction in this list must now be committed or rolled back by executing <code>COMMIT TRANSACTION transactionName</code> or <code>ROLLBACK TRANSACTION transactionName</code>
@advanced_1090_li
The database needs to be closed and re-opened to apply the changes
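The steps above can be sketched in JDBC. The URL, credentials, TEST table and the transaction name tx1 are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TwoPhaseCommitExample {
    // Hypothetical database URL; adjust to your setup.
    public static final String URL = "jdbc:h2:~/test";

    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "");
             Statement stat = conn.createStatement()) {
            conn.setAutoCommit(false);                   // switch autocommit off
            stat.execute("INSERT INTO TEST VALUES(1)");  // start a transaction
            stat.execute("PREPARE COMMIT tx1");          // mark it 'prepared'
            conn.commit();                               // commit (or roll back)

            // After a failure, list and resolve in-doubt transactions:
            try (ResultSet rs = stat.executeQuery(
                    "SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT")) {
                while (rs.next()) {
                    // e.g. stat.execute("COMMIT TRANSACTION tx1");
                }
            }
        } catch (SQLException e) {
            // Requires the H2 driver on the classpath and a TEST table.
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```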
@advanced_1091_h2
Compatibility
@advanced_1092_p
This database is (up to a certain point) compatible with other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible.
@advanced_1093_h3
Transaction Commit when Autocommit is On
@advanced_1094_p
At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case when the result set is closed.
@advanced_1095_h3
Keywords / Reserved Words
@advanced_1096_p
There is a list of keywords that can't be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). The list is currently:
@advanced_1097_p
CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE, CROSS, DISTINCT, EXCEPT, EXISTS, FROM, FOR, FALSE, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, MINUS, NATURAL, NOT, NULL, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, WHERE
@advanced_1098_p
Certain words of this list are keywords because they are functions that can be used without '()' for compatibility, for example CURRENT_TIMESTAMP.
@advanced_1099_h2
Run as Windows Service
@advanced_1100_p
Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from Tanuki Software, Inc. ( <a href="http://wrapper.tanukisoftware.org">http://wrapper.tanukisoftware.org</a> ) is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory H2/service.
@advanced_1101_h3
Install the Service
@advanced_1102_p
The service needs to be registered as a Windows Service first. To do that, double click on 1_install_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
@advanced_1103_h3
Start the Service
@advanced_1104_p
You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on 2_start_service.bat. Please note that the batch file does not print an error message if the service is not installed.
@advanced_1105_h3
Connect to the H2 Console
@advanced_1106_p
After installing and starting the service, you can connect to the H2 Console application using a browser. To do that, double click on 3_start_browser.bat. The default port (8082) is hard coded in the batch file.
@advanced_1107_h3
Stop the Service
@advanced_1108_p
To stop the service, double click on 4_stop_service.bat. Please note that the batch file does not print an error message if the service is not installed or started.
@advanced_1109_h3
Uninstall the Service
@advanced_1110_p
To uninstall the service, double click on 5_uninstall_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
@advanced_1111_h2
ODBC Driver
@advanced_1112_p
This database does not come with its own ODBC driver at this time, but it supports the PostgreSQL network protocol. Therefore, the PostgreSQL ODBC driver can be used. Support for the PostgreSQL network protocol is quite new and should be viewed as experimental. It should not be used for production applications.
@advanced_1113_p
At this time, the PostgreSQL ODBC driver does not work on 64 bit versions of Windows. For more information, see: <a href="http://svr5.postgresql.org/pgsql-odbc/2005-09/msg00127.php">ODBC Driver on Windows 64 bit</a>
@advanced_1114_h3
ODBC Installation
@advanced_1115_p
First, the ODBC driver must be installed. Any recent PostgreSQL ODBC driver should work; however, version 8.2.4 or newer is recommended. The Windows version of the PostgreSQL ODBC driver is available at <a href="http://www.postgresql.org/ftp/odbc/versions/msi">http://www.postgresql.org/ftp/odbc/versions/msi</a> .
@advanced_1116_h3
Starting the Server
@advanced_1117_p
After installing the ODBC driver, start the H2 Server using the command line:
@advanced_1118_p
The PG Server (PG for PostgreSQL protocol) is started as well. By default, databases are stored in the current working directory where the server is started. Use -baseDir to save databases in another directory, for example the user home directory:
@advanced_1119_p
The PG server can be started and stopped from within a Java application as follows:
@advanced_1120_p
By default, only connections from localhost are allowed. To allow remote connections, use <code>-pgAllowOthers true</code> when starting the server.
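Starting and stopping the PG server from Java can be sketched with org.h2.tools.Server.createPgServer. Reflection is used here only so the sketch compiles without h2.jar on the classpath; the direct calls are shown in the comments:

```java
public class PgServerExample {
    public static final String SERVER_CLASS = "org.h2.tools.Server";

    public static void main(String[] args) throws Exception {
        // With h2.jar on the classpath, the direct calls would be:
        //   Server server = Server.createPgServer("-baseDir", "~");
        //   server.start();
        //   ...
        //   server.stop();
        try {
            Class<?> serverClass = Class.forName(SERVER_CLASS);
            Object server = serverClass
                    .getMethod("createPgServer", String[].class)
                    .invoke(null, (Object) new String[] { "-baseDir", "~" });
            serverClass.getMethod("start").invoke(server);
            // ... the server now accepts PostgreSQL-protocol connections ...
            serverClass.getMethod("stop").invoke(server);
        } catch (ClassNotFoundException e) {
            // h2.jar is not on the classpath; nothing to start.
            System.out.println("h2.jar is not on the classpath");
        }
    }
}
```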
@advanced_1121_h3
ODBC Configuration
@advanced_1122_p
After installing the driver, a new Data Source must be added. In Windows, run <code>odbcad32.exe</code> to open the Data Source Administrator. Then click on 'Add...' and select the PostgreSQL Unicode driver. Then click 'Finish'. You will be able to change the connection properties:
@advanced_1123_th
Property
@advanced_1124_th
Example
@advanced_1125_th
Remarks
@advanced_1126_td
Data Source
@advanced_1127_td
H2 Test
@advanced_1128_td
The name of the ODBC Data Source
@advanced_1129_td
Database
@advanced_1130_td
test
@advanced_1131_td
The database name. Only simple names are supported at this time;
@advanced_1132_td
relative or absolute paths are not supported in the database name.
@advanced_1133_td
By default, the database is stored in the current working directory
@advanced_1134_td
where the Server is started except when the -baseDir setting is used.
@advanced_1135_td
The name must be at least 3 characters.
@advanced_1136_td
Server
@advanced_1137_td
localhost
@advanced_1138_td
The server name or IP address.
@advanced_1139_td
By default, only local connections are allowed
@advanced_1140_td
User Name
@advanced_1141_td
sa
@advanced_1142_td
The database user name.
@advanced_1143_td
SSL Mode
@advanced_1144_td
disabled
@advanced_1145_td
At this time, SSL is not supported.
@advanced_1146_td
Port
@advanced_1147_td
5435
@advanced_1148_td
The port where the PG Server is listening.
@advanced_1149_td
Password
@advanced_1150_td
sa
@advanced_1151_td
The database password.
@advanced_1152_p
Afterwards, you may use this data source.
@advanced_1149_h3 @advanced_1153_h3
PG Protocol Support Limitations PG Protocol Support Limitations
@advanced_1150_p @advanced_1154_p
At this time, only a subset of the PostgreSQL network protocol is implemented. Also, there may be compatibility problems on the SQL level, with the catalog, or with text encoding. Problems are fixed as they are found. Currently, statements cannot be cancelled when using the PG protocol.
@advanced_1155_h3
Security Considerations
@advanced_1156_p
Currently, the PG Server does not support challenge-response authentication or password encryption. This may be a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable to the attacker. Also, it is currently not possible to use encrypted SSL connections. Therefore, the ODBC driver should not be used where security is important.
@advanced_1157_h2
ACID
@advanced_1158_p
In the database world, ACID stands for:
@advanced_1159_li
Atomicity: Transactions must be atomic, meaning either all tasks are performed or none.
@advanced_1160_li
Consistency: All operations must comply with the defined constraints.
@advanced_1161_li
Isolation: Transactions must be isolated from each other.
@advanced_1162_li
Durability: Committed transactions will not be lost.
@advanced_1163_h3
Atomicity
@advanced_1164_p
Transactions in this database are always atomic.
@advanced_1165_h3
Consistency
@advanced_1166_p
This database is always in a consistent state. Referential integrity rules are always enforced.
@advanced_1167_h3
Isolation
@advanced_1168_p
For H2, as with most other database systems, the default isolation level is 'read committed'. This provides better performance, but also means that transactions are not completely isolated. H2 supports the transaction isolation levels 'serializable', 'read committed', and 'read uncommitted'.
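From JDBC, the isolation level is normally selected with Connection.setTransactionIsolation(). A minimal sketch of the mapping between the three level names above and the standard JDBC constants (the helper class and method names here are illustrative, not part of H2's API):

```java
import java.sql.Connection;

// Maps the isolation level names used in this document to the standard
// java.sql.Connection constants you would pass to setTransactionIsolation().
public class IsolationLevels {
    public static int isolationConstant(String level) {
        switch (level) {
            case "serializable":     return Connection.TRANSACTION_SERIALIZABLE;
            case "read committed":   return Connection.TRANSACTION_READ_COMMITTED; // the default
            case "read uncommitted": return Connection.TRANSACTION_READ_UNCOMMITTED;
            default: throw new IllegalArgumentException("unknown level: " + level);
        }
    }
    public static void main(String[] args) {
        // Usage on a real connection would be:
        //   conn.setTransactionIsolation(isolationConstant("serializable"));
        System.out.println(isolationConstant("read committed"));
    }
}
```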
@advanced_1169_h3
Durability
@advanced_1170_p
This database does not guarantee that all committed transactions survive a power failure. Tests show that all databases sometimes lose transactions on power failure (for details, see below). Where losing transactions is not acceptable, a laptop or UPS (uninterruptible power supply) should be used. If durability is required for all possible cases of hardware failure, clustering should be used, such as the H2 clustering mode.
@advanced_1171_h2
Durability Problems
@advanced_1172_p
Complete durability means all committed transactions survive a power failure. Some databases claim they can guarantee durability, but such claims are wrong. A durability test was run against H2, HSQLDB, PostgreSQL, and Derby. All of those databases sometimes lose committed transactions. The test is included in the H2 download, see org.h2.test.poweroff.Test.
@advanced_1173_h3
Ways to (Not) Achieve Durability
@advanced_1174_p
Making sure that committed transactions are not lost is more complicated than it first seems. To guarantee complete durability, a database must ensure that the log record is on the hard drive before the commit call returns. To do that, databases use different methods. One is to use the 'synchronous write' file access mode. In Java, RandomAccessFile supports the modes "rws" and "rwd":
@advanced_1175_li
rwd: Every update to the file's content is written synchronously to the underlying storage device.
@advanced_1176_li
rws: In addition to rwd, every update to the metadata is written synchronously.
@advanced_1177_p
This feature is used by Derby. A test (org.h2.test.poweroff.TestWrite) with one of those modes achieves around 50 thousand write operations per second. Even when the operating system write buffer is disabled, the write rate is around 50 thousand operations per second. This feature does not force changes to disk because it does not flush all buffers. The test updates the same byte in the file again and again. If the hard drive were able to write at this rate, the disk would need to make at least 50 thousand revolutions per second, or 3 million RPM (revolutions per minute). There are no such hard drives. The hard drive used for the test is about 7200 RPM, or about 120 revolutions per second. There is an overhead, so the maximum write rate must be lower than that.
@advanced_1178_p
Buffers can be flushed by calling the function fsync. There are two ways to do that in Java:
@advanced_1179_li
FileDescriptor.sync(). The documentation says that this forces all system buffers to synchronize with the underlying device. Sync is supposed to return after all in-memory modified copies of buffers associated with this FileDescriptor have been written to the physical medium.
@advanced_1180_li
FileChannel.force() (since JDK 1.4). This method is supposed to force any updates to this channel's file to be written to the storage device that contains it.
@advanced_1181_p
By default, MySQL calls fsync for each commit. When using one of those methods, only around 60 write operations per second can be achieved, which is consistent with the RPM rate of the hard drive used. Unfortunately, even when calling FileDescriptor.sync() or FileChannel.force(), data is not always persisted to the hard drive, because most hard drives do not obey fsync(): see 'Your Hard Drive Lies to You' at http://hardware.slashdot.org/article.pl?sid=05/05/13/0529252. In Mac OS X, fsync does not flush hard drive buffers: http://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html. So the situation is confusing, and tests prove there is a problem.
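The FileChannel.force() flush path can be sketched as follows. The file name and record contents are arbitrary; as the text explains, whether the drive itself honors the flush is hardware-dependent.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: write a log record, then explicitly ask the OS to flush its
// buffers before "commit" returns, as a durable database would.
public class ForceDemo {
    public static long writeAndForce(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("commit record".getBytes()));
            // true = also flush file metadata, comparable to mode "rws";
            // false would be comparable to "rwd".
            ch.force(true);
            return ch.size();
        }
    }
    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("force", ".log");
        System.out.println(writeAndForce(f)); // 13 bytes written
        Files.delete(f);
    }
}
```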
@advanced_1182_p
Flushing hard drive buffers is hard, and if you do it, performance is very bad. First you need to make sure that the hard drive actually flushes all buffers. Tests show that this cannot be done in a reliable way. Then the maximum number of transactions is around 60 per second. For those reasons, the default behavior of H2 is to delay writing committed transactions.
@advanced_1183_p
In H2, after a power failure, a bit more than one second of committed transactions may be lost. To change the behavior, use SET WRITE_DELAY and CHECKPOINT SYNC. Most other databases support commit delay as well. In the performance comparison, commit delay was used for all databases that support it.
@advanced_1184_h3
Running the Durability Test
@advanced_1185_p
To test the durability / non-durability of this and other databases, you can use the test application in the package org.h2.test.poweroff. Two computers with a network connection are required to run this test. One computer just listens, while the test application is run (and power is cut) on the other computer. The computer with the listener application opens a TCP/IP port and listens for an incoming connection. The second computer first connects to the listener, and then creates the databases and starts inserting records. The connection is set to 'autocommit', which means after each inserted record a commit is performed automatically. Afterwards, the test computer notifies the listener that this record was inserted successfully. The listener computer displays the last inserted record number every 10 seconds.
Now, switch off the power manually, then restart the computer, and run the application again. You will find out that in most cases, none of the databases contains all the records that the listener computer knows about. For details, please consult the source code of the listener and test application.
@advanced_1186_h2
Using the Recover Tool
@advanced_1187_p
The recover tool can be used to extract the contents of a data file, even if the database is corrupted. At this time, it does not extract the content of the log file or large objects (CLOB or BLOB). To run the tool, type on the command line:
@advanced_1188_p
For each database in the current directory, a text file will be created. This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file cannot be executed directly, as the raw insert statements don't have the correct table names, so the file needs to be pre-processed manually before executing.
@advanced_1189_h2
File Locking Protocols
@advanced_1190_p
Whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database terminates, this lock file is deleted.
@advanced_1191_p
In special cases (if the process did not terminate normally, for example because there was a blackout), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files. There are two methods (algorithms) implemented to provide both security (that is, the same database files cannot be opened by two processes at the same time) and simplicity (that is, the lock file does not need to be deleted manually by the user). The two methods are the 'file method' and the 'socket method'.
@advanced_1192_h3
File Locking Method 'File'
@advanced_1193_p
The default method for database file locking is the 'File Method'. The algorithm is:
@advanced_1194_li
When the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition when one process deletes the lock file just after another one created it, and a third process creates the file again. It does not occur if there are only two writers.
@advanced_1195_li
If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (once every second by default) if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not go undetected even if the system is very busy. However, the watchdog thread uses very few resources (CPU time), because it waits most of the time. Also, the watchdog only reads from the hard disk and does not write to it.
@advanced_1196_li
If the lock file exists, and it was modified within the last 20 ms, the process waits for some time (up to 10 times). If it was still changed, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case and the file is locked.
@advanced_1197_p
This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. However, using that many concurrent threads / processes is not the common use case. Generally, an application should throw an error to the user if it cannot open a database, and not try again in a (fast) loop.
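The first step of the 'file' method above (atomic creation, short wait, re-check) can be sketched as follows. This is a simplified illustration, not H2's implementation: the random challenge value and the watchdog thread are omitted.

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch of the 'file' locking method's first step: atomically
// create the lock file, wait ~20 ms, then verify nobody replaced it.
public class FileLockSketch {
    public static boolean tryLock(File lockFile) throws IOException, InterruptedException {
        // File.createNewFile is atomic: returns false if the file exists.
        if (!lockFile.createNewFile()) {
            return false; // lock file already present: database may be in use
        }
        long created = lockFile.lastModified();
        Thread.sleep(20); // wait, then check the file was not changed meanwhile
        return lockFile.exists() && lockFile.lastModified() == created;
    }
    public static void main(String[] args) throws Exception {
        File lock = File.createTempFile("db", ".lock");
        lock.delete(); // start from a clean state
        System.out.println(tryLock(lock)); // true: we hold the lock
        System.out.println(tryLock(lock)); // false: already locked
        lock.delete();
    }
}
```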
@advanced_1198_h3
File Locking Method 'Socket'
@advanced_1199_p
There is a second locking mechanism implemented, but disabled by default. The algorithm is:
@advanced_1200_li
If the lock file does not exist, it is created. Then a server socket is opened on a defined port, and kept open. The port and IP address of the process that opened the database are written into the lock file.
@advanced_1201_li
If the lock file exists, and the lock method is 'file', then the software switches to the 'file' method.
@advanced_1202_li
If the lock file exists, and the lock method is 'socket', then the process checks if the port is in use. If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example due to a blackout, or abnormal termination of the virtual machine), then the port was released. The new process deletes the lock file and starts again.
@advanced_1203_p
This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is that if the file is stored on a network share, two processes (running on different computers) could still open the same database files if they do not have a direct TCP/IP connection.
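The liveness check at the heart of the 'socket' method can be sketched as follows; reading the port from the lock file is omitted, and the class name is illustrative.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the 'socket' method's liveness check: the locking process keeps
// a server socket open; a challenger decides whether that process is still
// alive by trying to connect to the port recorded in the lock file.
public class SocketLockSketch {
    public static boolean portInUse(int port) {
        try (Socket s = new Socket("127.0.0.1", port)) {
            return true;  // connected: the locking process is still running
        } catch (IOException e) {
            return false; // connection refused: the lock is stale
        }
    }
    public static void main(String[] args) throws IOException {
        try (ServerSocket lock = new ServerSocket(0)) { // port 0 = any free port
            System.out.println(portInUse(lock.getLocalPort())); // true while held
        }
    }
}
```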
@advanced_1204_h2
Protection against SQL Injection
@advanced_1205_h3
What is SQL Injection
@advanced_1206_p
This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as:
@advanced_1207_p
If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password: ' OR ''='. In this case the statement becomes:
@advanced_1208_p
This is always true, no matter what password is stored in the database. For more information about SQL Injection, see Glossary and Links.
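The injection described above can be reproduced with plain string concatenation. The query text is the usual textbook example, not a statement from this database's own code:

```java
// Demonstrates why embedding user input in SQL text is dangerous.
public class InjectionDemo {
    // The vulnerable pattern: user input concatenated into the statement.
    public static String naiveQuery(String name, String password) {
        return "SELECT * FROM USERS WHERE NAME='" + name
                + "' AND PASSWORD='" + password + "'";
    }
    public static void main(String[] args) {
        // The attacker enters ' OR ''=' as the password:
        String sql = naiveQuery("admin", "' OR ''='");
        System.out.println(sql);
        // The WHERE clause ends in ... PASSWORD='' OR ''='' , which is
        // always true. With a PreparedStatement the input stays a plain
        // parameter value instead of becoming part of the SQL text:
        //   PreparedStatement p = conn.prepareStatement(
        //       "SELECT * FROM USERS WHERE NAME=? AND PASSWORD=?");
    }
}
```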
@advanced_1209_h3
Disabling Literals
@advanced_1210_p
SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a PreparedStatement:
@advanced_1211_p
This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement:
@advanced_1212_p
Afterwards, SQL statements with text and number literals are not allowed any more. That means SQL statements of the form WHERE NAME='abc' or WHERE CustomerId=10 will fail. It is still possible to use PreparedStatements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where number literals are allowed: SET ALLOW_LITERALS NUMBERS. To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator.
@advanced_1213_h3
Using Constants
@advanced_1214_p
Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the CREATE CONSTANT command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas:
@advanced_1215_p
Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, and the source code is easier to understand and change.
@advanced_1216_h3
Using the ZERO() Function
@advanced_1217_p
It is not required to create a constant for the number 0, as there is already a built-in function ZERO():
@advanced_1218_h2
Security Protocols Security Protocols
@advanced_1215_p @advanced_1219_p
The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives. The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives.
@advanced_1216_h3 @advanced_1220_h3
User Password Encryption User Password Encryption
@advanced_1217_p @advanced_1221_p
When a user tries to connect to a database, the combination of user name, @, and password hashed using SHA-256, and this hash value is transmitted to the database. This step does not try to an attacker from re-using the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication' for more information. When a user tries to connect to a database, the combination of user name, @, and password hashed using SHA-256, and this hash value is transmitted to the database. This step does not try to an attacker from re-using the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication' for more information.
@advanced_1218_p @advanced_1222_p
When a new database or user is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords. When a new database or user is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords.
@advanced_1219_p @advanced_1223_p
The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only accessible remotely, then the iteration count is not required at all. The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only accessible remotely, then the iteration count is not required at all.
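The two hashing steps described above can be sketched as follows. This is a minimal illustration of the scheme, not H2's actual code; the exact string encoding and the internal helper names are assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class PasswordStore {
    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Step 1 (client side): hash "userName@password".
    // Illustrative only -- the encoding H2 really uses may differ.
    static byte[] userPasswordHash(String userName, String password) {
        return sha256((userName + "@" + password).getBytes(StandardCharsets.UTF_8));
    }

    // A new cryptographically secure random 64-bit salt per database/user.
    static byte[] newSalt() {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Step 2 (server side): combine the transmitted hash with the salt and
    // hash once more; this value (plus the salt) is what gets stored.
    static byte[] storedPasswordHash(byte[] userPasswordHash, byte[] salt) {
        byte[] combined = new byte[userPasswordHash.length + salt.length];
        System.arraycopy(userPasswordHash, 0, combined, 0, userPasswordHash.length);
        System.arraycopy(salt, 0, combined, userPasswordHash.length, salt.length);
        return sha256(combined); // a single iteration, as explained above
    }
}
```

Note that only the salted hash is stored, so a stolen database file does not directly reveal the value the client transmits at login.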
@advanced_1220_h3 @advanced_1224_h3
File Encryption File Encryption
@advanced_1221_p @advanced_1225_p
The database files can be encrypted using two different algorithms: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is about twice as fast as AES) and to have an alternative algorithm in case AES is suddenly broken. The database files can be encrypted using two different algorithms: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is about twice as fast as AES) and to have an alternative algorithm in case AES is suddenly broken.
@advanced_1222_p @advanced_1226_p
When a user tries to connect to an encrypted database, the combination of the word 'file', @, and the file password is hashed using SHA-256. This hash value is transmitted to the server. When a user tries to connect to an encrypted database, the combination of the word 'file', @, and the file password is hashed using SHA-256. This hash value is transmitted to the server.
@advanced_1223_p @advanced_1227_p
When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords. When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords.
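This iterated key derivation can be sketched as below, assuming the file password hash and the salt are simply concatenated before the first iteration (the exact combination step inside H2 may differ):

```java
import java.security.MessageDigest;

public class FileKey {
    // Derive the cipher key: SHA-256 applied 1024 times to (filePasswordHash || salt).
    static byte[] deriveKey(byte[] filePasswordHash, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] v = new byte[filePasswordHash.length + salt.length];
            System.arraycopy(filePasswordHash, 0, v, 0, filePasswordHash.length);
            System.arraycopy(salt, 0, v, filePasswordHash.length, salt.length);
            for (int i = 0; i < 1024; i++) {
                v = md.digest(v); // iterating makes brute-forcing common passwords slower
            }
            return v;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```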
@advanced_1224_p @advanced_1228_p
The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks. The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks.
@advanced_1225_p @advanced_1229_p
Before saving a block of data (each block is 8 bytes long), the following operations are executed: First, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm. Before saving a block of data (each block is 8 bytes long), the following operations are executed: First, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm.
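The per-block procedure can be illustrated with XTEA, which matches the 8-byte block size mentioned above. This is a self-contained sketch of the scheme as described, not H2's actual storage code; the method names and the 128-bit keys passed as four ints are assumptions.

```java
public class BlockEncryption {
    private static final int DELTA = 0x9E3779B9;

    // Standard XTEA, 32 rounds, on one 8-byte block (two 32-bit words).
    static void xteaEncrypt(int[] v, int[] k) {
        int v0 = v[0], v1 = v[1], sum = 0;
        for (int i = 0; i < 32; i++) {
            v0 += (((v1 << 4) ^ (v1 >>> 5)) + v1) ^ (sum + k[sum & 3]);
            sum += DELTA;
            v1 += (((v0 << 4) ^ (v0 >>> 5)) + v0) ^ (sum + k[(sum >>> 11) & 3]);
        }
        v[0] = v0; v[1] = v1;
    }

    static void xteaDecrypt(int[] v, int[] k) {
        int v0 = v[0], v1 = v[1], sum = DELTA * 32;
        for (int i = 0; i < 32; i++) {
            v1 -= (((v0 << 4) ^ (v0 >>> 5)) + v0) ^ (sum + k[(sum >>> 11) & 3]);
            sum -= DELTA;
            v0 -= (((v1 << 4) ^ (v1 >>> 5)) + v1) ^ (sum + k[sum & 3]);
        }
        v[0] = v0; v[1] = v1;
    }

    // IV = E(blockNumber, ivKey); ciphertext = E(plaintext XOR IV, dataKey).
    static void encryptBlock(int[] block, long blockNumber, int[] dataKey, int[] ivKey) {
        int[] iv = { (int) (blockNumber >>> 32), (int) blockNumber };
        xteaEncrypt(iv, ivKey);
        block[0] ^= iv[0];
        block[1] ^= iv[1];
        xteaEncrypt(block, dataKey);
    }

    static void decryptBlock(int[] block, long blockNumber, int[] dataKey, int[] ivKey) {
        xteaDecrypt(block, dataKey);
        int[] iv = { (int) (blockNumber >>> 32), (int) blockNumber };
        xteaEncrypt(iv, ivKey); // the IV is always computed by encrypting the block number
        block[0] ^= iv[0];
        block[1] ^= iv[1];
    }
}
```

Because each block's IV depends on the block number and the secret IV key, identical plaintext blocks at different positions encrypt to different ciphertext, which is what defeats watermark attacks.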
@advanced_1226_p @advanced_1230_p
When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR. When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR.
@advanced_1227_p @advanced_1231_p
Therefore, the block cipher mode of operation is CBC (Cipher-block chaining), but each chain is only one block long. The advantage over the ECB (Electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped ciphertext bits are not propagated to flipped plaintext bits in the next block. Therefore, the block cipher mode of operation is CBC (Cipher-block chaining), but each chain is only one block long. The advantage over the ECB (Electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped ciphertext bits are not propagated to flipped plaintext bits in the next block.
@advanced_1228_p @advanced_1232_p
Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to files while the database is in use. If he has write access, he can, for example, replace pieces of the files with pieces of older versions and manipulate the data this way. Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to files while the database is in use. If he has write access, he can, for example, replace pieces of the files with pieces of older versions and manipulate the data this way.
@advanced_1229_p @advanced_1233_p
File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer using AES (embedded mode). File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer using AES (embedded mode).
@advanced_1230_h3 @advanced_1234_h3
SSL/TLS Connections SSL/TLS Connections
@advanced_1231_p @advanced_1235_p
Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket / SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is <code>SSL_DH_anon_WITH_RC4_128_MD5</code>. Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket / SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is <code>SSL_DH_anon_WITH_RC4_128_MD5</code>.
@advanced_1232_h3 @advanced_1236_h3
HTTPS Connections HTTPS Connections
@advanced_1233_p @advanced_1237_p
The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-signed certificate to support an easy starting point, but custom certificates are supported as well. The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-signed certificate to support an easy starting point, but custom certificates are supported as well.
@advanced_1234_h2 @advanced_1238_h2
Universally Unique Identifiers (UUID) Universally Unique Identifiers (UUID)
@advanced_1235_p @advanced_1239_p
This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two of them having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values: This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two of them having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values:
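The program itself is not reproduced in this translation file. A rough equivalent using the standard birthday-paradox approximation p ≈ 1 − e^(−n(n−1)/2N) with N = 2^122 might look like this (an illustrative sketch, not the original program):

```java
public class UuidCollisionEstimate {
    // Birthday-paradox approximation: probability that at least two of n
    // randomly generated UUIDs with the given number of random bits collide.
    static double collisionProbability(double n, int randomBits) {
        double space = Math.pow(2, randomBits); // 2^122 possible values
        return 1 - Math.exp(-n * (n - 1) / (2 * space));
    }

    public static void main(String[] args) {
        for (double n : new double[] { 1e9, 1e15, 1e18 }) {
            System.out.println("after " + n + " UUIDs: p = "
                    + collisionProbability(n, 122));
        }
    }
}
```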
@advanced_1236_p @advanced_1240_p
Some values are: Some values are:
@advanced_1237_p @advanced_1241_p
To help non-mathematicians understand what those numbers mean, here is a comparison: one's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.000'000'000'06. To help non-mathematicians understand what those numbers mean, here is a comparison: one's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.000'000'000'06.
@advanced_1238_h2 @advanced_1242_h2
Settings Read from System Properties Settings Read from System Properties
@advanced_1239_p @advanced_1243_p
Some settings of the database can be set on the command line using -DpropertyName=value. It is usually not required to change those settings manually. The settings are case sensitive. Example: Some settings of the database can be set on the command line using -DpropertyName=value. It is usually not required to change those settings manually. The settings are case sensitive. Example:
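A command line might look like <code>java -Dh2.maxFileRetry=32 -cp h2.jar org.h2.tools.Server</code> (the chosen setting here is just an example). Internally, such a setting is read with System.getProperty, roughly like this hypothetical helper (a sketch, not H2's actual code):

```java
public class SettingReader {
    // Read an integer setting such as h2.maxFileRetry, falling back to the
    // documented default when the property is absent or malformed.
    static int getIntProperty(String name, int defaultValue) {
        String value = System.getProperty(name);
        if (value == null) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        System.out.println("h2.maxFileRetry = " + getIntProperty("h2.maxFileRetry", 16));
    }
}
```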
@advanced_1240_p @advanced_1244_p
The current value of the settings can be read from the table INFORMATION_SCHEMA.SETTINGS. The current value of the settings can be read from the table INFORMATION_SCHEMA.SETTINGS.
@advanced_1241_th @advanced_1245_th
Setting Setting
@advanced_1242_th @advanced_1246_th
Default Default
@advanced_1243_th @advanced_1247_th
Description Description
@advanced_1244_td @advanced_1248_td
h2.check h2.check
@advanced_1245_td @advanced_1249_td
true true
@advanced_1246_td @advanced_1250_td
Assertions in the database engine Assertions in the database engine
@advanced_1247_td @advanced_1251_td
h2.check2 h2.check2
@advanced_1248_td @advanced_1252_td
false false
@advanced_1249_td @advanced_1253_td
Additional assertions Additional assertions
@advanced_1250_td @advanced_1254_td
h2.clientTraceDirectory h2.clientTraceDirectory
@advanced_1251_td @advanced_1255_td
trace.db/ trace.db/
@advanced_1252_td @advanced_1256_td
Directory where the trace files of the JDBC client are stored (only for client / server) Directory where the trace files of the JDBC client are stored (only for client / server)
@advanced_1253_td @advanced_1257_td
h2.emergencySpaceInitial h2.emergencySpaceInitial
@advanced_1254_td @advanced_1258_td
1048576 1048576
@advanced_1255_td @advanced_1259_td
Size of 'reserve' file to detect disk full problems early Size of 'reserve' file to detect disk full problems early
@advanced_1256_td @advanced_1260_td
h2.emergencySpaceMin h2.emergencySpaceMin
@advanced_1257_td @advanced_1261_td
131072 131072
@advanced_1258_td @advanced_1262_td
Minimum size of 'reserve' file Minimum size of 'reserve' file
@advanced_1259_td @advanced_1263_td
h2.lobCloseBetweenReads h2.lobCloseBetweenReads
@advanced_1260_td @advanced_1264_td
false false
@advanced_1261_td @advanced_1265_td
Close LOB files between read operations Close LOB files between read operations
@advanced_1262_td @advanced_1266_td
h2.lobFilesInDirectories h2.lobFilesInDirectories
@advanced_1263_td @advanced_1267_td
false false
@advanced_1264_td @advanced_1268_td
Store LOB files in subdirectories Store LOB files in subdirectories
@advanced_1265_td @advanced_1269_td
h2.lobFilesPerDirectory h2.lobFilesPerDirectory
@advanced_1266_td @advanced_1270_td
256 256
@advanced_1267_td @advanced_1271_td
Maximum number of LOB files per directory Maximum number of LOB files per directory
@advanced_1268_td @advanced_1272_td
h2.logAllErrors h2.logAllErrors
@advanced_1269_td @advanced_1273_td
false false
@advanced_1270_td @advanced_1274_td
Write stack traces of any kind of error to a file Write stack traces of any kind of error to a file
@advanced_1271_td @advanced_1275_td
h2.logAllErrorsFile h2.logAllErrorsFile
@advanced_1272_td @advanced_1276_td
h2errors.txt h2errors.txt
@advanced_1273_td @advanced_1277_td
File name to log errors File name to log errors
@advanced_1274_td @advanced_1278_td
h2.maxFileRetry h2.maxFileRetry
@advanced_1275_td @advanced_1279_td
16 16
@advanced_1276_td @advanced_1280_td
Number of times to retry file delete and rename Number of times to retry file delete and rename
@advanced_1277_td @advanced_1281_td
h2.multiThreadedKernel h2.multiThreadedKernel
@advanced_1278_td @advanced_1282_td
false false
@advanced_1279_td @advanced_1283_td
Allow multiple sessions to run concurrently Allow multiple sessions to run concurrently
@advanced_1280_td @advanced_1284_td
h2.objectCache h2.objectCache
@advanced_1281_td @advanced_1285_td
true true
@advanced_1282_td @advanced_1286_td
Cache commonly used objects (integers, strings) Cache commonly used objects (integers, strings)
@advanced_1283_td @advanced_1287_td
h2.objectCacheMaxPerElementSize h2.objectCacheMaxPerElementSize
@advanced_1284_td @advanced_1288_td
4096 4096
@advanced_1285_td @advanced_1289_td
Maximum size of an object in the cache Maximum size of an object in the cache
@advanced_1286_td @advanced_1290_td
h2.objectCacheSize h2.objectCacheSize
@advanced_1287_td @advanced_1291_td
1024 1024
@advanced_1288_td @advanced_1292_td
Size of object cache Size of object cache
@advanced_1289_td @advanced_1293_td
h2.optimizeEvaluatableSubqueries h2.optimizeEvaluatableSubqueries
@advanced_1290_td @advanced_1294_td
true true
@advanced_1291_td @advanced_1295_td
Optimize subqueries that are not dependent on the outer query Optimize subqueries that are not dependent on the outer query
@advanced_1292_td @advanced_1296_td
h2.optimizeIn h2.optimizeIn
@advanced_1293_td @advanced_1297_td
true true
@advanced_1294_td @advanced_1298_td
Optimize IN(...) comparisons Optimize IN(...) comparisons
@advanced_1295_td @advanced_1299_td
h2.optimizeMinMax h2.optimizeMinMax
@advanced_1296_td @advanced_1300_td
true true
@advanced_1297_td @advanced_1301_td
Optimize MIN and MAX aggregate functions Optimize MIN and MAX aggregate functions
@advanced_1298_td @advanced_1302_td
h2.optimizeSubqueryCache h2.optimizeSubqueryCache
@advanced_1299_td @advanced_1303_td
true true
@advanced_1300_td @advanced_1304_td
Cache subquery results Cache subquery results
@advanced_1301_td @advanced_1305_td
h2.overflowExceptions h2.overflowExceptions
@advanced_1302_td @advanced_1306_td
true true
@advanced_1303_td @advanced_1307_td
Throw an exception on integer overflows Throw an exception on integer overflows
@advanced_1304_td @advanced_1308_td
h2.recompileAlways h2.recompileAlways
@advanced_1305_td @advanced_1309_td
false false
@advanced_1306_td @advanced_1310_td
Always recompile prepared statements Always recompile prepared statements
@advanced_1307_td @advanced_1311_td
h2.redoBufferSize h2.redoBufferSize
@advanced_1308_td @advanced_1312_td
262144 262144
@advanced_1309_td @advanced_1313_td
Size of the redo buffer (used at startup when recovering) Size of the redo buffer (used at startup when recovering)
@advanced_1310_td @advanced_1314_td
h2.runFinalizers h2.runFinalizers
@advanced_1311_td @advanced_1315_td
true true
@advanced_1312_td @advanced_1316_td
Run finalizers to detect unclosed connections Run finalizers to detect unclosed connections
@advanced_1313_td @advanced_1317_td
h2.scriptDirectory h2.scriptDirectory
@advanced_1314_td @advanced_1318_td
Relative or absolute directory where the script files are stored to or read from Relative or absolute directory where the script files are stored to or read from
@advanced_1315_td @advanced_1319_td
h2.serverCachedObjects h2.serverCachedObjects
@advanced_1316_td @advanced_1320_td
64 64
@advanced_1317_td @advanced_1321_td
TCP Server: number of cached objects per session TCP Server: number of cached objects per session
@advanced_1318_td @advanced_1322_td
h2.serverSmallResultSetSize h2.serverSmallResultSetSize
@advanced_1319_td @advanced_1323_td
100 100
@advanced_1320_td @advanced_1324_td
TCP Server: result sets below this size are sent in one block TCP Server: result sets below this size are sent in one block
@advanced_1321_h2 @advanced_1325_h2
Glossary and Links Glossary and Links
@advanced_1322_th @advanced_1326_th
Term Term
@advanced_1323_th @advanced_1327_th
Description Description
@advanced_1324_td @advanced_1328_td
AES-128 AES-128
@advanced_1325_td @advanced_1329_td
A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia: AES</a> A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia: AES</a>
@advanced_1326_td @advanced_1330_td
Birthday Paradox Birthday Paradox
@advanced_1327_td @advanced_1331_td
Describes the higher than expected probability that two persons in a room have the same birthday. Also valid for randomly generated UUIDs. See also: <a href="http://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia: Birthday Paradox</a> Describes the higher than expected probability that two persons in a room have the same birthday. Also valid for randomly generated UUIDs. See also: <a href="http://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia: Birthday Paradox</a>
@advanced_1328_td @advanced_1332_td
Digest Digest
@advanced_1329_td @advanced_1333_td
Protocol to protect a password (but not to protect data). See also: <a href="http://www.faqs.org/rfcs/rfc2617.html">RFC 2617: HTTP Digest Access Authentication</a> Protocol to protect a password (but not to protect data). See also: <a href="http://www.faqs.org/rfcs/rfc2617.html">RFC 2617: HTTP Digest Access Authentication</a>
@advanced_1330_td @advanced_1334_td
GCJ GCJ
@advanced_1331_td @advanced_1335_td
GNU Compiler for Java. <a href="http://gcc.gnu.org/java/">http://gcc.gnu.org/java/</a> and <a href="http://nativej.mtsystems.ch">http://nativej.mtsystems.ch/ (not free any more)</a> GNU Compiler for Java. <a href="http://gcc.gnu.org/java/">http://gcc.gnu.org/java/</a> and <a href="http://nativej.mtsystems.ch">http://nativej.mtsystems.ch/ (not free any more)</a>
@advanced_1332_td @advanced_1336_td
HTTPS HTTPS
@advanced_1333_td @advanced_1337_td
A protocol to provide security to HTTP connections. See also: <a href="http://www.ietf.org/rfc/rfc2818.txt">RFC 2818: HTTP Over TLS</a> A protocol to provide security to HTTP connections. See also: <a href="http://www.ietf.org/rfc/rfc2818.txt">RFC 2818: HTTP Over TLS</a>
@advanced_1334_td @advanced_1338_td
Modes of Operation Modes of Operation
@advanced_1335_a @advanced_1339_a
Wikipedia: Block cipher modes of operation Wikipedia: Block cipher modes of operation
@advanced_1336_td @advanced_1340_td
Salt Salt
@advanced_1337_td @advanced_1341_td
Random number to increase the security of passwords. See also: <a href="http://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia: Key derivation function</a> Random number to increase the security of passwords. See also: <a href="http://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia: Key derivation function</a>
@advanced_1338_td @advanced_1342_td
SHA-256 SHA-256
@advanced_1339_td @advanced_1343_td
A cryptographic one-way hash function. See also: <a href="http://en.wikipedia.org/wiki/SHA_family">Wikipedia: SHA hash functions</a> A cryptographic one-way hash function. See also: <a href="http://en.wikipedia.org/wiki/SHA_family">Wikipedia: SHA hash functions</a>
@advanced_1340_td @advanced_1344_td
SQL Injection SQL Injection
@advanced_1341_td @advanced_1345_td
A security vulnerability where an application generates SQL statements with embedded user input. See also: <a href="http://en.wikipedia.org/wiki/SQL_injection">Wikipedia: SQL Injection</a> A security vulnerability where an application generates SQL statements with embedded user input. See also: <a href="http://en.wikipedia.org/wiki/SQL_injection">Wikipedia: SQL Injection</a>
@advanced_1342_td @advanced_1346_td
Watermark Attack Watermark Attack
@advanced_1343_td @advanced_1347_td
Security problem of certain encryption programs where the existence of certain data can be proven without decrypting it. For more information, search the Internet for 'watermark attack cryptoloop'. Security problem of certain encryption programs where the existence of certain data can be proven without decrypting it. For more information, search the Internet for 'watermark attack cryptoloop'.
@advanced_1344_td @advanced_1348_td
SSL/TLS SSL/TLS
@advanced_1345_td @advanced_1349_td
Secure Sockets Layer / Transport Layer Security. See also: <a href="http://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a> Secure Sockets Layer / Transport Layer Security. See also: <a href="http://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a>
@advanced_1346_td @advanced_1350_td
XTEA XTEA
@advanced_1347_td @advanced_1351_td
A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/XTEA">Wikipedia: XTEA</a> A block encryption algorithm. See also: <a href="http://en.wikipedia.org/wiki/XTEA">Wikipedia: XTEA</a>
@build_1000_h1 @build_1000_h1
......
...@@ -14,1034 +14,1046 @@ Result Sets ...@@ -14,1034 +14,1046 @@ Result Sets
トランザクション分離 トランザクション分離
@advanced_1005_a @advanced_1005_a
クラスタリング / 高可用性 #Multi-Version Concurrency Control (MVCC)
@advanced_1006_a @advanced_1006_a
2フェーズコミット クラスタリング / 高可用性
@advanced_1007_a @advanced_1007_a
互換性 2フェーズコミット
@advanced_1008_a @advanced_1008_a
Windowsサービスとして実行する 互換性
@advanced_1009_a @advanced_1009_a
ODBCドライバ Windowsサービスとして実行する
@advanced_1010_a @advanced_1010_a
ACID ODBCドライバ
@advanced_1011_a @advanced_1011_a
永続性問題 ACID
@advanced_1012_a @advanced_1012_a
リカバーツールを使用する 永続性問題
@advanced_1013_a @advanced_1013_a
ファイルロックプロトコル リカバーツールを使用する
@advanced_1014_a @advanced_1014_a
SQLインジェクションに対する防御 ファイルロックプロトコル
@advanced_1015_a @advanced_1015_a
セキュリティプロトコル SQLインジェクションに対する防御
@advanced_1016_a @advanced_1016_a
汎用一意識別子 (UUID) セキュリティプロトコル
@advanced_1017_a @advanced_1017_a
システムプロパティから読み込まれた設定 汎用一意識別子 (UUID)
@advanced_1018_a @advanced_1018_a
システムプロパティから読み込まれた設定
@advanced_1019_a
用語集とリンク 用語集とリンク
@advanced_1019_h2 @advanced_1020_h2
Result Sets Result Sets
@advanced_1020_h3 @advanced_1021_h3
行数の制限 行数の制限
@advanced_1021_p @advanced_1022_p
アプリケーションから結果が返される前に、全ての行はデータベースによって読み取られます。 サーバー側のカーソルは現在サポートされていません。もし最初の数行がアプリケーションに読み取られたら、 result setサイズはパフォーマンスを改善するために制限されます。これは、クエリーの LIMIT を使用することで 実現できます (例: SELECT * FROM TEST LIMIT 100)、または Statement.setMaxRows(max) を使用します。 アプリケーションから結果が返される前に、全ての行はデータベースによって読み取られます。 サーバー側のカーソルは現在サポートされていません。もし最初の数行がアプリケーションに読み取られたら、 result setサイズはパフォーマンスを改善するために制限されます。これは、クエリーの LIMIT を使用することで 実現できます (例: SELECT * FROM TEST LIMIT 100)、または Statement.setMaxRows(max) を使用します。
@advanced_1022_h3 @advanced_1023_h3
大きなResult Set と外部ソート 大きなResult Set と外部ソート
@advanced_1023_p @advanced_1024_p
1000行以上のresult setのために、結果はディスクにバッファーされます。 もし ORDER BY が使用されていたら、ソートは、外部ソートアルゴリズムを使用して 完了しています。このケースでは、それぞれの行のブロックはクイックソートを使用してソートされ、 ディスクに書き込まれています; データを読み込んでいる時、ブロックは一緒にマージされます。 1000行以上のresult setのために、結果はディスクにバッファーされます。 もし ORDER BY が使用されていたら、ソートは、外部ソートアルゴリズムを使用して 完了しています。このケースでは、それぞれの行のブロックはクイックソートを使用してソートされ、 ディスクに書き込まれています; データを読み込んでいる時、ブロックは一緒にマージされます。
@advanced_1024_h2 @advanced_1025_h2
大きなオブジェクト 大きなオブジェクト
@advanced_1025_h3 @advanced_1026_h3
大きなオブジェクトのソートと読み込み 大きなオブジェクトのソートと読み込み
@advanced_1026_p @advanced_1027_p
メモリに収まらないオブジェクトは可能であるなら、 データ型は CLOB (テキストデータ) または BLOB (バイナリーデータ) が使用されるべきです。 これらのデータ型に関して、オブジェクトはストリームを使用して、完全にメモリから読み込まれるというわけではありません。 BLOB を保存するためには、PreparedStatement.setBinaryStream を使用します。 CLOB を使用するためには、PreparedStatement.setCharacterStream を使用します。 BLOB を読み込みためには、ResultSet.getBinaryStream を使用し、CLOB を読み込むために ResultSet.getCharacterStream を使用します。もし クライアント / サーバーモードが使用されていたら、 BLOB と CLOB データはアクセス時に完全にメモリから読み込まれます。このケースでは、メモリによって BLOB と CLOB のサイズは制限されています。 メモリに収まらないオブジェクトは可能であるなら、 データ型は CLOB (テキストデータ) または BLOB (バイナリーデータ) が使用されるべきです。 これらのデータ型に関して、オブジェクトはストリームを使用して、完全にメモリから読み込まれるというわけではありません。 BLOB を保存するためには、PreparedStatement.setBinaryStream を使用します。 CLOB を使用するためには、PreparedStatement.setCharacterStream を使用します。 BLOB を読み込みためには、ResultSet.getBinaryStream を使用し、CLOB を読み込むために ResultSet.getCharacterStream を使用します。もし クライアント / サーバーモードが使用されていたら、 BLOB と CLOB データはアクセス時に完全にメモリから読み込まれます。このケースでは、メモリによって BLOB と CLOB のサイズは制限されています。
@advanced_1027_h2 @advanced_1028_h2
リンクテーブル リンクテーブル
@advanced_1028_p @advanced_1029_p
このデータベースはリンクテーブルをサポートしています。これは、 現在存在しないテーブルは、ただ他のデータベースへリンクするという意味です。 このようなリンクを作るには、CREATE LINKED TABLE ステートメントを使用します: このデータベースはリンクテーブルをサポートしています。これは、 現在存在しないテーブルは、ただ他のデータベースへリンクするという意味です。 このようなリンクを作るには、CREATE LINKED TABLE ステートメントを使用します:
@advanced_1029_p @advanced_1030_p
この時、通常の方法でテーブルにアクセスすることが可能です。このテーブルにデータを挿入する時、 制限があります: テーブルに行を挿入、または更新する時、insertステートメントで設定されていないNULLと値は、 両方ともNULLとして挿入されます。目的のテーブルのデフォルト値がNULL以外なら、 望みどおりの効果は得られません。 この時、通常の方法でテーブルにアクセスすることが可能です。このテーブルにデータを挿入する時、 制限があります: テーブルに行を挿入、または更新する時、insertステートメントで設定されていないNULLと値は、 両方ともNULLとして挿入されます。目的のテーブルのデフォルト値がNULL以外なら、 望みどおりの効果は得られません。
@advanced_1030_p @advanced_1031_p
各リンクテーブルの新しい接続は開かれます。多くのリンクテーブルが使用されている時、一部データベースにとってこれは問題となり得ます。Oracle XEでは、接続の最大数を増加することができます。Oracle XEは次の値の変更後、再起動する必要があります: 各リンクテーブルの新しい接続は開かれます。多くのリンクテーブルが使用されている時、一部データベースにとってこれは問題となり得ます。Oracle XEでは、接続の最大数を増加することができます。Oracle XEは次の値の変更後、再起動する必要があります:
@advanced_1031_h2 @advanced_1032_h2
トランザクション分離 トランザクション分離
@advanced_1032_p @advanced_1033_p
このデータベースは次のトランザクション分離レベルをサポートしています: このデータベースは次のトランザクション分離レベルをサポートしています:
@advanced_1033_b @advanced_1034_b
Read Committed (コミット済み読み取り) Read Committed (コミット済み読み取り)
@advanced_1034_li @advanced_1035_li
これはデフォルトレベルです。 これはデフォルトレベルです。
read lockは早急に解除されます。 このレベルを使用する時、高い同時並行性が可能です。 read lockは早急に解除されます。 このレベルを使用する時、高い同時並行性が可能です。
これは多数のデータベースシステムで使用される分離レベルです。 これは多数のデータベースシステムで使用される分離レベルです。
@advanced_1035_li @advanced_1036_li
これを有効にするには、 SQLステートメント 'SET LOCK_MODE 3' を実行します。 これを有効にするには、 SQLステートメント 'SET LOCK_MODE 3' を実行します。
@advanced_1036_li @advanced_1037_li
または、;LOCK_MODE=3 をデータベースURLに付け加えます: jdbc:h2:~/test;LOCK_MODE=3 または、;LOCK_MODE=3 をデータベースURLに付け加えます: jdbc:h2:~/test;LOCK_MODE=3
@advanced_1037_b @advanced_1038_b
Serializable (直列化) Serializable (直列化)
@advanced_1038_li @advanced_1039_li
これを有効にするには、 SQLステートメント 'SET LOCK_MODE 1' を実行します。 これを有効にするには、 SQLステートメント 'SET LOCK_MODE 1' を実行します。
@advanced_1039_li @advanced_1040_li
または、;LOCK_MODE=1 をデータベースURLに付け加えます: jdbc:h2:~/test;LOCK_MODE=1 または、;LOCK_MODE=1 をデータベースURLに付け加えます: jdbc:h2:~/test;LOCK_MODE=1
@advanced_1040_b @advanced_1041_b
Read Uncommitted (非コミット読み取り) Read Uncommitted (非コミット読み取り)
@advanced_1041_li @advanced_1042_li
このレベルの意味は、トランザクション分離は無効だということです。 このレベルの意味は、トランザクション分離は無効だということです。
@advanced_1042_li @advanced_1043_li
これを有効にするには、SQLステートメント 'SET LOCK_MODE 0' を実行します これを有効にするには、SQLステートメント 'SET LOCK_MODE 0' を実行します
@advanced_1043_li @advanced_1044_li
または、;LOCK_MODE=0 をデータベースURLに付け加えます: jdbc:h2:~/test;LOCK_MODE=0 または、;LOCK_MODE=0 をデータベースURLに付け加えます: jdbc:h2:~/test;LOCK_MODE=0
@advanced_1044_p @advanced_1045_p
分離レベル "serializable" を使用している時、ダーティリード、反復不可能読み取り、 ファントムリードを防ぐことができます。 分離レベル "serializable" を使用している時、ダーティリード、反復不可能読み取り、 ファントムリードを防ぐことができます。
@advanced_1045_b @advanced_1046_b
Dirty Reads (ダーティリード) Dirty Reads (ダーティリード)
@advanced_1046_li @advanced_1047_li
他の接続によるコミットされていない変更を読み取ることができる、という意味です。 他の接続によるコミットされていない変更を読み取ることができる、という意味です。
@advanced_1047_li @advanced_1048_li
実行可能: read uncommitted (非コミット読み取り) 実行可能: read uncommitted (非コミット読み取り)
@advanced_1048_b @advanced_1049_b
Non-Repeatable Reads (反復不可能読み取り) Non-Repeatable Reads (反復不可能読み取り)
@advanced_1049_li @advanced_1050_li
ひとつの接続が行を読み取り、 他の接続が行を変更し、コミットすると、最初の接続は同じ行を再読し、新しい結果を取得します。 ひとつの接続が行を読み取り、 他の接続が行を変更し、コミットすると、最初の接続は同じ行を再読し、新しい結果を取得します。
@advanced_1050_li @advanced_1051_li
実行可能: read uncommitted (非コミット読み取り)、read committed (コミット済み読み取り) 実行可能: read uncommitted (非コミット読み取り)、read committed (コミット済み読み取り)
@advanced_1051_b @advanced_1052_b
Phantom Reads (ファントムリード) Phantom Reads (ファントムリード)
@advanced_1052_li @advanced_1053_li
ひとつの接続が条件を使って行の集まりを読み取り、 他の接続がこの条件を壊して行を挿入し、コミットした時、最初の接続は同じ条件を使って再読し、 新しい行を取得します。 ひとつの接続が条件を使って行の集まりを読み取り、 他の接続がこの条件を壊して行を挿入し、コミットした時、最初の接続は同じ条件を使って再読し、 新しい行を取得します。
@advanced_1053_li @advanced_1054_li
実行可能: read uncommitted (非コミット読み取り)、read committed (コミット済み読み取り) 実行可能: read uncommitted (非コミット読み取り)、read committed (コミット済み読み取り)
@advanced_1054_h3 @advanced_1055_h3
テーブルレベルロック テーブルレベルロック
@advanced_1055_p @advanced_1056_p
このデータベースは、同じデータベースへの複数の並列接続を許可しています。 全ての接続が一貫性のあるデータのみ参照できることを確認するために、テーブルレベルのロックを使用しています。 このメカニズムは高い同時並行性を要求しませんが、とても高速です。 共有ロックと排他ロックがサポートされています。 テーブルから読み取る前に、データベースはテーブルに共有ロックを追加しようとします (これはオブジェクト上に他の接続による排他ロックがない場合にのみ可能です)。 共有ロックが正常に追加されたら、テーブルを読み取ることができます。 他の接続も同じオブジェクトに共有ロックを持つことは許可されています。もし接続がテーブルに書き込みをしたいのであれば (行の更新、または削除)、排他ロックが必要です。排他ロックを取得するためには、 他の接続はオブジェクト上にどんなロックも持っていてはいけません。 接続のコミット後、全てのロックは解除されます。このデータベースは全てのロックをメモリ内に保持します。 このデータベースは、同じデータベースへの複数の並列接続を許可しています。 全ての接続が一貫性のあるデータのみ参照できることを確認するために、テーブルレベルのロックを使用しています。 このメカニズムは高い同時並行性を要求しませんが、とても高速です。 共有ロックと排他ロックがサポートされています。 テーブルから読み取る前に、データベースはテーブルに共有ロックを追加しようとします (これはオブジェクト上に他の接続による排他ロックがない場合にのみ可能です)。 共有ロックが正常に追加されたら、テーブルを読み取ることができます。 他の接続も同じオブジェクトに共有ロックを持つことは許可されています。もし接続がテーブルに書き込みをしたいのであれば (行の更新、または削除)、排他ロックが必要です。排他ロックを取得するためには、 他の接続はオブジェクト上にどんなロックも持っていてはいけません。 接続のコミット後、全てのロックは解除されます。このデータベースは全てのロックをメモリ内に保持します。
@advanced_1056_h3 @advanced_1057_h3
ロックタイムアウト ロックタイムアウト
@advanced_1057_p @advanced_1058_p
もし接続がオブジェクト上でロックを取得できないのであれば、一定時間待機します (ロックタイムアウト)。この時間の間、うまくいけば接続はロックコミットを保有し、 この時、ロックを取得することが可能です。他の接続がロックを解除しないため、 これが不可能であれば、失敗した接続がロックタイムアウト例外を取得します。 それぞれの接続に個別にロックタイムアウトを設定することができます。 もし接続がオブジェクト上でロックを取得できないのであれば、一定時間待機します (ロックタイムアウト)。この時間の間、うまくいけば接続はロックコミットを保有し、 この時、ロックを取得することが可能です。他の接続がロックを解除しないため、 これが不可能であれば、失敗した接続がロックタイムアウト例外を取得します。 それぞれの接続に個別にロックタイムアウトを設定することができます。
@advanced_1058_h2 @advanced_1059_h2
#Multi-Version Concurrency Control (MVCC)
@advanced_1060_p
#The MVCC feature allows higher concurrency than using (table level or row level) locks. When using MVCC in this database, delete, insert and update operations only issue a shared lock on the table. Tables are still locked exclusively when adding or removing columns, when dropping the table, and when using SELECT ... FOR UPDATE. Connections only 'see' committed data and their own changes. That means that if connection A updates a row but doesn't commit this change yet, connection B will see the old value. Only when the change is committed is the new value visible to other connections (read committed). If multiple connections concurrently try to update the same row, this database fails fast: a concurrent update exception is thrown. #クラスタリングはサーバーモードでのみ使用できます (エンベッドモードはクラスタリングをサポートしていません)。 サーバーを停止しないでクラスタを回復することは可能ですが、二番目のデータベースが回復している間に、 他のどんなアプリケーションでも最初のデータベースのデータを変更しないことは重要なため、 クラスタを回復するのは現在手動プロセスです。
@advanced_1061_p
#To use the MVCC feature, append MVCC=TRUE to the database URL: #クラスタを初期化するには、次の手順に従います:
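The corresponding HTML documentation shows the URL form, for example:

```
jdbc:h2:~/test;MVCC=TRUE
```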
@advanced_1062_h2
Clustering / High Availability
@advanced_1063_p
This database supports a simple clustering / high availability mechanism. The architecture is: two database servers run on two different computers, and on both computers is a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed on one server only, until the other server is back up.
@advanced_1064_p
Clustering can only be used in the server mode (the embedded mode does not support clustering). It is possible to restore the cluster without stopping the servers, however it is important that no other application is changing the data in the first database while the second database is restored, so restoring the cluster is currently a manual process.
@advanced_1065_p
To initialize the cluster, use the following steps:
@advanced_1066_li
Create a database
@advanced_1067_li
Use the CreateCluster tool to copy the database to another location and initialize the clustering. Afterwards, you have two databases containing the same data.
@advanced_1068_li
Start two servers (one for each copy of the database)
@advanced_1069_li
You are now ready to connect to the databases with the client application(s)
@advanced_1070_h3
Using the CreateCluster Tool
@advanced_1071_p
To understand how clustering works, please try out the following example. In this example, the two databases reside on the same computer, but usually, the databases will be on different servers.
@advanced_1072_li
Create two directories: server1 and server2. Each directory simulates a directory on a computer.
@advanced_1073_li
Start a TCP server pointing to the first directory. You can do this using the command line:
@advanced_1074_li
Start a second TCP server pointing to the second directory. This simulates a server running on a second (redundant) computer. You can do this using the command line:
@advanced_1075_li
Use the CreateCluster tool to initialize clustering. This will automatically create a new, empty database if it does not exist. Run the tool on the command line:
@advanced_1076_li
You can now connect to the databases using an application or the H2 Console, using the JDBC URL jdbc:h2:tcp://localhost:9101,localhost:9102/test
@advanced_1077_li
If you stop a server (by killing the process), you will notice that the other machine continues to work, and therefore the database is still accessible.
@advanced_1078_li
To restore the cluster, you first need to delete the database that failed, then restart the server that was stopped, and re-run the CreateCluster tool.
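The command lines referred to in the steps above are kept in separate resources of the original document. As a sketch, the sequence could look like this (the ports 9101/9102 match the JDBC URL mentioned above; the exact tool options are assumptions based on the H2 tools of that era):

```
java -cp h2.jar org.h2.tools.Server -tcp -tcpPort 9101 -baseDir server1
java -cp h2.jar org.h2.tools.Server -tcp -tcpPort 9102 -baseDir server2
java -cp h2.jar org.h2.tools.CreateCluster
    -urlSource jdbc:h2:tcp://localhost:9101/test
    -urlTarget jdbc:h2:tcp://localhost:9102/test
    -user sa -serverList localhost:9101,localhost:9102
```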
@advanced_1079_h3
Clustering Algorithm and Limitations
@advanced_1080_p
Read-only queries are only executed against the first cluster node, but all other statements are executed against all nodes. There is currently no load balancing made to avoid problems with transactions. The following functions may yield different results on different cluster nodes and must be executed with care: RANDOM_UUID(), SECURE_RAND(), SESSION_ID(), MEMORY_FREE(), MEMORY_USED(), CSVREAD(), CSVWRITE(), RAND() [when not using a seed]. Those functions should not be used directly in modifying statements (for example INSERT, UPDATE, or MERGE). However, they can be used in read-only statements, and the result can then be used for modifying statements.
@advanced_1081_h2
Two Phase Commit
@advanced_1082_p
The two phase commit protocol is supported. Two phase commit works as follows:
@advanced_1083_li
Autocommit needs to be switched off
@advanced_1084_li
A transaction is started, for example by inserting a row
@advanced_1085_li
The transaction is marked 'prepared' by executing the SQL statement PREPARE COMMIT transactionName
@advanced_1086_li
The transaction can now be committed or rolled back
@advanced_1087_li
If a problem occurs before the transaction was successfully committed or rolled back (for example because a network problem occurred), the transaction is in the state 'in-doubt'
@advanced_1088_li
When re-connecting to the database, the in-doubt transactions are listed with SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT
@advanced_1089_li
Each transaction in this list must then be committed or rolled back by executing COMMIT TRANSACTION transactionName or ROLLBACK TRANSACTION transactionName
@advanced_1090_li
The database needs to be closed and re-opened to apply the changes
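The steps above can be sketched as a SQL session (the table and transaction names are illustrative):

```sql
SET AUTOCOMMIT FALSE;
INSERT INTO TEST VALUES(1);
PREPARE COMMIT tx_example;
-- after a failure and re-connect:
SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT;
COMMIT TRANSACTION tx_example;
-- or: ROLLBACK TRANSACTION tx_example;
```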
@advanced_1091_h2
Compatibility
@advanced_1092_p
This database is (up to a certain point) compatible with other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible.
@advanced_1093_h3
Transaction Commit when Autocommit is On
@advanced_1094_p
At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case only when the result set is closed.
@advanced_1095_h3
Keywords / Reserved Words
@advanced_1096_p
There is a list of keywords that cannot be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). The list is currently:
@advanced_1097_p
CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE, CROSS, DISTINCT, EXCEPT, EXISTS, FROM, FOR, FALSE, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, MINUS, NATURAL, NOT, NULL, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, WHERE
@advanced_1098_p
Certain words of this list are keywords because they are functions that can be used without '()' for compatibility, for example CURRENT_TIMESTAMP.
@advanced_1099_h2
Run as Windows Service
@advanced_1100_p
Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from Tanuki Software, Inc. (<a href="http://wrapper.tanukisoftware.org/">http://wrapper.tanukisoftware.org/</a>) is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory H2/service.
@advanced_1101_h3
Install the Service
@advanced_1102_p
The service needs to be registered as a Windows Service first. To do that, double click on 1_install_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
@advanced_1103_h3
Start the Service
@advanced_1104_p
You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on 2_start_service.bat. Please note that the batch file does not print an error message if the service is not installed.
@advanced_1105_h3
Connect to the H2 Console
@advanced_1106_p
After installing and starting the service, you can connect to the H2 Console application using a browser. To do that, double click on 3_start_browser.bat. The default port (8082) is hard coded in the batch file.
@advanced_1107_h3
Stop the Service
@advanced_1108_p
To stop the service, double click on 4_stop_service.bat. Please note that the batch file does not print an error message if the service is not installed or started.
@advanced_1109_h3
Uninstall the Service
@advanced_1110_p
To uninstall the service, double click on 5_uninstall_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
@advanced_1111_h2
ODBC Driver
@advanced_1112_p
This database does not come with its own ODBC driver at this time, but it supports the PostgreSQL network protocol. Therefore, the PostgreSQL ODBC driver can be used. Support for the PostgreSQL network protocol is quite new and should be viewed as experimental. It should not be used for production applications.
@advanced_1113_p
At this time, the PostgreSQL ODBC driver does not work on 64 bit versions of Windows. For more information, see <a href="http://svr5.postgresql.org/pgsql-odbc/2005-09/msg00127.php">ODBC Driver on Windows 64 bit</a>.
@advanced_1114_h3
ODBC Installation
@advanced_1115_p
First, the ODBC driver must be installed. Any recent PostgreSQL ODBC driver should work, however version 8.2.4 or newer is recommended. The Windows version of the PostgreSQL ODBC driver is available at <a href="http://www.postgresql.org/ftp/odbc/versions/msi">http://www.postgresql.org/ftp/odbc/versions/msi</a>.
@advanced_1116_h3
Starting the Server
@advanced_1117_p
After installing the ODBC driver, start the H2 Server using the command line:
@advanced_1118_p
The PG Server (PG for PostgreSQL protocol) is started as well. By default, databases are stored in the current working directory where the server was started. To store databases in another directory, such as the user home directory, use -baseDir:
@advanced_1119_p
The PG server can be started and stopped from within a Java application as follows:
@advanced_1120_p
By default, only connections from localhost are allowed. To allow remote connections, use <code>-pgAllowOthers true</code> when starting the server.
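The command lines referred to above live in separate resources of the original document; as a sketch (the class path and home-directory path are assumptions; from Java, the org.h2.tools.Server API offers createPgServer(...).start() and stop()):

```
java -cp h2.jar org.h2.tools.Server
java -cp h2.jar org.h2.tools.Server -baseDir ~
java -cp h2.jar org.h2.tools.Server -pgAllowOthers true
```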
@advanced_1121_h3
ODBC Configuration
@advanced_1122_p
After installing the driver, a new Data Source must be added. In Windows, run <code>odbcad32.exe</code> to open the Data Source Administrator. Click on "Add...", select the PostgreSQL Unicode driver, and click "Finish". You will be able to change the connection properties:
@advanced_1123_th
Property
@advanced_1124_th
@advanced_1125_th
Comment
@advanced_1126_td
Data Source
@advanced_1127_td
H2 Test
@advanced_1128_td
The name of the ODBC Data Source
@advanced_1129_td
Database
@advanced_1130_td
test
@advanced_1131_td
The database name. Only simple names are supported at this time;
@advanced_1132_td
relative or absolute paths are not supported in the database name.
@advanced_1133_td
By default, except when the -baseDir setting is used,
@advanced_1134_td
the database is stored in the current working directory where the server was started.
@advanced_1135_td
The name must be at least 3 characters.
@advanced_1136_td
Server
@advanced_1137_td
localhost
@advanced_1138_td
The server name or IP address
@advanced_1139_td
By default, only local (localhost) connections are allowed.
@advanced_1140_td
User Name
@advanced_1141_td
sa
@advanced_1142_td
The database user name
@advanced_1143_td
SSL Mode
@advanced_1144_td
disabled
@advanced_1145_td
At this time, SSL is not supported.
@advanced_1146_td
Port
@advanced_1147_td
5435
@advanced_1148_td
The port the PG Server is listening on
@advanced_1149_td
Password
@advanced_1150_td
sa
@advanced_1151_td
The database password
@advanced_1152_p
Afterwards, you can use this data source.
@advanced_1153_h3
PG Protocol Support Limitations
@advanced_1154_p
At this time, only a subset of the PostgreSQL network protocol is implemented. Also, there may be compatibility problems on the SQL level, with the catalog, or with text encoding. Problems are fixed as they are found. Currently, statements cannot be canceled when the PG protocol is used.
@advanced_1155_h3
Security Considerations
@advanced_1156_p
Currently, the PG Server does not support challenge response or encrypted passwords. This is a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable. Also, encrypted SSL connections are currently not available. Therefore, the ODBC driver should not be used where security is important.
@advanced_1157_h2
ACID
@advanced_1158_p
In the database world, ACID stands for:
@advanced_1159_li
Atomicity: transactions must be atomic, meaning either all tasks are performed or none.
@advanced_1160_li
Consistency: all operations must comply with the defined constraints.
@advanced_1161_li
Isolation: transactions must be isolated from each other.
@advanced_1162_li
Durability: committed transactions are not lost.
@advanced_1163_h3
Atomicity
@advanced_1164_p
Transactions in this database are always atomic.
@advanced_1165_h3
Consistency
@advanced_1166_p
This database is always in a consistent state. Referential integrity rules are always enforced.
@advanced_1167_h3
Isolation
@advanced_1168_p
For H2, as with most other database systems, the default isolation level is 'read committed'. This provides better performance, but also means that transactions are not completely isolated. H2 supports the transaction isolation levels 'serializable', 'read committed', and 'read uncommitted'.
@advanced_1169_h3
Durability
@advanced_1170_p
This database does not guarantee that all committed transactions survive a power failure. Tests show that all databases sometimes lose transactions on power failure (for details, see below). Where losing transactions is not acceptable, a laptop or UPS (uninterruptible power supply) should be used. If durability is required for all possible cases of hardware failure, clustering should be used, such as the H2 clustering mode.
@advanced_1171_h2
Durability Problems
@advanced_1172_p
Complete durability means all committed transactions survive a power failure. Some databases claim they can guarantee durability, but such claims are wrong. A durability test was run against H2, HSQLDB, PostgreSQL and Derby. All of those databases sometimes lose committed transactions. The test is included in the H2 download, see org.h2.test.poweroff.Test.
@advanced_1173_h3
Ways to (Not) Achieve Durability
@advanced_1174_p
Making sure that committed transactions are not lost is more complicated than it seems at first. To guarantee complete durability, a database must ensure that the log record is on the hard drive before the commit call returns. To do that, databases use different methods. One is to use the 'synchronous write' file access mode. In Java, RandomAccessFile supports the modes 'rws' and 'rwd':
@advanced_1175_li
rwd: every update to the file's content is written synchronously to the underlying storage device.
@advanced_1176_li
rws: in addition to rwd, every update to the metadata is written synchronously.
@advanced_1177_p
This feature is used by Derby. In a test (org.h2.test.poweroff.TestWrite), one of those modes achieves around 50 thousand write operations per second. Even when the operating system write buffer is disabled, the write rate is around 50 thousand operations per second. This feature does not force changes to disk, because it does not flush all buffers. The test updates the same byte in the file again and again. If the hard drive were able to write at this rate, the disk would need to make at least 50 thousand revolutions per second, or 3 million RPM (revolutions per minute). There are no hard drives that can do that. The hard drive used for the test is about 7200 RPM, or about 120 revolutions per second. Because there is an overhead, the maximum write rate must be lower than that.
@advanced_1178_p
Buffers can be flushed by calling the function fsync. There are two ways to do that in Java:
@advanced_1179_li
FileDescriptor.sync(). The documentation says that this forces all system buffers to synchronize with the underlying device. Sync is supposed to return after all in-memory modified copies of buffers associated with this FileDescriptor have been written to the physical medium.
@advanced_1180_li
FileChannel.force() (since JDK 1.4). This method is supposed to force any updates to this channel's file to be written to the storage device that contains it.
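The two calls above can be exercised with a small stand-alone program (the file name is illustrative):

```java
import java.io.RandomAccessFile;

public class SyncDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile file = new RandomAccessFile("sync-demo.bin", "rw");
        file.write(1);
        // FileDescriptor.sync(): flush system buffers for this descriptor
        file.getFD().sync();
        // FileChannel.force(): flush channel updates (true = include metadata)
        file.getChannel().force(true);
        file.close();
        System.out.println("synced");
    }
}
```

Whether the bytes actually reach the platter still depends on the drive honoring the flush, as the surrounding text explains.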
@advanced_1181_p
By default, MySQL calls fsync for each commit. When using one of those methods, only around 60 operations per second can be achieved, which is consistent with the RPM rate of the hard drive used. Unfortunately, even when calling FileDescriptor.sync() or FileChannel.force(), data is not always persisted to the hard drive, because most hard drives do not obey fsync(): see 'Your Hard Drive Lies to You' at http://hardware.slashdot.org/article.pl?sid=05/05/13/0529252. In Mac OS X, fsync does not flush hard drive buffers: http://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html. So the situation is confusing, and tests prove there is a problem.
@advanced_1182_p
Trying hard to flush the hard drive buffers leads to very poor performance. First, you need to make sure that the hard drive actually flushes all buffers; tests show that this cannot be done in a reliable way. Then the maximum number of transactions is around 60 per second. For those reasons, the default behavior of H2 is to delay writing committed transactions.
@advanced_1183_p
In H2, after a power failure, a bit more than one second of committed transactions may be lost. To change this behavior, use SET WRITE_DELAY and CHECKPOINT SYNC. Most other databases support commit delay as well. In the performance comparison, commit delay was used for all databases that support it.
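The two statements named above are plain SQL; a sketch (the zero delay is just an example):

```sql
-- write committed transactions immediately (at a performance cost)
SET WRITE_DELAY 0;
-- flush all pending changes to disk now
CHECKPOINT SYNC;
```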
@advanced_1184_h3
Running the Durability Test
@advanced_1185_p
To test the durability / non-durability of this and other databases, you can use the test application in the package org.h2.test.poweroff. Two computers with a network connection are required to run this test. One computer just listens, while the test application is run (and power is cut) on the other computer. The computer with the listener application opens a TCP/IP port and listens for an incoming connection. The second computer first connects to the listener, then creates the databases and starts inserting records. The connection is set to 'autocommit', which means that after each inserted record a commit is performed automatically. Afterwards, the test computer notifies the listener that this record was inserted successfully. The listener computer displays the last inserted record every 10 seconds. Kill the power manually, restart the computer, and run the application again. You will find out that in most cases, none of the databases contains all the records that the listener computer knows about. For details, please consult the source code of the listener and test application.
@advanced_1186_h2
Using the Recover Tool
@advanced_1187_p
The Recover tool can be used to extract the contents of a data file, even if the database is corrupted. At this time, it does not extract the contents of the log file or of large objects (CLOB or BLOB). To run the tool, type on the command line:
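The exact command line is kept in a separate resource of the original document; as a sketch (the class path is an assumption):

```
java -cp h2.jar org.h2.tools.Recover
```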
@advanced_1188_p
For each database in the current directory, a text file will be created. This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file cannot be executed directly, as the raw insert statements do not have the correct table names, so the file needs to be pre-processed manually before execution.
@advanced_1189_h2
File Locking Protocol
@advanced_1190_p
Whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database terminates, the lock file is deleted.
@advanced_1191_p
In special cases (if the process did not terminate normally, for example because of a power failure), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files. There are two methods (algorithms) implemented to provide both security (the same database files cannot be opened by two processes at the same time) and simplicity (the lock file does not need to be deleted manually by the user). The two methods are the 'file method' and the 'socket method'.
@advanced_1192_h3
File Locking Method 'File'
@advanced_1193_p
The default method for database file locking is the 'file method'. The algorithm is:
@advanced_1194_li
When the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition when one process deletes the lock file just after another one created it, and a third process creates the file again. It does not occur if there are only two writers.
@advanced_1195_li
If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (every second by default) if the file was deleted or modified by another thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not get through undetected even if the system is very busy. However, the watchdog thread uses very few resources (CPU time), because it waits most of the time. Also, the watchdog only reads from the hard disk and does not write to it.
@advanced_1196_li
If the lock file exists and was modified within the last 20 ms, the process waits for some time (up to 10 times). If it was still changed, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version. Then, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change, and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case, and the file is locked.
@advanced_1197_p
This algorithm has been tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other for some time (meaning the file cannot be locked by any of them). However, the file is never locked by two threads at the same time. Using that many concurrent threads / processes is not a common use case anyway; generally, an application should throw an error to the user if it cannot open a database, and not retry in a (fast) loop.
@advanced_1198_h3
File Locking Method 'Socket'
@advanced_1199_p
There is a second locking mechanism implemented, but it is disabled by default. The algorithm is:
@advanced_1200_li
If the lock file does not exist, it is created. Then a server socket is opened on a defined port and kept open. The port and IP address of the process that opened the database are written into the lock file.
@advanced_1201_li
If the lock file exists and the lock method is 'file', the software switches to the 'file' method.
@advanced_1202_li
If the lock file exists and the lock method is 'socket', the process checks if the port is in use. If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example because of a power failure, or an abnormal termination of the virtual machine), the port was released. The new process deletes the lock file and starts again.
@advanced_1203_p
This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is that if the file is stored on a network share, two processes (running on different computers) could still open the same database files if they do not have a direct TCP/IP connection.
@advanced_1204_h2
Protection against SQL Injection
@advanced_1205_h3
What is SQL Injection
@advanced_1206_p
This database engine provides a solution for the security vulnerability known as 'SQL injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as:
@advanced_1207_p
If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password: ' OR ''='. In this case the statement becomes:
@advanced_1208_p
This is always true, no matter what password is stored in the database. For more information about SQL injection, see Glossary and Links.
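The embedded statements themselves are kept in separate resources of the original document. As an illustration (table and column names are assumptions), the vulnerable pattern and the injected result could look like:

```sql
-- built by concatenating user input:
SELECT * FROM USERS WHERE PASSWORD='<passwordField>';
-- with the input  ' OR ''='  the statement becomes:
SELECT * FROM USERS WHERE PASSWORD='' OR ''='';
```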
@advanced_1209_h3
Disabling Literals
@advanced_1210_p
SQL injection is not possible if user input is not directly embedded in SQL statements. A simple solution to the problem above is to use a PreparedStatement:
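The PreparedStatement snippet itself lives in a separate resource of the original document; a sketch of the JDBC idiom (variable names are illustrative):

```java
PreparedStatement prep = conn.prepareStatement(
        "SELECT * FROM USERS WHERE PASSWORD=?");
prep.setString(1, enteredPassword); // bound as data, never parsed as SQL
ResultSet rs = prep.executeQuery();
```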
@advanced_1211_p
This database provides a way to enforce the usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement:
@advanced_1212_p
Afterwards, SQL statements with text and number literals are no longer allowed. That means SQL statements of the form WHERE NAME='abc' or WHERE CustomerId=10 will fail. It is still possible to use PreparedStatements and parameters as described above. Also, it is still possible to generate SQL statements dynamically and to use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where only number literals are allowed: SET ALLOW_LITERALS NUMBERS. To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator.
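The statement referred to above is plain SQL; the three modes look like this:

```sql
SET ALLOW_LITERALS NONE;    -- reject all text and number literals
SET ALLOW_LITERALS NUMBERS; -- allow number literals only
SET ALLOW_LITERALS ALL;     -- allow all literals (the default)
```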
@advanced_1209_h3 @advanced_1213_h3
定数を使用する 定数を使用する
@advanced_1210_p @advanced_1214_p
リテラルを無効にするということは、ハードコード化された "定数" リテラルを無効にする、 ということも意味します。このデータベースは、CREATE CONSTANT コマンドを使用して定数を定義することをサポートしています。 定数はリテラルが有効であるときのみ定義することができますが、リテラルが無効の時でも使用することができます。 カラム名の名前の衝突を避けるために、定数は他のスキーマで定義できます: リテラルを無効にするということは、ハードコード化された "定数" リテラルを無効にする、 ということも意味します。このデータベースは、CREATE CONSTANT コマンドを使用して定数を定義することをサポートしています。 定数はリテラルが有効であるときのみ定義することができますが、リテラルが無効の時でも使用することができます。 カラム名の名前の衝突を避けるために、定数は他のスキーマで定義できます:
@advanced_1211_p @advanced_1215_p
リテラルが有効の時でも、クエリーやビューの中でハードコード化された数値リテラル、 またはテキストリテラルの代わりに、定数を使用する方がより良いでしょう。定数を使用すれば、タイプミスはコンパイル時に発見され、ソースコードは理解、変更しやすくなります。 リテラルが有効の時でも、クエリーやビューの中でハードコード化された数値リテラル、 またはテキストリテラルの代わりに、定数を使用する方がより良いでしょう。定数を使用すれば、タイプミスはコンパイル時に発見され、ソースコードは理解、変更しやすくなります。
@advanced_1212_h3 @advanced_1216_h3
ZERO() 関数を使用する ZERO() 関数を使用する
@advanced_1213_p @advanced_1217_p
組み込み関数 ZERO() がすでにあるため、 数値 0 のための定数を作る必要はありません: 組み込み関数 ZERO() がすでにあるため、 数値 0 のための定数を作る必要はありません:
@advanced_1214_h2 @advanced_1218_h2
セキュリティプロトコル セキュリティプロトコル
@advanced_1215_p @advanced_1219_p
次の文章は、このデータベースで使用されている セキュリティプロトコルのドキュメントです。これらの記述は非常に専門的で、 根本的なセキュリティの基本をすでに知っているセキュリティ専門家のみを対象としています。 次の文章は、このデータベースで使用されている セキュリティプロトコルのドキュメントです。これらの記述は非常に専門的で、 根本的なセキュリティの基本をすでに知っているセキュリティ専門家のみを対象としています。
@advanced_1216_h3 @advanced_1220_h3
ユーザーパスワードの暗号化 ユーザーパスワードの暗号化
@advanced_1217_p @advanced_1221_p
ユーザーがデータベースに接続しようとする時、ユーザー名、@、パスワードの組み合わせは SHA-256 を使用してハッシュ化され、このハッシュ値がデータベースに送信されます。この手順は、クライアントとサーバー間の (暗号化されていない) 通信を傍受できるアタッカーがその値を再利用することに対して、保護を試みるものではありません。しかし、パスワードはクライアントとサーバー間で暗号化されていない接続を使用している時でさえも、プレーンテキストで送信されることはありません。これにより、もしユーザーが異なる場面で同じパスワードを再利用していても、このパスワードはある程度まで保護されます。詳細は "RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication" もご覧下さい。 ユーザーがデータベースに接続しようとする時、ユーザー名、@、パスワードの組み合わせは SHA-256 を使用してハッシュ化され、このハッシュ値がデータベースに送信されます。この手順は、クライアントとサーバー間の (暗号化されていない) 通信を傍受できるアタッカーがその値を再利用することに対して、保護を試みるものではありません。しかし、パスワードはクライアントとサーバー間で暗号化されていない接続を使用している時でさえも、プレーンテキストで送信されることはありません。これにより、もしユーザーが異なる場面で同じパスワードを再利用していても、このパスワードはある程度まで保護されます。詳細は "RFC 2617 - HTTP Authentication: Basic and Digest Access Authentication" もご覧下さい。
@advanced_1218_p @advanced_1222_p
新しいデータベース、またはユーザーが作られた時、暗号化された安全なランダムの 新しいsalt値が生成されます。salt値のサイズは 64 bit です。 ランダムなsaltを使用することによって、多数の異なった (通常、使用された) パスワードのハッシュ値を アタッカーに再計算されるリスクが軽減します。 新しいデータベース、またはユーザーが作られた時、暗号化された安全なランダムの 新しいsalt値が生成されます。salt値のサイズは 64 bit です。 ランダムなsaltを使用することによって、多数の異なった (通常、使用された) パスワードのハッシュ値を アタッカーに再計算されるリスクが軽減します。
@advanced_1219_p @advanced_1223_p
ユーザーパスワードのハッシュ値と (上記をご覧下さい) salt の組み合わせは、SHA-256を使用してハッシュ化されます。結果の値はデータベースに保存されます。ユーザーがデータベースに接続しようとする時、データベースは、ユーザーパスワードのハッシュ値と保存されたsalt値を結合し、そのハッシュ値を計算します。他の製品は複数の反復 (ハッシュ値を繰り返しハッシュする) を使用していますが、この製品では、サービス拒否攻撃 (アタッカーが偽のパスワードで接続しようとし、サーバーがそれぞれのパスワードのハッシュ値の計算に長い時間を費やすもの) のリスクを軽減するため、これを使用しません。理由は: もしアタッカーがハッシュ化されたパスワードにアクセスできるなら、プレーンテキストのデータにもアクセスできるため、パスワードはもはや必要ではなくなってしまいます。もしデータが保存されているコンピューターが他の方法で保護されていて、リモートアクセスのみが可能であるなら、反復は全く必要とされません。 ユーザーパスワードのハッシュ値と (上記をご覧下さい) salt の組み合わせは、SHA-256を使用してハッシュ化されます。結果の値はデータベースに保存されます。ユーザーがデータベースに接続しようとする時、データベースは、ユーザーパスワードのハッシュ値と保存されたsalt値を結合し、そのハッシュ値を計算します。他の製品は複数の反復 (ハッシュ値を繰り返しハッシュする) を使用していますが、この製品では、サービス拒否攻撃 (アタッカーが偽のパスワードで接続しようとし、サーバーがそれぞれのパスワードのハッシュ値の計算に長い時間を費やすもの) のリスクを軽減するため、これを使用しません。理由は: もしアタッカーがハッシュ化されたパスワードにアクセスできるなら、プレーンテキストのデータにもアクセスできるため、パスワードはもはや必要ではなくなってしまいます。もしデータが保存されているコンピューターが他の方法で保護されていて、リモートアクセスのみが可能であるなら、反復は全く必要とされません。
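The salt-and-hash scheme in the paragraph above can be sketched with the JDK's `MessageDigest`. This is a simplified illustration of the idea, not H2's actual implementation; the password-hash string and salt bytes are made up:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PasswordHashSketch {
    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available in the JDK
        }
    }
    // Combine the user password hash with the salt, then hash the combination.
    static byte[] storedValue(String userPasswordHash, byte[] salt) {
        byte[] pw = userPasswordHash.getBytes(StandardCharsets.UTF_8);
        byte[] combined = new byte[pw.length + salt.length];
        System.arraycopy(pw, 0, combined, 0, pw.length);
        System.arraycopy(salt, 0, combined, pw.length, salt.length);
        return sha256(combined); // 32 bytes; this is what gets stored
    }
    public static void main(String[] args) {
        // The real scheme uses a 64-bit cryptographically random salt; this one is fixed.
        byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(storedValue("examplePasswordHash", salt).length); // 32
    }
}
```

Because the salt differs per user, identical passwords produce different stored values, which is the precomputation defense the text describes.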
@advanced_1220_h3 @advanced_1224_h3
ファイル暗号化 ファイル暗号化
@advanced_1221_p @advanced_1225_p
データベースファイルは二つの異なるアルゴリズムを使用して、暗号化されます: AES-128 と XTEA です (32 ラウンドを使用)。 XTEAをサポートする理由はパフォーマンス (XTEAはAESのおよそ二倍の速さです) と、AESが突然壊れた場合、代わりとなるアルゴリズムを 持っているからです。 データベースファイルは二つの異なるアルゴリズムを使用して、暗号化されます: AES-128 と XTEA です (32 ラウンドを使用)。 XTEAをサポートする理由はパフォーマンス (XTEAはAESのおよそ二倍の速さです) と、AESが突然壊れた場合、代わりとなるアルゴリズムを 持っているからです。
@advanced_1222_p @advanced_1226_p
ユーザーが暗号化されたデータベースに接続しようとした時、"file" という単語と、@と、 ファイルパスワードの組み合わせは、SHA-256を使用してハッシュ化されます。 このハッシュ値はサーバーに送信されます。 ユーザーが暗号化されたデータベースに接続しようとした時、"file" という単語と、@と、 ファイルパスワードの組み合わせは、SHA-256を使用してハッシュ化されます。 このハッシュ値はサーバーに送信されます。
@advanced_1223_p @advanced_1227_p
新しいデータベースファイルが作られた時、暗号化された安全なランダムの新しいsalt値が生成されます。 このsaltのサイズは 64 bitです。ファイルパスワードのハッシュとsalt値の組み合わせは、 SHA-256を使用して1024回ハッシュ化されます。反復の理由は、アタッカーが通常のパスワードの ハッシュ値を計算するよりも困難にするためです。 新しいデータベースファイルが作られた時、暗号化された安全なランダムの新しいsalt値が生成されます。 このsaltのサイズは 64 bitです。ファイルパスワードのハッシュとsalt値の組み合わせは、 SHA-256を使用して1024回ハッシュ化されます。反復の理由は、アタッカーが通常のパスワードの ハッシュ値を計算するよりも困難にするためです。
@advanced_1224_p @advanced_1228_p
ハッシュ値の結果は、ブロック暗号アルゴリズム (AES-128、または 32ラウンドのXTEA) のためのキーとして 使用されます。その後、初期化ベクター (IV) キーは、再びSHA-256を使用してキーをハッシュ化することによって計算されます。これにより、IVはアタッカーに知られないことが保証されます。秘密のIVを使用する理由は、ウォーターマークアタック (電子透かし攻撃) を防御するためです。 ハッシュ値の結果は、ブロック暗号アルゴリズム (AES-128、または 32ラウンドのXTEA) のためのキーとして 使用されます。その後、初期化ベクター (IV) キーは、再びSHA-256を使用してキーをハッシュ化することによって計算されます。これにより、IVはアタッカーに知られないことが保証されます。秘密のIVを使用する理由は、ウォーターマークアタック (電子透かし攻撃) を防御するためです。
@advanced_1225_p @advanced_1229_p
データのブロックを保存する前に (それぞれのブロックは 8 バイト長)、次のオペレーションを実行します: 最初に、IVはIVキー (同じブロック暗号アルゴリズムを使用) でブロックナンバーを暗号化することによって計算されます。このIVはXORを使用してプレーンテキストと結合されます。結果データはAES-128、またはXTEAアルゴリズムを使用して暗号化されます。 データのブロックを保存する前に (それぞれのブロックは 8 バイト長)、次のオペレーションを実行します: 最初に、IVはIVキー (同じブロック暗号アルゴリズムを使用) でブロックナンバーを暗号化することによって計算されます。このIVはXORを使用してプレーンテキストと結合されます。結果データはAES-128、またはXTEAアルゴリズムを使用して暗号化されます。
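The per-block scheme above can be sketched with the JDK's AES cipher. This is a hedged sketch, not H2's actual file-format code: it uses AES-128 with 16-byte blocks (the XTEA variant in the text works on 8-byte blocks), and the key material is a made-up constant:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class BlockCipherSketch {
    // One raw AES-128 block operation (no padding, no mode chaining).
    static byte[] aes(int mode, byte[] key16, byte[] block16) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
            c.init(mode, new SecretKeySpec(key16, "AES"));
            return c.doFinal(block16);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
    // IV for a block: encrypt the block number with the (secret) IV key.
    static byte[] ivFor(byte[] ivKey, long blockNumber) {
        byte[] b = new byte[16];
        for (int i = 0; i < 8; i++) {
            b[15 - i] = (byte) (blockNumber >>> (8 * i));
        }
        return aes(Cipher.ENCRYPT_MODE, ivKey, b);
    }
    static byte[] xor(byte[] a, byte[] b) {
        byte[] r = new byte[a.length];
        for (int i = 0; i < a.length; i++) {
            r[i] = (byte) (a[i] ^ b[i]);
        }
        return r;
    }
    // Store: XOR the plain text with the IV, then encrypt with the data key.
    static byte[] encryptBlock(byte[] key, byte[] ivKey, long blockNumber, byte[] plain) {
        return aes(Cipher.ENCRYPT_MODE, key, xor(plain, ivFor(ivKey, blockNumber)));
    }
    // Read: decrypt with the data key, then XOR with the same IV.
    static byte[] decryptBlock(byte[] key, byte[] ivKey, long blockNumber, byte[] ct) {
        return xor(aes(Cipher.DECRYPT_MODE, key, ct), ivFor(ivKey, blockNumber));
    }
    public static void main(String[] args) {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.US_ASCII);
        byte[] ivKey = "fedcba9876543210".getBytes(StandardCharsets.US_ASCII);
        byte[] plain = "16-byte-block-ok".getBytes(StandardCharsets.US_ASCII);
        byte[] ct = encryptBlock(key, ivKey, 42, plain);
        System.out.println(new String(decryptBlock(key, ivKey, 42, ct),
                StandardCharsets.US_ASCII)); // round-trips to the plain text
    }
}
```

Because the IV depends on the block number, identical plaintext blocks at different positions encrypt to different ciphertext, which is why no data patterns are revealed.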
@advanced_1226_p @advanced_1230_p
復号化の時、オペレーションは反対の順序で行われます。最初に、ブロックはキーを使用して復号化され、 その後、IVはXORを使用して復号化されたテキストと結合されます。 復号化の時、オペレーションは反対の順序で行われます。最初に、ブロックはキーを使用して復号化され、 その後、IVはXORを使用して復号化されたテキストと結合されます。
@advanced_1227_p @advanced_1231_p
その結果、オペレーションのブロック暗号モードはCBC (Cipher-block chaining) ですが、 それぞれの連鎖はたったひとつのブロック長です。ECB (Electronic codebook) モードに優る利点は、 データのパターンが明らかにされない点で、複数のブロックCBCに優る利点は、 はじき出された暗号テキストビットは次のブロックではじき出されたプレーンテキストビットに伝播されないという点です。 その結果、オペレーションのブロック暗号モードはCBC (Cipher-block chaining) ですが、 それぞれの連鎖はたったひとつのブロック長です。ECB (Electronic codebook) モードに優る利点は、 データのパターンが明らかにされない点で、複数のブロックCBCに優る利点は、 はじき出された暗号テキストビットは次のブロックではじき出されたプレーンテキストビットに伝播されないという点です。
@advanced_1228_p @advanced_1232_p
データベース暗号化は、使用されていない間は (盗まれたノートパソコン等) 安全なデータベースだということを 意味します。これは、データベースが使用されている間に、アタッカーがファイルにアクセスしたというケースを 意味するのではありません。アタッカーが書き込みアクセスをした時、例えば、 彼はファイルの一部を古いバージョンに置き換え、データをこのように操ります。 データベース暗号化は、使用されていない間は (盗まれたノートパソコン等) 安全なデータベースだということを 意味します。これは、データベースが使用されている間に、アタッカーがファイルにアクセスしたというケースを 意味するのではありません。アタッカーが書き込みアクセスをした時、例えば、 彼はファイルの一部を古いバージョンに置き換え、データをこのように操ります。
@advanced_1229_p @advanced_1233_p
ファイル暗号化はデータベースエンジンのパフォーマンスを低速にします。非暗号化モードと比較すると、 データベースオペレーションは、XTEAを使用する時はおよそ2.2倍長くかかり、 AESを使用する時は2.5倍長くかかります (エンベッドモード)。 ファイル暗号化はデータベースエンジンのパフォーマンスを低速にします。非暗号化モードと比較すると、 データベースオペレーションは、XTEAを使用する時はおよそ2.2倍長くかかり、 AESを使用する時は2.5倍長くかかります (エンベッドモード)。
@advanced_1230_h3 @advanced_1234_h3
SSL/TLS 接続 SSL/TLS 接続
@advanced_1231_p @advanced_1235_p
遠隔SSL/TLS接続は、Java Secure Socket Extension (SSLServerSocket / SSLSocket) の使用をサポートしています。デフォルトでは、匿名のSSLは使用可能です。デフォルトの暗号化パッケージソフトは SSL_DH_anon_WITH_RC4_128_MD5 です。 遠隔SSL/TLS接続は、Java Secure Socket Extension (SSLServerSocket / SSLSocket) の使用をサポートしています。デフォルトでは、匿名のSSLは使用可能です。デフォルトの暗号化パッケージソフトは SSL_DH_anon_WITH_RC4_128_MD5 です。
@advanced_1232_h3 @advanced_1236_h3
HTTPS 接続 HTTPS 接続
@advanced_1233_p @advanced_1237_p
webサーバーは、SSLServerSocketを使用したHTTP と HTTPS接続をサポートします。 簡単に開始できるように、デフォルトの自己認証された証明書がありますが、 カスタム証明書も同様にサポートされています。 webサーバーは、SSLServerSocketを使用したHTTP と HTTPS接続をサポートします。 簡単に開始できるように、デフォルトの自己認証された証明書がありますが、 カスタム証明書も同様にサポートされています。
@advanced_1234_h2 @advanced_1238_h2
汎用一意識別子 (UUID) 汎用一意識別子 (UUID)
@advanced_1235_p @advanced_1239_p
このデータベースはUUIDをサポートしています。 また、暗号化強力疑似乱数ジェネレーターを使用して新しいUUIDを作成する関数をサポートしています。 同じ値をもつ二つの無作為なUUIDが存在する可能性は、確率論を使用して計算されることができます。 "Birthday Paradox" もご覧下さい。標準化された無作為に生成されたUUIDは、122の無作為なビットを保持しています。 4ビットはバージョン(無作為に生成されたUUID) に、2ビットはバリアント (Leach-Salz) に使用されます。 このデータベースは組み込み関数 RANDOM_UUID() を使用してこのようなUUIDを生成することをサポートしています。 ここに、値の数字が生成された後、二つの 同一のUUIDが生じる可能性を見積もる小さなプログラムがあります: このデータベースはUUIDをサポートしています。 また、暗号化強力疑似乱数ジェネレーターを使用して新しいUUIDを作成する関数をサポートしています。 同じ値をもつ二つの無作為なUUIDが存在する可能性は、確率論を使用して計算されることができます。 "Birthday Paradox" もご覧下さい。標準化された無作為に生成されたUUIDは、122の無作為なビットを保持しています。 4ビットはバージョン(無作為に生成されたUUID) に、2ビットはバリアント (Leach-Salz) に使用されます。 このデータベースは組み込み関数 RANDOM_UUID() を使用してこのようなUUIDを生成することをサポートしています。 ここに、値の数字が生成された後、二つの 同一のUUIDが生じる可能性を見積もる小さなプログラムがあります:
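The "small program" mentioned above can be approximated with the standard birthday-bound formula p ≈ 1 − e^(−n²/(2·2¹²²)) for n random 122-bit UUIDs. This sketch is an approximation of that estimate, not the exact program from the documentation:

```java
public class UuidCollisionSketch {
    // Birthday-paradox approximation for n random 122-bit UUIDs:
    //   p(collision) ~= 1 - exp(-n^2 / (2 * 2^122))
    static double collisionProbability(double n) {
        double buckets = Math.pow(2, 122);
        // expm1 keeps precision when the exponent is tiny.
        return -Math.expm1(-(n * n) / (2 * buckets));
    }
    public static void main(String[] args) {
        // Probability after generating 2^36, 2^41 and 2^46 values.
        for (double n : new double[] {Math.pow(2, 36), Math.pow(2, 41), Math.pow(2, 46)}) {
            System.out.println(n + " -> " + collisionProbability(n));
        }
    }
}
```

Even after tens of trillions of generated values the collision probability stays far below the meteorite figure quoted below.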
@advanced_1236_p @advanced_1240_p
いくつかの値は: いくつかの値は:
@advanced_1237_p @advanced_1241_p
人が隕石に衝突するという年間の危険性は、170億分の1と見積もられており、それは、確率がおよそ 0.000'000'000'06 だということを意味しています。 人が隕石に衝突するという年間の危険性は、170億分の1と見積もられており、それは、確率がおよそ 0.000'000'000'06 だということを意味しています。
@advanced_1238_h2 @advanced_1242_h2
システムプロパティから読み込まれた設定 システムプロパティから読み込まれた設定
@advanced_1239_p @advanced_1243_p
いくつかのデータベースの設定は、-DpropertyName=value を使用してコマンドラインで設定することができます。 通常、これらの設定は手動で変更することは必要とされていません。設定は大文字と小文字を区別しています。 例: いくつかのデータベースの設定は、-DpropertyName=value を使用してコマンドラインで設定することができます。 通常、これらの設定は手動で変更することは必要とされていません。設定は大文字と小文字を区別しています。 例:
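Reading such a setting follows the standard Java system-property pattern. A sketch using the `h2.check` name and its default from the table below; the `-D` flag is simulated with `setProperty` so the snippet is self-contained:

```java
public class SettingSketch {
    // Read a boolean setting, falling back to its documented default.
    static boolean getBooleanSetting(String name, boolean defaultValue) {
        return Boolean.parseBoolean(System.getProperty(name, String.valueOf(defaultValue)));
    }
    public static void main(String[] args) {
        // Normally supplied on the command line: java -Dh2.check=false ...
        System.setProperty("h2.check", "false"); // simulated here for the demo
        System.out.println(getBooleanSetting("h2.check", true));       // false
        System.out.println(getBooleanSetting("h2.objectCache", true)); // true (default)
    }
}
```

Note that property names are case sensitive, as the paragraph above states.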
@advanced_1240_p @advanced_1244_p
現在の設定の値は、INFORMATION_SCHEMA.SETTINGS テーブルで読み込むことが可能です。 現在の設定の値は、INFORMATION_SCHEMA.SETTINGS テーブルで読み込むことが可能です。
@advanced_1241_th @advanced_1245_th
設定 設定
@advanced_1242_th @advanced_1246_th
デフォルト デフォルト
@advanced_1243_th @advanced_1247_th
説明 説明
@advanced_1244_td @advanced_1248_td
h2.check h2.check
@advanced_1245_td @advanced_1249_td
true true
@advanced_1246_td @advanced_1250_td
データベースエンジンでのアサーション データベースエンジンでのアサーション
@advanced_1247_td @advanced_1251_td
h2.check2 h2.check2
@advanced_1248_td @advanced_1252_td
false false
@advanced_1249_td @advanced_1253_td
追加されたアサーション 追加されたアサーション
@advanced_1250_td @advanced_1254_td
h2.clientTraceDirectory h2.clientTraceDirectory
@advanced_1251_td @advanced_1255_td
trace.db/ trace.db/
@advanced_1252_td @advanced_1256_td
JDBCクライアントのトレースファイルが保存されているディレクトリ (クライアント / サーバーのみ) JDBCクライアントのトレースファイルが保存されているディレクトリ (クライアント / サーバーのみ)
@advanced_1253_td @advanced_1257_td
h2.emergencySpaceInitial h2.emergencySpaceInitial
@advanced_1254_td @advanced_1258_td
1048576 1048576
@advanced_1255_td @advanced_1259_td
ディスクの全ての問題を早く検出する "reserve" ファイルのサイズ ディスクの全ての問題を早く検出する "reserve" ファイルのサイズ
@advanced_1256_td @advanced_1260_td
h2.emergencySpaceMin h2.emergencySpaceMin
@advanced_1257_td @advanced_1261_td
131072 131072
@advanced_1258_td @advanced_1262_td
"reserve" ファイルの最小サイズ "reserve" ファイルの最小サイズ
@advanced_1259_td @advanced_1263_td
h2.lobCloseBetweenReads h2.lobCloseBetweenReads
@advanced_1260_td @advanced_1264_td
false false
@advanced_1261_td @advanced_1265_td
読み込みオペレーションの間にLOBファイルを閉じる 読み込みオペレーションの間にLOBファイルを閉じる
@advanced_1262_td @advanced_1266_td
h2.lobFilesInDirectories h2.lobFilesInDirectories
@advanced_1263_td @advanced_1267_td
false false
@advanced_1264_td @advanced_1268_td
LOBファイルをサブディレクトリに保存する LOBファイルをサブディレクトリに保存する
@advanced_1265_td @advanced_1269_td
h2.lobFilesPerDirectory h2.lobFilesPerDirectory
@advanced_1266_td @advanced_1270_td
256 256
@advanced_1267_td @advanced_1271_td
ディレクトリごとのLOBファイルの最大数 ディレクトリごとのLOBファイルの最大数
@advanced_1268_td @advanced_1272_td
h2.logAllErrors h2.logAllErrors
@advanced_1269_td @advanced_1273_td
false false
@advanced_1270_td @advanced_1274_td
あらゆる種類のエラーのスタックトレースをファイルに書き込む あらゆる種類のエラーのスタックトレースをファイルに書き込む
@advanced_1271_td @advanced_1275_td
h2.logAllErrorsFile h2.logAllErrorsFile
@advanced_1272_td @advanced_1276_td
h2errors.txt h2errors.txt
@advanced_1273_td @advanced_1277_td
エラーを記録するファイル名 エラーを記録するファイル名
@advanced_1274_td @advanced_1278_td
h2.maxFileRetry h2.maxFileRetry
@advanced_1275_td @advanced_1279_td
16 16
@advanced_1276_td @advanced_1280_td
ファイルの削除と改名を再試行する回数 ファイルの削除と改名を再試行する回数
@advanced_1277_td @advanced_1281_td
h2.multiThreadedKernel h2.multiThreadedKernel
@advanced_1278_td @advanced_1282_td
false false
@advanced_1279_td @advanced_1283_td
同時に実行する複数セッションを許可する 同時に実行する複数セッションを許可する
@advanced_1280_td @advanced_1284_td
h2.objectCache h2.objectCache
@advanced_1281_td @advanced_1285_td
true true
@advanced_1282_td @advanced_1286_td
一般に使用されるオブジェクトをキャッシュする (integer、string) 一般に使用されるオブジェクトをキャッシュする (integer、string)
@advanced_1283_td @advanced_1287_td
h2.objectCacheMaxPerElementSize h2.objectCacheMaxPerElementSize
@advanced_1284_td @advanced_1288_td
4096 4096
@advanced_1285_td @advanced_1289_td
キャッシュのオブジェクトの最大サイズ キャッシュのオブジェクトの最大サイズ
@advanced_1286_td @advanced_1290_td
h2.objectCacheSize h2.objectCacheSize
@advanced_1287_td @advanced_1291_td
1024 1024
@advanced_1288_td @advanced_1292_td
オブジェクトキャッシュのサイズ オブジェクトキャッシュのサイズ
@advanced_1289_td @advanced_1293_td
h2.optimizeEvaluatableSubqueries h2.optimizeEvaluatableSubqueries
@advanced_1290_td @advanced_1294_td
true true
@advanced_1291_td @advanced_1295_td
外部クエリに依存していないサブクエリの最適化 外部クエリに依存していないサブクエリの最適化
@advanced_1292_td @advanced_1296_td
h2.optimizeIn h2.optimizeIn
@advanced_1293_td @advanced_1297_td
true true
@advanced_1294_td @advanced_1298_td
最適化 IN(...) 比較 最適化 IN(...) 比較
@advanced_1295_td @advanced_1299_td
h2.optimizeMinMax h2.optimizeMinMax
@advanced_1296_td @advanced_1300_td
true true
@advanced_1297_td @advanced_1301_td
最適化 MIN と MAX の集合関数 最適化 MIN と MAX の集合関数
@advanced_1298_td @advanced_1302_td
h2.optimizeSubqueryCache h2.optimizeSubqueryCache
@advanced_1299_td @advanced_1303_td
true true
@advanced_1300_td @advanced_1304_td
サブクエリの結果をキャッシュ サブクエリの結果をキャッシュ
@advanced_1301_td @advanced_1305_td
h2.overflowExceptions h2.overflowExceptions
@advanced_1302_td @advanced_1306_td
true true
@advanced_1303_td @advanced_1307_td
integerのオーバーフローに例外を投げる integerのオーバーフローに例外を投げる
@advanced_1304_td @advanced_1308_td
h2.recompileAlways h2.recompileAlways
@advanced_1305_td @advanced_1309_td
false false
@advanced_1306_td @advanced_1310_td
常にprepared statementを再コンパイルする 常にprepared statementを再コンパイルする
@advanced_1307_td @advanced_1311_td
h2.redoBufferSize h2.redoBufferSize
@advanced_1308_td @advanced_1312_td
262144 262144
@advanced_1309_td @advanced_1313_td
redo bufferのサイズ (回復時に起動で使用) redo bufferのサイズ (回復時に起動で使用)
@advanced_1310_td @advanced_1314_td
h2.runFinalizers h2.runFinalizers
@advanced_1311_td @advanced_1315_td
true true
@advanced_1312_td @advanced_1316_td
閉じられていない接続を検出するために finalizers を実行する 閉じられていない接続を検出するために finalizers を実行する
@advanced_1313_td @advanced_1317_td
h2.scriptDirectory h2.scriptDirectory
@advanced_1314_td @advanced_1318_td
スクリプトファイルが保存されるか、読み込まれる相対、または絶対ディレクトリ スクリプトファイルが保存されるか、読み込まれる相対、または絶対ディレクトリ
@advanced_1315_td @advanced_1319_td
h2.serverCachedObjects h2.serverCachedObjects
@advanced_1316_td @advanced_1320_td
64 64
@advanced_1317_td @advanced_1321_td
TCPサーバー: セッションごとのキャッシュオブジェクトの数 TCPサーバー: セッションごとのキャッシュオブジェクトの数
@advanced_1318_td @advanced_1322_td
h2.serverSmallResultSetSize h2.serverSmallResultSetSize
@advanced_1319_td @advanced_1323_td
100 100
@advanced_1320_td @advanced_1324_td
TCPサーバー: このサイズ以下のresult setがひとつのブロックに送信される TCPサーバー: このサイズ以下のresult setがひとつのブロックに送信される
@advanced_1321_h2 @advanced_1325_h2
用語集とリンク 用語集とリンク
@advanced_1322_th @advanced_1326_th
用語 用語
@advanced_1323_th @advanced_1327_th
説明 説明
@advanced_1324_td @advanced_1328_td
AES-128 AES-128
@advanced_1325_td @advanced_1329_td
ブロック暗号化アルゴリズム。こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia: AES</a> ブロック暗号化アルゴリズム。こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia: AES</a>
@advanced_1326_td @advanced_1330_td
Birthday Paradox Birthday Paradox
@advanced_1327_td @advanced_1331_td
部屋にいる二人が同じ誕生日の可能性が期待された以上に高いということを説明する。 また、有効なランダムに生成されたUUID。こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia: Birthday Paradox</a> 部屋にいる二人が同じ誕生日の可能性が期待された以上に高いということを説明する。 また、有効なランダムに生成されたUUID。こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia: Birthday Paradox</a>
@advanced_1328_td @advanced_1332_td
Digest Digest
@advanced_1329_td @advanced_1333_td
パスワードを保護するプロトコル (データは保護しません)。こちらもご覧下さい:<a href="http://www.faqs.org/rfcs/rfc2617.html">RFC 2617: HTTP Digest Access Authentication</a> パスワードを保護するプロトコル (データは保護しません)。こちらもご覧下さい:<a href="http://www.faqs.org/rfcs/rfc2617.html">RFC 2617: HTTP Digest Access Authentication</a>
@advanced_1330_td @advanced_1334_td
GCJ GCJ
@advanced_1331_td @advanced_1335_td
JavaのGNUコンパイラー<a href="http://gcc.gnu.org/java/">http://gcc.gnu.org/java/</a> and <a href="http://nativej.mtsystems.ch">http://nativej.mtsystems.ch/ (not free any more)</a> JavaのGNUコンパイラー<a href="http://gcc.gnu.org/java/">http://gcc.gnu.org/java/</a> and <a href="http://nativej.mtsystems.ch">http://nativej.mtsystems.ch/ (not free any more)</a>
@advanced_1332_td @advanced_1336_td
HTTPS HTTPS
@advanced_1333_td @advanced_1337_td
セキュリティをHTTP接続に提供するプロトコル。こちらもご覧下さい: <a href="http://www.ietf.org/rfc/rfc2818.txt">RFC 2818: HTTP Over TLS</a> セキュリティをHTTP接続に提供するプロトコル。こちらもご覧下さい: <a href="http://www.ietf.org/rfc/rfc2818.txt">RFC 2818: HTTP Over TLS</a>
@advanced_1334_td @advanced_1338_td
Modes of Operation Modes of Operation
@advanced_1335_a @advanced_1339_a
Wikipedia: Block cipher modes of operation Wikipedia: Block cipher modes of operation
@advanced_1336_td @advanced_1340_td
Salt Salt
@advanced_1337_td @advanced_1341_td
パスワードのセキュリティを増大する乱数。こちらもご覧下さい: <a href="http://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia: Key derivation function</a> パスワードのセキュリティを増大する乱数。こちらもご覧下さい: <a href="http://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia: Key derivation function</a>
@advanced_1338_td @advanced_1342_td
SHA-256 SHA-256
@advanced_1339_td @advanced_1343_td
暗号化の一方向のハッシュ関数。こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/SHA_family">Wikipedia: SHA hash functions</a> 暗号化の一方向のハッシュ関数。こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/SHA_family">Wikipedia: SHA hash functions</a>
@advanced_1340_td @advanced_1344_td
SQLインジェクション SQLインジェクション
@advanced_1341_td @advanced_1345_td
組み込みのユーザー入力でアプリケーションがSQLステートメントを生成するセキュリティ脆弱性 こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/SQL_injection">Wikipedia: SQL Injection</a> 組み込みのユーザー入力でアプリケーションがSQLステートメントを生成するセキュリティ脆弱性 こちらもご覧下さい:<a href="http://en.wikipedia.org/wiki/SQL_injection">Wikipedia: SQL Injection</a>
@advanced_1342_td @advanced_1346_td
Watermark Attack (透かし攻撃) Watermark Attack (透かし攻撃)
@advanced_1343_td @advanced_1347_td
復号化することなくあるデータの存在を証明できる、ある暗号化プログラムのセキュリティ問題。 詳細は、インターネットで "watermark attack cryptoloop" を検索して下さい。 復号化することなくあるデータの存在を証明できる、ある暗号化プログラムのセキュリティ問題。 詳細は、インターネットで "watermark attack cryptoloop" を検索して下さい。
@advanced_1344_td @advanced_1348_td
SSL/TLS SSL/TLS
@advanced_1345_td @advanced_1349_td
Secure Sockets Layer / Transport Layer Security。こちらもご覧下さい: <a href="http://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a> Secure Sockets Layer / Transport Layer Security。こちらもご覧下さい: <a href="http://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a>
@advanced_1346_td @advanced_1350_td
XTEA XTEA
@advanced_1347_td @advanced_1351_td
ブロック暗号化アルゴリズム。こちらもご覧下さい: <a href="http://en.wikipedia.org/wiki/XTEA">Wikipedia: XTEA</a> ブロック暗号化アルゴリズム。こちらもご覧下さい: <a href="http://en.wikipedia.org/wiki/XTEA">Wikipedia: XTEA</a>
@build_1000_h1 @build_1000_h1
......
...@@ -3,349 +3,353 @@ advanced_1001_a=Result Sets ...@@ -3,349 +3,353 @@ advanced_1001_a=Result Sets
advanced_1002_a=Large Objects advanced_1002_a=Large Objects
advanced_1003_a=Linked Tables advanced_1003_a=Linked Tables
advanced_1004_a=Transaction Isolation advanced_1004_a=Transaction Isolation
advanced_1005_a=Clustering / High Availability advanced_1005_a=Multi-Version Concurrency Control (MVCC)
advanced_1006_a=Two Phase Commit advanced_1006_a=Clustering / High Availability
advanced_1007_a=Compatibility advanced_1007_a=Two Phase Commit
advanced_1008_a=Run as Windows Service advanced_1008_a=Compatibility
advanced_1009_a=ODBC Driver advanced_1009_a=Run as Windows Service
advanced_1010_a=ACID advanced_1010_a=ODBC Driver
advanced_1011_a=Durability Problems advanced_1011_a=ACID
advanced_1012_a=Using the Recover Tool advanced_1012_a=Durability Problems
advanced_1013_a=File Locking Protocols advanced_1013_a=Using the Recover Tool
advanced_1014_a=Protection against SQL Injection advanced_1014_a=File Locking Protocols
advanced_1015_a=Security Protocols advanced_1015_a=Protection against SQL Injection
advanced_1016_a=Universally Unique Identifiers (UUID) advanced_1016_a=Security Protocols
advanced_1017_a=Settings Read from System Properties advanced_1017_a=Universally Unique Identifiers (UUID)
advanced_1018_a=Glossary and Links advanced_1018_a=Settings Read from System Properties
advanced_1019_h2=Result Sets advanced_1019_a=Glossary and Links
advanced_1020_h3=Limiting the Number of Rows advanced_1020_h2=Result Sets
advanced_1021_p=Before the result is returned to the application, all rows are read by the database. Server side cursors are not supported currently. If only the first few rows are interesting for the application, then the result set size should be limited to improve the performance. This can be done using LIMIT in a query (example\: SELECT * FROM TEST LIMIT 100), or by using Statement.setMaxRows(max). advanced_1021_h3=Limiting the Number of Rows
advanced_1022_h3=Large Result Sets and External Sorting advanced_1022_p=Before the result is returned to the application, all rows are read by the database. Server side cursors are not supported currently. If only the first few rows are interesting for the application, then the result set size should be limited to improve the performance. This can be done using LIMIT in a query (example\: SELECT * FROM TEST LIMIT 100), or by using Statement.setMaxRows(max).
advanced_1023_p=For result sets larger than 1000 rows, the result is buffered to disk. If ORDER BY is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together. advanced_1023_h3=Large Result Sets and External Sorting
advanced_1024_h2=Large Objects advanced_1024_p=For result sets larger than 1000 rows, the result is buffered to disk. If ORDER BY is used, the sorting is done using an external sort algorithm. In this case, each block of rows is sorted using quick sort, then written to disk; when reading the data, the blocks are merged together.
advanced_1025_h3=Storing and Reading Large Objects advanced_1025_h2=Large Objects
advanced_1026_p=If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. If the client/server mode is used, the BLOB and CLOB data is fully read into memory when accessed. In this case, the size of a BLOB or CLOB is limited by the memory. advanced_1026_h3=Storing and Reading Large Objects
advanced_1027_h2=Linked Tables advanced_1027_p=If it is possible that the objects don't fit into memory, then the data type CLOB (for textual data) or BLOB (for binary data) should be used. For these data types, the objects are not fully read into memory, by using streams. To store a BLOB, use PreparedStatement.setBinaryStream. To store a CLOB, use PreparedStatement.setCharacterStream. To read a BLOB, use ResultSet.getBinaryStream, and to read a CLOB, use ResultSet.getCharacterStream. If the client/server mode is used, the BLOB and CLOB data is fully read into memory when accessed. In this case, the size of a BLOB or CLOB is limited by the memory.
advanced_1028_p=This database supports linked tables, which means tables that don't exist in the current database but are just links to another database. To create such a link, use the CREATE LINKED TABLE statement\: advanced_1028_h2=Linked Tables
advanced_1029_p=It is then possible to access the table in the usual way. There is a restriction when inserting data to this table\: When inserting or updating rows into the table, NULL and values that are not set in the insert statement are both inserted as NULL. This may not have the desired effect if a default value in the target table is other than NULL. advanced_1029_p=This database supports linked tables, which means tables that don't exist in the current database but are just links to another database. To create such a link, use the CREATE LINKED TABLE statement\:
advanced_1030_p=For each linked table a new connection is opened. This can be a problem for some databases when using many linked tables. For Oracle XE, the maximum number of connections can be increased. Oracle XE needs to be restarted after changing these values\: advanced_1030_p=It is then possible to access the table in the usual way. There is a restriction when inserting data to this table\: When inserting or updating rows into the table, NULL and values that are not set in the insert statement are both inserted as NULL. This may not have the desired effect if a default value in the target table is other than NULL.
advanced_1031_h2=Transaction Isolation advanced_1031_p=For each linked table a new connection is opened. This can be a problem for some databases when using many linked tables. For Oracle XE, the maximum number of connections can be increased. Oracle XE needs to be restarted after changing these values\:
advanced_1032_p=This database supports the following transaction isolation levels\: advanced_1032_h2=Transaction Isolation
advanced_1033_b=Read Committed advanced_1033_p=This database supports the following transaction isolation levels\:
advanced_1034_li=This is the default level. Read locks are released immediately. Higher concurrency is possible when using this level. advanced_1034_b=Read Committed
advanced_1035_li=To enable, execute the SQL statement 'SET LOCK_MODE 3' advanced_1035_li=This is the default level. Read locks are released immediately. Higher concurrency is possible when using this level.
advanced_1036_li=or append ;LOCK_MODE\=3 to the database URL\: jdbc\:h2\:~/test;LOCK_MODE\=3 advanced_1036_li=To enable, execute the SQL statement 'SET LOCK_MODE 3'
advanced_1037_b=Serializable advanced_1037_li=or append ;LOCK_MODE\=3 to the database URL\: jdbc\:h2\:~/test;LOCK_MODE\=3
advanced_1038_li=To enable, execute the SQL statement 'SET LOCK_MODE 1' advanced_1038_b=Serializable
advanced_1039_li=or append ;LOCK_MODE\=1 to the database URL\: jdbc\:h2\:~/test;LOCK_MODE\=1 advanced_1039_li=To enable, execute the SQL statement 'SET LOCK_MODE 1'
advanced_1040_b=Read Uncommitted advanced_1040_li=or append ;LOCK_MODE\=1 to the database URL\: jdbc\:h2\:~/test;LOCK_MODE\=1
advanced_1041_li=This level means that transaction isolation is disabled. advanced_1041_b=Read Uncommitted
advanced_1042_li=To enable, execute the SQL statement 'SET LOCK_MODE 0' advanced_1042_li=This level means that transaction isolation is disabled.
advanced_1043_li=or append ;LOCK_MODE\=0 to the database URL\: jdbc\:h2\:~/test;LOCK_MODE\=0 advanced_1043_li=To enable, execute the SQL statement 'SET LOCK_MODE 0'
advanced_1044_p=When using the isolation level 'serializable', dirty reads, non-repeatable reads, and phantom reads are prohibited. advanced_1044_li=or append ;LOCK_MODE\=0 to the database URL\: jdbc\:h2\:~/test;LOCK_MODE\=0
advanced_1045_b=Dirty Reads advanced_1045_p=When using the isolation level 'serializable', dirty reads, non-repeatable reads, and phantom reads are prohibited.
advanced_1046_li=Means a connection can read uncommitted changes made by another connection. advanced_1046_b=Dirty Reads
advanced_1047_li=Possible with\: read uncommitted advanced_1047_li=Means a connection can read uncommitted changes made by another connection.
advanced_1048_b=Non-Repeatable Reads advanced_1048_li=Possible with\: read uncommitted
advanced_1049_li=A connection reads a row, another connection changes a row and commits, and the first connection re-reads the same row and gets the new result. advanced_1049_b=Non-Repeatable Reads
advanced_1050_li=Possible with\: read uncommitted, read committed advanced_1050_li=A connection reads a row, another connection changes a row and commits, and the first connection re-reads the same row and gets the new result.
advanced_1051_b=Phantom Reads advanced_1051_li=Possible with\: read uncommitted, read committed
advanced_1052_li=A connection reads a set of rows using a condition, another connection inserts a row that falls in this condition and commits, then the first connection re-reads using the same condition and gets the new row. advanced_1052_b=Phantom Reads
advanced_1053_li=Possible with\: read uncommitted, read committed advanced_1053_li=A connection reads a set of rows using a condition, another connection inserts a row that falls in this condition and commits, then the first connection re-reads using the same condition and gets the new row.
advanced_1054_h3=Table Level Locking advanced_1054_li=Possible with\: read uncommitted, read committed
advanced_1055_p=The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. It is allowed that other connections also have a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, other connections must not have any locks on the object. After the connection commits, all locks are released. This database keeps all locks in memory. advanced_1055_h3=Table Level Locking
advanced_1056_h3=Lock Timeout advanced_1056_p=The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. It is allowed that other connections also have a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, other connections must not have any locks on the object. After the connection commits, all locks are released. This database keeps all locks in memory.
advanced_1057_p=If a connection cannot get a lock on an object, the connection waits for some amount of time (the lock timeout). During this time, hopefully the connection holding the lock commits and it is then possible to get the lock. If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection. advanced_1057_h3=Lock Timeout
advanced_1058_h2=Clustering / High Availability advanced_1058_p=If a connection cannot get a lock on an object, the connection waits for some amount of time (the lock timeout). During this time, hopefully the connection holding the lock commits and it is then possible to get the lock. If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection.
advanced_1059_p=This database supports a simple clustering / high availability mechanism. The architecture is\: two database servers run on two different computers, and on both computers is a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed only on one server until the other server is back up. advanced_1059_h2=Multi-Version Concurrency Control (MVCC)
advanced_1060_p=Clustering can only be used in the server mode (the embedded mode does not support clustering). It is possible to restore the cluster without stopping the server, however it is critical that no other application is changing the data in the first database while the second database is restored, so restoring the cluster is currently a manual process. advanced_1060_p=The MVCC feature allows higher concurrency than using (table level or row level) locks. When using MVCC in this database, delete, insert and update operations will only issue a shared lock on the table. Tables are still locked exclusively when adding or removing columns, when dropping the table, and when using SELECT ... FOR UPDATE. Connections only 'see' committed data, and their own changes. That means, if connection A updates a row but doesn't commit this change yet, connection B will see the old value. Only when the change is committed, the new value is visible to other connections (read committed). If multiple connections concurrently try to update the same row, this database fails fast\: a concurrent update exception is thrown.
advanced_1061_p=To initialize the cluster, use the following steps\: advanced_1061_p=To use the MVCC feature, append MVCC\=TRUE to the database URL\:
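The read-committed behavior described above can be illustrated with two sessions. This is a sketch; the ACCOUNT table and the values are illustrative, and both sessions are assumed to connect with the URL jdbc\:h2\:~/test;MVCC\=TRUE with autocommit switched off.

```sql
-- Session A:
UPDATE ACCOUNT SET BALANCE = 90 WHERE ID = 1;  -- not yet committed

-- Session B:
SELECT BALANCE FROM ACCOUNT WHERE ID = 1;      -- still sees the old value

-- Session A:
COMMIT;

-- Session B:
SELECT BALANCE FROM ACCOUNT WHERE ID = 1;      -- now sees the new value (90)

-- Had session B tried to UPDATE the same row before A committed,
-- it would have received a concurrent update exception (fail fast).
```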
advanced_1062_h2=Clustering / High Availability
advanced_1063_p=This database supports a simple clustering / high availability mechanism. The architecture is\: two database servers run on two different computers, and on both computers is a copy of the same database. If both servers run, each database operation is executed on both computers. If one server fails (power, hardware or network failure), the other server can still continue to work. From this point on, the operations will be executed only on one server until the other server is back up.
advanced_1064_p=Clustering can only be used in the server mode (the embedded mode does not support clustering). It is possible to restore the cluster without stopping the server; however, it is critical that no other application is changing the data in the first database while the second database is restored, so restoring the cluster is currently a manual process.
advanced_1065_p=To initialize the cluster, use the following steps\:
advanced_1066_li=Create a database
advanced_1067_li=Use the CreateCluster tool to copy the database to another location and initialize the clustering. Afterwards, you have two databases containing the same data.
advanced_1068_li=Start two servers (one for each copy of the database)
advanced_1069_li=You are now ready to connect to the databases with the client application(s)
advanced_1070_h3=Using the CreateCluster Tool
advanced_1071_p=To understand how clustering works, please try out the following example. In this example, the two databases reside on the same computer, but usually, the databases will be on different servers.
advanced_1072_li=Create two directories\: server1 and server2. Each directory will simulate a directory on a computer.
advanced_1073_li=Start a TCP server pointing to the first directory. You can do this using the command line\:
advanced_1074_li=Start a second TCP server pointing to the second directory. This will simulate a server running on a second (redundant) computer. You can do this using the command line\:
advanced_1075_li=Use the CreateCluster tool to initialize clustering. This will automatically create a new, empty database if it does not exist. Run the tool on the command line\:
advanced_1076_li=You can now connect to the databases using an application or the H2 Console using the JDBC URL jdbc\:h2\:tcp\://localhost\:9101,localhost\:9102/test
advanced_1077_li=If you stop a server (by killing the process), you will notice that the other machine continues to work, and therefore the database is still accessible.
advanced_1078_li=To restore the cluster, you first need to delete the database that failed, then restart the server that was stopped, and re-run the CreateCluster tool.
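The command lines referred to in the steps above could look as follows. This is a sketch; the jar file name (h2.jar) and the ports (9101, 9102, matching the JDBC URL above) are examples, and the tool options are those of the Server and CreateCluster tools shipped with H2.

```shell
# Start a TCP server for each copy of the database:
java -cp h2.jar org.h2.tools.Server -tcp -tcpPort 9101 -baseDir server1
java -cp h2.jar org.h2.tools.Server -tcp -tcpPort 9102 -baseDir server2

# Initialize clustering: copy the database and set the cluster server list:
java -cp h2.jar org.h2.tools.CreateCluster \
    -urlSource jdbc:h2:tcp://localhost:9101/test \
    -urlTarget jdbc:h2:tcp://localhost:9102/test \
    -user sa -serverList localhost:9101,localhost:9102
```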
advanced_1079_h3=Clustering Algorithm and Limitations
advanced_1080_p=Read-only queries are only executed against the first cluster node, but all other statements are executed against all nodes. There is currently no load balancing, in order to avoid problems with transactions. The following functions may yield different results on different cluster nodes and must be executed with care\: RANDOM_UUID(), SECURE_RAND(), SESSION_ID(), MEMORY_FREE(), MEMORY_USED(), CSVREAD(), CSVWRITE(), RAND() [when not using a seed]. Those functions should not be used directly in modifying statements (for example INSERT, UPDATE, or MERGE). However, they can be used in read-only statements and the result can then be used for modifying statements.
advanced_1081_h2=Two Phase Commit
advanced_1082_p=The two phase commit protocol is supported. 2-phase-commit works as follows\:
advanced_1083_li=Autocommit needs to be switched off
advanced_1084_li=A transaction is started, for example by inserting a row
advanced_1085_li=The transaction is marked 'prepared' by executing the SQL statement <code>PREPARE COMMIT transactionName</code>
advanced_1086_li=The transaction can now be committed or rolled back
advanced_1087_li=If a problem occurs before the transaction was successfully committed or rolled back (for example because a network problem occurred), the transaction is in the state 'in-doubt'
advanced_1088_li=When re-connecting to the database, the in-doubt transactions can be listed with <code>SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT</code>
advanced_1089_li=Each transaction in this list must now be committed or rolled back by executing <code>COMMIT TRANSACTION transactionName</code> or <code>ROLLBACK TRANSACTION transactionName</code>
advanced_1090_li=The database needs to be closed and re-opened to apply the changes
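Put together, the steps above look like this in SQL. The transaction name tx1 and the inserted row are illustrative; the statements themselves (PREPARE COMMIT, the IN_DOUBT view, COMMIT/ROLLBACK TRANSACTION) are the ones named in the list.

```sql
SET AUTOCOMMIT FALSE;
INSERT INTO TEST VALUES(1);    -- starts a transaction
PREPARE COMMIT tx1;            -- marks it 'prepared'
-- if the connection is lost here, the transaction is 'in-doubt'

-- after re-connecting:
SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT;
COMMIT TRANSACTION tx1;        -- or: ROLLBACK TRANSACTION tx1
```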
advanced_1091_h2=Compatibility
advanced_1092_p=This database is (up to a certain point) compatible with other databases such as HSQLDB, MySQL and PostgreSQL. There are certain areas where H2 is incompatible.
advanced_1093_h3=Transaction Commit when Autocommit is On
advanced_1094_p=At this time, this database engine commits a transaction (if autocommit is switched on) just before returning the result. For a query, this means the transaction is committed even before the application scans through the result set, and before the result set is closed. Other database engines may commit the transaction in this case when the result set is closed.
advanced_1095_h3=Keywords / Reserved Words
advanced_1096_p=There is a list of keywords that can't be used as identifiers (table names, column names and so on), unless they are quoted (surrounded with double quotes). The list is currently\:
advanced_1097_p=CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE, CROSS, DISTINCT, EXCEPT, EXISTS, FROM, FOR, FALSE, FULL, GROUP, HAVING, INNER, INTERSECT, IS, JOIN, LIKE, MINUS, NATURAL, NOT, NULL, ON, ORDER, PRIMARY, ROWNUM, SELECT, SYSDATE, SYSTIME, SYSTIMESTAMP, TODAY, TRUE, UNION, WHERE
advanced_1098_p=Certain words of this list are keywords because they are functions that can be used without '()' for compatibility, for example CURRENT_TIMESTAMP.
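For example, to use one of the reserved words above as an identifier anyway, it must be quoted with double quotes, as described. The table and column names here are purely illustrative.

```sql
-- ORDER and WHERE are reserved, so they must be quoted to be used as identifiers:
CREATE TABLE "ORDER"(ID INT, "WHERE" VARCHAR(255));
SELECT "WHERE" FROM "ORDER";
```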
advanced_1099_h2=Run as Windows Service
advanced_1100_p=Using a native wrapper / adapter, Java applications can be run as a Windows Service. There are various tools available to do that. The Java Service Wrapper from Tanuki Software, Inc. ( <a href\="http\://wrapper.tanukisoftware.org">http\://wrapper.tanukisoftware.org</a> ) is included in the installation. Batch files are provided to install, start, stop and uninstall the H2 Database Engine Service. This service contains the TCP Server and the H2 Console web application. The batch files are located in the directory H2/service.
advanced_1101_h3=Install the Service
advanced_1102_p=The service needs to be registered as a Windows Service first. To do that, double click on 1_install_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
advanced_1103_h3=Start the Service
advanced_1104_p=You can start the H2 Database Engine Service using the service manager of Windows, or by double clicking on 2_start_service.bat. Please note that the batch file does not print an error message if the service is not installed.
advanced_1105_h3=Connect to the H2 Console
advanced_1106_p=After installing and starting the service, you can connect to the H2 Console application using a browser. Double click on 3_start_browser.bat to do that. The default port (8082) is hard coded in the batch file.
advanced_1107_h3=Stop the Service
advanced_1108_p=To stop the service, double click on 4_stop_service.bat. Please note that the batch file does not print an error message if the service is not installed or started.
advanced_1109_h3=Uninstall the Service
advanced_1110_p=To uninstall the service, double click on 5_uninstall_service.bat. If successful, a command prompt window will pop up and disappear immediately. If not, a message will appear.
advanced_1111_h2=ODBC Driver
advanced_1112_p=This database does not come with its own ODBC driver at this time, but it supports the PostgreSQL network protocol. Therefore, the PostgreSQL ODBC driver can be used. Support for the PostgreSQL network protocol is quite new and should be viewed as experimental. It should not be used for production applications.
advanced_1113_p=At this time, the PostgreSQL ODBC driver does not work on 64 bit versions of Windows. For more information, see\: <a href\="http\://svr5.postgresql.org/pgsql-odbc/2005-09/msg00127.php">ODBC Driver on Windows 64 bit</a>
advanced_1114_h3=ODBC Installation
advanced_1115_p=First, the ODBC driver must be installed. Any recent PostgreSQL ODBC driver should work; however, version 8.2.4 or newer is recommended. The Windows version of the PostgreSQL ODBC driver is available at <a href\="http\://www.postgresql.org/ftp/odbc/versions/msi">http\://www.postgresql.org/ftp/odbc/versions/msi</a>.
advanced_1116_h3=Starting the Server
advanced_1117_p=After installing the ODBC driver, start the H2 Server using the command line\:
advanced_1118_p=The PG Server (PG for PostgreSQL protocol) is started as well. By default, databases are stored in the current working directory where the server is started. Use -baseDir to save databases in another directory, for example the user home directory\:
advanced_1119_p=The PG server can be started and stopped from within a Java application as follows\:
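A sketch of such an application. It requires the H2 jar on the classpath; the createPgServer method is part of H2's org.h2.tools.Server API, and the -baseDir argument is optional.

```java
import org.h2.tools.Server;

public class PgServerDemo {
    public static void main(String[] args) throws Exception {
        // start the PG server (listening on port 5435 by default)
        Server server = Server.createPgServer("-baseDir", "~");
        server.start();
        // ... the server now accepts PostgreSQL protocol connections ...
        server.stop();
    }
}
```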
advanced_1120_p=By default, only connections from localhost are allowed. To allow remote connections, use <code>-pgAllowOthers true</code> when starting the server.
advanced_1121_h3=ODBC Configuration
advanced_1122_p=After installing the driver, a new Data Source must be added. In Windows, run <code>odbcad32.exe</code> to open the Data Source Administrator. Then click on 'Add...' and select the PostgreSQL Unicode driver. Then click 'Finish'. You will be able to change the connection properties\:
advanced_1123_th=Property
advanced_1124_th=Example
advanced_1125_th=Remarks
advanced_1126_td=Data Source
advanced_1127_td=H2 Test
advanced_1128_td=The name of the ODBC Data Source
advanced_1129_td=Database
advanced_1130_td=test
advanced_1131_td=The database name. Only simple names are supported at this time;
advanced_1132_td=relative or absolute paths are not supported in the database name.
advanced_1133_td=By default, the database is stored in the current working directory
advanced_1134_td=where the Server is started except when the -baseDir setting is used.
advanced_1135_td=The name must be at least 3 characters.
advanced_1136_td=Server
advanced_1137_td=localhost
advanced_1138_td=The server name or IP address.
advanced_1139_td=By default, only connections from localhost are allowed.
advanced_1140_td=User Name
advanced_1141_td=sa
advanced_1142_td=The database user name.
advanced_1143_td=SSL Mode
advanced_1144_td=disabled
advanced_1145_td=At this time, SSL is not supported.
advanced_1146_td=Port
advanced_1147_td=5435
advanced_1148_td=The port where the PG Server is listening.
advanced_1149_td=Password
advanced_1150_td=sa
advanced_1151_td=The database password.
advanced_1152_p=Afterwards, you may use this data source.
advanced_1153_h3=PG Protocol Support Limitations
advanced_1154_p=At this time, only a subset of the PostgreSQL network protocol is implemented. Also, there may be compatibility problems on the SQL level, with the catalog, or with text encoding. Problems are fixed as they are found. Currently, statements cannot be cancelled when using the PG protocol.
advanced_1155_h3=Security Considerations
advanced_1156_p=Currently, the PG Server does not support challenge-response authentication or password encryption. This may be a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable to the attacker. Also, it is currently not possible to use encrypted SSL connections. Therefore the ODBC driver should not be used where security is important.
advanced_1157_h2=ACID
advanced_1158_p=In the database world, ACID stands for\:
advanced_1159_li=Atomicity\: Transactions must be atomic, meaning either all tasks are performed or none.
advanced_1160_li=Consistency\: All operations must comply with the defined constraints.
advanced_1161_li=Isolation\: Transactions must be isolated from each other.
advanced_1162_li=Durability\: Committed transactions will not be lost.
advanced_1163_h3=Atomicity
advanced_1164_p=Transactions in this database are always atomic.
advanced_1165_h3=Consistency
advanced_1166_p=This database is always in a consistent state. Referential integrity rules are always enforced.
advanced_1167_h3=Isolation
advanced_1168_p=For H2, as with most other database systems, the default isolation level is 'read committed'. This provides better performance, but also means that transactions are not completely isolated. H2 supports the transaction isolation levels 'serializable', 'read committed', and 'read uncommitted'.
advanced_1169_h3=Durability
advanced_1170_p=This database does not guarantee that all committed transactions survive a power failure. Tests show that all databases sometimes lose transactions on power failure (for details, see below). Where losing transactions is not acceptable, a laptop or UPS (uninterruptible power supply) should be used. If durability is required for all possible cases of hardware failure, clustering should be used, such as the H2 clustering mode.
advanced_1171_h2=Durability Problems
advanced_1172_p=Complete durability means all committed transactions survive a power failure. Some databases claim they can guarantee durability, but such claims are wrong. A durability test was run against H2, HSQLDB, PostgreSQL, and Derby. All of those databases sometimes lose committed transactions. The test is included in the H2 download, see org.h2.test.poweroff.Test.
advanced_1173_h3=Ways to (Not) Achieve Durability
advanced_1174_p=Making sure that committed transactions are not lost is more complicated than it first seems. To guarantee complete durability, a database must ensure that the log record is on the hard drive before the commit call returns. To do that, databases use different methods. One is to use the 'synchronous write' file access mode. In Java, RandomAccessFile supports the modes "rws" and "rwd"\:
advanced_1175_li=rwd\: Every update to the file's content is written synchronously to the underlying storage device.
advanced_1176_li=rws\: In addition to rwd, every update to the metadata is written synchronously.
advanced_1177_p=This feature is used by Derby. A test (org.h2.test.poweroff.TestWrite) with one of those modes achieves around 50 thousand write operations per second. Even when the operating system write buffer is disabled, the write rate is around 50 thousand operations per second. This feature does not force changes to disk because it does not flush all buffers. The test updates the same byte in the file again and again. If the hard drive was able to write at this rate, then the disk would need to make at least 50 thousand revolutions per second, or 3 million RPM (revolutions per minute). There are no such hard drives. The hard drive used for the test is about 7200 RPM, or about 120 revolutions per second. There is an overhead, so the maximum write rate must be lower than that.
advanced_1178_p=Buffers can be flushed by calling the function fsync. There are two ways to do that in Java\:
advanced_1179_li=FileDescriptor.sync(). The documentation says that this forces all system buffers to synchronize with the underlying device. Sync is supposed to return after all in-memory modified copies of buffers associated with this FileDescriptor have been written to the physical medium.
advanced_1180_li=FileChannel.force() (since JDK 1.4). This method is supposed to force any updates to this channel's file to be written to the storage device that contains it.
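Both calls can be tried with a small stand-alone program. This is a minimal sketch against a scratch file; it demonstrates the API only, not actual on-platter durability, which the drive may not honor.

```java
import java.io.File;
import java.io.RandomAccessFile;

public class SyncDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("synctest", ".dat");
        f.deleteOnExit();
        // "rwd": writes to the file content are passed synchronously to the device
        RandomAccessFile raf = new RandomAccessFile(f, "rwd");
        raf.write(42);
        // way 1: flush system buffers via the file descriptor
        raf.getFD().sync();
        // way 2: force content and metadata via the file channel
        raf.getChannel().force(true);
        raf.close();
        System.out.println("wrote and synced " + f.length() + " byte(s)");
    }
}
```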
advanced_1181_p=By default, MySQL calls fsync for each commit. When using one of those methods, only around 60 write operations per second can be achieved, which is consistent with the RPM rate of the hard drive used. Unfortunately, even when calling FileDescriptor.sync() or FileChannel.force(), data is not always persisted to the hard drive, because most hard drives do not obey fsync()\: see 'Your Hard Drive Lies to You' at http\://hardware.slashdot.org/article.pl?sid\=05/05/13/0529252. In Mac OS X, fsync does not flush hard drive buffers\: http\://lists.apple.com/archives/darwin-dev/2005/Feb/msg00072.html. So the situation is confusing, and tests prove there is a problem.
advanced_1182_h2=Using the Recover Tool advanced_1182_p=Trying to flush hard drive buffers is hard, and if you do, the performance is very bad. First you need to make sure that the hard drive actually flushes all buffers. Tests show that this cannot be done in a reliable way. Then the maximum number of transactions is around 60 per second. For those reasons, the default behavior of H2 is to delay writing committed transactions.
advanced_1183_p=The recover tool can be used to extract the contents of a data file, even if the database is corrupted. At this time, it does not extract the content of the log file or large objects (CLOB or BLOB). To run the tool, type on the command line\: advanced_1183_p=In H2, after a power failure, a bit more than one second of committed transactions may be lost. To change the behavior, use SET WRITE_DELAY and CHECKPOINT SYNC. Most other databases support commit delay as well. In the performance comparison, commit delay was used for all databases that support it.
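The one-fsync-per-commit pattern described above can be sketched in a few lines. This is an illustrative Python stand-in (function and file names are invented), where os.fsync plays the role of Java's FileDescriptor.sync():

```python
import os
import tempfile

def committed_append(path, record):
    # One 'commit': append the record, then force it to stable storage.
    # It is this fsync call that limits a database without commit delay
    # to roughly the rotation rate of the hard drive.
    fd = os.open(path, os.O_CREAT | os.O_APPEND | os.O_WRONLY)
    try:
        os.write(fd, record)
        os.fsync(fd)
    finally:
        os.close(fd)

log_path = os.path.join(tempfile.mkdtemp(), "commits.log")
for rec in (b"rec1\n", b"rec2\n"):
    committed_append(log_path, rec)  # one fsync per inserted record
```

Delaying the write (as H2 does by default) amortizes the fsync over many commits, trading up to about a second of durability for throughput.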
advanced_1184_p=For each database in the current directory, a text file will be created. This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file cannot be executed directly, as the raw insert statements don't have the correct table names, so the file needs to be pre-processed manually before executing. advanced_1184_h3=Running the Durability Test
advanced_1185_h2=File Locking Protocols advanced_1185_p=To test the durability / non-durability of this and other databases, you can use the test application in the package org.h2.test.poweroff. Two computers with a network connection are required to run this test. One computer just listens, while the test application is run (and power is cut) on the other computer. The computer with the listener application opens a TCP/IP port and listens for an incoming connection. The second computer first connects to the listener, and then creates the databases and starts inserting records. The connection is set to 'autocommit', which means after each inserted record a commit is performed automatically. Afterwards, the test computer notifies the listener that this record was inserted successfully. The listener computer displays the last inserted record number every 10 seconds. Now, switch off the power manually, then restart the computer, and run the application again. You will find out that in most cases, none of the databases contains all the records that the listener computer knows about. For details, please consult the source code of the listener and test application.
advanced_1186_p=Whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database terminates, this lock file is deleted. advanced_1186_h2=Using the Recover Tool
advanced_1187_p=In special cases (if the process did not terminate normally, for example because there was a blackout), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files. There are two methods (algorithms) implemented to provide both security (that is, the same database files cannot be opened by two processes at the same time) and simplicity (that is, the lock file does not need to be deleted manually by the user). The two methods are the 'file method' and the 'socket method'. advanced_1187_p=The recover tool can be used to extract the contents of a data file, even if the database is corrupted. At this time, it does not extract the content of the log file or large objects (CLOB or BLOB). To run the tool, type on the command line\:
advanced_1188_h3=File Locking Method 'File' advanced_1188_p=For each database in the current directory, a text file will be created. This file contains raw insert statements (for the data) and data definition (DDL) statements to recreate the schema of the database. This file cannot be executed directly, as the raw insert statements don't have the correct table names, so the file needs to be pre-processed manually before executing.
advanced_1189_p=The default method for database file locking is the 'File Method'. The algorithm is\: advanced_1189_h2=File Locking Protocols
advanced_1190_li=When the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition where a process deletes the lock file just after another process has created it, and a third process creates the file again. It does not occur if there are only two writers. advanced_1190_p=Whenever a database is opened, a lock file is created to signal other processes that the database is in use. If the database is closed, or if the process that opened the database terminates, this lock file is deleted.
advanced_1191_li=If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (once every second by default) if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not go undetected even if the system is very busy. However, the watchdog thread uses very few resources (CPU time), because it waits most of the time. Also, the watchdog only reads from the hard disk and does not write to it. advanced_1191_p=In special cases (if the process did not terminate normally, for example because there was a blackout), the lock file is not deleted by the process that created it. That means the existence of the lock file is not a safe protocol for file locking. However, this software uses a challenge-response protocol to protect the database files. There are two methods (algorithms) implemented to provide both security (that is, the same database files cannot be opened by two processes at the same time) and simplicity (that is, the lock file does not need to be deleted manually by the user). The two methods are the 'file method' and the 'socket method'.
advanced_1192_li=If the lock file exists, and it was modified within the last 20 ms, the process waits for some time (up to 10 times). If it was still changed, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case and the file is locked. advanced_1192_h3=File Locking Method 'File'
advanced_1193_p=This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. That said, using that many concurrent threads / processes is not the common use case. Generally, an application should throw an error to the user if it cannot open a database, and not try again in a (fast) loop. advanced_1193_p=The default method for database file locking is the 'File Method'. The algorithm is\:
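The first step of the 'file' locking method, the atomic lock-file creation, can be sketched as follows. This is an illustrative Python stand-in (function names and the lock-file contents are invented); os.O_CREAT | os.O_EXCL is atomic in the same way as Java's File.createNewFile(), so exactly one process can win:

```python
import os
import tempfile

def try_create_lock(path, contents=b"method=file"):
    # Atomic create-if-absent: succeeds for exactly one process.
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # somebody else holds (or is challenging for) the lock
    with os.fdopen(fd, "wb") as f:
        # In the real protocol, a random challenge value and the locking
        # method name would be written here, and a watchdog thread started.
        f.write(contents)
    return True

lock_path = os.path.join(tempfile.mkdtemp(), "test.lock.db")
won = try_create_lock(lock_path)    # first process acquires the lock
lost = try_create_lock(lock_path)   # a second process finds it taken
```

The 20 ms re-check, the watchdog thread, and the challenge/overwrite steps described above would build on top of this primitive.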
advanced_1194_h3=File Locking Method 'Socket' advanced_1194_li=When the lock file does not exist, it is created (using the atomic operation File.createNewFile). Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time, the operation is aborted. This protects against a race condition where a process deletes the lock file just after another process has created it, and a third process creates the file again. It does not occur if there are only two writers.
advanced_1195_p=There is a second locking mechanism implemented, but disabled by default. The algorithm is\: advanced_1195_li=If the file can be created, a random number is inserted together with the locking method ('file'). Afterwards, a watchdog thread is started that checks regularly (once every second by default) if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs, the file is overwritten with the old data. The watchdog thread runs with high priority so that a change to the lock file does not go undetected even if the system is very busy. However, the watchdog thread uses very few resources (CPU time), because it waits most of the time. Also, the watchdog only reads from the hard disk and does not write to it.
advanced_1196_li=If the lock file does not exist, it is created. Then a server socket is opened on a defined port, and kept open. The port and IP address of the process that opened the database is written into the lock file. advanced_1196_li=If the lock file exists, and it was modified within the last 20 ms, the process waits for some time (up to 10 times). If it was still changed, an exception is thrown (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards, the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds. If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail to lock the database. However, if there is no watchdog thread, the lock file will still be as written by this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started in this case and the file is locked.
advanced_1197_li=If the lock file exists, and the lock method is 'file', then the software switches to the 'file' method. advanced_1197_p=This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them) for some time. However, the file never gets locked by two threads at the same time. That said, using that many concurrent threads / processes is not the common use case. Generally, an application should throw an error to the user if it cannot open a database, and not try again in a (fast) loop.
advanced_1198_li=If the lock file exists, and the lock method is 'socket', then the process checks if the port is in use. If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example due to a blackout, or abnormal termination of the virtual machine), then the port was released. The new process deletes the lock file and starts again. advanced_1198_h3=File Locking Method 'Socket'
advanced_1199_p=This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is that, if the file is stored on a network share, two processes (running on different computers) could still open the same database files if they do not have a direct TCP/IP connection. advanced_1199_p=There is a second locking mechanism implemented, but disabled by default. The algorithm is\:
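The core of the 'socket' method, holding an open server socket and letting a challenger probe it, can be sketched like this. This is an illustrative Python stand-in (function names are invented); in the real protocol the holder's port and IP address would be written into the lock file:

```python
import socket

def open_lock_socket():
    # The lock holder binds a server socket and keeps it open for the
    # lifetime of the process; its port would go into the lock file.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    return srv

def port_in_use(port):
    # The challenger's check: a successful connection means the original
    # process is still alive, so the database is still locked.
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=1.0):
            return True
    except OSError:
        return False

holder = open_lock_socket()
port = holder.getsockname()[1]
locked = port_in_use(port)        # True while the holder process lives
holder.close()                    # simulates the holder dying
unlocked = not port_in_use(port)  # port released, lock file is stale
```

The operating system releases the port automatically when the process dies, which is exactly why no polling watchdog is needed.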
advanced_1200_h2=Protection against SQL Injection advanced_1200_li=If the lock file does not exist, it is created. Then a server socket is opened on a defined port, and kept open. The port and IP address of the process that opened the database is written into the lock file.
advanced_1201_h3=What is SQL Injection advanced_1201_li=If the lock file exists, and the lock method is 'file', then the software switches to the 'file' method.
advanced_1202_p=This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as\: advanced_1202_li=If the lock file exists, and the lock method is 'socket', then the process checks if the port is in use. If the original process is still running, the port is in use and this process throws an exception (database is in use). If the original process died (for example due to a blackout, or abnormal termination of the virtual machine), then the port was released. The new process deletes the lock file and starts again.
advanced_1203_p=If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password\: ' OR ''\='. In this case the statement becomes\: advanced_1203_p=This method does not require a watchdog thread actively polling (reading) the same file every second. The problem with this method is that, if the file is stored on a network share, two processes (running on different computers) could still open the same database files if they do not have a direct TCP/IP connection.
advanced_1204_p=Which is always true no matter what the password stored in the database is. For more information about SQL Injection, see Glossary and Links. advanced_1204_h2=Protection against SQL Injection
advanced_1205_h3=Disabling Literals advanced_1205_h3=What is SQL Injection
advanced_1206_p=SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a PreparedStatement\: advanced_1206_p=This database engine provides a solution for the security vulnerability known as 'SQL Injection'. Here is a short description of what SQL injection means. Some applications build SQL statements with embedded user input such as\:
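The difference between embedding input and passing it as a parameter can be shown end to end. This is an illustrative sketch using Python's sqlite3 module as a stand-in for a JDBC PreparedStatement (table name and values are invented); the malicious password is the one from the text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users(name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES('sa', 'sa')")

attack = "' OR ''='"  # the malicious password from the text

# Embedding the input directly rebuilds the vulnerable statement:
unsafe_sql = ("SELECT COUNT(*) FROM users "
              "WHERE name='sa' AND password='%s'" % attack)
unsafe_hits = conn.execute(unsafe_sql).fetchone()[0]  # injection succeeds

# A parameterized statement treats the same input as plain data:
safe_sql = "SELECT COUNT(*) FROM users WHERE name='sa' AND password=?"
safe_hits = conn.execute(safe_sql, (attack,)).fetchone()[0]  # no match
```

With the embedded version the WHERE clause becomes ... password='' OR ''='' and matches every row; with the parameter, the quotes are just characters in a value.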
advanced_1207_p=This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement\: advanced_1207_p=If this mechanism is used anywhere in the application, and user input is not correctly filtered or encoded, it is possible for a user to inject SQL functionality or statements by using specially built input such as (in this example) this password\: ' OR ''\='. In this case the statement becomes\:
advanced_1208_p=Afterwards, SQL statements with text and number literals are not allowed any more. That means SQL statements of the form WHERE NAME\='abc' or WHERE CustomerId\=10 will fail. It is still possible to use PreparedStatements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where number literals are allowed\: SET ALLOW_LITERALS NUMBERS. To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator. advanced_1208_p=Which is always true no matter what the password stored in the database is. For more information about SQL Injection, see Glossary and Links.
advanced_1209_h3=Using Constants advanced_1209_h3=Disabling Literals
advanced_1210_p=Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the CREATE CONSTANT command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas\: advanced_1210_p=SQL Injection is not possible if user input is not directly embedded in SQL statements. A simple solution for the problem above is to use a PreparedStatement\:
advanced_1211_p=Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, and the source code is easier to understand and change. advanced_1211_p=This database provides a way to enforce usage of parameters when passing user input to the database. This is done by disabling embedded literals in SQL statements. To do this, execute the statement\:
advanced_1212_h3=Using the ZERO() Function advanced_1212_p=Afterwards, SQL statements with text and number literals are not allowed any more. That means SQL statements of the form WHERE NAME\='abc' or WHERE CustomerId\=10 will fail. It is still possible to use PreparedStatements and parameters as described above. Also, it is still possible to generate SQL statements dynamically, and use the Statement API, as long as the SQL statements do not include literals. There is also a second mode where number literals are allowed\: SET ALLOW_LITERALS NUMBERS. To allow all literals, execute SET ALLOW_LITERALS ALL (this is the default setting). Literals can only be enabled or disabled by an administrator.
advanced_1213_p=It is not required to create a constant for the number 0 as there is already a built-in function ZERO()\: advanced_1213_h3=Using Constants
advanced_1214_h2=Security Protocols advanced_1214_p=Disabling literals also means disabling hard-coded 'constant' literals. This database supports defining constants using the CREATE CONSTANT command. Constants can be defined only when literals are enabled, but used even when literals are disabled. To avoid name clashes with column names, constants can be defined in other schemas\:
advanced_1215_p=The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives. advanced_1215_p=Even when literals are enabled, it is better to use constants instead of hard-coded number or text literals in queries or views. With constants, typos are found at compile time, and the source code is easier to understand and change.
advanced_1216_h3=User Password Encryption advanced_1216_h3=Using the ZERO() Function
advanced_1217_p=When a user tries to connect to a database, the combination of user name, @, and password is hashed using SHA-256, and this hash value is transmitted to the database. This step does not prevent an attacker from re-using the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication\: Basic and Digest Access Authentication' for more information. advanced_1217_p=It is not required to create a constant for the number 0 as there is already a built-in function ZERO()\:
advanced_1218_p=When a new database or user is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords. advanced_1218_h2=Security Protocols
advanced_1219_p=The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is\: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only remotely, then the iteration count is not required at all. advanced_1219_p=The following paragraphs document the security protocols used in this database. These descriptions are very technical and only intended for security experts that already know the underlying security primitives.
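The two hashing steps described above can be sketched as follows. This is an illustrative Python sketch of the scheme as described in the text; function names and the exact byte concatenation order are assumptions, and H2's actual implementation may differ in details:

```python
import hashlib
import os

def user_password_hash(user, password):
    # Step 1: hash "user@password" with SHA-256; only this value, never
    # the plain password, is transmitted to the database.
    return hashlib.sha256((user + "@" + password).encode("utf-8")).digest()

def stored_password_hash(uph, salt):
    # Step 2: combine the transmitted hash with the per-user random salt
    # and hash once more; this is the value the database stores.
    return hashlib.sha256(uph + salt).digest()

salt = os.urandom(8)  # 64-bit cryptographically secure salt, as in the text
uph = user_password_hash("sa", "secret")
stored = stored_password_hash(uph, salt)
```

At login, the database recomputes stored_password_hash from the transmitted value and the stored salt and compares it with the stored result; no iteration loop is used, for the denial-of-service reasons given above.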
advanced_1220_h3=File Encryption advanced_1220_h3=User Password Encryption
advanced_1221_p=The database files can be encrypted using two different algorithms\: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is about twice as fast as AES) and to have an alternative algorithm if AES is suddenly broken. advanced_1221_p=When a user tries to connect to a database, the combination of user name, @, and password is hashed using SHA-256, and this hash value is transmitted to the database. This step does not prevent an attacker from re-using the value if he is able to listen to the (unencrypted) transmission between the client and the server. But, the passwords are never transmitted as plain text, even when using an unencrypted connection between client and server. That means if a user reuses the same password for different things, this password is still protected up to some point. See also 'RFC 2617 - HTTP Authentication\: Basic and Digest Access Authentication' for more information.
advanced_1222_p=When a user tries to connect to an encrypted database, the combination of the word 'file', @, and the file password is hashed using SHA-256. This hash value is transmitted to the server. advanced_1222_p=When a new database or user is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. Using the random salt reduces the risk of an attacker pre-calculating hash values for many different (commonly used) passwords.
advanced_1223_p=When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords. advanced_1223_p=The combination of user-password hash value (see above) and salt is hashed using SHA-256. The resulting value is stored in the database. When a user tries to connect to the database, the database combines the user-password hash value with the stored salt value and calculates the hash value. Other products use multiple iterations (hash the hash value again and again), but this is not done in this product to reduce the risk of denial of service attacks (where the attacker tries to connect with bogus passwords, and the server spends a lot of time calculating the hash value for each password). The reasoning is\: if the attacker has access to the hashed passwords, he also has access to the data in plain text, and therefore does not need the password any more. If the data is protected by storing it on another computer and only remotely, then the iteration count is not required at all.
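The 1024-round key derivation can be sketched in a few lines. This is an illustrative Python sketch; how exactly the file-password hash and salt are combined per round is an assumption, the point is the iteration that slows down brute-force attempts:

```python
import hashlib

def file_cipher_key(file_password_hash, salt, iterations=1024):
    # Combine the file-password hash with the 64-bit salt, then apply
    # SHA-256 1024 times; each guess now costs an attacker 1024 hashes.
    value = file_password_hash + salt
    for _ in range(iterations):
        value = hashlib.sha256(value).digest()
    return value

key = file_cipher_key(b"\x11" * 32, b"\x22" * 8)
```

The resulting 256-bit value is then used (truncated as needed) as the block cipher key, and hashing it once more yields the secret IV key described below.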
advanced_1224_p=The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks. advanced_1224_h3=File Encryption
advanced_1225_p=Before saving a block of data (each block is 8 bytes long), the following operations are executed\: First, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm. advanced_1225_p=The database files can be encrypted using two different algorithms\: AES-128 and XTEA (using 32 rounds). The reasons for supporting XTEA are performance (XTEA is about twice as fast as AES) and to have an alternative algorithm if AES is suddenly broken.
advanced_1226_p=When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR. advanced_1226_p=When a user tries to connect to an encrypted database, the combination of the word 'file', @, and the file password is hashed using SHA-256. This hash value is transmitted to the server.
advanced_1227_p=Therefore, the block cipher mode of operation is CBC (Cipher-block chaining), but each chain is only one block long. The advantage over the ECB (Electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped cipher text bits are not propagated to flipped plaintext bits in the next block. advanced_1227_p=When a new database file is created, a new cryptographically secure random salt value is generated. The size of the salt is 64 bit. The combination of the file password hash and the salt value is hashed 1024 times using SHA-256. The reason for the iteration is to make it harder for an attacker to calculate hash values for common passwords.
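The per-block mode of operation described above can be made concrete with a runnable sketch. This is an illustration only: a trivial XOR-pad function stands in for AES-128/XTEA (it is NOT a real cipher), so that the IV derivation, the XOR step, and the reverse order on decryption can be exercised:

```python
import hashlib

BLOCK = 8  # block size in bytes, as stated in the text

def toy_cipher(key, data):
    # Stand-in for the real block cipher: XOR with a key-derived pad.
    # NOT secure - it only makes the mode of operation runnable.
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(data, pad))

def encrypt_block(key, iv_key, block_number, plaintext):
    # IV = E(iv_key, block number): position-dependent and secret,
    # which is what protects against watermark attacks.
    iv = toy_cipher(iv_key, block_number.to_bytes(BLOCK, "big"))
    return toy_cipher(key, bytes(a ^ b for a, b in zip(plaintext, iv)))

def decrypt_block(key, iv_key, block_number, ciphertext):
    # Reverse order: decrypt first, then XOR with the recomputed IV.
    # (The toy cipher is its own inverse; AES/XTEA have distinct
    # encrypt and decrypt operations.)
    iv = toy_cipher(iv_key, block_number.to_bytes(BLOCK, "big"))
    return bytes(a ^ b for a, b in zip(toy_cipher(key, ciphertext), iv))

key = hashlib.sha256(b"derived file cipher key").digest()
iv_key = hashlib.sha256(key).digest()  # IV key = hash of the key
```

Because the IV depends on the block number, the same 8-byte plaintext stored at two different positions produces two different ciphertexts, which is the anti-pattern-leak property claimed above.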
advanced_1228_p=Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to files while the database is in use. An attacker with write access can, for example, replace pieces of files with pieces of older versions and manipulate data that way. advanced_1228_p=The resulting hash value is used as the key for the block cipher algorithm (AES-128 or XTEA with 32 rounds). Then, an initialization vector (IV) key is calculated by hashing the key again using SHA-256. This is to make sure the IV is unknown to the attacker. The reason for using a secret IV is to protect against watermark attacks.
advanced_1229_p=File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer using AES (embedded mode). advanced_1229_p=Before saving a block of data (each block is 8 bytes long), the following operations are executed\: First, the IV is calculated by encrypting the block number with the IV key (using the same block cipher algorithm). This IV is combined with the plain text using XOR. The resulting data is encrypted using the AES-128 or XTEA algorithm.
advanced_1230_h3=SSL/TLS Connections advanced_1230_p=When decrypting, the operation is done in reverse. First, the block is decrypted using the key, and then the IV is calculated and combined with the decrypted text using XOR.
advanced_1231_p=Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket / SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is <code>SSL_DH_anon_WITH_RC4_128_MD5</code> . advanced_1231_p=Therefore, the block cipher mode of operation is CBC (Cipher-block chaining), but each chain is only one block long. The advantage over the ECB (Electronic codebook) mode is that patterns in the data are not revealed, and the advantage over multi-block CBC is that flipped cipher text bits are not propagated to flipped plaintext bits in the next block.
advanced_1232_h3=HTTPS Connections advanced_1232_p=Database encryption is meant for securing the database while it is not in use (stolen laptop and so on). It is not meant for cases where the attacker has access to files while the database is in use. An attacker with write access can, for example, replace pieces of files with pieces of older versions and manipulate data that way.
advanced_1233_p=The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-certified certificate to support an easy starting point, but custom certificates are supported as well. advanced_1233_p=File encryption slows down the performance of the database engine. Compared to unencrypted mode, database operations take about 2.2 times longer when using XTEA, and 2.5 times longer using AES (embedded mode).
advanced_1234_h2=Universally Unique Identifiers (UUID) advanced_1234_h3=SSL/TLS Connections
advanced_1235_p=This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (Randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values\: advanced_1235_p=Remote SSL/TLS connections are supported using the Java Secure Socket Extension (SSLServerSocket / SSLSocket). By default, anonymous SSL is enabled. The default cipher suite is <code>SSL_DH_anon_WITH_RC4_128_MD5</code> .
advanced_1236_p=Some values are\: advanced_1236_h3=HTTPS Connections
advanced_1237_p=To help non-mathematicians understand what those numbers mean, here is a comparison\: One's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.000'000'000'06. advanced_1237_p=The web server supports HTTP and HTTPS connections using SSLServerSocket. There is a default self-certified certificate to support an easy starting point, but custom certificates are supported as well.
advanced_1238_h2=Settings Read from System Properties advanced_1238_h2=Universally Unique Identifiers (UUID)
advanced_1239_p=Some settings of the database can be set on the command line using -DpropertyName\=value. It is usually not required to change those settings manually. The settings are case sensitive. Example\: advanced_1239_p=This database supports UUIDs. Also supported is a function to create new UUIDs using a cryptographically strong pseudo random number generator. With random UUIDs, the chance of two having the same value can be calculated using probability theory. See also 'Birthday Paradox'. Standardized randomly generated UUIDs have 122 random bits. 4 bits are used for the version (Randomly generated UUID), and 2 bits for the variant (Leach-Salz). This database supports generating such UUIDs using the built-in function RANDOM_UUID(). Here is a small program to estimate the probability of having two identical UUIDs after generating a number of values\:
advanced_1240_p=The current value of the settings can be read in the table INFORMATION_SCHEMA.SETTINGS. advanced_1240_p=Some values are\:
advanced_1241_th=Setting advanced_1241_p=To help non-mathematicians understand what those numbers mean, here is a comparison\: One's annual risk of being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.000'000'000'06.
advanced_1242_th=Default advanced_1242_h2=Settings Read from System Properties
advanced_1243_th=Description advanced_1243_p=Some settings of the database can be set on the command line using -DpropertyName\=value. It is usually not required to change those settings manually. The settings are case sensitive. Example\:
advanced_1244_td=h2.check advanced_1244_p=The current value of the settings can be read in the table INFORMATION_SCHEMA.SETTINGS.
advanced_1245_td=true advanced_1245_th=Setting
advanced_1246_td=Assertions in the database engine advanced_1246_th=Default
advanced_1247_td=h2.check2 advanced_1247_th=Description
advanced_1248_td=h2.check
advanced_1249_td=true
advanced_1250_td=Assertions in the database engine
advanced_1251_td=h2.check2
advanced_1252_td=false
advanced_1253_td=Additional assertions
advanced_1254_td=h2.clientTraceDirectory
advanced_1255_td=trace.db/
advanced_1256_td=Directory where the trace files of the JDBC client are stored (only for client / server)
advanced_1257_td=h2.emergencySpaceInitial
advanced_1258_td=1048576
advanced_1259_td=Size of 'reserve' file to detect disk full problems early
advanced_1260_td=h2.emergencySpaceMin
advanced_1261_td=131072
advanced_1262_td=Minimum size of 'reserve' file
advanced_1263_td=h2.lobCloseBetweenReads
advanced_1264_td=false
advanced_1265_td=Close LOB files between read operations
advanced_1266_td=h2.lobFilesInDirectories
advanced_1267_td=false
advanced_1268_td=Store LOB files in subdirectories
advanced_1269_td=h2.lobFilesPerDirectory
advanced_1270_td=256
advanced_1271_td=Maximum number of LOB files per directory
advanced_1272_td=h2.logAllErrors
advanced_1273_td=false
advanced_1274_td=Write stack traces of any kind of error to a file
advanced_1275_td=h2.logAllErrorsFile
advanced_1276_td=h2errors.txt
advanced_1277_td=File name to log errors
advanced_1278_td=h2.maxFileRetry
advanced_1279_td=16
advanced_1280_td=Number of times to retry file delete and rename
advanced_1281_td=h2.multiThreadedKernel
advanced_1282_td=false
advanced_1283_td=Allow multiple sessions to run concurrently
advanced_1284_td=h2.objectCache
advanced_1285_td=true
advanced_1286_td=Cache commonly used objects (integers, strings)
advanced_1287_td=h2.objectCacheMaxPerElementSize
advanced_1288_td=4096
advanced_1289_td=Maximum size of an object in the cache
advanced_1290_td=h2.objectCacheSize
advanced_1291_td=1024
advanced_1292_td=Size of object cache
advanced_1293_td=h2.optimizeEvaluatableSubqueries
advanced_1294_td=true
advanced_1295_td=Optimize subqueries that are not dependent on the outer query
advanced_1296_td=h2.optimizeIn
advanced_1297_td=true
advanced_1298_td=Optimize IN(...) comparisons
advanced_1299_td=h2.optimizeMinMax
advanced_1300_td=true
advanced_1301_td=Optimize MIN and MAX aggregate functions
advanced_1302_td=h2.optimizeSubqueryCache
advanced_1303_td=true
advanced_1304_td=Cache subquery results
advanced_1305_td=h2.overflowExceptions
advanced_1306_td=true
advanced_1307_td=Throw an exception on integer overflows
advanced_1308_td=h2.recompileAlways
advanced_1309_td=false
advanced_1310_td=Always recompile prepared statements
advanced_1311_td=h2.redoBufferSize
advanced_1312_td=262144
advanced_1313_td=Size of the redo buffer (used at startup when recovering)
advanced_1314_td=h2.runFinalizers
advanced_1315_td=true
advanced_1316_td=Run finalizers to detect unclosed connections
advanced_1317_td=h2.scriptDirectory
advanced_1318_td=Relative or absolute directory where the script files are stored to or read from
advanced_1319_td=h2.serverCachedObjects
advanced_1320_td=64
advanced_1321_td=TCP Server\: number of cached objects per session
advanced_1322_td=h2.serverSmallResultSetSize
advanced_1323_td=100
advanced_1324_td=TCP Server\: result sets below this size are sent in one block
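Since these settings are read from plain Java system properties, they can also be set programmatically instead of with -DpropertyName=value, provided this happens before the engine classes that read them are loaded. A minimal sketch (the property names are taken from the table above; treating them as ordinary system properties is the only assumption here):

```java
// Sketch: set H2 system-property settings programmatically, before
// any H2 class that reads them is initialized.
public class H2SystemPropertiesExample {

    public static void main(String[] args) {
        // Equivalent of "java -Dh2.maxFileRetry=32 ..." on the command line
        System.setProperty("h2.maxFileRetry", "32");
        System.setProperty("h2.lobFilesInDirectories", "true");

        // The settings are case sensitive: "h2.maxfileretry" would be
        // a different, unknown key.
        System.out.println("h2.maxFileRetry = "
                + System.getProperty("h2.maxFileRetry"));
        System.out.println("h2.lobFilesInDirectories = "
                + System.getProperty("h2.lobFilesInDirectories"));
    }
}
```

The values the engine actually uses can then be checked from SQL via SELECT * FROM INFORMATION_SCHEMA.SETTINGS, as described above.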
advanced_1325_h2=Glossary and Links
advanced_1326_th=Term
advanced_1327_th=Description
advanced_1328_td=AES-128
advanced_1329_td=A block encryption algorithm. See also\: <a href\="http\://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Wikipedia\: AES</a>
advanced_1330_td=Birthday Paradox
advanced_1331_td=Describes the higher than expected probability that two persons in a room have the same birthday. Also valid for randomly generated UUIDs. See also\: <a href\="http\://en.wikipedia.org/wiki/Birthday_paradox">Wikipedia\: Birthday Paradox</a>
advanced_1332_td=Digest
advanced_1333_td=Protocol to protect a password (but not to protect data). See also\: <a href\="http\://www.faqs.org/rfcs/rfc2617.html">RFC 2617\: HTTP Digest Access Authentication</a>
advanced_1334_td=GCJ
advanced_1335_td=GNU Compiler for Java. <a href\="http\://gcc.gnu.org/java/">http\://gcc.gnu.org/java/</a> and <a href\="http\://nativej.mtsystems.ch">http\://nativej.mtsystems.ch/ (not free any more)</a>
advanced_1336_td=HTTPS
advanced_1337_td=A protocol to provide security to HTTP connections. See also\: <a href\="http\://www.ietf.org/rfc/rfc2818.txt">RFC 2818\: HTTP Over TLS</a>
advanced_1338_td=Modes of Operation
advanced_1339_a=Wikipedia\: Block cipher modes of operation
advanced_1340_td=Salt
advanced_1341_td=Random number to increase the security of passwords. See also\: <a href\="http\://en.wikipedia.org/wiki/Key_derivation_function">Wikipedia\: Key derivation function</a>
advanced_1342_td=SHA-256
advanced_1343_td=A cryptographic one-way hash function. See also\: <a href\="http\://en.wikipedia.org/wiki/SHA_family">Wikipedia\: SHA hash functions</a>
advanced_1344_td=SQL Injection
advanced_1345_td=A security vulnerability where an application generates SQL statements with embedded user input. See also\: <a href\="http\://en.wikipedia.org/wiki/SQL_injection">Wikipedia\: SQL Injection</a>
advanced_1346_td=Watermark Attack
advanced_1347_td=Security problem of certain encryption programs where the existence of certain data can be proven without decrypting. For more information, search on the internet for 'watermark attack cryptoloop'
advanced_1348_td=SSL/TLS
advanced_1349_td=Secure Sockets Layer / Transport Layer Security. See also\: <a href\="http\://java.sun.com/products/jsse/">Java Secure Socket Extension (JSSE)</a>
advanced_1350_td=XTEA
advanced_1351_td=A block encryption algorithm. See also\: <a href\="http\://en.wikipedia.org/wiki/XTEA">Wikipedia\: XTEA</a>
build_1000_h1=Build
build_1001_a=Portability
build_1002_a=Environment
......
@@ -311,20 +311,20 @@ public class TableData extends Table implements RecordReader {
             return;
         }
         long max = System.currentTimeMillis() + session.getLockTimeout();
-        if (!force && database.isMultiVersion()) {
-            // MVCC: update, delete, and insert use a shared lock
-            // select doesn't lock
-            if (exclusive) {
-                exclusive = false;
-            } else {
-                return;
-            }
-        }
         synchronized (database) {
             while (true) {
                 if (lockExclusive == session) {
                     return;
                 }
+                if (!force && database.isMultiVersion()) {
+                    // MVCC: update, delete, and insert use a shared lock,
+                    // but select doesn't lock
+                    if (exclusive) {
+                        exclusive = false;
+                    } else {
+                        return;
+                    }
+                }
                 if (exclusive) {
                     if (lockExclusive == null) {
                         if (lockShared.isEmpty()) {
......
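The lock() method shown in the hunk above keeps retrying inside synchronized (database) until session.getLockTimeout() elapses. A hypothetical, stripped-down model of that acquire-or-timeout pattern (a sketch of the idea only, not H2's actual implementation):

```java
// Sketch of an acquire-or-timeout lock loop: retry while the deadline
// has not passed, then report a timeout to the caller.
public class LockWithTimeout {

    private Object owner;

    public synchronized boolean lock(Object session, long timeoutMillis) {
        long max = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            if (owner == null || owner == session) {
                owner = session; // free, or re-entrant for the same session
                return true;
            }
            long sleep = max - System.currentTimeMillis();
            if (sleep <= 0) {
                return false; // corresponds to H2's lock timeout exception
            }
            try {
                wait(sleep); // woken by unlock(), or times out
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }

    public synchronized void unlock() {
        owner = null;
        notifyAll();
    }
}
```

Retrying in a loop around wait() also handles spurious wakeups, which is why the deadline is rechecked on every iteration.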
@@ -142,15 +142,6 @@ java org.h2.test.TestAll timer
 /*
-TODO history:
-Math operations using unknown data types (for example -? and ?+?) are now interpreted as decimal.
-INSTR, LOCATE: backward searching is not supported by using a negative start position
-
-TODO doc:
-MVCC still locks the table exclusively when adding or removing columns and when dropping the table.
-Also, a shared lock is still added when inserting or removing rows.
-
 replicating file system
 background thread writing file system (all writes)
......
@@ -40,13 +40,14 @@ public class TestMVCC extends TestBase {
         DeleteDbFiles.execute(null, "test", true);
         Class.forName("org.h2.Driver");
-        c1 = DriverManager.getConnection("jdbc:h2:test;MVCC=TRUE", "sa", "sa");
+        c1 = DriverManager.getConnection("jdbc:h2:test;MVCC=TRUE;LOCK_TIMEOUT=10", "sa", "sa");
         s1 = c1.createStatement();
-        c2 = DriverManager.getConnection("jdbc:h2:test;MVCC=TRUE", "sa", "sa");
+        c2 = DriverManager.getConnection("jdbc:h2:test;MVCC=TRUE;LOCK_TIMEOUT=10", "sa", "sa");
         s2 = c2.createStatement();
         c1.setAutoCommit(false);
         c2.setAutoCommit(false);
+        // it should not be possible to drop a table when an uncommitted transaction changed something
         s1.execute("create table test(id int primary key)");
         s1.execute("insert into test values(1)");
         try {
@@ -60,6 +61,23 @@ public class TestMVCC extends TestBase {
         s2.execute("drop table test");
         c2.rollback();
+        // select for update should do an exclusive lock, even with MVCC
+        s1.execute("create table test(id int primary key, name varchar(255))");
+        s1.execute("insert into test values(1, 'y')");
+        c1.commit();
+        s2.execute("select * from test for update");
+        try {
+            s1.execute("insert into test values(2, 'x')");
+            error("Unexpected success");
+        } catch (SQLException e) {
+            // lock timeout expected
+            checkNotGeneralException(e);
+        }
+        c2.rollback();
+        s1.execute("drop table test");
+        c1.commit();
+        c2.commit();
+
         s1.execute("create table test(id int primary key, name varchar(255))");
         s2.execute("insert into test values(4, 'Hello')");
         c2.rollback();
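Besides the lock timeout exercised by the test above, the documentation change in this commit describes MVCC's other failure mode: when two connections concurrently try to update the same row, the database fails fast with a concurrent update exception rather than blocking. A toy model of that conflict detection (my own simplification, not how H2 actually tracks row versions):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of fail-fast conflict detection: a row remembers which session
// has an uncommitted change on it; a second writer fails immediately
// instead of waiting for a lock.
public class MvccConflictModel {

    static class ConcurrentUpdateException extends RuntimeException {
    }

    private final Map<Integer, Integer> uncommittedBy = new HashMap<>();

    public void update(int sessionId, int rowId) {
        Integer owner = uncommittedBy.get(rowId);
        if (owner != null && owner != sessionId) {
            throw new ConcurrentUpdateException(); // fail fast
        }
        uncommittedBy.put(rowId, sessionId);
    }

    public void commit(int sessionId) {
        // committed changes become visible; the rows are writable again
        uncommittedBy.values().removeIf(s -> s == sessionId);
    }
}
```

This is the "read committed" behaviour described in the MVCC section: other sessions see the old value until commit, and a conflicting writer is rejected immediately.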
......
-select instr('asgisj','s', -1) from dual;
+select instr('abcisj','s', -1) from dual;
 > 5;
 CREATE TABLE TEST(ID INT);
 INSERT INTO TEST VALUES(1), (2), (3);
......