# The database allows multiple concurrent connections to the same database. To make sure all connections only see consistent data, table level locking is used by default. This mechanism does not allow high concurrency, but is very fast. Shared locks and exclusive locks are supported. Before reading from a table, the database tries to add a shared lock to the table (this is only possible if there is no exclusive lock on the object by another connection). If the shared lock is added successfully, the table can be read. Other connections may also hold a shared lock on the same object. If a connection wants to write to a table (update or delete a row), an exclusive lock is required. To get the exclusive lock, other connections must not have any locks on the object. After the connection commits, all locks are released. This database keeps all locks in memory. When a lock is released, and multiple connections are waiting for it, one of them is picked at random.
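To illustrate the behaviour described above, here is a minimal JDBC sketch (not part of the original documentation; it assumes a database at jdbc:h2:~/demo containing a table TEST(ID INT, NAME VARCHAR)): a reader holds a shared lock until it commits, and a writer needs an exclusive lock on the same table.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class TableLockSketch {
        public static void main(String[] args) throws Exception {
            Connection reader = DriverManager.getConnection("jdbc:h2:~/demo", "sa", "");
            Connection writer = DriverManager.getConnection("jdbc:h2:~/demo", "sa", "");
            reader.setAutoCommit(false);
            writer.setAutoCommit(false);
            // Shared lock: several connections may read the table at the same time.
            reader.createStatement().executeQuery("SELECT * FROM TEST").close();
            // The shared lock is held until the reader commits.
            reader.commit();
            // Exclusive lock: only possible because no other connection holds any lock
            // on TEST any more; otherwise this statement would wait and could
            // eventually fail with a lock timeout exception.
            writer.createStatement().executeUpdate("UPDATE TEST SET NAME = 'x' WHERE ID = 1");
            writer.commit();
            reader.close();
            writer.close();
        }
    }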
@advanced_1090_h3
Lock Timeout
#Table Level Locking (PageStore engine)
@advanced_1091_p
# If a connection cannot get a lock on an object, the connection waits for some amount of time (the lock timeout). During this time, hopefully the connection holding the lock commits and it is then possible to get the lock. If this is not possible because the other connection does not release the lock for some time, the unsuccessful connection will get a lock timeout exception. The lock timeout can be set individually for each connection.
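For example, the timeout can be changed per connection with a statement like the following (a sketch; the URL and the value of 10000 ms are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LockTimeoutSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/demo", "sa", "");
                 Statement stat = conn.createStatement()) {
                // Only this connection waits up to 10 seconds for a lock;
                // other connections keep their own lock timeout.
                stat.execute("SET LOCK_TIMEOUT 10000");
            }
        }
    }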
...
@@ -541,13 +541,13 @@ ODBC Settings
Data Source
@advanced_1180_td
H2 Test
#~/test;ifexists=true
@advanced_1181_td
The name of the ODBC data source.
# The database name. This can include connection settings. By default, the database is stored in the current working directory where the Server is started, except when the -baseDir setting is used. The name must be at least 3 characters.
@advanced_1182_td
Database
#Servername
@advanced_1183_td
#~/test;ifexists=true
...
@@ -559,7 +559,7 @@ Database
#Servername
@advanced_1186_td
localhost
#Username
@advanced_1187_td
The server name or IP address.
...
@@ -571,7 +571,7 @@ localhost
#Username
@advanced_1190_td
sa
#false (disabled)
@advanced_1191_td
The database user name.
...
@@ -619,7 +619,7 @@ PG Protocol Support Limitations
# PostgreSQL ODBC Driver Setup requires a database password; that means it is not possible to connect to H2 databases without a password. This is a limitation of the ODBC driver.
@advanced_1206_h3
Security Considerations
#Using Microsoft Access
@advanced_1207_p
# Currently, the PG Server does not support challenge-response authentication or password encryption. This may be a problem if an attacker can listen to the data transferred between the ODBC driver and the server, because the password is readable to the attacker. Also, it is currently not possible to use encrypted SSL connections. Therefore the ODBC driver should not be used where security is important.
...
@@ -1093,7 +1093,7 @@ HTTPS Connections
# Limitations: Recursive queries need to be of the type <code>UNION ALL</code>, and the recursion needs to be on the second part of the query. No tables or views with the name of the table expression may exist. Different table expression names need to be used when using multiple distinct table expressions within the same transaction and for the same session. All columns of the table expression are of type <code>VARCHAR</code>, and may need to be cast to the required data type. Views with recursive queries are not supported. Subqueries and <code>INSERT INTO ... FROM</code> with recursive queries are not supported. Parameters are only supported within the last <code>SELECT</code> statement (a workaround is to use session variables like <code>@start</code> within the table expression). The syntax is:
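As an illustration only (the table expression name T and the bound of 10 are made up, and this mirrors the common counting example rather than the syntax block that follows in the original documentation):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RecursiveQuerySketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
                 Statement stat = conn.createStatement();
                 // UNION ALL with the recursion in the second part of the query, as required.
                 ResultSet rs = stat.executeQuery(
                         "WITH RECURSIVE T(N) AS ("
                         + " SELECT 1"
                         + " UNION ALL"
                         + " SELECT N + 1 FROM T WHERE N < 10"
                         + ") SELECT N FROM T")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // columns may be VARCHAR, see above
                }
            }
        }
    }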
@advanced_1364_h2
Settings Read from System Properties
#Setting the Server Bind Address
@advanced_1365_p
# Some settings of the database can be set on the command line using <code>-DpropertyName=value</code>. It is usually not required to change those settings manually. The settings are case sensitive. Example:
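A hedged illustration (the property h2.bindAddress and the port are only examples; on the command line the same setting would be passed as -Dh2.bindAddress=127.0.0.1):

    public class SystemPropertySketch {
        public static void main(String[] args) throws Exception {
            // Equivalent to: java -Dh2.bindAddress=127.0.0.1 -cp h2.jar org.h2.tools.Server
            // Property names are case sensitive; the property must be set before
            // any H2 class is loaded for it to take effect.
            System.setProperty("h2.bindAddress", "127.0.0.1");
            org.h2.tools.Server.createTcpServer("-tcpPort", "9092").start();
        }
    }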
...
@@ -2128,10 +2128,10 @@ Using the Central Repository
#[API CHANGE]</strong> #439: the JDBC type of TIMESTAMP WITH TIME ZONE changed from Types.OTHER (1111) to Types.TIMESTAMP_WITH_TIMEZONE (2014)
@changelog_1064_li
##430: Subquery not cached if number of rows exceeds MAX_MEMORY_ROWS.
#PR #1637: Remove explicit unboxing
@changelog_1065_li
##411: "TIMEZONE" should be "TIME ZONE" in type "TIMESTAMP WITH TIMEZONE".
#PR #1635: Optimize UUID to VARCHAR conversion and use correct time check in Engine.openSession()
@changelog_1066_li
#PR #418, Implement Connection#createArrayOf and PreparedStatement#setArray.
...
@@ -2140,7 +2140,7 @@ Using the Central Repository
#PR #427, Add MySQL compatibility functions UNIX_TIMESTAMP, FROM_UNIXTIME and DATE.
@changelog_1068_li
##429: Tables not found : Fix some Turkish locale bugs around uppercasing.
#PR #1630: fix duplicate words typos in comments and javadoc
@changelog_1069_li
#Fixed bug in metadata locking, obscure combination of DDL and SELECT SEQUENCE.NEXTVAL required.
...
@@ -3745,7 +3745,7 @@ ORDER BY, GROUP BY, HAVING, UNION, LIMIT, TOP
# This comparison is based on H2 1.3, <a href="http://db.apache.org/derby">Apache Derby version 10.8</a>, <a href="http://hsqldb.org">HSQLDB 2.2</a>, <a href="http://mysql.com">MySQL 5.5</a>, <a href="http://www.postgresql.org">PostgreSQL 9.0</a>.
# For more information about the algorithms, see <a href="advanced.html#file_locking_protocols">Advanced / File Locking Protocols</a>.
@features_1378_h2
Opening a Database Only if it Already Exists
#Page Size
@features_1379_p
# By default, when an application calls <code>DriverManager.getConnection(url, ...)</code> and the database specified in the URL does not yet exist, a new (empty) database is created. In some situations, it is better to restrict creating new databases, and only allow opening existing databases. To do this, add <code>;IFEXISTS=TRUE</code> to the database URL. In this case, if the database does not already exist, an exception is thrown when trying to connect. The connection only succeeds when the database already exists. The complete URL may look like this:
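For instance (a sketch; the path ~/sample and the empty password are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OpenExistingOnlySketch {
        public static void main(String[] args) throws Exception {
            // Throws an exception at connect time if ~/sample does not exist yet.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:h2:~/sample;IFEXISTS=TRUE", "sa", "")) {
                System.out.println("opened existing database");
            }
        }
    }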
# A fast way to load or import data (sometimes called 'bulk load') from a CSV file is to combine table creation with import. Optionally, the column names and data types can be set when creating the table. Another option is to use <code>INSERT INTO ... SELECT</code>.
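A minimal sketch of the combined create-and-import step (the file name data/test.csv and the database URL are placeholders; CSVREAD is the built-in function for reading CSV files):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CsvImportSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/demo", "sa", "");
                 Statement stat = conn.createStatement()) {
                // Table creation combined with the import; column names and types
                // are taken from the CSV file here, but could be declared explicitly.
                stat.execute("CREATE TABLE TEST AS SELECT * FROM CSVREAD('data/test.csv')");
            }
        }
    }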
@tutorial_1217_h3
Writing a CSV File from Within a Database
#Importing Data from a CSV File
@tutorial_1218_p
# The built-in function <code>CSVWRITE</code> can be used to create a CSV file from a query. Example:
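A sketch of such a call (the file name and the query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CsvExportSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/demo", "sa", "");
                 Statement stat = conn.createStatement()) {
                // Writes the result of the query to data/test.csv using default options.
                stat.execute("CALL CSVWRITE('data/test.csv', 'SELECT * FROM TEST')");
            }
        }
    }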
# allows converting a database to a SQL script for backup or migration.
@tutorial_1260_code
Server
#Script
@tutorial_1261_li
# is used in the server mode to start an H2 server.
...
@@ -12298,7 +12298,7 @@ Java Web Start / JNLP
# For H2, opening a connection is fast if the database is already open. Still, using a connection pool improves performance if you open and close connections a lot. A simple connection pool is included in H2. It is based on the <a href="http://www.source-code.biz/snippets/java/8.htm">Mini Connection Pool Manager</a> from Christian d'Heureuse. There are other, more complex, open source connection pools available, for example the <a href="http://jakarta.apache.org/commons/dbcp/">Apache Commons DBCP</a>. For H2, it is about twice as fast to get a connection from the built-in connection pool as to get one using <code>DriverManager.getConnection()</code>. The built-in connection pool is used as follows:
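A short sketch of that usage (the URL and credentials are placeholders):

    import java.sql.Connection;
    import org.h2.jdbcx.JdbcConnectionPool;

    public class ConnectionPoolSketch {
        public static void main(String[] args) throws Exception {
            JdbcConnectionPool pool = JdbcConnectionPool.create("jdbc:h2:~/demo", "sa", "");
            try (Connection conn = pool.getConnection()) {
                // use the connection; closing it only returns it to the pool
            }
            pool.dispose(); // closes all unused pooled connections when shutting down
        }
    }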
@tutorial_1300_h2
Fulltext Search
#Using a Connection Pool
@tutorial_1301_p
# H2 includes two fulltext search implementations. One is using Apache Lucene, and the other (the native implementation) stores the index data in special tables in the database.
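As an illustration of the native implementation (a sketch; it assumes a table TEST already exists in the PUBLIC schema and that the database URL is a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class NativeFulltextSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:~/demo", "sa", "");
                 Statement stat = conn.createStatement()) {
                // Initialize the native fulltext search and index all columns of TEST.
                stat.execute("CREATE ALIAS IF NOT EXISTS FT_INIT FOR \"org.h2.fulltext.FullText.init\"");
                stat.execute("CALL FT_INIT()");
                stat.execute("CALL FT_CREATE_INDEX('PUBLIC', 'TEST', NULL)");
                // Search afterwards with: SELECT * FROM FT_SEARCH('hello', 0, 0)
            }
        }
    }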