Commit dfd877e2, authored by Noel Grandin

When in cluster mode, and one of the nodes goes down, we need to log the
problem with priority "error", not "debug".

Parent: 555af2d5
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<!--
Copyright 2004-2014 H2 Group. Multiple-Licensed under the MPL 2.0, Version 1.0,
and under the Eclipse Public License, Version 1.0
Initial Developer: H2 Group
-->
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head><meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>
Change Log
</title>
<link rel="stylesheet" type="text/css" href="stylesheet.css" />
<!-- [search] { -->
<script type="text/javascript" src="navigation.js"></script>
</head><body onload="frameMe();">
<table class="content"><tr class="content"><td class="content"><div class="contentDiv">
<!-- } -->
<h1>Change Log</h1>
<h2>Next Version (unreleased)</h2>
<ul><li>Pull request #4: Creating and removing temporary tables was getting
slower and slower over time, because an internal object id was allocated but
never de-allocated.
</li><li>Issue 609: the spatial index did not support NULL with update and delete operations.
</li><li>Pull request #2: Add external metadata type support (table type "external")
</li><li>MS SQL Server: the CONVERT method did not work in views
and derived tables.
</li><li>Java 8 compatibility for "regexp_replace".
</li><li>When in cluster mode, and one of the nodes goes down,
the problem is now logged with priority "error", not "debug".
</li></ul>

<h2>Version 1.4.187 Beta (2015-04-10)</h2>
<ul><li>MVStore: concurrent changes to the same row could result in
the exception "The transaction log might be corrupt for key ...".
This could only be reproduced with 3 or more threads.
</li><li>Results with CLOB or BLOB data are no longer reused.
</li><li>References to BLOB and CLOB objects now have a timeout.
The configuration setting is LOB_TIMEOUT (default 5 minutes).
This should avoid growing the database file if there are many queries that return BLOB or CLOB objects,
and the database is not closed for a longer time.
</li><li>MVStore: when committing a session that removed LOB values,
changes were flushed unnecessarily.
</li><li>Issue 610: possible integer overflow in WriteBuffer.grow().
</li><li>Issue 609: the spatial index did not support NULL (ClassCastException).
</li><li>MVStore: in some cases, CLOB/BLOB data blocks were removed
incorrectly when opening a database.
</li><li>MVStore: updates that affected many rows were slow
in some cases if there was a secondary index.
</li><li>Using "runscript" with autocommit disabled could result
in a lock timeout on the internal table "SYS".
</li><li>Issue 603: there was a memory leak when using H2 in a web application.
Apache Tomcat logged an error message: "The web application ...
created a ThreadLocal with key of type [org.h2.util.DateTimeUtils$1]".
</li><li>When using the MVStore,
running a SQL script generated by the Recover tool from a PageStore file
failed with a strange error message (NullPointerException);
now a clear error message is shown.
</li><li>Issue 605: with version 1.4.186, opening a database could result in
an endless loop in LobStorageMap.init.
</li><li>Queries that use the same table alias multiple times now work.
Before, the select expression list was expanded incorrectly.
Example: "select * from a as x, b as x".
</li><li>The MySQL compatibility feature "insert ... on duplicate key update"
did not work with a non-default schema.
</li><li>Issue 599: the condition "in(x, y)" could not be used in the select list
when using "group by".
</li><li>The LIRS cache could grow larger than the allocated memory.
</li><li>A new file system implementation that re-opens the file if it was closed due
to the application calling Thread.interrupt(). File name prefix "retry:".
Please note it is strongly recommended to avoid calling Thread.interrupt;
this is a problem for various libraries, including Apache Lucene.
</li><li>MVStore: use RandomAccessFile file system if the file name starts with "file:".
</li><li>Allow DATEADD to take a long value for count when manipulating milliseconds.
</li><li>When using MV_STORE=TRUE and the SET CACHE_SIZE setting, the cache size was incorrectly set,
so that it was effectively 1024 times smaller than it should be.
</li><li>Concurrent CREATE TABLE... IF NOT EXISTS in the presence of MULTI_THREAD=TRUE could
throw an exception.
</li><li>Fix bug in MVStore when creating lots of temporary tables, where we could run out of
transaction IDs.
</li><li>Add support for PostgreSQL STRING_AGG function. Patch by Fred Aquiles.
</li><li>Fix bug in "jdbc:h2:nioMemFS" isRoot() function.
Also, the page size was increased to 64 KB.
</li></ul>
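The integer overflow in WriteBuffer.grow() (Issue 610) is an instance of a classic pitfall when doubling buffer capacities in Java: once the capacity exceeds 2^30, doubling with plain int arithmetic wraps negative. The sketch below is illustrative only; the class and method names are hypothetical, not H2's actual implementation.

```java
// Hypothetical sketch of the overflow pattern behind Issue 610.
// Not H2's WriteBuffer: names and limits here are illustrative.
public class GrowSketch {
    // Common JDK-style upper bound for array sizes.
    static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Naive doubling: wraps to a negative value once capacity > 2^30.
    static int naiveGrow(int capacity) {
        return capacity * 2;
    }

    // Overflow-safe doubling: detect the wrap-around and clamp.
    static int safeGrow(int capacity) {
        int doubled = capacity << 1;
        if (doubled < 0 || doubled > MAX_ARRAY_SIZE) {
            return MAX_ARRAY_SIZE;
        }
        return doubled;
    }

    public static void main(String[] args) {
        System.out.println(naiveGrow(1 << 30)); // negative: the overflow
        System.out.println(safeGrow(1 << 30));  // clamped instead
    }
}
```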
<h2>Version 1.4.186 Beta (2015-03-02)</h2>
<ul><li>The Servlet API 3.0.1 is now used, instead of 2.4.
</li><li>MVStore: old chunks no longer removed in append-only mode.
</li><li>MVStore: the cache for page references could grow far too big, resulting in out of memory in some cases.
</li><li>MVStore: orphaned lob objects were not correctly removed in some cases,
making the database grow unnecessarily.
</li><li>MVStore: the maximum cache size was artificially limited to 2 GB
(due to an integer overflow).
</li><li>MVStore / TransactionStore: concurrent updates could result in a
"Too many open transactions" exception.
</li><li>StringUtils.toUpperEnglish now has a small cache.
This should speed up reading from a ResultSet when using the column name.
</li><li>MVStore: up to 65535 open transactions are now supported.
Previously, the limit was at most 65535 transactions between the oldest open and the
newest open transaction (which was quite a strange limit).
</li><li>The default limit for in-place LOB objects was changed from 128 to 256 bytes.
This is because each read creates a reference to a LOB, and maintaining the references
is a big overhead. With the higher limit, fewer references are needed.
</li><li>Tables without columns didn't work.
(The use case for such tables is testing.)
</li><li>The LIRS cache now resizes the table automatically in all cases
and no longer needs the averageMemory configuration.
</li><li>Creating a linked table from an MVStore database to a non-MVStore database
created a second (non-MVStore) database file.
</li><li>In version 1.4.184, a bug was introduced that broke queries
that have both joins and wildcards, for example:
select * from dual join(select x from dual) on 1=1
</li><li>Issue 598: parser fails on timestamp "24:00:00.1234" - prevent the creation of out-of-range time values.
</li><li>Allow declaring triggers as source code (like functions). Patch by Sylvain Cuaz.
</li><li>Make the planner use indexes for sorting when doing a GROUP BY where
all of the GROUP BY columns are not mentioned in the select. Patch by Frederico (zepfred).
</li><li>PostgreSQL compatibility: generate_series (as an alias for system_range). Patch by litailang.
</li><li>Fix missing "column" type in right-hand parameter in ConditionIn. Patch by Arnaud Thimel.
</li></ul>
<h2>Version 1.4.185 Beta (2015-01-16)</h2>
<ul><li>In version 1.4.184, "group by" ignored the table name,
and could pick a select column by mistake.
Example: select 0 as x from system_range(1, 2) d group by d.x;
</li><li>New connection setting "REUSE_SPACE" (default: true). If disabled,
all changes are appended to the database file, and existing content is never overwritten.
This allows rolling back to a previous state of the database by truncating
the database file.
</li><li>Issue 587: MVStore: concurrent compaction and store operations could result in an IllegalStateException.
</li><li>Issue 594: Profiler.copyInThread does not work properly.
</li><li>Script tool: Now, SCRIPT ... TO is always used (for higher speed and lower disk space usage).
</li><li>Script tool: Fix parsing of BLOCKSIZE parameter, original patch by Ken Jorissen.
</li><li>Fix bug in PageStore#commit method - when the ignoreBigLog flag was set,
the logic that cleared the flag could never be reached, resulting in performance degradation.
Reported by Alexander Nesterov.
</li><li>Issue 552: Implement BIT_AND and BIT_OR aggregate functions.
</li></ul>
<h2>Version 1.4.184 Beta (2014-12-19)</h2>
<ul><li>In version 1.3.183, indexes were not used if the table contains
columns with a default value generated by a sequence.
This includes tables with identity and auto-increment columns.
This bug was introduced by supporting "rownum" in views and derived tables.
</li><li>MVStore: imported BLOB and CLOB data sometimes disappeared.
This was caused by a bug in the ObjectDataType comparison.
</li><li>Reading from a StreamStore now throws an
IOException if the underlying data doesn't exist.
</li><li>MVStore: if there is an exception while saving, the store is now in all cases immediately closed.
</li><li>MVStore: the dump tool could go into an endless loop for some files.
</li><li>MVStore: recovery for a database with many CLOB or BLOB entries is now much faster.
</li><li>Group by with a quoted select column name alias didn't work. Example:
select 1 "a" from dual group by "a"
</li><li>Auto-server mode: the host name is now stored in the .lock.db file.
</li></ul>
<h2>Version 1.4.183 Beta (2014-12-13)</h2>
<ul><li>MVStore: the default auto-commit buffer size is now about twice as big.
This should reduce the database file size after inserting a lot of data.
</li><li>The built-in functions "power" and "radians" now always return a double.
</li><li>Using "row_number" or "rownum" in views or derived tables had unexpected results
if the outer query contained constraints for the given view. Example:
select b.nr, b.id from (select row_number() over() as nr, a.id as id
from (select id from test order by name) as a) as b where b.id = 1
</li><li>MVStore: the Recover tool can now deal with more types of corruption in the file.
</li><li>MVStore: the TransactionStore now first needs to be initialized before it can be used.
</li><li>Views and derived tables with equality and range conditions on the same columns
did not work properly. Example: select x from (select x from (select 1 as x)
where x &gt; 0 and x &lt; 2) where x = 1
</li><li>The database URL setting PAGE_SIZE is now also used for the MVStore.
</li><li>MVStore: the default page split size for persistent stores is now 4096
(it was 16 KB so far). This should reduce the database file size for most situations
(in some cases, less than half the size of the previous version).
</li><li>With query literals disabled, auto-analyze of a table with CLOB or BLOB did not work.
</li><li>MVStore: use a mark and sweep GC algorithm instead of reference counting,
to ensure used chunks are never overwritten, even if the reference counting
algorithm does not work properly.
</li><li>In the multi-threaded mode, updating the column selectivity ("analyze")
in the background sometimes did not work.
</li><li>In the multi-threaded mode, database metadata operations
did sometimes not work if the schema was changed at the same time
(for example, if tables were dropped).
</li><li>Some CLOB and BLOB values could no longer be read when
the original row was removed (even when using the MVCC mode).
</li><li>The MVStoreTool could throw an IllegalArgumentException.
</li><li>Improved performance for some
date / time / timestamp conversion operations.
Thanks to Sergey Evdokimov for reporting the problem.
</li><li>H2 Console: the built-in web server did not work properly
if an unknown file was requested.
</li><li>MVStore: the jar file is renamed to "h2-mvstore-*.jar" and is
deployed to Maven separately.
</li><li>MVStore: support for concurrent reads and writes is now enabled by default.
</li><li>Server mode: the transfer buffer size has been changed from 16 KB to 64 KB,
after it was found that this improves performance on Linux quite a lot.
</li><li>H2 Console and server mode: SSL is now disabled and TLS is used
to protect against the Poodle SSLv3 vulnerability.
The system property to disable secure anonymous connections is now
"h2.enableAnonymousTLS".
The default certificate is still self-signed, so you need to manually install
another one if you want to avoid man in the middle attacks.
</li><li>MVStore: the R-tree did not correctly measure the memory usage.
</li><li>MVStore: compacting a store with an R-tree did not always work.
</li><li>Issue 581: When running in LOCK_MODE=0,
JdbcDatabaseMetaData#supportsTransactionIsolationLevel(TRANSACTION_READ_UNCOMMITTED)
should return false.
</li><li>Fix bug which could generate deadlocks when multiple connections accessed the same table.
</li><li>Some places in the code were not respecting the value set in the "SET MAX_MEMORY_ROWS x" command.
</li><li>Fix bug which could generate a NegativeArraySizeException when performing large (&gt;40M) row union operations.
</li><li>Fix "USE schema" command for MySQL compatibility, patch by mfulton.
</li><li>Parse and ignore the ROW_FORMAT=DYNAMIC MySQL syntax, patch by mfulton.
</li></ul>
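The mark-and-sweep change above can be pictured as a reachability walk: rather than keeping per-chunk reference counts (which can drift out of sync), start from the live roots and keep only the chunks you can reach. The sketch below is a generic illustration of that idea, not MVStore's actual data structures or API.

```java
// Generic mark phase of mark-and-sweep, illustrating the approach the
// changelog entry describes. "Chunks" and their references are modelled
// as a simple adjacency map; everything unreachable from the root is
// garbage and may be reused safely.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MarkSweepSketch {
    // refs: chunk id -> ids of chunks it references.
    static Set<Integer> reachable(Map<Integer, List<Integer>> refs, int root) {
        Set<Integer> marked = new HashSet<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            int id = stack.pop();
            if (marked.add(id)) { // mark once, then follow its references
                for (int next : refs.getOrDefault(id, List.of())) {
                    stack.push(next);
                }
            }
        }
        return marked;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> refs = new HashMap<>();
        refs.put(1, List.of(2, 3)); // root chunk 1 references chunks 2 and 3
        refs.put(2, List.of(3));
        refs.put(4, List.of(1));    // chunk 4 is unreachable: garbage
        Set<Integer> live = reachable(refs, 1);
        System.out.println(live);   // chunks 1, 2, 3 survive; 4 can be reused
    }
}
```

The advantage named in the entry follows directly: even if a reference count were wrong, a chunk is only discarded when the walk from live roots cannot reach it.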
<h2>Version 1.4.182 Beta (2014-10-17)</h2>
<ul><li>MVStore: improved error messages and logging;
improved behavior if there is an error when serializing objects.
</li><li>OSGi: the MVStore packages are now exported.
</li><li>With the MVStore option, when using multiple threads
that concurrently create indexes or tables,
it was relatively easy to get a lock timeout on the "SYS" table.
</li><li>When using the multi-threaded option, the exception
"Unexpected code path" could be thrown, especially if the option
"analyze_auto" was set to a low value.
</li><li>In the server mode, when reading from a CLOB or BLOB, if the connection
was closed, a NullPointerException could be thrown instead of an exception saying
the connection is closed.
</li><li>DatabaseMetaData.getProcedures and getProcedureColumns
could throw an exception if a user defined class is not available.
</li><li>Issue 584: the error message for a wrong sequence definition was wrong.
</li><li>CSV tool: the rowSeparator option is no longer supported,
as the same can be achieved with the lineSeparator.
</li><li>Descending indexes on MVStore tables did not work properly.
</li><li>Issue 579: Conditions on the "_rowid_" pseudo-column didn't use an index
when using the MVStore.
</li><li>Fixed documentation that "offset" and "fetch" are also keywords since version 1.4.x.
</li><li>The Long.MIN_VALUE could not be parsed for auto-increment (identity) columns.
</li><li>Issue 573: Add implementation for Methods "isWrapperFor()" and "unwrap()" in
other JDBC classes.
</li><li>Issue 572: MySQL compatibility for "order by" in update statements.
</li><li>The change in JDBC escape processing in version 1.4.181 affects both the parser
(which is running on the server) and the JDBC API (which is running on the client).
If you (or a tool you use) use the syntax "{t 'time'}", or "{ts 'timestamp'}", or "{d 'date'}",
then both the client and the server need to be upgraded to version 1.4.181 or later.
</li></ul>
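For readers unfamiliar with the JDBC escape syntax mentioned in the last entry: {ts '...'}, {d '...'} and {t '...'} wrap timestamp, date, and time literals, and the quoted values use the same formats as the corresponding java.sql classes. A minimal stand-alone illustration (no H2 connection involved; the literal values are arbitrary examples):

```java
// The value formats accepted inside {ts ...}, {d ...} and {t ...} escapes
// match java.sql.Timestamp/Date/Time.valueOf, so the escapes can be read
// as shorthand for these typed literals.
import java.sql.Date;
import java.sql.Time;
import java.sql.Timestamp;

public class EscapeSyntaxDemo {
    public static void main(String[] args) {
        // {ts '2014-10-17 12:30:00'} denotes this timestamp value:
        Timestamp ts = Timestamp.valueOf("2014-10-17 12:30:00");
        // {d '2014-10-17'} denotes this date value:
        Date d = Date.valueOf("2014-10-17");
        // {t '12:30:00'} denotes this time value:
        Time t = Time.valueOf("12:30:00");
        System.out.println(ts + " / " + d + " / " + t);
    }
}
```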
<h2>Version 1.4.181 Beta (2014-08-06)</h2> </li></ul>
<ul><li>Improved MySQL compatibility by supporting "use schema".
<h2>Version 1.4.181 Beta (2014-08-06)</h2>
<ul><li>Improved MySQL compatibility by supporting "use schema".
Thanks a lot to Karl Pietrzak for the patch!
</li><li>Writing to the trace file is now faster, especially with the debug level.
</li><li>The database option "defrag_always=true" did not work with the MVStore.
</li><li>The JDBC escape syntax {ts 'value'} did not interpret the value as a timestamp.
The same applies to {d 'value'} (for date) and {t 'value'} (for time).
Thanks to Lukas Eder for reporting the issue.
The following problem was detected after version 1.4.181 was released:
the change in JDBC escape processing affects both the parser (which runs on the server)
and the JDBC API (which runs on the client).
If you (or a tool you use) use the syntax {t 'time'}, {ts 'timestamp'}, or {d 'date'},
then both the client and the server need to be upgraded to version 1.4.181 or later.
</li><li>File system abstraction: support replacing existing files using move
(currently not on Windows).
</li><li>The statement "shutdown defrag" now compresses the database (with the MVStore).
This command can greatly reduce the file size, and is relatively fast,
but is not incremental.
</li><li>The MVStore now automatically compacts the store in the background if there is no read or write activity,
which should (after some time; sometimes about one minute) reduce the file size.
This is still work in progress; feedback is welcome!
</li><li>Change the default value of PAGE_SIZE from 2048 to 4096 to more closely match the block size of most file systems
(PageStore only; the MVStore already used 4096).
</li><li>Auto-scale the MAX_MEMORY_ROWS and CACHE_SIZE settings by the amount of available RAM. This gives a better
out-of-the-box experience for people with more powerful machines.
</li><li>Handle tabs like 4 spaces in the web console, patch by Martin Grajcar.
</li><li>Issue 573: Add implementations for the methods "isWrapperFor()" and "unwrap()" in JdbcConnection.java,
patch by BigMichi1.
</li></ul>

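As a reminder of the literal formats the {ts}, {d}, and {t} escapes above carry, the standard java.sql classes accept the same strings. This is a minimal, self-contained sketch of those formats, not H2-specific code:

```java
import java.sql.Date;
import java.sql.Time;
import java.sql.Timestamp;

public class JdbcEscapeLiterals {
    public static void main(String[] args) {
        // The value inside {ts '...'} uses the JDBC timestamp literal format,
        // the same format accepted by java.sql.Timestamp.valueOf().
        Timestamp ts = Timestamp.valueOf("2014-08-06 12:30:00");
        // {d '...'} uses the date literal format (yyyy-mm-dd).
        Date d = Date.valueOf("2014-08-06");
        // {t '...'} uses the time literal format (hh:mm:ss).
        Time t = Time.valueOf("12:30:00");
        System.out.println(ts + " / " + d + " / " + t);
    }
}
```

The bug fixed above was that H2 returned such a value as a plain string instead of the corresponding temporal type.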
<h2>Version 1.4.180 Beta (2014-07-13)</h2>
<ul><li>MVStore: the store is now automatically compacted up to some point,
to avoid very large file sizes. This area is still work in progress.
</li><li>Sequences of temporary tables (auto-increment or identity columns)
were persisted unnecessarily in the database file, and were not removed
when re-opening the database.
</li><li>MVStore: an IndexOutOfBoundsException could sometimes
occur in MVMap.openVersion when concurrently accessing the store.
</li><li>The LIRS cache now re-sizes the internal hash map if needed.
</li><li>Optionally persist session history in the H2 console (patch from Martin Grajcar).
</li><li>Add a client-info property to get the number of servers currently in the cluster
and which servers are available (patch from Nikolaj Fogh).
</li><li>Fix a bug in changing the encrypted database password that kept the file handle
open when the wrong password was supplied (test case from Jens Hohmuth).
</li><li>Issue 567: H2 hangs for a long time, then (sometimes) recovers.
Introduce a queue when doing table locking to prevent session starvation.
</li></ul>

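The starvation fix for Issue 567 works by queueing lock requests so they are granted in arrival order. The general technique can be sketched with a fair `java.util.concurrent` lock, which hands the lock to the longest-waiting thread; this is an illustration of the idea only, not H2's actual locking code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockSketch {
    // A fair lock grants access in FIFO order, so no waiting
    // thread (session) can be starved by later arrivals.
    private static final ReentrantLock TABLE_LOCK = new ReentrantLock(true);

    public static void withTableLock(Runnable work) {
        TABLE_LOCK.lock(); // joins the wait queue; FIFO hand-off
        try {
            work.run();
        } finally {
            TABLE_LOCK.unlock();
        }
    }

    public static void main(String[] args) {
        withTableLock(() -> System.out.println("fair=" + TABLE_LOCK.isFair()));
    }
}
```

An unfair lock (the default, `new ReentrantLock()`) allows barging, which is exactly how a busy session can keep starving others.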
<h2>Version 1.4.179 Beta (2014-06-23)</h2>
<ul><li>The license was changed to MPL 2.0 (from 1.0) and EPL 1.0.
</li><li>Issue 565: MVStore: concurrently adding LOB objects
(with the MULTI_THREADED option) resulted in a NullPointerException.
</li><li>MVStore: reduced dependencies on other H2 classes.
</li><li>There was a way to prevent a database from being re-opened,
by creating a column constraint that references a table with a higher id,
for example with "check" constraints that contain queries.
This is now detected, and creating such a table is prohibited.
In future versions of H2, creating references to other
tables will most likely no longer be supported because of such problems.
</li><li>MVStore: descending indexes with "nulls first" did not work as expected
(null was ordered last).
</li><li>Large result sets now always create temporary tables instead of temporary files.
</li><li>When using the PageStore, opening a database failed in some cases with a NullPointerException
if temporary tables were used (explicitly, or implicitly when using large result sets).
</li><li>If a database file in the PageStore file format exists, this file and this mode
are now used, even if the database URL does not contain "MV_STORE=FALSE".
If an MVStore file exists, it is used.
</li><li>Databases created with version 1.3.175 and earlier
that contained foreign keys in combination with multi-column indexes
could not be opened in some cases.
This was due to a bugfix in version 1.3.176:
referential integrity constraints sometimes used the wrong index.
</li><li>MVStore: the ObjectDataType comparison method was incorrect if one
key was Serializable and the other was of a common class.
</li><li>Recursive queries with many result rows (more than the setting "max_memory_rows")
did not work correctly.
</li><li>The license has changed to MPL 2.0 + EPL 1.0.
</li><li>MVStore: temporary tables from result sets could survive re-opening a database,
which could result in a ClassCastException.
</li><li>Issue 566: MVStore: unique indexes that were created later on did not work correctly
if there were over 5000 rows in the table.
Existing databases need to be re-created (at least the broken indexes need to be re-built).
</li><li>MVStore: creating secondary indexes on large tables
resulted in missing rows in the index.
</li><li>Metadata: the password of linked tables is now only visible for admin users.
</li><li>For Windows, database URLs of the form "jdbc:h2:/test" were considered
relative and did not work unless the system property "h2.implicitRelativePath" was used.
</li><li>Windows: using a base directory of "C:/" and similar did not work as expected.
</li><li>Follow the JDBC specification on procedure metadata: use P0 as
the return type of a procedure.
</li><li>Issue 531: IDENTITY ignored for added column.
</li><li>FileSystem: improve exception throwing compatibility with the JDK.
</li><li>Spatial index: adjust costs so the spatial index is not used if the
query does not contain an intersects operator.
</li><li>Fix a multi-threaded deadlock when using a view that includes a TableFunction.
</li><li>Fix a bug in dividing very small BigDecimal numbers.
</li></ul>

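The "nulls first" entry above concerns how null ordering interacts with a descending sort: with descending order and "nulls first", null must sort before the largest value, whereas the bug placed it last. The expected semantics can be stated with a plain Java comparator (illustrative only):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullsFirstDescending {
    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>(Arrays.asList(3, null, 1, 2));
        // Descending order with nulls first: null sorts before the
        // largest value (the bug ordered it after the smallest one).
        values.sort(Comparator.nullsFirst(Comparator.<Integer>reverseOrder()));
        System.out.println(values); // [null, 3, 2, 1]
    }
}
```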
<h2>Version 1.4.178 Beta (2014-05-02)</h2>
<ul><li>Issue 559: Make the dependency on org.osgi.service.jdbc optional.
</li><li>Improve the error message when the user specifies an unsupported combination of database settings.
</li><li>MVStore: in multi-threaded mode, a NullPointerException and other exceptions could occur.
</li><li>MVStore: some database files could not be compacted due to a bug in
the bookkeeping of the fill rate. Also, database files were compacted quite slowly.
This has been improved, but more changes in this area are expected.
</li><li>MVStore: support for volatile maps (that don't store changes).
</li><li>MVStore mode: in-memory databases now also use the MVStore.
</li><li>In server mode, appending ";autocommit=false" to the database URL was working,
but the return value of Connection.getAutoCommit() was wrong.
</li><li>Issue 561: OSGi: the import package declaration of org.h2 excluded version 1.4.
</li><li>Issue 558: with the MVStore, a NullPointerException could occur when using LOBs
at session commit (LobStorageMap.removeLob).
</li><li>Remove the "h2.MAX_MEMORY_ROWS_DISTINCT" system property to reduce confusion.
We already have the MAX_MEMORY_ROWS setting, which does a very similar thing and is better documented.
</li><li>Issue 554: the Web Console in an IFrame was not fully supported.
</li></ul>

<h2>Version 1.4.177 Beta (2014-04-12)</h2>
<ul><li>By default, the MV_STORE option is enabled, so the new MVStore
storage is used. The MVCC setting defaults to the same value as the MV_STORE setting,
so it is also enabled by default. For testing, both settings can be disabled by appending
";MV_STORE=FALSE" and/or ";MVCC=FALSE" to the database URL.
</li><li>The file locking method 'serialized' is no longer supported.
This mode might return in a future version,
however this is not clear right now.
A new implementation and new tests would be needed.
</li><li>Enable the new storage format for dates (system property "h2.storeLocalTime").
For the MVStore mode, this is always enabled; with version 1.4
it is enabled in the PageStore mode as well.
</li><li>Implicit relative paths are disabled (system property "h2.implicitRelativePath"),
so the database URL jdbc:h2:test now needs to be written as jdbc:h2:./test.
</li><li>"select ... fetch first 1 row only" is supported in the regular mode.
This was disabled so far because "fetch" and "offset" are now keywords.
See also Mode.supportOffsetFetch.
</li><li>Byte arrays are now sorted in unsigned mode
(x'99' is larger than x'09').
(System property "h2.sortBinaryUnsigned", Mode.binaryUnsigned, setting "binary_collation".)
</li><li>Csv.getInstance will be removed in future versions of 1.4.
Use the public constructor instead.
</li><li>Remove support for the limited old-style outer join syntax using "(+)".
Use "outer join" instead.
(System property "h2.oldStyleOuterJoin".)
</li><li>Support the data type "DATETIME2" as an alias for "DATETIME", for MS SQL Server compatibility.
</li><li>Add an Oracle-compatible TRANSLATE function, patch by Eric Chatellier.
</li></ul>

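The signed-versus-unsigned distinction above matters because Java's byte type is signed: (byte) 0x99 is negative, so a naive comparison puts x'99' before x'09'. Masking with 0xff yields the unsigned order that is now the default. A minimal sketch:

```java
public class UnsignedByteCompare {
    // Compare two bytes as unsigned values in the range 0..255.
    static int compareUnsigned(byte a, byte b) {
        return Integer.compare(a & 0xff, b & 0xff);
    }

    public static void main(String[] args) {
        byte high = (byte) 0x99; // -103 when treated as signed
        byte low = (byte) 0x09;  //    9 when treated as signed
        System.out.println("signed:   " + Integer.compare(high, low)); // negative: x'99' < x'09'
        System.out.println("unsigned: " + compareUnsigned(high, low)); // positive: x'99' > x'09'
    }
}
```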
<h2>Version 1.3.176 (2014-04-05)</h2>
<ul><li>The file locking method 'serialized' is no longer documented,
as it will not be available in version 1.4.
</li><li>The static method Csv.getInstance() was removed.
Use the public constructor instead.
</li><li>The default user name for the Script, RunScript, Shell,
and CreateCluster tools is no longer "sa" but an empty string.
</li><li>The stack trace of the exception "The object is already closed" is no longer logged by default.
</li><li>If a value of a result set was itself a result set, the result
could only be read once.
</li><li>Column constraints are also visible in views (patch from Nicolas Fortin for H2GIS).
</li><li>Granting an additional right to a role that already had a right for that table was not working.
</li><li>Spatial index: a few bugs have been fixed (using spatial constraints in views,
transferring geometry objects over TCP/IP; the returned geometry object is copied when needed).
</li><li>Issue 551: the data type documentation was incorrect (found by Bernd Eckenfels).
</li><li>Issue 368: ON DUPLICATE KEY UPDATE did not work for multi-row inserts.
Test case from Angus Macdonald.
</li><li>OSGi: the package javax.tools is now imported (as optional).
</li><li>H2 Console: auto-complete is now disabled by default, but there is a hot-key (Ctrl+Space).
</li><li>H2 Console: auto-complete did not work with multi-line statements.
</li><li>CLOB and BLOB data was not immediately removed after a rollback.
</li><li>There is a new Aggregate API that supports the internal H2 data types
(GEOMETRY for example). Thanks a lot to Nicolas Fortin for the patch!
</li><li>Referential integrity constraints sometimes used the wrong index,
such that updating a row in the referenced table incorrectly failed with
a constraint violation.
</li><li>The Polish translation was completed and corrected by Wojtek Jurczyk. Thanks a lot!
</li><li>Issue 545: Unnecessary duplicate code was removed.
</li><li>The profiler tool can now process files with full thread dumps.
</li><li>MVStore: the file format was changed slightly.
</li><li>MVStore mode: the CLOB and BLOB storage was re-implemented and is
now much faster than with the PageStore (which is still the default storage).
</li><li>MVStore mode: creating indexes is now much faster
(in many cases faster than with the default PageStore).
</li><li>Various bugs in the MVStore storage have been fixed,
including a bug in the R-tree implementation.
The database could get corrupted if there were transient IO exceptions while storing.
</li><li>The method org.h2.expression.Function.getCost could throw a NullPointerException.
</li><li>Storing LOBs in separate files (outside of the main database file)
is no longer supported for new databases.
</li><li>Lucene 2 is no longer supported.
</li><li>Fix a bug in calculating the default MIN and MAX values for a SEQUENCE.
</li><li>Fix a bug in performing IN queries with multiple values when IGNORECASE=TRUE.
</li><li>Add an entry point to org.h2.tools.Shell so it can be called from inside an application,
patch by Thomas Gillet.
</li><li>Fix a bug that prevented the PgServer from being stopped and started multiple times.
</li><li>Support some more DDL syntax for MySQL, patch from Peter Jentsch.
</li><li>Issue 548: TO_CHAR did not format MM and DD correctly when the month or day of
the month is 1 digit, patch from "the.tucc".
</li><li>Fix a bug in varargs support in ALIASes, patch from Nicolas Fortin.
</li></ul>

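The TO_CHAR fix above (Issue 548) is about zero-padding: the MM and DD patterns must always produce two digits, even for single-digit months and days. In Java, the equivalent padding is the %02d format specifier; this illustrates the expected behaviour, not H2's implementation:

```java
public class ToCharPadding {
    public static void main(String[] args) {
        int month = 4, day = 5; // 2014-04-05: both are single-digit
        // TO_CHAR's MM and DD patterns must zero-pad to two digits:
        String padded = String.format("%02d-%02d", month, day);
        String broken = month + "-" + day; // the buggy, unpadded form
        System.out.println(padded + " vs " + broken); // 04-05 vs 4-5
    }
}
```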
<!-- [close] { --></div></td></tr></table><!-- } --><!-- analytics --></body></html>
/*
 * Copyright 2004-2014 H2 Group. Multiple-Licensed under the MPL 2.0,
 * and the EPL 1.0 (http://h2database.com/html/license.html).
 * Initial Developer: H2 Group
 */
package org.h2.engine;

import java.io.IOException;
import java.net.Socket;
import java.util.ArrayList;

import org.h2.api.DatabaseEventListener;
import org.h2.api.ErrorCode;
import org.h2.api.JavaObjectSerializer;
import org.h2.command.CommandInterface;
import org.h2.command.CommandRemote;
import org.h2.command.dml.SetTypes;
import org.h2.jdbc.JdbcSQLException;
import org.h2.message.DbException;
import org.h2.message.Trace;
import org.h2.message.TraceSystem;
import org.h2.result.ResultInterface;
import org.h2.store.DataHandler;
import org.h2.store.FileStore;
import org.h2.store.LobStorageFrontend;
import org.h2.store.LobStorageInterface;
import org.h2.store.fs.FileUtils;
import org.h2.util.JdbcUtils;
import org.h2.util.MathUtils;
import org.h2.util.NetUtils;
import org.h2.util.New;
import org.h2.util.SmallLRUCache;
import org.h2.util.StringUtils;
import org.h2.util.TempFileDeleter;
import org.h2.value.Transfer;
import org.h2.value.Value;

/**
 * The client side part of a session when using the server mode. This object
 * communicates with a Session on the server side.
 */
public class SessionRemote extends SessionWithState implements DataHandler {

    public static final int SESSION_PREPARE = 0;
    public static final int SESSION_CLOSE = 1;
    public static final int COMMAND_EXECUTE_QUERY = 2;
    public static final int COMMAND_EXECUTE_UPDATE = 3;
    public static final int COMMAND_CLOSE = 4;
    public static final int RESULT_FETCH_ROWS = 5;
    public static final int RESULT_RESET = 6;
    public static final int RESULT_CLOSE = 7;
    public static final int COMMAND_COMMIT = 8;
    public static final int CHANGE_ID = 9;
    public static final int COMMAND_GET_META_DATA = 10;
    public static final int SESSION_PREPARE_READ_PARAMS = 11;
    public static final int SESSION_SET_ID = 12;
    public static final int SESSION_CANCEL_STATEMENT = 13;
    public static final int SESSION_CHECK_KEY = 14;
    public static final int SESSION_SET_AUTOCOMMIT = 15;
    public static final int SESSION_HAS_PENDING_TRANSACTION = 16;
    public static final int LOB_READ = 17;

    public static final int STATUS_ERROR = 0;
    public static final int STATUS_OK = 1;
    public static final int STATUS_CLOSED = 2;
    public static final int STATUS_OK_STATE_CHANGED = 3;

    private static SessionFactory sessionFactory;

    private TraceSystem traceSystem;
    private Trace trace;
    private ArrayList<Transfer> transferList = New.arrayList();
    private int nextId;
    private boolean autoCommit = true;
    private CommandInterface autoCommitFalse, autoCommitTrue;
    private ConnectionInfo connectionInfo;
    private String databaseName;
    private String cipher;
    private byte[] fileEncryptionKey;
    private final Object lobSyncObject = new Object();
    private String sessionId;
    private int clientVersion;
    private boolean autoReconnect;
    private int lastReconnect;
    private SessionInterface embedded;
    private DatabaseEventListener eventListener;
    private LobStorageFrontend lobStorage;
    private boolean cluster;
    private TempFileDeleter tempFileDeleter;
    private JavaObjectSerializer javaObjectSerializer;
    private volatile boolean javaObjectSerializerInitialized;

    public SessionRemote(ConnectionInfo ci) {
        this.connectionInfo = ci;
    }

    @Override
    public ArrayList<String> getClusterServers() {
        ArrayList<String> serverList = new ArrayList<String>();
        for (int i = 0; i < transferList.size(); i++) {
            Transfer transfer = transferList.get(i);
            serverList.add(transfer.getSocket().getInetAddress().
                    getHostAddress() + ":" +
                    transfer.getSocket().getPort());
        }
        return serverList;
    }

    private Transfer initTransfer(ConnectionInfo ci, String db, String server)
            throws IOException {
        Socket socket = NetUtils.createSocket(server,
                Constants.DEFAULT_TCP_PORT, ci.isSSL());
        Transfer trans = new Transfer(this);
        trans.setSocket(socket);
        trans.setSSL(ci.isSSL());
        trans.init();
        trans.writeInt(Constants.TCP_PROTOCOL_VERSION_6);
        trans.writeInt(Constants.TCP_PROTOCOL_VERSION_15);
        trans.writeString(db);
        trans.writeString(ci.getOriginalURL());
        trans.writeString(ci.getUserName());
        trans.writeBytes(ci.getUserPasswordHash());
        trans.writeBytes(ci.getFilePasswordHash());
        String[] keys = ci.getKeys();
        trans.writeInt(keys.length);
        for (String key : keys) {
            trans.writeString(key).writeString(ci.getProperty(key));
        }
        try {
            done(trans);
            clientVersion = trans.readInt();
            trans.setVersion(clientVersion);
            if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_14) {
                if (ci.getFileEncryptionKey() != null) {
                    trans.writeBytes(ci.getFileEncryptionKey());
                }
            }
            trans.writeInt(SessionRemote.SESSION_SET_ID);
            trans.writeString(sessionId);
            done(trans);
            if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_15) {
                autoCommit = trans.readBoolean();
            } else {
                autoCommit = true;
            }
            return trans;
        } catch (DbException e) {
            trans.close();
            throw e;
        }
    }

    @Override
    public boolean hasPendingTransaction() {
        if (clientVersion < Constants.TCP_PROTOCOL_VERSION_10) {
            return true;
        }
        for (int i = 0, count = 0; i < transferList.size(); i++) {
            Transfer transfer = transferList.get(i);
            try {
                traceOperation("SESSION_HAS_PENDING_TRANSACTION", 0);
                transfer.writeInt(
                        SessionRemote.SESSION_HAS_PENDING_TRANSACTION);
                done(transfer);
                return transfer.readInt() != 0;
            } catch (IOException e) {
                removeServer(e, i--, ++count);
            }
        }
        return true;
    }

    @Override
    public void cancel() {
        // this method is called when closing the connection
        // the statement that is currently running is not canceled in this case
        // however Statement.cancel is supported
    }

    /**
     * Cancel the statement with the given id.
     *
     * @param id the statement id
     */
    public void cancelStatement(int id) {
        for (Transfer transfer : transferList) {
            try {
                Transfer trans = transfer.openNewConnection();
                trans.init();
                trans.writeInt(clientVersion);
                trans.writeInt(clientVersion);
                trans.writeString(null);
                trans.writeString(null);
                trans.writeString(sessionId);
                trans.writeInt(SessionRemote.SESSION_CANCEL_STATEMENT);
                trans.writeInt(id);
                trans.close();
            } catch (IOException e) {
                trace.debug(e, "could not cancel statement");
            }
        }
    }

    private void checkClusterDisableAutoCommit(String serverList) {
        if (autoCommit && transferList.size() > 1) {
            setAutoCommitSend(false);
            CommandInterface c = prepareCommand(
                    "SET CLUSTER " + serverList, Integer.MAX_VALUE);
            // this will set autoCommit to false
            c.executeUpdate();
            // so we need to switch it on
            autoCommit = true;
            cluster = true;
        }
    }
    @Override
    public boolean getAutoCommit() {
        return autoCommit;
    }

    @Override
    public void setAutoCommit(boolean autoCommit) {
        if (!cluster) {
            setAutoCommitSend(autoCommit);
        }
        this.autoCommit = autoCommit;
    }

    public void setAutoCommitFromServer(boolean autoCommit) {
        if (cluster) {
            if (autoCommit) {
                // the user executed SET AUTOCOMMIT TRUE
                setAutoCommitSend(false);
                this.autoCommit = true;
            }
        } else {
            this.autoCommit = autoCommit;
        }
    }

    private void setAutoCommitSend(boolean autoCommit) {
        if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_8) {
            for (int i = 0, count = 0; i < transferList.size(); i++) {
                Transfer transfer = transferList.get(i);
                try {
                    traceOperation("SESSION_SET_AUTOCOMMIT", autoCommit ? 1 : 0);
                    transfer.writeInt(SessionRemote.SESSION_SET_AUTOCOMMIT).
                            writeBoolean(autoCommit);
                    done(transfer);
                } catch (IOException e) {
                    removeServer(e, i--, ++count);
                }
            }
        } else {
            if (autoCommit) {
                if (autoCommitTrue == null) {
                    autoCommitTrue = prepareCommand(
                            "SET AUTOCOMMIT TRUE", Integer.MAX_VALUE);
                }
                autoCommitTrue.executeUpdate();
            } else {
                if (autoCommitFalse == null) {
                    autoCommitFalse = prepareCommand(
                            "SET AUTOCOMMIT FALSE", Integer.MAX_VALUE);
                }
                autoCommitFalse.executeUpdate();
            }
        }
    }
    /**
     * Calls COMMIT if the session is in cluster mode.
     */
    public void autoCommitIfCluster() {
        if (autoCommit && cluster) {
            // server side auto commit is off because of race conditions
            // (update set id=1 where id=0, but update set id=2 where id=0 is
            // faster)
            for (int i = 0, count = 0; i < transferList.size(); i++) {
                Transfer transfer = transferList.get(i);
                try {
                    traceOperation("COMMAND_COMMIT", 0);
                    transfer.writeInt(SessionRemote.COMMAND_COMMIT);
                    done(transfer);
                } catch (IOException e) {
                    removeServer(e, i--, ++count);
                }
            }
        }
    }

    private String getFilePrefix(String dir) {
        StringBuilder buff = new StringBuilder(dir);
        buff.append('/');
        for (int i = 0; i < databaseName.length(); i++) {
            char ch = databaseName.charAt(i);
            if (Character.isLetterOrDigit(ch)) {
                buff.append(ch);
            } else {
                buff.append('_');
            }
        }
        return buff.toString();
    }

    @Override
    public int getPowerOffCount() {
        return 0;
    }

    @Override
    public void setPowerOffCount(int count) {
        throw DbException.getUnsupportedException("remote");
    }
    /**
     * Open a new (remote or embedded) session.
     *
     * @param openNew whether to open a new session in any case
     * @return the session
     */
    public SessionInterface connectEmbeddedOrServer(boolean openNew) {
        ConnectionInfo ci = connectionInfo;
        if (ci.isRemote()) {
            connectServer(ci);
            return this;
        }
        // create the session using reflection,
        // so that the JDBC layer can be compiled without it
        boolean autoServerMode = Boolean.parseBoolean(
                ci.getProperty("AUTO_SERVER", "false"));
        ConnectionInfo backup = null;
        try {
            if (autoServerMode) {
                backup = ci.clone();
                connectionInfo = ci.clone();
            }
            if (openNew) {
                ci.setProperty("OPEN_NEW", "true");
            }
            if (sessionFactory == null) {
                sessionFactory = (SessionFactory) Class.forName(
                        "org.h2.engine.Engine").getMethod("getInstance").invoke(null);
            }
            return sessionFactory.createSession(ci);
        } catch (Exception re) {
            DbException e = DbException.convert(re);
            if (e.getErrorCode() == ErrorCode.DATABASE_ALREADY_OPEN_1) {
                if (autoServerMode) {
                    String serverKey = ((JdbcSQLException) e.getSQLException()).
                            getSQL();
                    if (serverKey != null) {
                        backup.setServerKey(serverKey);
                        // OPEN_NEW must be removed now, otherwise
                        // opening a session with AUTO_SERVER fails
                        // if another connection is already open
                        backup.removeProperty("OPEN_NEW", null);
                        connectServer(backup);
                        return this;
                    }
                }
            }
            throw e;
        }
    }
    private void connectServer(ConnectionInfo ci) {
        String name = ci.getName();
        if (name.startsWith("//")) {
            name = name.substring("//".length());
        }
        int idx = name.indexOf('/');
        if (idx < 0) {
            throw ci.getFormatException();
        }
        databaseName = name.substring(idx + 1);
        String server = name.substring(0, idx);
        traceSystem = new TraceSystem(null);
        String traceLevelFile = ci.getProperty(
                SetTypes.TRACE_LEVEL_FILE, null);
        if (traceLevelFile != null) {
            int level = Integer.parseInt(traceLevelFile);
            String prefix = getFilePrefix(
                    SysProperties.CLIENT_TRACE_DIRECTORY);
            try {
                traceSystem.setLevelFile(level);
                if (level > 0 && level < 4) {
                    String file = FileUtils.createTempFile(prefix,
                            Constants.SUFFIX_TRACE_FILE, false, false);
                    traceSystem.setFileName(file);
                }
            } catch (IOException e) {
                throw DbException.convertIOException(e, prefix);
            }
        }
        String traceLevelSystemOut = ci.getProperty(
                SetTypes.TRACE_LEVEL_SYSTEM_OUT, null);
        if (traceLevelSystemOut != null) {
            int level = Integer.parseInt(traceLevelSystemOut);
            traceSystem.setLevelSystemOut(level);
        }
        trace = traceSystem.getTrace(Trace.JDBC);
        String serverList = null;
        if (server.indexOf(',') >= 0) {
            serverList = StringUtils.quoteStringSQL(server);
            ci.setProperty("CLUSTER", Constants.CLUSTERING_ENABLED);
        }
        autoReconnect = Boolean.parseBoolean(ci.getProperty(
                "AUTO_RECONNECT", "false"));
        // AUTO_SERVER implies AUTO_RECONNECT
        boolean autoServer = Boolean.parseBoolean(ci.getProperty(
                "AUTO_SERVER", "false"));
        if (autoServer && serverList != null) {
            throw DbException
                    .getUnsupportedException("autoServer && serverList != null");
        }
        autoReconnect |= autoServer;
        if (autoReconnect) {
            String className = ci.getProperty("DATABASE_EVENT_LISTENER");
            if (className != null) {
                className = StringUtils.trim(className, true, true, "'");
                try {
                    eventListener = (DatabaseEventListener) JdbcUtils
                            .loadUserClass(className).newInstance();
                } catch (Throwable e) {
                    throw DbException.convert(e);
                }
            }
        }
        cipher = ci.getProperty("CIPHER");
        if (cipher != null) {
            fileEncryptionKey = MathUtils.secureRandomBytes(32);
        }
        String[] servers = StringUtils.arraySplit(server, ',', true);
        int len = servers.length;
        transferList.clear();
        sessionId = StringUtils.convertBytesToHex(MathUtils.secureRandomBytes(32));
        // TODO cluster: support more than 2 connections
        boolean switchOffCluster = false;
        try {
            for (int i = 0; i < len; i++) {
                String s = servers[i];
                try {
                    Transfer trans = initTransfer(ci, databaseName, s);
                    transferList.add(trans);
                } catch (IOException e) {
                    if (len == 1) {
                        throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, e, e + ": " + s);
                    }
                    switchOffCluster = true;
                }
            }
            checkClosed();
            if (switchOffCluster) {
                switchOffCluster();
            }
            checkClusterDisableAutoCommit(serverList);
        } catch (DbException e) {
            traceSystem.close();
            throw e;
        }
    }
    private void switchOffCluster() {
        CommandInterface ci = prepareCommand("SET CLUSTER ''", Integer.MAX_VALUE);
        ci.executeUpdate();
    }

    /**
     * Remove a server from the list of cluster nodes and disable the cluster
     * mode.
     *
     * @param e the exception (used for debugging)
     * @param i the index of the server to remove
     * @param count the retry count index
     */
    public void removeServer(IOException e, int i, int count) {
        trace.error(e, "removing server because of exception");
        transferList.remove(i);
        if (transferList.size() == 0 && autoReconnect(count)) {
            return;
        }
        checkClosed();
        switchOffCluster();
    }

    @Override
    public synchronized CommandInterface prepareCommand(String sql, int fetchSize) {
        checkClosed();
        return new CommandRemote(this, transferList, sql, fetchSize);
    }
    /**
     * Automatically re-connect if necessary and if configured to do so.
     *
     * @param count the retry count index
     * @return true if reconnected
     */
    private boolean autoReconnect(int count) {
        if (!isClosed()) {
            return false;
        }
        if (!autoReconnect) {
            return false;
        }
        if (!cluster && !autoCommit) {
            return false;
        }
        if (count > SysProperties.MAX_RECONNECT) {
            return false;
        }
        lastReconnect++;
        while (true) {
            try {
                embedded = connectEmbeddedOrServer(false);
                break;
            } catch (DbException e) {
                if (e.getErrorCode() != ErrorCode.DATABASE_IS_IN_EXCLUSIVE_MODE) {
                    throw e;
                }
                // exclusive mode: re-try endlessly
                try {
                    Thread.sleep(500);
                } catch (Exception e2) {
                    // ignore
                }
            }
        }
        if (embedded == this) {
            // connected to a server somewhere else
            embedded = null;
        } else {
            // opened an embedded connection now -
            // must connect to this database in server mode
            // unfortunately
            connectEmbeddedOrServer(true);
        }
        recreateSessionState();
        if (eventListener != null) {
            eventListener.setProgress(DatabaseEventListener.STATE_RECONNECTED,
                    databaseName, count, SysProperties.MAX_RECONNECT);
        }
        return true;
    }
    /**
     * Check if this session is closed and throw an exception if so.
     *
     * @throws DbException if the session is closed
     */
    public void checkClosed() {
        if (isClosed()) {
            throw DbException.get(ErrorCode.CONNECTION_BROKEN_1, "session closed");
        }
    }

    @Override
    public void close() {
        RuntimeException closeError = null;
        if (transferList != null) {
            synchronized (this) {
                for (Transfer transfer : transferList) {
                    try {
                        traceOperation("SESSION_CLOSE", 0);
                        transfer.writeInt(SessionRemote.SESSION_CLOSE);
                        done(transfer);
                        transfer.close();
                    } catch (RuntimeException e) {
                        trace.error(e, "close");
                        closeError = e;
                    } catch (Exception e) {
                        trace.error(e, "close");
                    }
                }
            }
            transferList = null;
        }
        traceSystem.close();
        if (embedded != null) {
            embedded.close();
            embedded = null;
        }
        if (closeError != null) {
            throw closeError;
        }
    }
    @Override
    public Trace getTrace() {
        return traceSystem.getTrace(Trace.JDBC);
    }

    public int getNextId() {
        return nextId++;
    }

    public int getCurrentId() {
        return nextId;
    }

    /**
     * Called to flush the output after data has been sent to the server and
     * just before receiving data. This method also reads the status code from
     * the server and throws any exception the server sent.
     *
     * @param transfer the transfer object
     * @throws DbException if the server sent an exception
     * @throws IOException if there is a communication problem between client
     *      and server
     */
    public void done(Transfer transfer) throws IOException {
        transfer.flush();
        int status = transfer.readInt();
        if (status == STATUS_ERROR) {
            String sqlstate = transfer.readString();
            String message = transfer.readString();
            String sql = transfer.readString();
            int errorCode = transfer.readInt();
            String stackTrace = transfer.readString();
            JdbcSQLException s = new JdbcSQLException(message, sql, sqlstate,
                    errorCode, null, stackTrace);
            if (errorCode == ErrorCode.CONNECTION_BROKEN_1) {
                // allow re-connect
                IOException e = new IOException(s.toString(), s);
                throw e;
            }
            throw DbException.convert(s);
        } else if (status == STATUS_CLOSED) {
            transferList = null;
        } else if (status == STATUS_OK_STATE_CHANGED) {
            sessionStateChanged = true;
        } else if (status == STATUS_OK) {
            // ok
        } else {
            throw DbException.get(ErrorCode.CONNECTION_BROKEN_1,
                    "unexpected status " + status);
        }
    }
    /**
     * Returns true if the connection was opened in cluster mode.
     *
     * @return true if it is
     */
    public boolean isClustered() {
        return cluster;
    }

    @Override
    public boolean isClosed() {
        return transferList == null || transferList.size() == 0;
    }

    /**
     * Write the operation to the trace system if debug trace is enabled.
     *
     * @param operation the operation performed
     * @param id the id of the operation
     */
    public void traceOperation(String operation, int id) {
        if (trace.isDebugEnabled()) {
            trace.debug("{0} {1}", operation, id);
        }
    }

    @Override
    public void checkPowerOff() {
        // ok
    }

    @Override
    public void checkWritingAllowed() {
        // ok
    }

    @Override
    public String getDatabasePath() {
        return "";
    }

    @Override
    public String getLobCompressionAlgorithm(int type) {
        return null;
    }

    @Override
    public int getMaxLengthInplaceLob() {
        return SysProperties.LOB_CLIENT_MAX_SIZE_MEMORY;
    }

    @Override
    public FileStore openFile(String name, String mode, boolean mustExist) {
        if (mustExist && !FileUtils.exists(name)) {
            throw DbException.get(ErrorCode.FILE_NOT_FOUND_1, name);
        }
        FileStore store;
        if (cipher == null) {
            store = FileStore.open(this, name, mode);
        } else {
            store = FileStore.open(this, name, mode, cipher, fileEncryptionKey, 0);
        }
        store.setCheckedWriting(false);
        try {
            store.init();
        } catch (DbException e) {
            store.closeSilently();
            throw e;
        }
        return store;
    }

    @Override
    public DataHandler getDataHandler() {
        return this;
    }
    @Override
    public Object getLobSyncObject() {
        return lobSyncObject;
    }

    @Override
    public SmallLRUCache<String, String[]> getLobFileListCache() {
        return null;
    }

    public int getLastReconnect() {
        return lastReconnect;
    }

    @Override
    public TempFileDeleter getTempFileDeleter() {
        if (tempFileDeleter == null) {
            tempFileDeleter = TempFileDeleter.getInstance();
        }
        return tempFileDeleter;
    }

    @Override
    public boolean isReconnectNeeded(boolean write) {
        return false;
    }

    @Override
    public SessionInterface reconnect(boolean write) {
        return this;
    }

    @Override
    public void afterWriting() {
        // nothing to do
    }

    @Override
    public LobStorageInterface getLobStorage() {
        if (lobStorage == null) {
            lobStorage = new LobStorageFrontend(this);
        }
        return lobStorage;
    }

    @Override
    public synchronized int readLob(long lobId, byte[] hmac, long offset,
            byte[] buff, int off, int length) {
        checkClosed();
        for (int i = 0, count = 0; i < transferList.size(); i++) {
            Transfer transfer = transferList.get(i);
            try {
                traceOperation("LOB_READ", (int) lobId);
                transfer.writeInt(SessionRemote.LOB_READ);
                transfer.writeLong(lobId);
                if (clientVersion >= Constants.TCP_PROTOCOL_VERSION_12) {
                    transfer.writeBytes(hmac);
                }
                transfer.writeLong(offset);
                transfer.writeInt(length);
                done(transfer);
                length = transfer.readInt();
                if (length <= 0) {
                    return length;
                }
                transfer.readBytes(buff, off, length);
                return length;
            } catch (IOException e) {
                removeServer(e, i--, ++count);
            }
        }
        return 1;
    }

    @Override
    public JavaObjectSerializer getJavaObjectSerializer() {
        initJavaObjectSerializer();
        return javaObjectSerializer;
    }
    private void initJavaObjectSerializer() {
        if (javaObjectSerializerInitialized) {
            return;
        }
        synchronized (this) {
            if (javaObjectSerializerInitialized) {
                return;
            }
            String serializerFQN = readSerializationSettings();
            if (serializerFQN != null) {
                serializerFQN = serializerFQN.trim();
                if (!serializerFQN.isEmpty() && !serializerFQN.equals("null")) {
                    try {
                        javaObjectSerializer = (JavaObjectSerializer) JdbcUtils
                                .loadUserClass(serializerFQN).newInstance();
                    } catch (Exception e) {
                        throw DbException.convert(e);
                    }
                }
            }
            javaObjectSerializerInitialized = true;
        }
    }

    /**
     * Read the serializer name from the persistent database settings.
     *
     * @return the serializer
     */
    private String readSerializationSettings() {
        String javaObjectSerializerFQN = null;
        CommandInterface ci = prepareCommand(
                "SELECT VALUE FROM INFORMATION_SCHEMA.SETTINGS " +
                " WHERE NAME='JAVA_OBJECT_SERIALIZER'", Integer.MAX_VALUE);
        try {
            ResultInterface result = ci.executeQuery(0, false);
            if (result.next()) {
                Value[] row = result.currentRow();
                javaObjectSerializerFQN = row[0].getString();
            }
        } finally {
            ci.close();
        }
        return javaObjectSerializerFQN;
    }

    @Override
    public void addTemporaryLob(Value v) {
        // do nothing
    }
}