Commit fb598f3f authored by Thomas Mueller

--no commit message

Parent a2b139df
...@@ -17,10 +17,10 @@ H2 Database Engine ...@@ -17,10 +17,10 @@ H2 Database Engine
<a href="http://www.h2database.com/h2-2007-04-20.zip">Platform-Independent Zip (3.6 MB)</a><br /> <a href="http://www.h2database.com/h2-2007-04-20.zip">Platform-Independent Zip (3.6 MB)</a><br />
</p> </p>
<h3>Version 1.0 / 2006-08-31 (Last Stable)</h3> <h3>Version 1.0 / 2007-04-20 (Last Stable)</h3>
<p> <p>
<a href="http://www.h2database.com/h2-setup-2006-08-31.exe">Windows Installer (2.4 MB)</a><br /> <a href="http://www.h2database.com/h2-setup-2007-04-20.exe">Windows Installer (2.7 MB)</a><br />
<a href="http://www.h2database.com/h2-2006-08-31.zip">Platform-Independent Zip (3.1 MB)</a><br /> <a href="http://www.h2database.com/h2-2007-04-20.zip">Platform-Independent Zip (3.6 MB)</a><br />
</p> </p>
<p> <p>
......
...@@ -36,7 +36,10 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -36,7 +36,10 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
<h3>Version 1.0 (Current)</h3> <h3>Version 1.0 (Current)</h3>
<h3>Version 1.0 / 2007-04-20 (TODO)</h3><ul> <h3>Version 1.0 / 2007-04-20 (Build 45) TODO</h3><ul>
<li>Unnamed private in-memory databases (jdbc:h2:mem:) were not 'private' as documented. Fixed.
</li><li>Autocomplete in the Console application: now the result frame scrolls to the top when the list is updated.
</li><li>GROUP BY expressions did not work correctly in subqueries. Fixed.
</li><li>New function TABLE to define ad-hoc (temporary) tables in a query. </li><li>New function TABLE to define ad-hoc (temporary) tables in a query.
This also solves problems with variable-size IN(...) queries: This also solves problems with variable-size IN(...) queries:
instead of SELECT * FROM TEST WHERE ID IN(?, ?, ...) you can now write: instead of SELECT * FROM TEST WHERE ID IN(?, ?, ...) you can now write:
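The rest of this example is not visible in this hunk. As an illustrative sketch only, assuming H2's TABLE(X INT=?) syntax for ad-hoc tables and an array value bound via setObject (table and column names are made up), the idea looks roughly like this:

    import java.sql.*;

    public class TableFunctionSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
            Statement stat = conn.createStatement();
            stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)");
            stat.execute("INSERT INTO TEST VALUES(1, 'Hello')");
            stat.execute("INSERT INTO TEST VALUES(2, 'World')");
            // One array parameter replaces a variable number of '?' placeholders,
            // so the statement text (and its cached plan) stays the same.
            PreparedStatement prep = conn.prepareStatement(
                    "SELECT * FROM TEST WHERE ID IN(SELECT X FROM TABLE(X INT=?))");
            prep.setObject(1, new Object[] { new Integer(1), new Integer(2) });
            ResultSet rs = prep.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt("ID") + " " + rs.getString("NAME"));
            }
            conn.close();
        }
    }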
...@@ -55,7 +58,6 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -55,7 +58,6 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>The BACKUP command is better tested and documented. </li><li>The BACKUP command is better tested and documented.
This means hot backup (online backup) is now possible. This means hot backup (online backup) is now possible.
</li><li>The old 'Backup' tool is now called 'Script' (as the SQL statement). </li><li>The old 'Backup' tool is now called 'Script' (as the SQL statement).
</li><li>For the tools RunScript and Script, the parameter 'script' has been renamed to 'file'.
</li><li>There are new 'Backup' and 'Restore' tools that work with database files directly. </li><li>There are new 'Backup' and 'Restore' tools that work with database files directly.
</li><li>The complete syntax for referential and check constraints is now supported </li><li>The complete syntax for referential and check constraints is now supported
when written as part of the column definition, behind PRIMARY KEY. when written as part of the column definition, behind PRIMARY KEY.
...@@ -85,7 +87,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -85,7 +87,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
This is not required any longer. This is not required any longer.
</li></ul> </li></ul>
<h3>Version 1.0 / 2007-03-04</h3><ul> <h3>Version 1.0 / 2007-03-04 (Build 44)</h3><ul>
<li>System sequences (automatically created sequences for IDENTITY or AUTO_INCREMENT columns) are now <li>System sequences (automatically created sequences for IDENTITY or AUTO_INCREMENT columns) are now
random (UUIDs) to avoid clashes when merging databases using RUNSCRIPT. random (UUIDs) to avoid clashes when merging databases using RUNSCRIPT.
</li><li>The precision for linked tables was not correct for some data types, for example VARCHAR. Fixed. </li><li>The precision for linked tables was not correct for some data types, for example VARCHAR. Fixed.
...@@ -111,7 +113,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -111,7 +113,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
an incorrect optimization was made and the result was wrong sometimes. an incorrect optimization was made and the result was wrong sometimes.
</li></ul> </li></ul>
<h3>Version 1.0 / 2007-01-30</h3><ul> <h3>Version 1.0 / 2007-01-30 (Build 41)</h3><ul>
<li>Experimental online backup feature using the SQL statement BACKUP TO 'fileName'. <li>Experimental online backup feature using the SQL statement BACKUP TO 'fileName'.
This creates a backup in the form of a zip file. Unlike the SCRIPT TO command, the data tables are not locked. This creates a backup in the form of a zip file. Unlike the SCRIPT TO command, the data tables are not locked.
</li><li>When using the server mode, temporary files for large LOB values are now deleted when the result set is closed. </li><li>When using the server mode, temporary files for large LOB values are now deleted when the result set is closed.
...@@ -133,7 +135,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -133,7 +135,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>The forum subscriptions (the emails sent from the forum) now work. </li><li>The forum subscriptions (the emails sent from the forum) now work.
</li></ul> </li></ul>
<h3>Version 1.0 / 2007-01-17</h3><ul> <h3>Version 1.0 / 2007-01-17 (Build 40)</h3><ul>
<li>Setting the collation (SET COLLATOR) was very slow on some systems (up to 24 seconds). <li>Setting the collation (SET COLLATOR) was very slow on some systems (up to 24 seconds).
Thanks a lot to Martina Nissler for finding this problem! Thanks a lot to Martina Nissler for finding this problem!
</li><li>The Console is now translated to Japanese thanks to IKEMOTO, Masahiro (ikeyan (at) arizona (dot) ne (dot) jp) </li><li>The Console is now translated to Japanese thanks to IKEMOTO, Masahiro (ikeyan (at) arizona (dot) ne (dot) jp)
...@@ -155,7 +157,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -155,7 +157,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Date, time and timestamp objects were cloned in cases where it was not required. Fixed. </li><li>Date, time and timestamp objects were cloned in cases where it was not required. Fixed.
</li></ul> </li></ul>
<h3>Version 1.0 / 2007-01-02</h3><ul> <h3>Version 1.0 / 2007-01-02 (Build 36)</h3><ul>
<li>It was possible to drop the sequence of a temporary table with DROP ALL OBJECTS, resulting in a null pointer exception afterwards. <li>It was possible to drop the sequence of a temporary table with DROP ALL OBJECTS, resulting in a null pointer exception afterwards.
</li><li>Prepared statements with non-constant functions such as CURRENT_TIMESTAMP() did not get re-evaluated if the result of the function changed. Fixed. </li><li>Prepared statements with non-constant functions such as CURRENT_TIMESTAMP() did not get re-evaluated if the result of the function changed. Fixed.
</li><li>The (relative or absolute) directory where the script files are stored or read can now be changed using the system property h2.scriptDirectory </li><li>The (relative or absolute) directory where the script files are stored or read can now be changed using the system property h2.scriptDirectory
...@@ -174,7 +176,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -174,7 +176,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
To create the maven artifacts yourself, use 'ant mavenUploadLocal' and 'ant mavenBuildCentral'. To create the maven artifacts yourself, use 'ant mavenUploadLocal' and 'ant mavenBuildCentral'.
</li></ul> </li></ul>
<h3>Version 1.0 / 2006-12-17</h3><ul> <h3>Version 1.0 / 2006-12-17 (Build 34)</h3><ul>
<li>Can be compiled with JDK 1.6. However, only very few of the JDBC 4.0 features are implemented so far. <li>Can be compiled with JDK 1.6. However, only very few of the JDBC 4.0 features are implemented so far.
</li><li>The unit test of OpenJPA works now. </li><li>The unit test of OpenJPA works now.
</li><li>Unfortunately, the Hibernate dialect has changed due to a change in the meta data in the last release </li><li>Unfortunately, the Hibernate dialect has changed due to a change in the meta data in the last release
...@@ -200,7 +202,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -200,7 +202,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Support for indexed parameters in PreparedStatements: update test set name=?2 where id=?1 </li><li>Support for indexed parameters in PreparedStatements: update test set name=?2 where id=?1
</li></ul> </li></ul>
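As a usage sketch for the indexed-parameter entry above (an assumption about how ?1 / ?2 map to the JDBC parameter indexes; table and values are illustrative):

    import java.sql.*;

    public class IndexedParameterSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
            Statement stat = conn.createStatement();
            stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)");
            stat.execute("INSERT INTO TEST VALUES(1, 'Hello')");
            // ?2 and ?1 name the JDBC parameter indexes explicitly, independent of
            // where the markers appear in the statement text.
            PreparedStatement prep = conn.prepareStatement(
                    "UPDATE TEST SET NAME=?2 WHERE ID=?1");
            prep.setInt(1, 1);          // value for ?1
            prep.setString(2, "World"); // value for ?2
            prep.executeUpdate();
            conn.close();
        }
    }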
<h3>Version 1.0 / 2006-12-03</h3><ul> <h3>Version 1.0 / 2006-12-03 (Build 32)</h3><ul>
<li>The SQL statement COMMENT did not work as expected. Many bugs have been fixed in this area. <li>The SQL statement COMMENT did not work as expected. Many bugs have been fixed in this area.
If you already have comments in the database, it is recommended to backup and restore the database, If you already have comments in the database, it is recommended to backup and restore the database,
using the Backup and RunScript tools or the SQL commands SCRIPT and RUNSCRIPT. using the Backup and RunScript tools or the SQL commands SCRIPT and RUNSCRIPT.
...@@ -229,7 +231,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -229,7 +231,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
(jdbc:columnlist:connection instead of jdbc:default:connection). (jdbc:columnlist:connection instead of jdbc:default:connection).
</li></ul> </li></ul>
<h3>Version 1.0 / 2006-11-20</h3><ul> <h3>Version 1.0 / 2006-11-20 (Build 31)</h3><ul>
<li>SCRIPT: New option BLOCKSIZE to split BLOB and CLOB data into separate blocks, to avoid OutOfMemory problems. <li>SCRIPT: New option BLOCKSIZE to split BLOB and CLOB data into separate blocks, to avoid OutOfMemory problems.
</li><li>When using the READ_COMMITTED isolation level, a transaction now waits until there are no write locks </li><li>When using the READ_COMMITTED isolation level, a transaction now waits until there are no write locks
when trying to read data. However, it still does not add a read lock. when trying to read data. However, it still does not add a read lock.
...@@ -265,7 +267,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -265,7 +267,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>The built-in FTP server can now access a virtual directory stored in a database. </li><li>The built-in FTP server can now access a virtual directory stored in a database.
</li></ul> </li></ul>
<h3>Version 1.0 / 2006-11-03</h3><ul> <h3>Version 1.0 / 2006-11-03 (Build 30)</h3><ul>
<li> <li>
Two simple full text search implementations (Lucene and native) are now included. Two simple full text search implementations (Lucene and native) are now included.
This is work in progress, and currently undocumented. This is work in progress, and currently undocumented.
...@@ -325,7 +327,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -325,7 +327,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
JDBC 4.0 driver auto discovery: When using JDK 1.6, Class.forName("org.h2.Driver") is no longer required. JDBC 4.0 driver auto discovery: When using JDK 1.6, Class.forName("org.h2.Driver") is no longer required.
</li></ul> </li></ul>
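A minimal sketch of what the JDBC 4.0 auto-discovery entry above means in practice (on JDK 1.6 the driver is located through the service registration inside h2.jar, so the explicit registration can be dropped):

    import java.sql.*;

    public class AutoDiscoverySketch {
        public static void main(String[] args) throws Exception {
            // Before JDBC 4.0 the driver had to be registered explicitly:
            // Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
            System.out.println(conn.getMetaData().getDriverName());
            conn.close();
        }
    }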
<h3>Version 1.0 / 2006-10-10</h3><ul> <h3>Version 1.0 / 2006-10-10 (Build 28)</h3><ul>
<li> <li>
Redundant () in an IN subquery is now supported: where id in ((select id from test)) Redundant () in an IN subquery is now supported: where id in ((select id from test))
</li><li> </li><li>
...@@ -353,7 +355,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -353,7 +355,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
Could not re-connect to a database when ALLOW_LITERALS or COMPRESS_LOB was set. Fixed. Could not re-connect to a database when ALLOW_LITERALS or COMPRESS_LOB was set. Fixed.
</li></ul> </li></ul>
<h3>Version 1.0 / 2006-09-24</h3><ul> <h3>Version 1.0 / 2006-09-24 (Build 27)</h3><ul>
<li> <li>
New LOCK_MODE 3 (READ_COMMITTED). Table level locking, but only when writing (no read locks). New LOCK_MODE 3 (READ_COMMITTED). Table level locking, but only when writing (no read locks).
</li><li> </li><li>
...@@ -422,7 +424,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -422,7 +424,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
Backup and Runscript tools now support options (H2 only) Backup and Runscript tools now support options (H2 only)
</li></ul> </li></ul>
<h3>Version 1.0 / 2006-09-10</h3><ul> <h3>Version 1.0 / 2006-09-10 (Build 26)</h3><ul>
<li> <li>
Updated the performance test so that Firebird can be tested as well. Updated the performance test so that Firebird can be tested as well.
</li><li> </li><li>
...@@ -474,7 +476,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -474,7 +476,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
ORDER BY an expression didn't work when using GROUP BY at the same time. ORDER BY an expression didn't work when using GROUP BY at the same time.
</li></ul> </li></ul>
<h3>Version 1.0 / 2006-08-31</h3><ul> <h3>Version 1.0 / 2006-08-31 (Build 25)</h3><ul>
<li> <li>
In some situations, wide b-tree indexes (with large VARCHAR columns for example) could get corrupted. Fixed. In some situations, wide b-tree indexes (with large VARCHAR columns for example) could get corrupted. Fixed.
</li><li> </li><li>
...@@ -512,7 +514,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -512,7 +514,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
Outer join: There were some incompatibilities with PostgreSQL and MySQL with more complex outer joins. Fixed. Outer join: There were some incompatibilities with PostgreSQL and MySQL with more complex outer joins. Fixed.
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-08-23</h3><ul> <h3>Version 0.9 / 2006-08-23 (Build 23)</h3><ul>
<li> <li>
Bugfix for LIKE: If collation was set (SET COLLATION ...), it was ignored when using LIKE. Fixed. Bugfix for LIKE: If collation was set (SET COLLATION ...), it was ignored when using LIKE. Fixed.
</li><li> </li><li>
...@@ -547,7 +549,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -547,7 +549,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
Date and time constants outside the valid range (February 31 and so on) are no longer accepted. Date and time constants outside the valid range (February 31 and so on) are no longer accepted.
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-08-14</h3><ul> <h3>Version 0.9 / 2006-08-14 (Build 21)</h3><ul>
<li> <li>
SET LOG 0 didn't work (except if the log level was set to some other value before). Fixed. SET LOG 0 didn't work (except if the log level was set to some other value before). Fixed.
</li><li> </li><li>
...@@ -576,7 +578,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -576,7 +578,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
It was not possible to cancel a select statement with a (temporary) view. Fixed. It was not possible to cancel a select statement with a (temporary) view. Fixed.
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-07-29</h3><ul> <h3>Version 0.9 / 2006-07-29 (Build 18)</h3><ul>
<li> <li>
ParameterMetaData is now implemented (mainly to support getParameterCount). ParameterMetaData is now implemented (mainly to support getParameterCount).
</li><li> </li><li>
...@@ -624,7 +626,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -624,7 +626,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
C-style block comments /* */ are not parsed correctly when they contain * or / C-style block comments /* */ are not parsed correctly when they contain * or /
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-07-14</h3><ul> <h3>Version 0.9 / 2006-07-14 (Build 16)</h3><ul>
<li> <li>
The regression tests are no longer included in the jar file. This reduces the size by about 200 KB. The regression tests are no longer included in the jar file. This reduces the size by about 200 KB.
</li><li> </li><li>
...@@ -678,7 +680,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -678,7 +680,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
DatabaseMetaData.getTypeInfo: BIGINT was returning AUTO_INCREMENT=TRUE, which is wrong. Fixed. DatabaseMetaData.getTypeInfo: BIGINT was returning AUTO_INCREMENT=TRUE, which is wrong. Fixed.
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-07-01</h3><ul> <h3>Version 0.9 / 2006-07-01 (Build 14)</h3><ul>
<li> <li>
After dropping constraints and altering a table sometimes the database could not be opened. Fixed. After dropping constraints and altering a table sometimes the database could not be opened. Fixed.
</li><li> </li><li>
...@@ -766,7 +768,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -766,7 +768,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
Documented ALTER TABLE DROP COLUMN. The functionality was there already, but the documentation was not. Documented ALTER TABLE DROP COLUMN. The functionality was there already, but the documentation was not.
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-06-02</h3><ul> <h3>Version 0.9 / 2006-06-02 (Build 10)</h3><ul>
<li> <li>
Removed the GCJ h2-server.exe from download. It was not stable on Windows. Removed the GCJ h2-server.exe from download. It was not stable on Windows.
</li><li> </li><li>
...@@ -1136,174 +1138,6 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1136,174 +1138,6 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
Improved parser performance and memory usage. Improved parser performance and memory usage.
</li></ul> </li></ul>
<h3>Version 0.9 / 2006-03-08</h3><ul>
<li>
Bugfix for table level locking. Sometimes a Java-level deadlock occurred when a connection was not closed.
</li><li>
Implemented the SHUTDOWN statement to close the database.
</li><li>
CURRENT_TIMESTAMP now also supports an optional precision argument.
</li><li>
Web Console: Improved support for MySQL, PostgreSQL, HSQLDB.
</li><li>
Automatically create the directory if it does not exist when creating a database.
</li><li>
Optimization for constant temporary views as in
SELECT SUM(B.X+A.X) FROM SYSTEM_RANGE(1, 10000) B,
(SELECT SUM(X) X FROM SYSTEM_RANGE(1, 10000)) A
</li><li>
Implemented the aggregate functions STDDEV_POP, STDDEV_SAMP, VAR_POP, VAR_SAMP. Implemented AVG(DISTINCT...).
</li><li>
Implemented GROUP_CONCAT. Similar to XMLAGG, but can be used for other text data (CSV, JSON) as well.
</li><li>
Fix for file_lock. The documentation and implementation did not match, and
the sleep delay was applied even when using socket locking, making file locking with sockets
slower than required.
</li><li>
Security (TCP and ODBC Server): Connections made from other computers are now not allowed by default for security reasons.
This feature already existed for the web server.
Added two new settings (tcpAllowOthers and odbcAllowOthers)
</li><li>
Improved performance for queries in the server mode.
</li><li>
Improved the startup time for large databases using a 'summary' file.
The data in this file is redundant, but improves startup time.
</li><li>
Implemented a benchmark test suite for single connection performance test.
</li><li>
A primary key is now created automatically for identity / autoincrement columns.
</li><li>
SET ASSERT, a new setting to switch off assertions.
</li><li>
Hibernate dialect for Hibernate 3.1.
</li><li>
A new locking mode (level 2: table level locking with garbage collection).
</li></ul>
<h3>Version 0.9 / 2006-02-17</h3><ul>
<li>
The SQL syntax in the docs is now cross-linked
</li><li>
Written a parser for the BNF in the help file.
Written a test to run random statements based on this BNF.
</li><li>
Support function syntax: POSITION(pattern IN text).
</li><li>
Support for list syntax: select * from test where (id, name)=(1, 'Hi'). This is experimental only.
It should only be used for compatibility, as indexes are not used in this case currently.
This is currently undocumented, until indexes are used.
</li><li>
Function CONCAT now supports a variable number of arguments.
</li><li>
Support for MySQL syntax: LAST_INSERT_ID() as an alias for IDENTITY()
</li><li>
Support for Oracle syntax: DUAL table, sequence.NEXTVAL / CURRVAL.
Function NVL as alias for COALESCE.
Function INSTR as alias for LOCATE. Support for SYSTIMESTAMP and SYSTIME.
Function ROWNUM
</li><li>
Implemented a ROWNUM() function, but only supported for simple queries.
</li><li>
Implemented local temporary tables.
</li><li>
Implemented short version of constraints in CREATE TABLE:
CREATE TABLE TEST(ID INT UNIQUE, NAME VARCHAR CHECK LENGTH(NAME)>3).
</li><li>
Implemented function NEXTVAL and CURRVAL for sequences (PostgreSQL compatibility).
</li><li>
Bugfix for special cases of subqueries containing group by.
</li><li>
Multi-dimension (spatial index) support: This is done without supporting R-Tree indexes.
Instead, a function to map multi-dimensional data to a scalar (using a space filling curve) is implemented.
</li><li>
Computed Columns: Support for MS SQL Server style computed columns. With computed columns,
it is very simple to emulate functional indexes (sometimes called function-based indexes).
</li><li>
Locking: Added the SQL statement SET LOCK_MODE to disable table level locking.
This is to improve compatibility with HSQLDB.
</li><li>
Schema / Catalog: Improved compatibility with HSQLDB, now use a default schema called 'PUBLIC'.
</li></ul>
<h3>Version 0.9 / 2006-02-05</h3><ul>
<li>
Implemented function EXTRACT
</li><li>
Parser: bugfix for reading numbers like 1e-1
</li><li>
BLOB/CLOB: implemented Blob and Clob classes with 'read' functionality.
</li><li>
Function TRIM: other characters than space can be removed now.
</li><li>
Implemented CASE WHEN, ANY, ALL
</li><li>
Referential integrity: Deleting rows in the parent table was sometimes not possible, fixed.
</li><li>
Data compression: Implemented data compression functions.
</li><li>
Large database: the database size was limited to 2 GB due to a bug. Fixed.
Improved the recovery performance. Added a progress function to the database event listener.
Renamed exception listener to database event listener. Added functionality to set
the listener at startup (to display the progress of opening / recovering a database).
</li><li>
Quoted keywords as identifiers: It was not possible to connect to the database again when
a (quoted) keyword was used as a table / column name. Fixed.
</li><li>
Compatibility: DATE, TIME and TIMESTAMP can now be used as identifiers (for example, table and column names).
They are only keywords if followed by a string, like in TIME '10:20:40'.
</li><li>
Compatibility: PostgreSQL and MySQL round 0.5 to 1 when converting to integer, HSQLDB and
other Java databases do not. Added this to the compatibility settings. The default is to behave like HSQLDB.
</li><li>
Synthetic tests: Utils / BitField threw java.lang.ArrayIndexOutOfBoundsException in some situations.
Parser: DATE / TIME / TIMESTAMP constants were not parsed correctly in some situations.
Object ID assignment didn't always work correctly for indexes if other objects were dropped before.
Generated constraint names were not in all cases unique. If creating a unique index failed due to
primary key violation, some data remained in the database file, leading to problems later.
</li><li>
Storage: In some situations when creating many tables, the error 'double allocation' appeared. Fixed.
</li><li>
Auto-Increment column: It was possible to drop a sequence that belongs to a table
(auto increment column) after reconnecting. Fixed.
</li><li>
Auto-Increment column: ALTER TABLE ADD COLUMN didn't
work correctly on a table with an auto-increment column.
</li><li>
Default values: If a subquery was used as a default value (do other databases support this?),
it was executed in the wrong session context after reconnecting.
</li></ul>
<h3>Version 0.9 / 2006-01-26</h3><ul>
<li>
Autoincrement: There was a problem with IDENTITY columns, the inserted value was not stored in the session, and
so the IDENTITY() function didn't work after reconnect.
</li><li>
Storage: Simplified handling of deleted records. This also improves the performance of DROP TABLE.
</li><li>
Encrypted files: Fixed a bug with file.setLength if the new size is smaller than before.
</li><li>
Server mode: Fixed a problem with very big strings (larger than 64 KB).
</li><li>
Identity columns: when a manual value was inserted that was higher than the current
sequence value, the sequence was not updated to the higher value. Fixed.
</li><li>
Added a setting for the maximum number of rows (in a result set) that are kept in-memory
(MAX_MEMORY_ROWS). Increased the default from 1000 to 10000.
</li><li>
Bug: Sometimes log records were not written completely when using the binary storage format. Fixed.
</li><li>
Performance: now sorting the records by file position before writing.
This improves the performance of closing the database in some situations.
</li><li>
Two-Phase-Commit is implemented. However, the XA API is not yet implemented.
</li><li>
Bug: Recovery didn't work correctly if the database was not properly closed twice in a row
(reason: the checksum for rolled back records was not updated in the log file).
</li><li>
Bug: Renaming a table and then reconnect didn't work.
</li></ul>
<h3>Version 0.9 / 2005-12-13</h3><ul> <h3>Version 0.9 / 2005-12-13</h3><ul>
<li> <li>
First public release. First public release.
...@@ -1340,8 +1174,8 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1340,8 +1174,8 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>MVCC (Multi Version Concurrency Control) </li><li>MVCC (Multi Version Concurrency Control)
</li><li>Server side cursors </li><li>Server side cursors
</li><li>Row level locking </li><li>Row level locking
</li><li>Read-only databases inside a jar (splitting large files to speed up random access)
</li><li>System table: open sessions and locks of a database </li><li>System table: open sessions and locks of a database
</li><li>System table / function: cache usage
</li><li>Function in management db: open connections and databases of a (TCP) server </li><li>Function in management db: open connections and databases of a (TCP) server
</li><li>Fix right outer joins </li><li>Fix right outer joins
</li><li>Full outer joins </li><li>Full outer joins
...@@ -1350,16 +1184,13 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1350,16 +1184,13 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Shutdown compact </li><li>Shutdown compact
</li><li>Optimization of distinct with index: select distinct name from test </li><li>Optimization of distinct with index: select distinct name from test
</li><li>RECOVER=1 automatically if a problem opening </li><li>RECOVER=1 automatically if a problem opening
</li><li>Performance Test: the number of executed statements must match; find out why it is sometimes a little higher
</li><li>Forum: email notification doesn't work? test or disable or document </li><li>Forum: email notification doesn't work? test or disable or document
</li><li>Document server mode, embedded mode, web app mode, dual mode (server+embedded) </li><li>Document server mode, embedded mode, web app mode, dual mode (server+embedded)
</li><li>Stop the server: close all open databases first </li><li>Stop the server: close all open databases first
</li><li>Read-only databases inside a jar
</li><li>SET variable { TO | = } { value | 'value' | DEFAULT } </li><li>SET variable { TO | = } { value | 'value' | DEFAULT }
</li><li>Running totals: select @running:=if(@previous=t.ID,@running,0)+t.NUM as TOTAL, @previous:=t.ID </li><li>Running totals: select @running:=if(@previous=t.ID,@running,0)+t.NUM as TOTAL, @previous:=t.ID
</li><li>Support SET REFERENTIAL_INTEGRITY {TRUE|FALSE} </li><li>Support SET REFERENTIAL_INTEGRITY {TRUE|FALSE}
</li><li>Support CHAR data type (internally use VARCHAR, but report CHAR for JPox) </li><li>Support CHAR data type (internally use VARCHAR, but report CHAR for JPox)
</li><li>Recovery for BLOB / CLOB: support functions
</li><li>Better support large transactions, large updates / deletes: use less memory </li><li>Better support large transactions, large updates / deletes: use less memory
</li><li>Better support large transactions, large updates / deletes: allow tables without primary key </li><li>Better support large transactions, large updates / deletes: allow tables without primary key
</li><li>Support Oracle RPAD and LPAD(string, n[, pad]) (truncate the end if longer) </li><li>Support Oracle RPAD and LPAD(string, n[, pad]) (truncate the end if longer)
...@@ -1368,37 +1199,33 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1368,37 +1199,33 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
<h3>Priority 2</h3> <h3>Priority 2</h3>
<ul> <ul>
<li>Support OSGi: http://oscar-osgi.sourceforge.net, http://incubator.apache.org/felix/index.html <li>Support OSGi: http://oscar-osgi.sourceforge.net, http://incubator.apache.org/felix/index.html
</li><li>System table / function: cache usage
</li><li>Connection pool manager </li><li>Connection pool manager
</li><li>Set the database in an 'exclusive' mode (restrict to one user at a time) </li><li>Set the database in an 'exclusive' mode (restrict to one user at a time)
</li><li>Add a migration guide (list differences between databases) </li><li>Add a migration guide (list differences between databases)
</li><li>Support VALUES(1), (2); SELECT * FROM (VALUES (1), (1), (1), (1), (2)) AS myTable (c1) (Derby)
</li><li>Optimization: automatic index creation suggestion using the trace file? </li><li>Optimization: automatic index creation suggestion using the trace file?
</li><li>Compression performance: don't allocate buffers, compress / expand in to out buffer </li><li>Compression performance: don't allocate buffers, compress / expand in to out buffer
</li><li>Start / stop server with database URL </li><li>Start / stop server with database URL
</li><li># is the start of a single line comment (MySQL) but date quote (Access). Mode specific
</li><li>Run benchmarks with JDK 1.5, JDK 1.6, Server
</li><li>Rebuild index functionality (other than delete the index file) </li><li>Rebuild index functionality (other than delete the index file)
</li><li>Don't use deleteOnExit (bug 4513817: File.deleteOnExit consumes memory) </li><li>Don't use deleteOnExit (bug 4513817: File.deleteOnExit consumes memory)
</li><li>Console: add accesskey to most important commands (A, AREA, BUTTON, INPUT, LABEL, LEGEND, TEXTAREA) </li><li>Console: add accesskey to most important commands (A, AREA, BUTTON, INPUT, LABEL, LEGEND, TEXTAREA)
</li><li>Test hibernate slow startup? Derby faster? derby faster prepared statements?
</li><li>Feature: a setting to delete the log or not (for backup) </li><li>Feature: a setting to delete the log or not (for backup)
</li><li>Test WithSun ASPE1_4; JEE Sun AS PE1.4 </li><li>Test with Sun ASPE1_4; JEE Sun AS PE1.4
</li><li>Test performance again with SQL Server, Oracle, DB2 </li><li>Test performance again with SQL Server, Oracle, DB2
</li><li>Test with dbmonster (http://dbmonster.kernelpanic.pl/) </li><li>Test with dbmonster (http://dbmonster.kernelpanic.pl/)
</li><li>Test with dbcopy (http://dbcopyplugin.sourceforge.net) </li><li>Test with dbcopy (http://dbcopyplugin.sourceforge.net)
</li><li>Document how to view / scan a big trace file (less) </li><li>Find a tool to view a text file >100 MB, with find, page up and down (like less)
</li><li>Implement, test, document XAConnection and so on </li><li>Implement, test, document XAConnection and so on
</li><li>Web site: meta keywords, description, get rid of frame set </li><li>Web site: meta keywords, description, get rid of frame set
</li><li>Pluggable data type (for compression, validation, conversion, encryption) </li><li>Pluggable data type (for compression, validation, conversion, encryption)
</li><li>CHECK: find out what makes CHECK=TRUE slow, then: fast, no check, slow </li><li>CHECK: find out what makes CHECK=TRUE slow, move to CHECK2
</li><li>Improve recovery: improve code for log recovery problems (less try/catch) </li><li>Improve recovery: improve code for log recovery problems (less try/catch)
</li><li>Log linear hash index changes, fast open / close </li><li>Log linear hash index changes, fast open / close
</li><li>Index usage for (ID, NAME)=(1, 'Hi'); document </li><li>Index usage for (ID, NAME)=(1, 'Hi'); document
</li><li>Faster hash function for strings, byte arrays, big decimal
</li><li>Suggestion: include jetty as Servlet Container (like LAMP) </li><li>Suggestion: include jetty as Servlet Container (like LAMP)
</li><li>Trace shipping to server </li><li>Trace shipping to server
</li><li>Performance / server mode: delay prepare? use UDP? </li><li>Performance / server mode: use UDP optionally?
</li><li>Version check: javascript in the docs / web console and maybe in the library </li><li>Version check: docs / web console (using javascript), and maybe in the library (using TCP/IP)
</li><li>Aggregates: support MEDIAN </li><li>Aggregates: support MEDIAN
</li><li>Web server classloader: override findResource / getResourceFrom </li><li>Web server classloader: override findResource / getResourceFrom
</li><li>Cost for embedded temporary view is calculated wrong, if result is constant </li><li>Cost for embedded temporary view is calculated wrong, if result is constant
...@@ -1406,53 +1233,39 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1406,53 +1233,39 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Count index range query (count(*) where id between 10 and 20) </li><li>Count index range query (count(*) where id between 10 and 20)
</li><li>Eclipse plugin </li><li>Eclipse plugin
</li><li>iReport to support H2 </li><li>iReport to support H2
</li><li>Implement CallableStatement </li><li>Implement missing JDBC API (CallableStatement,...)
</li><li>Compression of the cache </li><li>Compression of the cache
</li><li>Run inside servlet </li><li>Run H2 Console inside servlet (pass-through servlet or fix the JSP / app)
</li><li>Groovy Stored Procedures (http://groovy.codehaus.org/Groovy+SQL) </li><li>Groovy Stored Procedures (http://groovy.codehaus.org/Groovy+SQL)
</li><li>Include SMTP (mail) server (at least client) (alert on cluster failure, low disk space,...) </li><li>Include SMTP (mail) server (at least client) (alert on cluster failure, low disk space,...)
</li><li>Make the jar more modular </li><li>Make the jar more modular
</li><li>Document obfuscator usage
</li><li>Drop with restrict (currently cascade is the default) </li><li>Drop with restrict (currently cascade is the default)
</li><li>Document ConvertTraceToJava in javadoc and features </li><li>JSON parser and functions
</li><li>Document limitation (line length) of find "**" test.trace.db > Trace.java </li><li>Option for Java functions: constant/isDeterministic to allow early evaluation when all parameters are constant
</li><li>Tiny XML parser (ignoring unneeded stuff)
</li><li>JSON parser
</li><li>Read only databases with log file (fast open with summary)
</li><li>Option for Java functions: 'constant' to allow early evaluation when all parameters are constant
</li><li>Improve trace option: add calendar, streams, objects,... try/catch
</li><li>Automatic collection of statistics (auto ANALYZE) </li><li>Automatic collection of statistics (auto ANALYZE)
</li><li>Procedural language </li><li>Procedural language
</li><li>Maybe include JTidy. Check license
</li><li>Server: client ping from time to time (to avoid timeout - is timeout a problem?) </li><li>Server: client ping from time to time (to avoid timeout - is timeout a problem?)
</li><li>Column level privileges
</li><li>Copy database: Tool with config GUI and batch mode, extensible (example: compare) </li><li>Copy database: Tool with config GUI and batch mode, extensible (example: compare)
</li><li>Document shrinking jar file using http://proguard.sourceforge.net/
</li><li>Document, implement tool for long running transactions using user defined compensation statements </li><li>Document, implement tool for long running transactions using user defined compensation statements
</li><li>Support SET TABLE DUAL READONLY; </li><li>Support SET TABLE DUAL READONLY
</li><li>Don't write stack traces for common exceptions like duplicate key to the log by default </li><li>Don't write stack traces for common exceptions like duplicate key to the log by default
</li><li>Setting for MAX_QUERY_TIME (default no limit?) </li><li>Setting for MAX_QUERY_TIME (default no limit?)
</li><li>GCJ: is there a problem with updatable result sets? </li><li>GCJ: what is the state now?
</li><li>Convert large byte[]/Strings to streams in the JDBC API (asap). </li><li>Convert large byte[]/Strings to streams in the JDBC API (asap).
</li><li>Use Janino to convert Java to C++ </li><li>Use Janino to convert Java to C++
</li><li>Reduce disk space usage (Derby uses less disk space?) </li><li>Reduce disk space usage (Derby uses less disk space?)
</li><li>Fast conversion from LOB (stream) to byte array / String
</li><li>When converting to BLOB/CLOB (with setBytes / setString, or using SQL statement), use stream
</li><li>Support for user defined constants (to avoid using text or number literals; compile time safety)
</li><li>Events for: Database Startup, Connections, Login attempts, Disconnections, Prepare (after parsing), Web Server (see http://docs.openlinksw.com/virtuoso/fn_dbev_startup.html) </li><li>Events for: Database Startup, Connections, Login attempts, Disconnections, Prepare (after parsing), Web Server (see http://docs.openlinksw.com/virtuoso/fn_dbev_startup.html)
</li><li>Log compression </li><li>Optimization: Log compression
</li><li>Allow editing NULL values in the Console </li><li>Allow editing NULL values in the Console
</li><li>RunScript / RUNSCRIPT: progress meter and "suspend/resume" capability
</li><li>Compatibility: in MySQL, HSQLDB, /0.0 is NULL; in PostgreSQL, Derby: Division by zero </li><li>Compatibility: in MySQL, HSQLDB, /0.0 is NULL; in PostgreSQL, Derby: Division by zero
</li><li>Functional tables should accept parameters from other tables (see FunctionMultiReturn) </li><li>Functional tables should accept parameters from other tables (see FunctionMultiReturn)
SELECT * FROM TEST T, P2C(T.A, T.R) SELECT * FROM TEST T, P2C(T.A, T.R)
</li><li>Custom class loader to reload functions on demand </li><li>Custom class loader to reload functions on demand
</li><li>Test http://mysql-je.sourceforge.net/ </li><li>Test http://mysql-je.sourceforge.net/
</li><li>Close all files when closing the database (including LOB files that are open on the client side) </li><li>Close all files when closing the database (including LOB files that are open on the client side)
</li><li>Test Connection Pool http://jakarta.apache.org/commons/dbcp </li><li>Test Connection Pool http://jakarta.apache.org/commons/dbcp
</li><li>Implement Statement.cancel for server connections </li><li>Implement Statement.cancel for server connections
</li><li>Should not throw a NullPointerException when closing the connection while an operation is running (TestCases.testDisconnect) </li><li>Profiler option or profiling tool to find long running and often repeated queries (using DatabaseEventListener API)
</li><li>Profiler option or profiling tool to find long running and often repeated queries
</li><li>Function to read/write a file from/to LOB </li><li>Function to read/write a file from/to LOB
</li><li>Allow custom settings (@PATH for RUNSCRIPT for example) </li><li>Allow custom settings (@PATH for RUNSCRIPT for example)
</li><li>Performance test: read the data (getString) and use column names to get the data </li><li>Performance test: read the data (getString) and use column names to get the data
...@@ -1466,43 +1279,36 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1466,43 +1279,36 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Maybe use the 0x1234 notation for binary fields, see MS SQL Server </li><li>Maybe use the 0x1234 notation for binary fields, see MS SQL Server
</li><li>KEY_COLUMN_USAGE (http://dev.mysql.com/doc/refman/5.0/en/information-schema.html, http://www.xcdsql.org/Misc/INFORMATION_SCHEMA%20With%20Rolenames.gif) </li><li>KEY_COLUMN_USAGE (http://dev.mysql.com/doc/refman/5.0/en/information-schema.html, http://www.xcdsql.org/Misc/INFORMATION_SCHEMA%20With%20Rolenames.gif)
</li><li>Support Oracle CONNECT BY in some way: http://www.adp-gmbh.ch/ora/sql/connect_by.html, http://philip.greenspun.com/sql/trees.html </li><li>Support Oracle CONNECT BY in some way: http://www.adp-gmbh.ch/ora/sql/connect_by.html, http://philip.greenspun.com/sql/trees.html
</li><li>Support a property isDeterministic for Java functions
</li><li>SQL 2003 (http://www.wiscorp.com/sql_2003_standard.zip) </li><li>SQL 2003 (http://www.wiscorp.com/sql_2003_standard.zip)
</li><li>http://www.jpackage.org </li><li>http://www.jpackage.org
</li><li>Version column (number/sequence and timestamp based) </li><li>Version column (number/sequence and timestamp based)
</li><li>Optimize getGeneratedKey: (include last identity after each execute). </li><li>Optimize getGeneratedKey: send last identity after each execute (server).
</li><li>Clustering: recovery needs to become fully automatic. </li><li>Clustering: recovery needs to become fully automatic.
</li><li>Date: default date is '1970-01-01' (is it 1900-01-01 in the standard / other databases?) </li><li>Date: default date is '1970-01-01' (is it 1900-01-01 in the standard / other databases?)
</li><li>Test and document UPDATE TEST SET (ID, NAME) = (SELECT ID*10, NAME || '!' FROM TEST T WHERE T.ID=TEST.ID); </li><li>Test and document UPDATE TEST SET (ID, NAME) = (SELECT ID*10, NAME || '!' FROM TEST T WHERE T.ID=TEST.ID);
</li><li>Support home directory as ~ in database URL (jdbc:h2:file:~/.dir/db) </li><li>Support home directory as ~ in database URL (jdbc:h2:file:~/.dir/db)
</li><li>Document EXISTS and so on, provide more samples.
</li><li>Modular build (multiple independent jars).
</li><li>Better space re-use in the files after deleting data (shrink the files) </li><li>Better space re-use in the files after deleting data (shrink the files)
</li><li>Max memory rows / max undo log size: use block count / row size not row count </li><li>Max memory rows / max undo log size: use block count / row size not row count
</li><li>Index summary is only written if log=2; maybe write it also when log=1 and everything is fine (and no in doubt transactions) </li><li>Index summary is only written if log=2; maybe write it also when log=1 and everything is fine (and no in doubt transactions)
</li><li>Support 123L syntax as in Java; example: SELECT (2000000000*2) </li><li>Support 123L syntax as in Java; example: SELECT (2000000000*2)
</li><li>Implement point-in-time recovery </li><li>Implement point-in-time recovery
</li><li>Memory database: add a feature to keep named database open until 'shutdown' </li><li>Memory database: add a feature to keep named database open until 'shutdown'
</li><li>Harden against 'out of memory attacks' (multi-threading, out of memory in the application)
</li><li>Use the directory of the first script as the default directory for any scripts run inside that script </li><li>Use the directory of the first script as the default directory for any scripts run inside that script
</li><li>Include the version name in the jar file name </li><li>Include the version name in the jar file name
</li><li>Optimize IN(...), IN(select), ID=? OR ID=?: create temp table and use join </li><li>Optimize IN(...), IN(select), ID=? OR ID=?: create temp table and use join
</li><li>Set Default Schema (SET search_path TO foo, ALTER USER test SET search_path TO bar,foo)
</li><li>LIKE: improved version for larger texts (currently using naive search) </li><li>LIKE: improved version for larger texts (currently using naive search)
</li><li>LOBs: support streaming for SCRIPT / RUNSCRIPT
</li><li>Auto-reconnect on lost connection to server (even if the server was re-started) except if autocommit was off and there was pending transaction </li><li>Auto-reconnect on lost connection to server (even if the server was re-started) except if autocommit was off and there was pending transaction
</li><li>LOBs: support streaming in server mode and cluster mode, and when using PreparedStatement.set with large values </li><li>The Script tool should work with other databases as well
</li><li>Backup / Restore of BLOBs needs to be improved
</li><li>The Backup tool should work with other databases as well
</li><li>Deferred integrity checking (DEFERRABLE INITIALLY DEFERRED) </li><li>Deferred integrity checking (DEFERRABLE INITIALLY DEFERRED)
</li><li>Automatically convert to the next 'higher' data type whenever there is an overflow. </li><li>Automatically convert to the next 'higher' data type whenever there is an overflow.
</li><li>Throw an exception is thrown when the application calls getInt on a Long. </li><li>Throw an exception when the application calls getInt on a Long (optional)
</li><li>Default date format for input and output (local date constants) </li><li>Default date format for input and output (local date constants)
</li><li>Cache collation keys for performance </li><li>Cache collation keys for performance
</li><li>Convert OR condition to UNION or IN if possible </li><li>Convert OR condition to UNION or IN if possible
</li><li>ValueInt.convertToString and so on (remove Value.convertTo) </li><li>ValueInt.convertToString and so on (remove Value.convertTo)
</li><li>Support custom Collators </li><li>Support custom Collators
</li><li>Document ROWNUM usage for reports: SELECT ROWNUM, * FROM (subquery) </li><li>Document ROWNUM usage for reports: SELECT ROWNUM, * FROM (subquery)
</li><li>Clustering: Reads should be randomly distributed or to a designated database on RAM </li><li>Clustering: Reads should be randomly distributed or to a designated database on RAM
</li><li>Clustering: When a database is back alive, automatically synchronize with the master </li><li>Clustering: When a database is back alive, automatically synchronize with the master
</li><li>Standalone tool to get relevant system properties and add it to the trace output. </li><li>Standalone tool to get relevant system properties and add it to the trace output.
...@@ -1588,7 +1394,6 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1588,7 +1394,6 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Document FTP server, including -ftpTask option to execute / kill remote processes </li><li>Document FTP server, including -ftpTask option to execute / kill remote processes
</li><li>Add jdbcx to the javadocs </li><li>Add jdbcx to the javadocs
</li><li>Shrink the data file without closing the database (if the end of the file is empty) </li><li>Shrink the data file without closing the database (if the end of the file is empty)
</li><li>Add TPC-B style benchmark: download/tpcb_current.pdf
</li><li>Delay reading the row if data is not required </li><li>Delay reading the row if data is not required
</li><li>Eliminate undo log records if stored on disk (just one pointer per block, not per record) </li><li>Eliminate undo log records if stored on disk (just one pointer per block, not per record)
</li><li>User defined aggregate functions </li><li>User defined aggregate functions
...@@ -1626,6 +1431,13 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch. ...@@ -1626,6 +1431,13 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Support using a unique index for IS NULL (including linked tables) </li><li>Support using a unique index for IS NULL (including linked tables)
</li><li>Support linked tables to the current database </li><li>Support linked tables to the current database
</li><li>Support dynamic linked schema (automatically adding/updating/removing tables) </li><li>Support dynamic linked schema (automatically adding/updating/removing tables)
</li><li>Compatibility with Derby: VALUES(1), (2); SELECT * FROM (VALUES (1), (2)) AS myTable(c1)
</li><li>Compatibility: # is the start of a single line comment (MySQL) but date quote (Access). Mode specific
</li><li>Run benchmarks with JDK 1.5, JDK 1.6, java -server
</li><li>Optimizations: Faster hash function for strings, byte arrays, big decimal
</li><li>Improve trace feature: add replay functionality
</li><li>DatabaseEventListener: callback for all operations (including expected time, RUNSCRIPT) and cancel functionality
</li><li>H2 Console / large result sets: use 'streaming' instead of building the page in-memory
</li></ul> </li></ul>
<h3>Not Planned</h3> <h3>Not Planned</h3>
......
...@@ -580,7 +580,7 @@ public class Parser { ...@@ -580,7 +580,7 @@ public class Parser {
alias = readAliasIdentifier(); alias = readAliasIdentifier();
} }
} }
TableFilter filter = new TableFilter(session, table, alias, rightsChecked); TableFilter filter = new TableFilter(session, table, alias, rightsChecked, currentSelect);
return filter; return filter;
} }
...@@ -775,7 +775,7 @@ public class Parser { ...@@ -775,7 +775,7 @@ public class Parser {
alias = readAliasIdentifier(); alias = readAliasIdentifier();
} }
} }
TableFilter filter = new TableFilter(session, table, alias, rightsChecked); TableFilter filter = new TableFilter(session, table, alias, rightsChecked, currentSelect);
return filter; return filter;
} }
...@@ -1297,7 +1297,7 @@ public class Parser { ...@@ -1297,7 +1297,7 @@ public class Parser {
// select without FROM: convert to SELECT ... FROM SYSTEM_RANGE(1,1) // select without FROM: convert to SELECT ... FROM SYSTEM_RANGE(1,1)
Schema main = database.findSchema(Constants.SCHEMA_MAIN); Schema main = database.findSchema(Constants.SCHEMA_MAIN);
Table dual = new RangeTable(main, 1, 1); Table dual = new RangeTable(main, 1, 1);
TableFilter filter = new TableFilter(session, dual, null, rightsChecked); TableFilter filter = new TableFilter(session, dual, null, rightsChecked, currentSelect);
command.addTableFilter(filter, true); command.addTableFilter(filter, true);
} else { } else {
parseSelectSimpleFromPart(command); parseSelectSimpleFromPart(command);
......
...@@ -74,7 +74,7 @@ public class AlterTableAddConstraint extends SchemaCommand { ...@@ -74,7 +74,7 @@ public class AlterTableAddConstraint extends SchemaCommand {
int id = getObjectId(true, true); int id = getObjectId(true, true);
String name = generateConstraintName(id); String name = generateConstraintName(id);
ConstraintCheck check = new ConstraintCheck(getSchema(), id, name, table); ConstraintCheck check = new ConstraintCheck(getSchema(), id, name, table);
TableFilter filter = new TableFilter(session, table, null, false); TableFilter filter = new TableFilter(session, table, null, false, null);
checkExpression.mapColumns(filter, 0); checkExpression.mapColumns(filter, 0);
checkExpression = checkExpression.optimize(session); checkExpression = checkExpression.optimize(session);
check.setExpression(checkExpression); check.setExpression(checkExpression);
......
...@@ -216,4 +216,6 @@ public abstract class Query extends Prepared { ...@@ -216,4 +216,6 @@ public abstract class Query extends Prepared {
return isEverything(visitor); return isEverything(visitor);
} }
public abstract void updateAggregate(Session session) throws SQLException;
} }
...@@ -679,6 +679,20 @@ public class Select extends Query { ...@@ -679,6 +679,20 @@ public class Select extends Query {
} }
} }
} }
public void updateAggregate(Session session) throws SQLException {
for(int i=0; i<expressions.size(); i++) {
Expression e = (Expression) expressions.get(i);
e.updateAggregate(session);
}
if(condition != null) {
condition.updateAggregate(session);
}
if(having != null) {
having.updateAggregate(session);
}
}
public boolean isEverything(ExpressionVisitor visitor) { public boolean isEverything(ExpressionVisitor visitor) {
if(visitor.type == ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID) { if(visitor.type == ExpressionVisitor.SET_MAX_DATA_MODIFICATION_ID) {
......
...@@ -309,4 +309,9 @@ public class SelectUnion extends Query { ...@@ -309,4 +309,9 @@ public class SelectUnion extends Query {
public Query getRightQuery() { public Query getRightQuery() {
return right; return right;
} }
public void updateAggregate(Session session) throws SQLException {
left.updateAggregate(session);
right.updateAggregate(session);
}
} }
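The new updateAggregate hooks above push aggregate bookkeeping down into subqueries, which is what the changelog entry "GROUP BY expressions did not work correctly in subqueries" refers to. As an illustration only (not a query taken from the commit), the kind of statement involved mixes an outer GROUP BY with a subquery that has its own aggregate:

    import java.sql.*;

    public class SubqueryAggregateSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
            Statement stat = conn.createStatement();
            stat.execute("CREATE TABLE TEST(ID INT, NAME VARCHAR)");
            stat.execute("INSERT INTO TEST VALUES(1, 'Hello')");
            stat.execute("INSERT INTO TEST VALUES(1, 'World')");
            stat.execute("INSERT INTO TEST VALUES(2, 'Hi')");
            // A GROUP BY query whose select list contains a subquery with its own
            // aggregate; updating the outer aggregates now also reaches the subquery.
            ResultSet rs = stat.executeQuery(
                    "SELECT ID, COUNT(*), "
                    + "(SELECT MAX(NAME) FROM TEST T2 WHERE T2.ID = T1.ID) "
                    + "FROM TEST T1 GROUP BY ID");
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " " + rs.getInt(2) + " " + rs.getString(3));
            }
            conn.close();
        }
    }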
...@@ -32,6 +32,7 @@ public class ConnectionInfo { ...@@ -32,6 +32,7 @@ public class ConnectionInfo {
private boolean remote; private boolean remote;
private boolean ssl; private boolean ssl;
private boolean persistent; private boolean persistent;
private boolean unnamed;
static { static {
ObjectArray list = SetTypes.getSettings(); ObjectArray list = SetTypes.getSettings();
...@@ -84,6 +85,9 @@ public class ConnectionInfo { ...@@ -84,6 +85,9 @@ public class ConnectionInfo {
name = name.substring("ssl:".length()); name = name.substring("ssl:".length());
} else if(name.startsWith("mem:")) { } else if(name.startsWith("mem:")) {
persistent = false; persistent = false;
if(name.equals("mem:")) {
unnamed = true;
}
} else if(name.startsWith("file:")) { } else if(name.startsWith("file:")) {
name = name.substring("file:".length()); name = name.substring("file:".length());
persistent = true; persistent = true;
...@@ -118,6 +122,10 @@ public class ConnectionInfo { ...@@ -118,6 +122,10 @@ public class ConnectionInfo {
public boolean isPersistent() { public boolean isPersistent() {
return persistent; return persistent;
} }
public boolean isUnnamed() {
return unnamed;
}
private void readProperties(Properties info) throws SQLException { private void readProperties(Properties info) throws SQLException {
Object[] list = new Object[info.size()]; Object[] list = new Object[info.size()];
......
...@@ -293,7 +293,7 @@ public class Database implements DataHandler { ...@@ -293,7 +293,7 @@ public class Database implements DataHandler {
public FileStore openFile(String name, boolean mustExist) throws SQLException { public FileStore openFile(String name, boolean mustExist) throws SQLException {
if(mustExist && !FileUtils.exists(name)) { if(mustExist && !FileUtils.exists(name)) {
throw Message.getSQLException(Message.FILE_CORRUPTED_1, name); throw Message.getSQLException(Message.FILE_NOT_FOUND_1, name);
} }
FileStore store = FileStore.open(this, name, getMagic(), cipher, filePasswordHash); FileStore store = FileStore.open(this, name, getMagic(), cipher, filePasswordHash);
try { try {
......
...@@ -34,7 +34,12 @@ public class Engine { ...@@ -34,7 +34,12 @@ public class Engine {
private Session openSession(ConnectionInfo ci, boolean ifExists, String cipher) throws SQLException { private Session openSession(ConnectionInfo ci, boolean ifExists, String cipher) throws SQLException {
// may not remove properties here, otherwise they are lost if it is required to call it twice // may not remove properties here, otherwise they are lost if it is required to call it twice
String name = ci.getName(); String name = ci.getName();
Database database = (Database) databases.get(name); Database database;
if(ci.isUnnamed()) {
database = null;
} else {
database = (Database) databases.get(name);
}
User user = null; User user = null;
boolean opened = false; boolean opened = false;
if(database == null) { if(database == null) {
...@@ -50,7 +55,9 @@ public class Engine { ...@@ -50,7 +55,9 @@ public class Engine {
user.setUserPasswordHash(ci.getUserPasswordHash()); user.setUserPasswordHash(ci.getUserPasswordHash());
database.setMasterUser(user); database.setMasterUser(user);
} }
databases.put(name, database); if(!ci.isUnnamed()) {
databases.put(name, database);
}
} }
synchronized(database) { synchronized(database) {
if(database.isClosing()) { if(database.isClosing()) {
......
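Taken together, the ConnectionInfo and Engine changes above make jdbc:h2:mem: (no database name) truly private: an unnamed in-memory database is never put into the shared database map, so every connection gets its own instance, while a named in-memory URL is still shared. A small sketch of the observable difference (illustrative code, not part of the commit):

    import java.sql.*;

    public class UnnamedMemorySketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            // Unnamed in-memory URL: each connection gets its own private database,
            // because Engine no longer registers it in the shared map.
            Connection c1 = DriverManager.getConnection("jdbc:h2:mem:", "sa", "");
            Connection c2 = DriverManager.getConnection("jdbc:h2:mem:", "sa", "");
            c1.createStatement().execute("CREATE TABLE TEST(ID INT)");
            // c2 does not see TEST, so creating it again succeeds.
            c2.createStatement().execute("CREATE TABLE TEST(ID INT)");

            // Named in-memory URL: connections using the same name share one database.
            Connection c3 = DriverManager.getConnection("jdbc:h2:mem:shared", "sa", "");
            Connection c4 = DriverManager.getConnection("jdbc:h2:mem:shared", "sa", "");
            c3.createStatement().execute("CREATE TABLE TEST(ID INT)");
            ResultSet rs = c4.createStatement().executeQuery("SELECT COUNT(*) FROM TEST");
            rs.next();
            System.out.println("rows in shared TEST: " + rs.getInt(1));
            c1.close();
            c2.close();
            c3.close();
            c4.close();
        }
    }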
...@@ -119,6 +119,10 @@ public class Aggregate extends Expression { ...@@ -119,6 +119,10 @@ public class Aggregate extends Expression {
// on.updateAggregate(); // on.updateAggregate();
// } // }
HashMap group = select.getCurrentGroup(); HashMap group = select.getCurrentGroup();
if(group == null) {
// this is a different level (the enclosing query)
return;
}
AggregateData data = (AggregateData) group.get(this); AggregateData data = (AggregateData) group.get(this);
if(data == null) { if(data == null) {
data = new AggregateData(type); data = new AggregateData(type);
......
@@ -32,17 +32,14 @@ public class ExpressionColumn extends Expression {
private int queryLevel;
private Column column;
private boolean evaluatable;
-private Select select;

public ExpressionColumn(Database database, Select select, Column column) {
this.database = database;
-this.select = select;
this.column = column;
}

public ExpressionColumn(Database database, Select select, String schemaName, String tableAlias, String columnName) {
this.database = database;
-this.select = select;
this.schemaName = schemaName;
this.tableAlias = tableAlias;
this.columnName = columnName;
@@ -114,10 +111,15 @@ public class ExpressionColumn extends Expression {
public void updateAggregate(Session session) throws SQLException {
Value now = resolver.getValue(column);
+Select select = resolver.getSelect();
if(select == null) {
throw Message.getSQLException(Message.MUST_GROUP_BY_COLUMN_1, getSQL());
}
HashMap values = select.getCurrentGroup();
+if(values == null) {
+// this is a different level (the enclosing query)
+return;
+}
Value v = (Value)values.get(this);
if(v==null) {
values.put(this, now);
@@ -131,6 +133,7 @@ public class ExpressionColumn extends Expression {
public Value getValue(Session session) throws SQLException {
// TODO refactor: simplify check if really part of an aggregated value / detection of
// usage of non-grouped by columns without aggregate function
+Select select = resolver.getSelect();
if(select != null) {
HashMap values = select.getCurrentGroup();
if(values != null) {
...
@@ -83,8 +83,8 @@ public class Subquery extends Expression {
return "(" + query.getPlan() +")";
}

-public void updateAggregate(Session session) {
-// TODO exists: is it possible that the subquery contains related aggregates? probably not
+public void updateAggregate(Session session) throws SQLException {
+query.updateAggregate(session);
}

private Expression getExpression() {
...
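Together, the Aggregate, ExpressionColumn and Subquery changes above make an aggregate inside a subquery use the group of its own query level instead of the enclosing query's. A small JDBC sketch of the kind of statement this fixes, using the same tables and query as the test case added near the end of this commit (the in-memory URL and credentials are just for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GroupByInSubqueryExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", "");
        Statement stat = conn.createStatement();
        stat.execute("create table people(family varchar(1) not null, person varchar(1) not null)");
        stat.execute("create table cars(family varchar(1) not null, car varchar(1) not null)");
        stat.execute("insert into people values(1, 1), (2, 1), (2, 2), (3, 1), (5, 1)");
        stat.execute("insert into cars values(2, 1), (2, 2), (3, 1), (3, 2), (3, 3), (4, 1)");
        // the correlated COUNT in the subquery is evaluated per family;
        // before this change the GROUP BY of the outer query confused it
        ResultSet rs = stat.executeQuery(
                "select family, (select count(car) from cars where cars.family = people.family) as x "
                + "from people group by family order by family");
        while (rs.next()) {
            System.out.println(rs.getString(1) + ": " + rs.getInt(2));
        }
        conn.close();
    }
}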
@@ -58,6 +58,8 @@ public class AppServer {
ssl = Boolean.valueOf(args[++i]).booleanValue();
} else if("-webAllowOthers".equals(args[i])) {
allowOthers = Boolean.valueOf(args[++i]).booleanValue();
+// } else if("-baseDir".equals(args[i])) {
+// String baseDir = args[++i];
}
}
// TODO gcj: don't load drivers in case of GCJ
...
@@ -359,6 +359,8 @@ function showList(s) {
} else {
showOutput('');
}
+// scroll to the top left
+top.h2result.scrollTo(0, 0);
}

function retrieveList(s) {
...
@@ -249,7 +249,7 @@ public class Column {
public void prepareExpression(Session session) throws SQLException {
if(defaultExpression != null) {
-computeTableFilter = new TableFilter(session, table, null, false);
+computeTableFilter = new TableFilter(session, table, null, false, null);
defaultExpression.mapColumns(computeTableFilter, 0);
defaultExpression = defaultExpression.optimize(session);
}
...
@@ -4,6 +4,7 @@
*/
package org.h2.table;

+import org.h2.command.dml.Select;
import org.h2.value.Value;

public interface ColumnResolver {
@@ -13,5 +14,6 @@ public interface ColumnResolver {
String getSchemaName();
Value getValue(Column column);
TableFilter getTableFilter();
+Select getSelect();
}
@@ -4,6 +4,7 @@
*/
package org.h2.table;

+import org.h2.command.dml.Select;
import org.h2.value.Value;

public class SingleColumnResolver implements ColumnResolver {
@@ -39,4 +40,8 @@ public class SingleColumnResolver implements ColumnResolver {
return null;
}

+public Select getSelect() {
+return null;
+}
}
@@ -6,6 +6,7 @@ package org.h2.table;
import java.sql.SQLException;

+import org.h2.command.dml.Select;
import org.h2.engine.Constants;
import org.h2.engine.Right;
import org.h2.engine.Session;
@@ -33,6 +34,7 @@ public class TableFilter implements ColumnResolver {
private Cursor cursor;
private int scanCount;
private boolean used; // used in the plan
+private Select select;

// conditions that can be used for direct index lookup (start or end)
private ObjectArray indexConditions = new ObjectArray();
@@ -53,11 +55,16 @@ public class TableFilter implements ColumnResolver {
private Expression fullCondition;
private boolean rightsChecked;

-public TableFilter(Session session, Table table, String alias, boolean rightsChecked) {
+public TableFilter(Session session, Table table, String alias, boolean rightsChecked, Select select) {
this.session = session;
this.table = table;
this.alias = alias;
this.rightsChecked = rightsChecked;
+this.select = select;
+}
+
+public Select getSelect() {
+return select;
}

public Session getSession() {
...
@@ -27,7 +27,7 @@ public class Script {
private void showUsage() {
System.out.println("java "+getClass().getName()
-+ " -url <url> -user <user> [-password <pwd>] [-file <filename>] [-options <option> ...]");
++ " -url <url> -user <user> [-password <pwd>] [-script <filename>] [-options <option> ...]");
}

/**
@@ -39,7 +39,7 @@ public class Script {
* </li><li>-url jdbc:h2:... (database URL)
* </li><li>-user username
* </li><li>-password password
-* </li><li>-file filename (default file name is backup.sql)
+* </li><li>-script filename (default file name is backup.sql)
* </li><li>-options to specify a list of options (only for H2)
* </li></ul>
*
@@ -63,7 +63,7 @@ public class Script {
user = args[++i];
} else if(args[i].equals("-password")) {
password = args[++i];
-} else if(args[i].equals("-file")) {
+} else if(args[i].equals("-script")) {
file = args[++i];
} else if(args[i].equals("-options")) {
StringBuffer buff1 = new StringBuffer();
...
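With the parameter name back to -script, the Script and RunScript tools can be driven from code the same way TestTools does further below. A short sketch (the URL, user, password and file name here are placeholders, not values from this commit):

import org.h2.tools.RunScript;
import org.h2.tools.Script;

public class ScriptToolsExample {
    public static void main(String[] args) throws Exception {
        // write the whole database to a SQL script file
        Script.main(new String[] {
                "-url", "jdbc:h2:data/test", "-user", "sa", "-password", "sa",
                "-script", "backup.sql" });
        // execute the script against another (or a fresh) database
        RunScript.main(new String[] {
                "-url", "jdbc:h2:data/test2", "-user", "sa", "-password", "sa",
                "-script", "backup.sql" });
    }
}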
@@ -74,13 +74,13 @@ public class Server implements Runnable {
* The following options are supported:
* <ul>
* <li>-help or -? (print the list of options)
-* </li><li>-web (start the Web Server)
+* </li><li>-web (start the Web Server / H2 Console application)
* </li><li>-tcp (start the TCP Server)
* </li><li>-tcpShutdown {url} (shutdown the running TCP Server, URL example: tcp://localhost:9094)
* </li><li>-odbc (start the ODBC Server)
* </li><li>-browser (start a browser and open a page to connect to the Web Server)
* </li><li>-log [true|false] (enable or disable logging)
-* </li><li>-baseDir {directory} (sets the base directory for database files)
+* </li><li>-baseDir {directory} (sets the base directory for database files; not for H2 Console)
* </li><li>-ifExists [true|false] (only existing databases may be opened)
* </li><li>-ftp (start the FTP Server)
* </li></ul>
...
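For completeness, a hedged sketch of starting the servers with these options from Java rather than from the command line (the base directory name is an assumption; per the note above, -baseDir does not apply to the H2 Console):

import org.h2.tools.Server;

public class StartServersExample {
    public static void main(String[] args) throws Exception {
        // roughly equivalent to: java org.h2.tools.Server -tcp -web -baseDir data
        // starts the TCP Server and the Web Server / H2 Console;
        // the servers keep running until the process is stopped
        Server.main(new String[] { "-tcp", "-web", "-baseDir", "data" });
    }
}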
@@ -90,56 +90,34 @@ java -Xmx512m -Xrunhprof:cpu=samples,depth=8 org.h2.tools.RunScript -url jdbc:h2
TestAll test = new TestAll();
test.printSystem();

/*
-drop table people;
-drop table cars;
-create table people (family varchar(1) not null, person
-varchar(1) not null);
-create table cars (family varchar(1) not null, car
-varchar(1) not null);
-insert into people values(1, 1);
-insert into people values(2, 1);
-insert into people values(2, 2);
-insert into people values(3, 1);
-insert into people values(5, 1);
-insert into cars values(2, 1);
-insert into cars values(2, 2);
-insert into cars values(3, 1);
-insert into cars values(3, 2);
-insert into cars values(3, 3);
-insert into cars values(4, 1);
-select family, (select count(car) from cars where cars.family = people.family) as x
-from people group by family;
+how to make -baseDir work for H2 Console?
*/

-// link_table_update.patch.txt
-// runscript and script: use 'script' parameter as before
-// autocomplete: scroll up on new list
-// doc array
-// www.inventec.ch/chdh
-// www.source-code.biz

/*
Pavel Ganelin
Integrate patches www.dullesopen.com/software/h2-database-03-04-07-mod.src.zip
*/

/*
drop all objects;
create table parent(id int primary key, parent int);
insert into parent values(1, null), (2, 1), (3, 1);

with test_view(id, parent) as
select id, parent from parent where parent is null
union all
select parent.id, parent.parent from test_view, parent
where parent.parent = test_view.id
select * from test_view;

with test_view(id, parent) as
select id, parent from parent where id = 2
union all
select parent.id, parent.parent from test_view, parent
where parent.parent = test_view.id
select * from test_view;

drop view test_view;

@LOOP 10 with test_view(id, parent) as
@@ -166,8 +144,6 @@ drop table abc;
// run TestHalt
-// document backup command
// WHERE FLAG does not use index, but WHERE FLAG=TRUE does
// drop table test;
// CREATE TABLE test (id int, flag BIT NOT NULL);
@@ -186,101 +162,11 @@ drop table abc;
// EXPLAIN SELECT * FROM test WHERE id between 2 and 3 AND flag=true;
// EXPLAIN SELECT * FROM test WHERE id=2 AND flag;

/*
-TODO: get FunctionAlias.java from mail
-Here are the proposed changes to support function overload for variable number of arguments
-Example/Test Case
-public class OverloadFunction extends TestCase {
-public void testOverload() throws Exception {
-Class.forName("org.h2.Driver");
-Connection ca = DriverManager.getConnection("jdbc:h2:mem:");
-Statement sa = ca.createStatement();
-sa.execute("CREATE ALIAS foo FOR \"" + this.getClass().getName() + ".foo\"");
-ResultSet rs1 = sa.executeQuery("SELECT foo('a',2)");
-rs1.next();
-assertEquals(2.0, rs1.getDouble(1));
-ResultSet rs2 = sa.executeQuery("SELECT foo('a',2,3,4)");
-rs2.next();
-assertEquals(9.0, rs2.getDouble(1));
-try {
-ResultSet rs = sa.executeQuery("SELECT foo()");
-fail();
-} catch (SQLException e) {
-e.printStackTrace();
-}
-try {
-ResultSet rs = sa.executeQuery("SELECT foo('a')");
-fail();
-} catch (SQLException e) {
-e.printStackTrace();
-}
-try {
-ResultSet rs = sa.executeQuery("SELECT foo(2,'a')");
-fail();
-} catch (SQLException e) {
-e.printStackTrace();
-}
-try {
-ResultSet rs = sa.executeQuery("SELECT foo('a',2,3)");
-fail();
-} catch (SQLException e) {
-e.printStackTrace();
-}
-try {
-ResultSet rs = sa.executeQuery("SELECT foo('a',2,3,4,5)");
-fail();
-} catch (SQLException e) {
-e.printStackTrace();
-}
-}
-public static double foo(String s, int i) {
-return i;
-}
-public static double foo(String s, int i, double d1, double d2) {
-return i + d1 + d2;
-}
-}
-Changes in the Parser
-CODE
-private JavaFunction readJavaFunction(String name) throws SQLException {
-FunctionAlias functionAlias = database.findFunctionAlias(name);
-if (functionAlias == null) {
-// TODO compatibility: maybe support 'on the fly java functions' as HSQLDB ( CALL "java.lang.Math.sqrt"(2.0) )
-throw Message.getSQLException(Message.FUNCTION_NOT_FOUND_1, name);
-}
-int paramCount = functionAlias.getParameterCount();
-int max = functionAlias.getMaxParameterCount();
-ObjectArray list = new ObjectArray(paramCount);
-do {
-if (functionAlias.isAcceptableParameterCount(list.size())) {
-if (readIf(")"))
-break;
-}
-if (list.size() == max) {
-read(")"); // force syntax error for extra argument
-break;
-}
-if (list.size() > 0) {
-read(",");
-}
-Expression e = readExpression();
-list.add(e);
-} while (true);
-Expression[] args = new Expression[list.size()];
-for (int i = 0; i < args.length; i++) {
-args[i] = (Expression) list.get(i);
-}
-JavaFunction func = new JavaFunction(functionAlias, args);
-return func;
-}
-I also attached FunctionAlias.java file
-Pavel
-*/
+Automate real power off test
+*/

// h2
// update FOO set a = dateadd('second', 4320000, a);
@@ -312,8 +198,6 @@ Pavel
// -- Oracle, Derby: 10, 11
// -- PostgreSQL, H2, HSQLDB: 1, 2

// auto-upgrade application:
// check if new version is available
// (option: digital signature)
@@ -338,7 +222,7 @@ Pavel
// long running test with the same database
// repeatable test with a very big database (making backups of the database files)
-// the conversion is done automatically when the new engine connects.
+// data conversion should be done automatically when the new engine connects.

if(args.length>0) {
if("crash".equals(args[0])) {
@@ -543,24 +427,24 @@ Pavel
}

void testUnit() {
-// new TestBitField().runTest(this);
+new TestBitField().runTest(this);
-// new TestCompress().runTest(this);
+new TestCompress().runTest(this);
-// new TestDataPage().runTest(this);
+new TestDataPage().runTest(this);
-// new TestExit().runTest(this);
+new TestExit().runTest(this);
-// new TestFileLock().runTest(this);
+new TestFileLock().runTest(this);
-// new TestIntArray().runTest(this);
+new TestIntArray().runTest(this);
-// new TestIntIntHashMap().runTest(this);
+new TestIntIntHashMap().runTest(this);
-// new TestOverflow().runTest(this);
+new TestOverflow().runTest(this);
-// new TestPattern().runTest(this);
+new TestPattern().runTest(this);
-// new TestReader().runTest(this);
+new TestReader().runTest(this);
-// new TestSampleApps().runTest(this);
+new TestSampleApps().runTest(this);
-// new TestScriptReader().runTest(this);
+new TestScriptReader().runTest(this);
-// new TestSecurity().runTest(this);
+new TestSecurity().runTest(this);
-// new TestStreams().runTest(this);
+new TestStreams().runTest(this);
-// new TestStringCache().runTest(this);
+new TestStringCache().runTest(this);
-// new TestStringUtils().runTest(this);
+new TestStringUtils().runTest(this);
new TestTools().runTest(this);
-// new TestValueHashMap().runTest(this);
+new TestValueHashMap().runTest(this);
}
void testDatabase() throws Exception {
@@ -568,60 +452,60 @@ Pavel
beforeTest();

// db
-// new TestScriptSimple().runTest(this);
+new TestScriptSimple().runTest(this);
-// new TestScript().runTest(this);
+new TestScript().runTest(this);
-// new TestAutoRecompile().runTest(this);
+new TestAutoRecompile().runTest(this);
-// new TestBackup().runTest(this);
+new TestBackup().runTest(this);
-// new TestBatchUpdates().runTest(this);
+new TestBatchUpdates().runTest(this);
-// new TestBigDb().runTest(this);
+new TestBigDb().runTest(this);
-// new TestBigResult().runTest(this);
+new TestBigResult().runTest(this);
-// new TestCache().runTest(this);
+new TestCache().runTest(this);
-// new TestCases().runTest(this);
+new TestCases().runTest(this);
-// new TestCheckpoint().runTest(this);
+new TestCheckpoint().runTest(this);
-// new TestCluster().runTest(this);
+new TestCluster().runTest(this);
-// new TestCompatibility().runTest(this);
+new TestCompatibility().runTest(this);
-// new TestCsv().runTest(this);
+new TestCsv().runTest(this);
-// new TestFunctions().runTest(this);
+new TestFunctions().runTest(this);
-// new TestIndex().runTest(this);
+new TestIndex().runTest(this);
-// new TestLinkedTable().runTest(this);
+new TestLinkedTable().runTest(this);
-// new TestListener().runTest(this);
+new TestListener().runTest(this);
-// new TestLob().runTest(this);
+new TestLob().runTest(this);
-// new TestLogFile().runTest(this);
+new TestLogFile().runTest(this);
-// new TestMemoryUsage().runTest(this);
+new TestMemoryUsage().runTest(this);
-// new TestMultiConn().runTest(this);
+new TestMultiConn().runTest(this);
-// new TestMultiDimension().runTest(this);
+new TestMultiDimension().runTest(this);
-// new TestMultiThread().runTest(this);
+new TestMultiThread().runTest(this);
-// new TestOpenClose().runTest(this);
+new TestOpenClose().runTest(this);
-// new TestOptimizations().runTest(this);
+new TestOptimizations().runTest(this);
-// new TestPowerOff().runTest(this);
+new TestPowerOff().runTest(this);
-// new TestReadOnly().runTest(this);
+new TestReadOnly().runTest(this);
-// new TestRights().runTest(this);
+new TestRights().runTest(this);
-// new TestRunscript().runTest(this);
+new TestRunscript().runTest(this);
-// new TestSQLInjection().runTest(this);
+new TestSQLInjection().runTest(this);
-// new TestSequence().runTest(this);
+new TestSequence().runTest(this);
-// new TestSpaceReuse().runTest(this);
+new TestSpaceReuse().runTest(this);
-// new TestSpeed().runTest(this);
+new TestSpeed().runTest(this);
-// new TestTempTables().runTest(this);
+new TestTempTables().runTest(this);
-// new TestTransaction().runTest(this);
+new TestTransaction().runTest(this);
-// new TestTriggersConstraints().runTest(this);
+new TestTriggersConstraints().runTest(this);
-// new TestTwoPhaseCommit().runTest(this);
+new TestTwoPhaseCommit().runTest(this);
-//
-// // server
+// server
-// new TestNestedLoop().runTest(this);
+new TestNestedLoop().runTest(this);
-//
-// // jdbc
+// jdbc
-// new TestCancel().runTest(this);
+new TestCancel().runTest(this);
-// new TestDataSource().runTest(this);
+new TestDataSource().runTest(this);
-// new TestManyJdbcObjects().runTest(this);
+new TestManyJdbcObjects().runTest(this);
-// new TestMetaData().runTest(this);
+new TestMetaData().runTest(this);
-// new TestNativeSQL().runTest(this);
+new TestNativeSQL().runTest(this);
-// new TestPreparedStatement().runTest(this);
+new TestPreparedStatement().runTest(this);
-// new TestResultSet().runTest(this);
+new TestResultSet().runTest(this);
-// new TestStatement().runTest(this);
+new TestStatement().runTest(this);
-// new TestTransactionIsolation().runTest(this);
+new TestTransactionIsolation().runTest(this);
-// new TestUpdatableResultSet().runTest(this);
+new TestUpdatableResultSet().runTest(this);
-// new TestXA().runTest(this);
+new TestXA().runTest(this);
-// new TestZloty().runTest(this);
+new TestZloty().runTest(this);
afterTest();
}
...
/*
* Copyright 2004-2006 H2 Group. Licensed under the H2 License, Version 1.0 (http://h2database.com/html/license.html).
* Initial Developer: H2 Group
*/
package org.h2.test.cases;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import org.h2.tools.DeleteDbFiles;
public class TestWithRecursive {
public static void main(String[] args) throws Exception {
Class.forName("org.h2.Driver");
DeleteDbFiles.execute(".", "test", true);
String url = "jdbc:h2:test";
Connection conn = DriverManager.getConnection(url, "sa", "sa");
Statement stat = conn.createStatement();
stat.execute("create table parent(id int primary key, parent int)");
stat.execute("insert into parent values(1, null), (2, 1), (3, 1)");
ResultSet rs = stat.executeQuery(
"with test_view(id, parent) as \n"+
"select id, parent from parent where id = 1 \n"+
"union all select parent.id, parent.parent from test_view, parent \n"+
"where parent.parent = test_view.id \n" +
"select * from test_view");
System.out.println("query:");
while(rs.next()) {
System.out.println(rs.getString(1));
}
stat.execute("drop view if exists test_view");
System.out.println("prepared:");
PreparedStatement prep = conn.prepareStatement(
"with test_view(id, parent) as \n"+
"select id, parent from parent where id = ? \n"+
"union all select parent.id, parent.parent from test_view, parent \n"+
"where parent.parent = test_view.id \n" +
"select * from test_view");
prep.setInt(1, 1);
rs = prep.executeQuery();
while(rs.next()) {
System.out.println(rs.getString(1));
}
conn.close();
}
}
@@ -17,6 +17,7 @@ import java.util.Date;
import java.util.LinkedList;
import java.util.Random;

+import org.h2.test.TestAll;
import org.h2.test.TestBase;
import org.h2.tools.Backup;
import org.h2.tools.DeleteDbFiles;
@@ -292,5 +293,11 @@ public abstract class TestHalt extends TestBase {
}
return buff.toString();
}

+public TestBase init(TestAll conf) throws Exception {
+super.init(conf);
+BASE_DIR = "dataHalt";
+return this;
+}
}
@@ -8,6 +8,11 @@ import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

+import java.util.ArrayList;
+import org.h2.test.TestAll;
+import org.h2.test.TestBase;
+import org.h2.test.db.TestScript;
public class TestHaltApp extends TestHalt {
@@ -128,5 +133,5 @@ public class TestHaltApp extends TestHalt {
traceOperation("rollback");
conn.rollback();
}
}
--- special grammar and test cases ---------------------------------------------------------------------------------------------
create table people (family varchar(1) not null, person varchar(1) not null);
> ok
create table cars (family varchar(1) not null, car varchar(1) not null);
> ok
insert into people values(1, 1), (2, 1), (2, 2), (3, 1), (5, 1);
> update count: 5
insert into cars values(2, 1), (2, 2), (3, 1), (3, 2), (3, 3), (4, 1);
> update count: 6
select family, (select count(car) from cars where cars.family = people.family) as x
from people group by family order by family;
> FAMILY X
> ------ -
> 1 0
> 2 2
> 3 3
> 5 0
> rows (ordered): 4
drop table people, cars;
> ok
select (1, 2);
> 1, 2
> ------
...
@@ -44,9 +44,9 @@ public class TestTools extends TestBase {
conn.createStatement().execute("CREATE TABLE TEST(ID INT PRIMARY KEY, NAME VARCHAR)");
conn.createStatement().execute("INSERT INTO TEST VALUES(1, 'Hello')");
conn.close();
-Script.main(new String[]{"-url", url, "-user", user, "-password", password, "-file", fileName, "-options", "nodata", "compression", "lzf", "cipher", "xtea", "password", "'123'"});
+Script.main(new String[]{"-url", url, "-user", user, "-password", password, "-script", fileName, "-options", "nodata", "compression", "lzf", "cipher", "xtea", "password", "'123'"});
DeleteDbFiles.main(new String[]{"-dir", BASE_DIR, "-db", "utils", "-quiet"});
-RunScript.main(new String[]{"-url", url, "-user", user, "-password", password, "-file", fileName, "-options", "compression", "lzf", "cipher", "xtea", "password", "'123'"});
+RunScript.main(new String[]{"-url", url, "-user", user, "-password", password, "-script", fileName, "-options", "compression", "lzf", "cipher", "xtea", "password", "'123'"});
conn = DriverManager.getConnection("jdbc:h2:" + BASE_DIR+ "/utils", "sa", "abc");
ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM TEST");
checkFalse(rs.next());
...