Commit bcee1719 authored by Thomas Mueller

--no commit message

Parent 130c3fb2
......@@ -23,6 +23,16 @@ H2 Database Engine
<a href="http://www.h2database.com/h2-2007-03-04.zip">Platform-Independent Zip (3.6 MB)</a><br />
</p>
<h3>Download Mirror</h3>
<p>
<a href="http://code.google.com/p/h2database/downloads/list">Platform-Independent Zips</a><br />
</p>
<h3>Subversion Source Repository</h3>
<p>
<a href="http://code.google.com/p/h2database/source">Google Code</a>
</p>
<p>
For details about changes, see the <a href="history.html">Change Log</a>.
</p>
......
......@@ -253,6 +253,9 @@ It looks like the development of this database has stopped. The last release was
</tr><tr>
<td><a href="http://hibernate.org">Hibernate</a></td>
<td>Relational persistence for idiomatic Java (O-R mapping tool)</td>
</tr><tr>
<td><a href="http://geosysin.iict.ch/irstv-trac/wiki/H2spatial/Why">H2 Spatial</a></td>
<td>A project to add spatial functions to the H2 database</td>
</tr><tr>
<td><a href="http://jamwiki.org">JAMWiki</a></td>
<td>Java-based Wiki engine.</td>
......@@ -378,7 +381,7 @@ This is achieved using different database URLs. The settings in the URLs are not
<tr>
<td>Remote using TCP/IP</td>
<td>
jdbc:h2:tcp://&lt;server&gt;[&lt;port&gt;]/&lt;databaseName&gt;<br />
jdbc:h2:tcp://&lt;server&gt;[:&lt;port&gt;]/&lt;databaseName&gt;<br />
jdbc:h2:tcp://localhost/test<br />
jdbc:h2:tcp://dbserv:8084/sample
</td>
......@@ -386,31 +389,31 @@ This is achieved using different database URLs. The settings in the URLs are not
<tr>
<td>Remote using SSL/TLS</td>
<td>
jdbc:h2:ssl://&lt;server&gt;[&lt;port&gt;]/&lt;databaseName&gt;<br />
jdbc:h2:ssl://&lt;server&gt;[:&lt;port&gt;]/&lt;databaseName&gt;<br />
jdbc:h2:ssl://secureserv:8085/sample;
</td>
</tr>
<tr>
<td>Using Encrypted Files</td>
<td>
jdbc:h2:&lt;url&gt;;CIPHER=[AES][XTEA]<br />
jdbc:h2:&lt;url&gt;;CIPHER=[AES|XTEA]<br />
jdbc:h2:ssl://secureserv/testdb;CIPHER=AES<br />
jdbc:h2:file:secure;CIPHER=XTEA<br />
jdbc:h2:file:~/secure;CIPHER=XTEA<br />
</td>
</tr>
<tr>
<td>File Locking Methods</td>
<td>
jdbc:h2:&lt;url&gt;;FILE_LOCK={NO|FILE|SOCKET}<br />
jdbc:h2:file:quickAndDirty;FILE_LOCK=NO<br />
jdbc:h2:file:private;CIPHER=XTEA;FILE_LOCK=SOCKET<br />
jdbc:h2:file:~/quickAndDirty;FILE_LOCK=NO<br />
jdbc:h2:file:~/private;CIPHER=XTEA;FILE_LOCK=SOCKET<br />
</td>
</tr>
<tr>
<td>Only Open if it Already Exists</td>
<td>
jdbc:h2:&lt;url&gt;;IFEXISTS=TRUE<br />
jdbc:h2:file:sample;IFEXISTS=TRUE<br />
jdbc:h2:file:~/sample;IFEXISTS=TRUE<br />
</td>
</tr>
<tr>
......@@ -423,21 +426,21 @@ This is achieved using different database URLs. The settings in the URLs are not
<td>User Name and/or Password</td>
<td>
jdbc:h2:&lt;url&gt;[;USER=&lt;username&gt;][;PASSWORD=&lt;value&gt;]<br />
jdbc:h2:file:sample;USER=sa;PASSWORD=123<br />
jdbc:h2:file:~/sample;USER=sa;PASSWORD=123<br />
</td>
</tr>
<tr>
<td>Log Index Changes</td>
<td>
jdbc:h2:&lt;url&gt;;LOG=2<br />
jdbc:h2:file:sample;LOG=2<br />
jdbc:h2:file:~/sample;LOG=2<br />
</td>
</tr>
<tr>
<td>Debug Trace Settings</td>
<td>
jdbc:h2:&lt;url&gt;;TRACE_LEVEL_FILE=&lt;level 0..3&gt;<br />
jdbc:h2:file:sample;TRACE_LEVEL_FILE=3<br />
jdbc:h2:file:~/sample;TRACE_LEVEL_FILE=3<br />
</td>
</tr>
<tr>
......@@ -456,7 +459,7 @@ This is achieved using different database URLs. The settings in the URLs are not
<td>Changing Other Settings</td>
<td>
jdbc:h2:&lt;url&gt;;&lt;setting&gt;=&lt;value&gt;[;&lt;setting&gt;=&lt;value&gt;...]<br />
jdbc:h2:file:sample;TRACE_LEVEL_SYSTEM_OUT=3<br />
jdbc:h2:file:~/sample;TRACE_LEVEL_SYSTEM_OUT=3<br />
</td>
</tr>
</table>
......@@ -467,7 +470,7 @@ The prefix <code>file:</code> is optional. If no or only a relative path is used
directory is used as a starting point. The case sensitivity of the path and database name depend on the
operating system; however, it is suggested to use lowercase letters only.
The database name must be at least three characters long (a limitation of File.createTempFile).
To point to the user directory, use ~/, as in: jdbc:h2:~/test.
To point to the user home directory, use ~/, as in: jdbc:h2:~/test.
<h3>Memory-Only Databases</h3>
<p>
......
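The URL rows corrected above all follow one pattern: a base URL followed by semicolon-separated `SETTING=VALUE` pairs. A minimal sketch of composing such a URL (the helper class and method names are illustrative, not part of H2):

```java
// Illustrative sketch of H2's database URL syntax: a base URL followed by
// semicolon-separated SETTING=VALUE pairs, as in the table above.
// The UrlSketch class itself is hypothetical, not part of H2.
public class UrlSketch {
    static String withSettings(String baseUrl, String... settings) {
        StringBuilder b = new StringBuilder(baseUrl);
        for (String s : settings) {
            b.append(';').append(s);
        }
        return b.toString();
    }

    public static void main(String[] args) {
        // matches the "Using Encrypted Files" row above
        System.out.println(withSettings("jdbc:h2:ssl://secureserv/testdb", "CIPHER=AES"));
    }
}
```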
......@@ -37,7 +37,10 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
<h3>Version 1.0 (Current)</h3>
<h3>Version 1.0 / 2007-TODO</h3><ul>
<li>CREATE TABLE ... AS SELECT now needs less memory. While inserting the rows, the undo
<li>Calculation of cache memory usage has been improved.
</li><li>In some situations, records were released too late from the cache. Fixed.
</li><li>The cache size is now measured in KB instead of blocks of 128 bytes.
</li><li>CREATE TABLE ... AS SELECT now needs less memory. While inserting the rows, the undo
log is temporarily disabled. This avoids out-of-memory problems when creating large tables.
</li><li>The per session undo log can now be disabled. This setting is useful for bulk operations
that don't need to be atomic, like bulk delete or update.
......@@ -1056,6 +1059,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
</li><li>Store dates in local timezone (portability of database files)
</li><li>Ability to resize the cache array when resizing the cache
</li><li>Automatic conversion from WHERE X>10 AND X>20 to X>20
</li><li>Time based cache writing (one second after writing the log)
</li></ul>
<h3>Not Planned</h3>
......
......@@ -11,7 +11,23 @@ Quickstart
<table class="content"><tr class="content"><td class="content"><div class="contentDiv">
<h1>Quickstart</h1>
<a href="#embedding">
Embedding H2 in an Application</a><br />
<a href="#h2_console">
The H2 Console Application</a><br />
<br /><a name="embedding"></a>
<h2>Embedding H2 in an Application</h2>
This database can be used in embedded mode, or in server mode. To use it in embedded mode, you need to:
<ul>
<li>Add <code>h2.jar</code> to the classpath
</li><li>Use the JDBC driver class: <code>org.h2.Driver</code>
</li><li>Use the database URL <code>jdbc:h2:~/test</code> to open the database 'test' in your user home directory
</ul>
<br /><a name="h2_console"></a>
<h2>The H2 Console Application</h2>
The Console lets you access a SQL database using a browser interface.
<br />
......
......@@ -23,6 +23,11 @@ pre {
padding: 4px;
}
code {
background-color: #ece9d8;
padding: 0px 2px;
}
body {
margin: 0px;
}
......
......@@ -222,17 +222,7 @@ the administrator of this database.
H2 currently supports three servers: a Web Server, a TCP Server and an ODBC Server.
The servers can be started in different ways.
<h3>Limitations of the Server</h3>
There are currently a few limitations when using the server or cluster mode:
<ul>
<li>Statement.cancel() is only supported in embedded mode.
A connection can only execute one operation at a time in server or cluster mode,
and is blocked until this operation is finished.
</li><li>CLOBs and BLOBs are sent to the server in one piece and not as a stream.
That means those objects need to fit in memory when using the server or cluster mode.
</li></ul>
<h3>Starting from Command Line</h3>
<h3>Starting the Server from Command Line</h3>
To start the Server from the command line with the default settings, run
<pre>
java org.h2.tools.Server
......@@ -241,20 +231,23 @@ This will start the Server with the default options. To get the list of options,
<pre>
java org.h2.tools.Server -?
</pre>
The native version can also be started in this way:
<pre>
h2-server -?
</pre>
There are options available to use different ports, to start only some
parts of the Server, and so on. For details, see the API documentation of the Server tool.
<h3>Starting within an Application</h3>
<h3>Connecting to the TCP Server</h3>
To remotely connect to a database using the TCP server, use the following driver and database URL:
<ul>
<li>JDBC driver class: org.h2.Driver
</li><li>Database URL: jdbc:h2:tcp://localhost/~/test
</li></ul>
For details about the database URL, see the Features documentation.
<h3>Starting the Server within an Application</h3>
It is also possible to start and stop a Server from within an application. Sample code:
<pre>
import org.h2.tools.Server;
...
// start the TCP Server with SSL enabled
String[] args = new String[]{"-ssl", "true"};
// start the TCP Server
Server server = Server.createTcpServer(args).start();
...
// stop the TCP Server
......@@ -276,6 +269,16 @@ This function should be called after all connection to the databases are closed
to avoid recovery when the databases are opened the next time.
To stop a remote server, remote connections must be enabled on the server.
<h3>Limitations of the Server</h3>
There are currently a few limitations when using the server or cluster mode:
<ul>
<li>Statement.cancel() is only supported in embedded mode.
A connection can only execute one operation at a time in server or cluster mode,
and is blocked until this operation is finished.
</li><li>CLOBs and BLOBs are sent to the server in one piece and not as a stream.
That means those objects need to fit in memory when using the server or cluster mode.
</li></ul>
<br /><a name="using_hibernate"></a>
<h2>Using Hibernate</h2>
This database supports Hibernate version 3.1 and newer. You can use the HSQLDB Dialect,
......
......@@ -62,6 +62,7 @@ public class BackupCommand extends Prepared {
String fn = db.getName() + Constants.SUFFIX_DATA_FILE;
backupDiskFile(out, fn, db.getDataFile());
fn = db.getName() + Constants.SUFFIX_INDEX_FILE;
String base = FileUtils.getParent(fn);
backupDiskFile(out, fn, db.getIndexFile());
ObjectArray list = log.getActiveLogFiles();
int max = list.size();
......@@ -70,7 +71,7 @@ public class BackupCommand extends Prepared {
for(int i=0; i<list.size(); i++) {
LogFile lf = (LogFile) list.get(i);
fn = lf.getFileName();
backupFile(out, fn);
backupFile(out, base, fn);
db.setProgress(DatabaseEventListener.STATE_BACKUP_FILE, name, i, max);
}
String prefix = db.getDatabasePath();
......@@ -79,7 +80,7 @@ public class BackupCommand extends Prepared {
for(int i=0; i<fileList.size(); i++) {
fn = (String) fileList.get(i);
if(fn.endsWith(Constants.SUFFIX_HASH_FILE) || fn.endsWith(Constants.SUFFIX_LOB_FILE)) {
backupFile(out, fn);
backupFile(out, base, fn);
}
}
}
......@@ -109,8 +110,14 @@ public class BackupCommand extends Prepared {
out.closeEntry();
}
private void backupFile(ZipOutputStream out, String fn) throws SQLException, IOException {
out.putNextEntry(new ZipEntry(FileUtils.getFileName(fn)));
private void backupFile(ZipOutputStream out, String base, String fn) throws SQLException, IOException {
String f = FileUtils.getAbsolutePath(fn);
base = FileUtils.getAbsolutePath(base);
if(!f.startsWith(base)) {
throw Message.getInternalError(f + " does not start with " + base);
}
f = f.substring(base.length());
out.putNextEntry(new ZipEntry(f));
InputStream in = FileUtils.openFileInputStream(fn);
IOUtils.copyAndCloseInput(in, out);
out.closeEntry();
......
......@@ -103,7 +103,7 @@ public class Constants {
public static final int IO_BUFFER_SIZE = 4 * 1024;
public static final int IO_BUFFER_SIZE_COMPRESS = 128 * 1024;
public static final int DEFAULT_CACHE_SIZE_LINEAR_INDEX = 1 << 8;
public static final int DEFAULT_CACHE_SIZE_LINEAR_INDEX = 64 * 1024;
public static final String SUFFIX_DB_FILE = ".db";
public static final String SUFFIX_DATA_FILE = ".data.db";
......@@ -246,7 +246,7 @@ public class Constants {
public static final boolean INDEX_LOOKUP_NEW = getBooleanSetting("h2.indexLookupNew", true);
public static final boolean TRACE_IO = getBooleanSetting("h2.traceIO", false);
public static final int DATASOURCE_TRACE_LEVEL = getIntSetting("h2.dataSourceTraceLevel", TraceSystem.ERROR);
public static final int CACHE_SIZE_DEFAULT = getIntSetting("h2.cacheSizeDefault", (1 << 16));
public static final int CACHE_SIZE_DEFAULT = getIntSetting("h2.cacheSizeDefault", 16 * 1024);
public static final int CACHE_SIZE_INDEX_SHIFT = getIntSetting("h2.cacheSizeIndexShift", 3);
public static final int CACHE_SIZE_INDEX_DEFAULT = CACHE_SIZE_DEFAULT >> CACHE_SIZE_INDEX_SHIFT;
public static String BASE_DIR = getStringSetting("h2.baseDir", null);
......
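The constant changes above switch the cache default from a count of cache blocks to KB. Assuming the roughly 128 bytes per cache entry stated in the old SET CACHE_SIZE documentation elsewhere in this commit, the old default of 1 &lt;&lt; 16 entries was about 8 MB, while the new default of 16 * 1024 KB is 16 MB. The arithmetic, isolated (the class name is illustrative):

```java
// Illustrative comparison of the old and new cache size defaults.
public class CacheDefaults {
    // Old default: 1 << 16 cache entries of roughly 128 bytes each
    // (per the old SET CACHE_SIZE documentation in this commit).
    static int oldDefaultBytes() {
        return (1 << 16) * 128;
    }

    // New default: 16 * 1024, now interpreted as KB.
    static int newDefaultBytes() {
        return 16 * 1024 * 1024;
    }

    public static void main(String[] args) {
        System.out.println(oldDefaultBytes()); // 8388608, i.e. 8 MB
        System.out.println(newDefaultBytes()); // 16777216, i.e. 16 MB
    }
}
```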
......@@ -1160,16 +1160,16 @@ public class Database implements DataHandler {
return fileIndex;
}
public void setCacheSize(int value) throws SQLException {
public void setCacheSize(int kb) throws SQLException {
if(fileData != null) {
synchronized(fileData) {
fileData.getCache().setMaxSize(value);
fileData.getCache().setMaxSize(kb);
}
int valueIndex = value <= (1<<8) ? value : (value>>>Constants.CACHE_SIZE_INDEX_SHIFT);
int valueIndex = kb <= 32 ? kb : (kb >>> Constants.CACHE_SIZE_INDEX_SHIFT);
synchronized(fileIndex) {
fileIndex.getCache().setMaxSize(valueIndex);
}
cacheSize = value;
cacheSize = kb;
}
}
......
......@@ -34,7 +34,7 @@ public class FunctionCursor implements Cursor {
public boolean next() throws SQLException {
if(result.next()) {
row = new Row(result.currentRow());
row = new Row(result.currentRow(), 0);
} else {
row = null;
}
......
......@@ -44,7 +44,7 @@ public class RangeCursor implements Cursor {
} else {
current++;
}
currentRow = new Row(new Value[]{ValueLong.get(current)});
currentRow = new Row(new Value[]{ValueLong.get(current)}, 0);
return current <= max;
}
......
......@@ -129,7 +129,7 @@ public class ScanIndex extends Index {
}
}
} else {
Row free = new Row();
Row free = new Row(null, 0);
free.setPos(firstFree);
int key = row.getPos();
rows.set(key, free);
......
......@@ -695,13 +695,14 @@ SET AUTOCOMMIT OFF
"Commands (Other)","SET CACHE_SIZE","
SET CACHE_SIZE int
","
Sets the size of the cache.
A cache entry contains about 128 bytes. The default value is 65536.
Sets the size of the cache in KB (each KB being 1024 bytes). The default value is 16384 (16 MB).
The value is rounded to the next higher power of two.
Depending on the virtual machine, the actual memory required may be higher.
This setting is persistent and affects all connections as there is only one cache per database.
Admin rights are required to execute this command.
This setting can be appended to the database URL: jdbc:h2:test;CACHE_SIZE=8192
","
SET CACHE_SIZE 1000
SET CACHE_SIZE 8192
"
"Commands (Other)","SET CLUSTER","
......
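The rewritten SET CACHE_SIZE documentation says the value is rounded to the next higher power of two. A minimal sketch of that rounding; the method name mirrors MathUtils.nextPowerOf2 used elsewhere in this commit, but this implementation is a guess, not H2's code:

```java
// Sketch of rounding a cache size up to a power of two, as the
// SET CACHE_SIZE documentation describes. Not H2's actual code.
public class PowerOfTwo {
    static int nextPowerOf2(int x) {
        int r = 1;
        while (r < x) {
            r += r;
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOf2(16384)); // already a power of two
        System.out.println(nextPowerOf2(1000));  // rounded up to 1024
    }
}
```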
......@@ -7,6 +7,7 @@ package org.h2.result;
import java.sql.SQLException;
import org.h2.store.DataPage;
import org.h2.store.DiskFile;
import org.h2.store.Record;
import org.h2.value.Value;
......@@ -14,18 +15,17 @@ import org.h2.value.Value;
* @author Thomas
*/
public class Row extends Record implements SearchRow {
private Value[] data;
private final Value[] data;
private final int memory;
public Row(Value[] data) {
public Row(Value[] data, int memory) {
this.data = data;
this.memory = memory;
}
public Row(Row old) {
this.data = old.data;
}
public Row() {
// empty constructor
this.memory = old.memory;
}
public Value getValue(int i) {
......@@ -61,5 +61,9 @@ public class Row extends Record implements SearchRow {
public int getColumnCount() {
return data.length;
}
public int getMemorySize() {
return blockCount * (DiskFile.BLOCK_SIZE / 16) + memory * 4;
}
}
......@@ -264,6 +264,7 @@ public abstract class DataPage {
case Value.STRING:
case Value.STRING_IGNORECASE:
case Value.STRING_FIXED:
return 1 + getStringLen(v.getString());
case Value.DECIMAL:
return 1 + getStringLen(v.getString());
case Value.JAVA_OBJECT:
......
......@@ -829,5 +829,11 @@ public class DiskFile implements CacheWriter {
public int getReadCount() {
return readCount;
}
public void flushLog() throws SQLException {
if(log != null) {
log.flush();
}
}
}
......@@ -111,6 +111,9 @@ public class LogFile {
buff.setInt(0, blockCount);
buff.updateChecksum();
// IOLogger.getInstance().logWrite(this.fileName, file.getFilePointer(), buff.length());
if(rec != null) {
unwritten.add(rec);
}
if (buff.length() + bufferPos > buffer.length) {
// the buffer is full
flush();
......@@ -123,9 +126,6 @@ public class LogFile {
}
System.arraycopy(buff.getBytes(), 0, buffer, bufferPos, buff.length());
bufferPos += buff.length();
if(rec != null) {
unwritten.add(rec);
}
pos = getBlock() + (bufferPos / BLOCK_SIZE);
}
......@@ -334,13 +334,13 @@ public class LogFile {
throw Message.getSQLException(Message.SIMULATED_POWER_OFF);
}
file.write(buffer, 0, bufferPos);
pos = getBlock();
for(int i=0; i<unwritten.size(); i++) {
Record r = (Record) unwritten.get(i);
r.setLogWritten(id, pos);
}
unwritten.clear();
bufferPos = 0;
pos = getBlock();
long min = (long)pos * BLOCK_SIZE;
min = MathUtils.scaleUp50Percent(128 * 1024, min, 8 * 1024);
if(min > file.length()) {
......
......@@ -131,7 +131,7 @@ public class UndoLogRecord {
for (int i = 0; i < columnCount; i++) {
values[i] = buff.readValue();
}
row = new Row(values);
row = new Row(values, 0);
state = IN_MEMORY_READ_POS;
}
......
......@@ -192,8 +192,10 @@ public class Column {
}
}
value = value.convertScale(Mode.getCurrentMode().convertOnlyToSmallerScale, scale);
if (precision > 0 && value.getPrecision() > precision) {
throw Message.getSQLException(Message.VALUE_TOO_LONG_1, name);
if (precision > 0) {
if(!value.checkPrecision(precision)) {
throw Message.getSQLException(Message.VALUE_TOO_LONG_1, name);
}
}
updateSequenceIfRequired(session, value);
return value;
......
......@@ -1254,7 +1254,7 @@ public class MetaTable extends Table {
v = v.convertTo(col.getType());
values[i] = v;
}
Row row = new Row(values);
Row row = new Row(values, 0);
row.setPos(rows.size());
rows.add(row);
}
......
......@@ -27,6 +27,7 @@ import org.h2.schema.Sequence;
import org.h2.schema.TriggerObject;
import org.h2.store.UndoLogRecord;
import org.h2.util.ObjectArray;
import org.h2.value.DataType;
import org.h2.value.Value;
import org.h2.value.ValueNull;
......@@ -52,6 +53,7 @@ public abstract class Table extends SchemaObject {
private ObjectArray views;
private boolean checkForeignKeyConstraints = true;
private boolean onCommitDrop, onCommitTruncate;
protected int memoryPerRow;
public Table(Schema schema, int id, String name, boolean persistent) {
super(schema, id, name, Trace.TABLE);
......@@ -95,8 +97,10 @@ public abstract class Table extends SchemaObject {
if(columnMap.size() > 0) {
columnMap.clear();
}
int memory = 0;
for (int i = 0; i < columns.length; i++) {
Column col = columns[i];
memory += DataType.getDataType(col.getType()).memory;
col.setTable(this, i);
String columnName = col.getName();
if (columnMap.get(columnName) != null) {
......@@ -105,6 +109,7 @@ public abstract class Table extends SchemaObject {
}
columnMap.put(columnName, col);
}
memoryPerRow = memory;
}
public void renameColumn(Column column, String newName) {
......@@ -200,7 +205,7 @@ public abstract class Table extends SchemaObject {
}
public Row getTemplateRow() {
return new Row(new Value[columns.length]);
return new Row(new Value[columns.length], memoryPerRow);
}
public SearchRow getTemplateSimpleRow(boolean singleColumn) {
......@@ -213,7 +218,7 @@ public abstract class Table extends SchemaObject {
public Row getNullRow() {
// TODO memory usage: if rows are immutable, we could use a static null row
Row row = new Row(new Value[columns.length]);
Row row = new Row(new Value[columns.length], 0);
for (int i = 0; i < columns.length; i++) {
row.setValue(i, ValueNull.INSTANCE);
}
......
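The setColumns change above precomputes memoryPerRow by summing a per-type weight (the new DataType.memory field, in double words) over all columns. The accumulation, isolated; the weights in the example are taken from the DataType table in this commit (1 for INT, 4 for strings, 7 for DECIMAL):

```java
// Sketch of the memoryPerRow accumulation added to Table.setColumns:
// sum a per-type memory weight (in 4-byte double words) over the columns.
public class RowMemorySketch {
    static int memoryPerRow(int[] columnTypeMemory) {
        int memory = 0;
        for (int m : columnTypeMemory) {
            memory += m;
        }
        return memory;
    }

    public static void main(String[] args) {
        // a row with an INT (1), a VARCHAR (4) and a DECIMAL (7) column
        System.out.println(memoryPerRow(new int[]{1, 4, 7}));
    }
}
```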
......@@ -405,7 +405,7 @@ public class TableData extends Table implements RecordReader {
for(int i=0; i<len; i++) {
data[i] = s.readValue();
}
return new Row(data);
return new Row(data, memoryPerRow);
}
public void setRowCount(int count) {
......
......@@ -12,6 +12,7 @@ import java.util.ArrayList;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import org.h2.engine.Constants;
import org.h2.message.Message;
import org.h2.store.FileLister;
import org.h2.util.FileUtils;
......@@ -60,6 +61,8 @@ public class Backup {
db = args[++i];
} else if(args[i].equals("-quiet")) {
quiet = true;
} else if(args[i].equals("-file")) {
zipFileName = args[++i];
} else {
showUsage();
return;
......@@ -93,9 +96,22 @@ public class Backup {
try {
out = FileUtils.openFileOutputStream(zipFileName);
ZipOutputStream zipOut = new ZipOutputStream(out);
String base = "";
for(int i=0; i<list.size(); i++) {
String fileName = (String) list.get(i);
ZipEntry entry = new ZipEntry(FileUtils.getFileName(fileName));
if(fileName.endsWith(Constants.SUFFIX_DATA_FILE)) {
base = FileUtils.getParent(fileName);
base = FileUtils.getAbsolutePath(fileName);
}
}
for(int i=0; i<list.size(); i++) {
String fileName = (String) list.get(i);
String f = FileUtils.getAbsolutePath(fileName);
if(!f.startsWith(base)) {
throw Message.getInternalError(f + " does not start with " + base);
}
f = f.substring(base.length());
ZipEntry entry = new ZipEntry(f);
zipOut.putNextEntry(entry);
InputStream in = null;
try {
......
......@@ -22,7 +22,10 @@ public interface Cache {
CacheObject find(int i);
void setMaxSize(int value) throws SQLException;
/**
* @param memorySize in number of double words (4 bytes)
*/
void setMaxSize(int memorySize) throws SQLException;
String getTypeName();
}
......@@ -17,57 +17,45 @@ public class Cache2Q implements Cache {
public static final String TYPE_NAME = "TQ";
private static final int MAIN = 1, IN = 2, OUT = 3;
private final static int PERCENT_IN = 20, PERCENT_OUT = 50;
private final CacheWriter writer;
private int maxSize;
private int percentIn = 20, percentOut = 50;
private int maxMain, maxIn, maxOut;
private CacheObject headMain = new CacheHead();
private CacheObject headIn = new CacheHead();
private CacheObject headOut = new CacheHead();
private int sizeMain, sizeIn, sizeOut, sizeRecords;
private int sizeMain, sizeIn, sizeOut;
private int recordCount;
private int len;
private CacheObject[] values;
private int mask;
public Cache2Q(CacheWriter writer, int maxSize) {
public Cache2Q(CacheWriter writer, int maxKb) {
int maxSize = maxKb * 1024 / 4;
this.writer = writer;
resize(maxSize);
}
private void resize(int maxSize) {
this.maxSize = maxSize;
this.len = MathUtils.nextPowerOf2(maxSize / 2);
this.len = MathUtils.nextPowerOf2(maxSize / 64);
this.mask = len - 1;
MathUtils.checkPowerOf2(len);
recalculateMax();
clear();
}
public void clear() {
public void clear() {
headMain.next = headMain.previous = headMain;
headIn.next = headIn.previous = headIn;
headOut.next = headOut.previous = headOut;
values = new CacheObject[len];
sizeIn = sizeOut = sizeMain = 0;
sizeRecords = 0;
recalculateMax();
recordCount = 0;
}
void setPercentIn(int percent) {
percentIn = percent;
recalculateMax();
}
void setPercentOut(int percent) {
percentOut = percent;
recalculateMax();
}
private void recalculateMax() {
maxMain = maxSize;
maxIn = maxSize * percentIn / 100;
maxOut = maxSize * percentOut / 100;
}
maxIn = maxSize * PERCENT_IN / 100;
maxOut = maxSize * PERCENT_OUT / 100;
}
private void addToFront(CacheObject head, CacheObject rec) {
if(Constants.CHECK) {
......@@ -107,8 +95,8 @@ public class Cache2Q implements Cache {
return null;
} else if(r.cacheQueue == IN) {
removeFromList(r);
sizeIn -= r.getBlockCount();
sizeMain += r.getBlockCount();
sizeIn -= r.getMemorySize();
sizeMain += r.getMemorySize();
r.cacheQueue = MAIN;
addToFront(headMain, r);
}
......@@ -142,7 +130,7 @@ public class Cache2Q implements Cache {
} while(rec.getPos() != pos);
last.chained = rec.chained;
}
sizeRecords--;
recordCount--;
if(Constants.CHECK) {
rec.chained = null;
}
......@@ -154,21 +142,25 @@ public class Cache2Q implements Cache {
if(r != null) {
removeFromList(r);
if(r.cacheQueue == MAIN) {
sizeMain -= r.getBlockCount();
sizeMain -= r.getMemorySize();
} else if(r.cacheQueue == IN) {
sizeIn -= r.getBlockCount();
sizeIn -= r.getMemorySize();
}
}
}
private void removeOld() throws SQLException {
if((sizeIn < maxIn) && (sizeOut < maxOut) && (sizeMain < maxMain)) {
return;
private void removeOldIfRequired() throws SQLException {
// a small method, to allow inlining
if((sizeIn >= maxIn) || (sizeOut >= maxOut) || (sizeMain >= maxMain)) {
removeOld();
}
}
private void removeOld() throws SQLException {
int i=0;
ObjectArray changed = new ObjectArray();
while (((sizeIn*4 > maxIn*3) || (sizeOut*4 > maxOut*3) || (sizeMain*4 > maxMain*3)) && sizeRecords > Constants.CACHE_MIN_RECORDS) {
if(i++ >= sizeRecords) {
while (((sizeIn*4 > maxIn*3) || (sizeOut*4 > maxOut*3) || (sizeMain*4 > maxMain*3)) && recordCount > Constants.CACHE_MIN_RECORDS) {
if(i++ >= recordCount) {
// can't remove any record, because the log is not written yet
// hopefully this does not happen too much, but it could happen theoretically
// TODO log this
......@@ -181,7 +173,7 @@ public class Cache2Q implements Cache {
addToFront(headIn, r);
continue;
}
sizeIn -= r.getBlockCount();
sizeIn -= r.getMemorySize();
int pos = r.getPos();
removeCacheObject(pos);
removeFromList(r);
......@@ -207,7 +199,7 @@ public class Cache2Q implements Cache {
addToFront(headMain, r);
continue;
}
sizeMain -= r.getBlockCount();
sizeMain -= r.getMemorySize();
removeCacheObject(r.getPos());
removeFromList(r);
if(r.isChanged()) {
......@@ -260,7 +252,7 @@ public class Cache2Q implements Cache {
int index = rec.getPos() & mask;
rec.chained = values[index];
values[index] = rec;
sizeRecords++;
recordCount++;
}
......@@ -271,24 +263,24 @@ public class Cache2Q implements Cache {
if(r.cacheQueue == OUT) {
removeCacheObject(pos);
removeFromList(r);
removeOld();
removeOldIfRequired();
rec.cacheQueue = MAIN;
putCacheObject(rec);
addToFront(headMain, rec);
sizeMain += rec.getBlockCount();
sizeMain += rec.getMemorySize();
}
} else if(sizeMain < maxMain) {
removeOld();
removeOldIfRequired();
rec.cacheQueue = MAIN;
putCacheObject(rec);
addToFront(headMain, rec);
sizeMain += rec.getBlockCount();
sizeMain += rec.getMemorySize();
} else {
removeOld();
removeOldIfRequired();
rec.cacheQueue = IN;
putCacheObject(rec);
addToFront(headIn, rec);
sizeIn += rec.getBlockCount();
sizeIn += rec.getMemorySize();
}
}
......@@ -307,11 +299,13 @@ public class Cache2Q implements Cache {
return old;
}
public void setMaxSize(int newSize) throws SQLException {
public void setMaxSize(int maxKb) throws SQLException {
int newSize = maxKb * 1024 / 4;
maxSize = newSize < 0 ? 0 : newSize;
recalculateMax();
// can not resize, otherwise existing records are lost
// resize(maxSize);
removeOld();
removeOldIfRequired();
}
public String getTypeName() {
......
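Both cache constructors and setMaxSize methods now convert the configured size in KB into the cache's internal unit of double words (4 bytes each) via maxKb * 1024 / 4. The conversion, isolated:

```java
// The KB-to-double-words conversion used by Cache2Q and CacheLRU
// in this commit (a double word is 4 bytes).
public class CacheUnits {
    static int kbToDoubleWords(int maxKb) {
        return maxKb * 1024 / 4;
    }

    public static void main(String[] args) {
        System.out.println(kbToDoubleWords(16384)); // the default 16 MB cache
    }
}
```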
......@@ -24,22 +24,26 @@ public class CacheLRU implements Cache {
private int maxSize;
private CacheObject[] values;
private int mask;
private int sizeRecords;
private int sizeBlocks;
private int recordCount;
private int sizeMemory;
private CacheObject head = new CacheHead();
public CacheLRU(CacheWriter writer, int maxSize) {
public CacheLRU(CacheWriter writer, int maxKb) {
int maxSize = maxKb * 1024 / 4;
this.writer = writer;
resize(maxSize);
}
private void resize(int maxSize) {
this.maxSize = maxSize;
this.len = MathUtils.nextPowerOf2(maxSize / 2);
this.len = MathUtils.nextPowerOf2(maxSize / 64);
this.mask = len - 1;
MathUtils.checkPowerOf2(len);
clear();
}
public void clear() {
head.next = head.previous = head;
values = new CacheObject[len];
recordCount = 0;
sizeMemory = 0;
}
public void put(CacheObject rec) throws SQLException {
if(Constants.CHECK) {
......@@ -53,10 +57,10 @@ public class CacheLRU implements Cache {
int index = rec.getPos() & mask;
rec.chained = values[index];
values[index] = rec;
sizeRecords++;
sizeBlocks += rec.getBlockCount();
recordCount++;
sizeMemory += rec.getMemorySize();
addToFront(rec);
removeOld();
removeOldIfRequired();
}
public CacheObject update(int pos, CacheObject rec) throws SQLException {
......@@ -75,18 +79,32 @@ public class CacheLRU implements Cache {
return old;
}
private void removeOld() throws SQLException {
if(sizeBlocks < maxSize) {
return;
private void removeOldIfRequired() throws SQLException {
// a small method, to allow inlining
if(sizeMemory >= maxSize) {
removeOld();
}
}
private void removeOld() throws SQLException {
int i=0;
ObjectArray changed = new ObjectArray();
while (sizeBlocks*4 > maxSize*3 && sizeRecords > Constants.CACHE_MIN_RECORDS) {
while (sizeMemory*4 > maxSize*3 && recordCount > Constants.CACHE_MIN_RECORDS) {
CacheObject last = head.next;
if(i++ >= sizeRecords) {
i++;
if(i == recordCount) {
int testing;
int todoCopyTo2Q;
System.out.println("flush log");
writer.flushLog();
}
if(i >= recordCount * 2) {
// can't remove any record, because the log is not written yet
// hopefully this does not happen too much, but it could happen theoretically
// TODO log this
System.out.println("can not shrink cache");
break;
}
if(Constants.CHECK && last == head) {
......@@ -154,8 +172,8 @@ public class CacheLRU implements Cache {
} while(rec.getPos() != pos);
last.chained = rec.chained;
}
sizeRecords--;
sizeBlocks -= rec.getBlockCount();
recordCount--;
sizeMemory -= rec.getMemorySize();
removeFromLinkedList(rec);
if(Constants.CHECK) {
rec.chained = null;
......@@ -222,9 +240,9 @@ public class CacheLRU implements Cache {
while (rec != null) {
if(rec.isChanged()) {
list.add(rec);
if(list.size() >= sizeRecords) {
if(list.size() >= recordCount) {
if(Constants.CHECK) {
if(list.size() > sizeRecords) {
if(list.size() > recordCount) {
throw Message.getInternalError("cache chain error");
}
} else {
......@@ -238,18 +256,12 @@ public class CacheLRU implements Cache {
return list;
}
public void clear() {
head.next = head.previous = head;
values = new CacheObject[len];
sizeRecords = 0;
sizeBlocks = 0;
}
public void setMaxSize(int newSize) throws SQLException {
public void setMaxSize(int maxKb) throws SQLException {
int newSize = maxKb * 1024 / 4;
newSize = newSize < 0 ? 0 : newSize;
// can not resize, otherwise
// can not resize, otherwise existing records are lost
// resize(maxSize);
removeOld();
removeOldIfRequired();
}
public String getTypeName() {
......
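CacheLRU's removeOldIfRequired/removeOld pair triggers at the maximum and then evicts least-recently-used records until memory use drops below three quarters of it (the sizeMemory * 4 &gt; maxSize * 3 test). A self-contained sketch of that policy using a plain LinkedHashMap in place of H2's linked record list; this is an illustration, not H2's code:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of CacheLRU's eviction policy: evict oldest entries until
// memory use drops below 75% of the maximum. Not H2's actual code.
public class LruSketch {
    // access-order map standing in for H2's linked record list
    private final LinkedHashMap<Integer, Integer> entries =
            new LinkedHashMap<Integer, Integer>(16, 0.75f, true);
    private int sizeMemory;
    private final int maxSize;

    LruSketch(int maxSize) {
        this.maxSize = maxSize;
    }

    void put(int pos, int memorySize) {
        entries.put(pos, memorySize);
        sizeMemory += memorySize;
        removeOldIfRequired();
    }

    // a small method, to allow inlining (as the patch comments note)
    private void removeOldIfRequired() {
        if (sizeMemory >= maxSize) {
            removeOld();
        }
    }

    // evict least-recently-used entries while above 75% of the maximum
    private void removeOld() {
        Iterator<Map.Entry<Integer, Integer>> it = entries.entrySet().iterator();
        while (sizeMemory * 4 > maxSize * 3 && it.hasNext()) {
            sizeMemory -= it.next().getValue();
            it.remove();
        }
    }

    int memoryUsed() {
        return sizeMemory;
    }
}
```

The sketch omits the part of removeOld that skips records whose log entry is not yet written; in the patched CacheLRU that case eventually calls writer.flushLog() so the records become evictable.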
......@@ -8,12 +8,13 @@ import java.util.Comparator;
import org.h2.engine.Constants;
import org.h2.message.Message;
import org.h2.store.DiskFile;
public abstract class CacheObject {
private boolean changed;
public CacheObject previous, next, chained;
public int cacheQueue;
private int blockCount;
protected int blockCount;
private int pos;
public static void sort(ObjectArray recordList) {
......@@ -60,5 +61,13 @@ public abstract class CacheObject {
public boolean canRemove() {
return true;
}
/**
* Get the estimated memory size.
* @return number of double words (4 bytes)
*/
public int getMemorySize() {
return blockCount * (DiskFile.BLOCK_SIZE / 4);
}
}
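The default CacheObject.getMemorySize above reports blockCount * (DiskFile.BLOCK_SIZE / 4) double words. Assuming 128-byte disk blocks (the constant itself is not shown in this diff; the figure matches the roughly 128 bytes per cache entry mentioned in the old SET CACHE_SIZE documentation), each block counts as 32 double words, i.e. 128 bytes:

```java
// Sketch of CacheObject.getMemorySize: memory reported in double words
// (4 bytes each). BLOCK_SIZE = 128 is an assumption; DiskFile.BLOCK_SIZE
// is not shown in this diff.
public class MemorySizeSketch {
    static final int BLOCK_SIZE = 128;

    static int getMemorySize(int blockCount) {
        return blockCount * (BLOCK_SIZE / 4);
    }

    public static void main(String[] args) {
        // 2 blocks -> 64 double words -> 256 bytes
        System.out.println(getMemorySize(2) * 4);
    }
}
```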
......@@ -12,4 +12,5 @@ import java.sql.SQLException;
public interface CacheWriter {
void writeBack(CacheObject entry) throws SQLException;
void flushLog() throws SQLException;
}
......@@ -52,6 +52,7 @@ public class DataType {
public long defaultPrecision;
public int defaultScale;
public boolean hidden;
public int memory;
// for operations that include different types, convert both to the higher order
public int order;
......@@ -74,125 +75,153 @@ public class DataType {
//#endif
add(Value.NULL, Types.NULL, "Null",
new DataType(),
new String[]{"NULL"}
new String[]{"NULL"},
1
);
add(Value.STRING, Types.VARCHAR, "String",
createString(true),
new String[]{"VARCHAR", "VARCHAR2", "NVARCHAR", "NVARCHAR2", "VARCHAR_CASESENSITIVE", "CHARACTER VARYING", "TID"}
new String[]{"VARCHAR", "VARCHAR2", "NVARCHAR", "NVARCHAR2", "VARCHAR_CASESENSITIVE", "CHARACTER VARYING", "TID"},
4
);
add(Value.STRING, Types.LONGVARCHAR, "String",
createString(true),
new String[]{"LONGVARCHAR"}
new String[]{"LONGVARCHAR"},
4
);
add(Value.STRING_FIXED, Types.CHAR, "String",
createString(true),
new String[]{"CHAR", "CHARACTER", "NCHAR"}
new String[]{"CHAR", "CHARACTER", "NCHAR"},
4
);
add(Value.STRING_IGNORECASE, Types.VARCHAR, "String",
createString(false),
new String[]{"VARCHAR_IGNORECASE"}
new String[]{"VARCHAR_IGNORECASE"},
4
);
add(Value.BOOLEAN, DataType.TYPE_BOOLEAN, "Boolean",
createDecimal(ValueBoolean.PRECISION, ValueBoolean.PRECISION, 0, false, false),
new String[]{"BOOLEAN", "BIT", "BOOL"}
new String[]{"BOOLEAN", "BIT", "BOOL"},
1
);
add(Value.BYTE, Types.TINYINT, "Byte",
createDecimal(ValueByte.PRECISION, ValueByte.PRECISION, 0, false, false),
new String[]{"TINYINT"}
new String[]{"TINYINT"},
1
);
add(Value.SHORT, Types.SMALLINT, "Short",
createDecimal(ValueShort.PRECISION, ValueShort.PRECISION, 0, false, false),
new String[]{"SMALLINT", "YEAR", "INT2"}
new String[]{"SMALLINT", "YEAR", "INT2"},
1
);
add(Value.INT, Types.INTEGER, "Int",
createDecimal(ValueInt.PRECISION, ValueInt.PRECISION, 0, false, false),
new String[]{"INTEGER", "INT", "MEDIUMINT", "INT4", "SIGNED"}
new String[]{"INTEGER", "INT", "MEDIUMINT", "INT4", "SIGNED"},
1
);
add(Value.LONG, Types.BIGINT, "Long",
createDecimal(ValueLong.PRECISION, ValueLong.PRECISION, 0, false, false),
new String[]{"BIGINT", "INT8"}
new String[]{"BIGINT", "INT8"},
1
);
add(Value.LONG, Types.BIGINT, "Long",
createDecimal(ValueLong.PRECISION, ValueLong.PRECISION, 0, false, true),
new String[]{"IDENTITY", "SERIAL"}
new String[]{"IDENTITY", "SERIAL"},
1
);
add(Value.DECIMAL, Types.DECIMAL, "BigDecimal",
createDecimal(Integer.MAX_VALUE, ValueDecimal.DEFAULT_PRECISION, ValueDecimal.DEFAULT_SCALE, true, false),
new String[]{"DECIMAL", "DEC"}
new String[]{"DECIMAL", "DEC"},
7
// TODO value: are NaN, Inf, -Inf,... supported as well?
);
add(Value.DECIMAL, Types.NUMERIC, "BigDecimal",
createDecimal(Integer.MAX_VALUE, ValueDecimal.DEFAULT_PRECISION, ValueDecimal.DEFAULT_SCALE, true, false),
new String[]{"NUMERIC", "NUMBER"}
new String[]{"NUMERIC", "NUMBER"},
7
// TODO value: are NaN, Inf, -Inf,... supported as well?
);
add(Value.FLOAT, Types.REAL, "Float",
createDecimal(ValueFloat.PRECISION, ValueFloat.PRECISION, 0, false, false),
new String[] {"REAL", "FLOAT4"}
new String[] {"REAL", "FLOAT4"},
1
);
add(Value.DOUBLE, Types.DOUBLE, "Double",
createDecimal(ValueDouble.PRECISION, ValueDouble.PRECISION, 0, false, false),
new String[] { "DOUBLE", "DOUBLE PRECISION" }
new String[] { "DOUBLE", "DOUBLE PRECISION" },
1
);
add(Value.DOUBLE, Types.FLOAT, "Double",
createDecimal(ValueDouble.PRECISION, ValueDouble.PRECISION, 0, false, false),
new String[] {"FLOAT", "FLOAT8" }
new String[] {"FLOAT", "FLOAT8" },
1
// TODO value: show min and max values, E format if supported
);
add(Value.TIME, Types.TIME, "Time",
createDate(ValueTime.PRECISION, "TIME", 0),
new String[]{"TIME"}
new String[]{"TIME"},
4
// TODO value: min / max for time
);
add(Value.DATE, Types.DATE, "Date",
createDate(ValueDate.PRECISION, "DATE", 0),
new String[]{"DATE"}
new String[]{"DATE"},
4
// TODO value: min / max for date
);
add(Value.TIMESTAMP, Types.TIMESTAMP, "Timestamp",
createDate(ValueTimestamp.PRECISION, "TIMESTAMP", ValueTimestamp.DEFAULT_SCALE),
new String[]{"TIMESTAMP", "DATETIME", "SMALLDATETIME"}
new String[]{"TIMESTAMP", "DATETIME", "SMALLDATETIME"},
4
// TODO value: min / max for timestamp
);
add(Value.BYTES, Types.VARBINARY, "Bytes",
createString(false),
new String[]{"VARBINARY"}
new String[]{"VARBINARY"},
4
);
add(Value.BYTES, Types.BINARY, "Bytes",
createString(false),
new String[]{"BINARY", "RAW", "BYTEA", "LONG RAW"}
new String[]{"BINARY", "RAW", "BYTEA", "LONG RAW"},
4
);
add(Value.BYTES, Types.LONGVARBINARY, "Bytes",
createString(false),
new String[]{"LONGVARBINARY"}
new String[]{"LONGVARBINARY"},
4
);
add(Value.UUID, Types.BINARY, "Bytes",
createString(false),
new String[]{"UUID"}
new String[]{"UUID"},
4
);
add(Value.JAVA_OBJECT, Types.OTHER, "Object",
createString(false),
new String[]{"OTHER", "OBJECT", "JAVA_OBJECT"}
new String[]{"OTHER", "OBJECT", "JAVA_OBJECT"},
4
);
add(Value.BLOB, Types.BLOB, "Bytes",
createString(false),
new String[]{"BLOB", "TINYBLOB", "MEDIUMBLOB", "LONGBLOB", "IMAGE", "OID"}
new String[]{"BLOB", "TINYBLOB", "MEDIUMBLOB", "LONGBLOB", "IMAGE", "OID"},
4
);
add(Value.CLOB, Types.CLOB, "String",
createString(true),
new String[]{"CLOB", "TINYTEXT", "TEXT", "MEDIUMTEXT", "LONGTEXT", "NTEXT", "NCLOB"}
new String[]{"CLOB", "TINYTEXT", "TEXT", "MEDIUMTEXT", "LONGTEXT", "NTEXT", "NCLOB"},
4
);
DataType dataType = new DataType();
dataType.prefix = "(";
dataType.suffix = "')";
add(Value.ARRAY, Types.ARRAY, "Array",
dataType,
new String[]{"ARRAY"}
new String[]{"ARRAY"},
2
);
dataType = new DataType();
add(Value.RESULT_SET, 0, "ResultSet",
dataType,
new String[]{"RESULT_SET"}
new String[]{"RESULT_SET"},
2
);
for(int i=0; i<typesByValueType.length; i++) {
DataType dt = typesByValueType[i];
......@@ -204,7 +233,7 @@ public class DataType {
// TODO data types: try to support other types as well (longvarchar for odbc/access,...) - maybe map them to regular types?
}
private static void add(int type, int sqlType, String jdbc, DataType dataType, String[] names) {
private static void add(int type, int sqlType, String jdbc, DataType dataType, String[] names, int memory) {
for(int i=0; i<names.length; i++) {
DataType dt = new DataType();
dt.type = type;
......@@ -225,6 +254,7 @@ public class DataType {
dt.defaultScale = dataType.defaultScale;
dt.caseSensitive = dataType.caseSensitive;
dt.hidden = i > 0;
dt.memory = memory;
for(int j=0; j<types.size(); j++) {
DataType t2 = (DataType) types.get(j);
if(t2.sqlType == dt.sqlType) {
......
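The new `memory` field threaded through `add(...)` above assigns each registered type name a relative weight in 4-byte words (e.g. 1 for `INT`, 4 for `VARCHAR`, 7 for `DECIMAL`). A hedged sketch of that registration pattern — the map and the helper below are illustrative assumptions, not H2's actual storage:

```java
import java.util.HashMap;
import java.util.Map;

public class DataTypeMemorySketch {
    // Hypothetical lookup table from type name to memory weight.
    static final Map<String, Integer> MEMORY_BY_NAME = new HashMap<>();

    // Register every alias with the same memory weight, as the patched
    // add(...) does for each entry of its names array.
    static void add(String[] names, int memory) {
        for (String name : names) {
            MEMORY_BY_NAME.put(name, memory);
        }
    }

    public static void main(String[] args) {
        add(new String[]{"INTEGER", "INT", "INT4"}, 1);
        add(new String[]{"VARCHAR", "CHARACTER VARYING"}, 4);
        System.out.println(MEMORY_BY_NAME.get("INT"));     // prints 1
        System.out.println(MEMORY_BY_NAME.get("VARCHAR")); // prints 4
    }
}
```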
......@@ -655,4 +655,8 @@ public abstract class Value {
public void close() throws SQLException {
}
public boolean checkPrecision(long precision) {
return getPrecision() <= precision;
}
}
......@@ -119,6 +119,13 @@ public class ValueDecimal extends Value {
}
return precision;
}
public boolean checkPrecision(long precision) {
if(precision == DEFAULT_PRECISION) {
return true;
}
return getPrecision() <= precision;
}
public int getScale() {
return value.scale();
......
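The `checkPrecision` change above introduces an override pattern: the base `Value` implementation compares the requested precision against the actual one, while `ValueDecimal` additionally short-circuits when the default precision is requested. A self-contained sketch of that pattern — the class bodies and precision values here are illustrative, not H2's real ones:

```java
public class PrecisionSketch {
    // Illustrative stand-in for ValueDecimal.DEFAULT_PRECISION.
    static final long DEFAULT_PRECISION = 65535;

    static class Value {
        long getPrecision() { return 10; }
        // Base behavior from the diff: the value fits if its own
        // precision does not exceed the requested one.
        boolean checkPrecision(long precision) {
            return getPrecision() <= precision;
        }
    }

    static class ValueDecimal extends Value {
        long getPrecision() { return 100; }
        // Override from the diff: the default precision always fits,
        // regardless of the value's actual precision.
        boolean checkPrecision(long precision) {
            if (precision == DEFAULT_PRECISION) {
                return true;
            }
            return getPrecision() <= precision;
        }
    }

    public static void main(String[] args) {
        ValueDecimal d = new ValueDecimal();
        System.out.println(d.checkPrecision(DEFAULT_PRECISION)); // prints true
        System.out.println(d.checkPrecision(50));                // prints false (100 > 50)
    }
}
```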
......@@ -94,6 +94,25 @@ java -Xmx512m -Xrunhprof:cpu=samples,depth=8 org.h2.tools.RunScript -url jdbc:h2
/*
Backup and BackupCommand with subdirectories (lobs): stored in a flat directory structure
-Dh2.lobFilesInDirectories=true
DROP TABLE IF EXISTS TEST;
CREATE TABLE TEST(ID INT PRIMARY KEY, NAME CLOB);
@LOOP 20 INSERT INTO TEST VALUES(?, SPACE(10000));
BACKUP TO 'backup.zip';
test Backup tool as well!
PMD
replace new Byte, Double, Float, Long, Short with ObjectUtils.get
http://fastutil.dsi.unimi.it/
http://javolution.org/
http://joda-time.sourceforge.net/
http://ibatis.apache.org/
SET REFERENTIAL_INTEGRITY TRUE
......