Commit 81904b30 authored by Thomas Mueller

Support LOG=0 and LOG=1. Use LOG=1 by default in test cases.

Parent 70b9adab
......@@ -1118,6 +1118,33 @@ This setting is persistent.
SET IGNORECASE TRUE
"
"Commands (Other)","SET LOG","
SET LOG int
","
Sets the transaction log mode. The values 0, 1, and 2 are supported; the default is 2.
This setting affects all connections.
LOG=0 means the transaction log is disabled completely. It is the fastest mode,
but also the most dangerous: if the process is killed while the database is open in this mode,
the data might be lost. It must only be used if this is not a problem, for example when
initially loading a database, or when running tests.
LOG=1 means the transaction log is enabled, but FileDescriptor.sync is disabled.
This setting is about half as fast as with LOG=0. This setting is useful if no protection
against power failure is required, but the data must be protected against killing the process.
LOG=2 (the default) means the transaction log is enabled, and FileDescriptor.sync is called
for each checkpoint. This setting is about half as fast as LOG=1. Depending on the
file system, this will also protect against power failure in the majority of cases.
Admin rights are required to execute this command.
This command commits an open transaction.
This setting is not persistent.
This setting can be appended to the database URL: jdbc:h2:test;LOG=0
","
SET LOG 1
"
"Commands (Other)","SET LOCK_MODE","
SET LOCK_MODE int
","
......@@ -1498,8 +1525,8 @@ CASE WHEN CNT<10 THEN 'Low' ELSE 'High' END
"Other Grammar","Cipher","
{ AES | XTEA }
","
Two algorithms are supported, AES (AES-128) and XTEA (using 32 rounds). The AES
algorithm is about half as fast as XTEA.
Two algorithms are supported: AES (AES-128) and XTEA (using 32 rounds).
XTEA is a bit faster than AES in some environments, but AES is more secure.
","
AES
"
......
......@@ -18,7 +18,11 @@ Change Log
<h1>Change Log</h1>
<h2>Next Version (unreleased)</h2>
<ul><li>After deleting a lot of data (for example by dropping or altering tables, or indexes,
<ul><li>For improved performance, LOG=0 and LOG=1 are again supported.
LOG=0 means the transaction log is disabled completely (fastest; for loading a database).
LOG=1 means the transaction log is enabled, but FileDescriptor.sync is disabled (if no protection against power failure is required).
LOG=2 is the default (transaction log is enabled, FileDescriptor.sync for each checkpoint).
</li><li>After deleting a lot of data (for example by dropping or altering tables, or indexes,
or after a large transaction), opening a large database was very slow. Fixed.
</li><li>When killing the process after creating and dropping many tables (especially temporary tables),
the database could not be opened sometimes.
......
......@@ -148,7 +148,8 @@ bugs that have not yet been found (as with most software). Some features are kno
to be dangerous; they are only supported for situations where performance is more important
than reliability. Those dangerous features are:
</p>
<ul><li>Using the transaction isolation level <code>READ_UNCOMMITTED</code>
<ul><li>Disabling the transaction log or FileDescriptor.sync() using LOG=0 or LOG=1.
</li><li>Using the transaction isolation level <code>READ_UNCOMMITTED</code>
(<code>LOCK_MODE 0</code>) while at the same time using multiple
connections.
</li><li>Disabling database file protection using (setting <code>FILE_LOCK</code> to
......
......@@ -204,7 +204,7 @@ public class Set extends Prepared {
if (value == 0) {
session.getUser().checkAdmin();
}
// currently no effect
database.setLogMode(value);
break;
}
case SetTypes.MAX_LENGTH_INPLACE_LOB: {
......
......@@ -2273,4 +2273,17 @@ public class Database implements DataHandler {
return conn;
}
public void setLogMode(int log) {
if (pageStore != null) {
pageStore.setLogMode(log);
}
}
public int getLogMode() {
if (pageStore != null) {
return pageStore.getLogMode();
}
return PageStore.LOG_MODE_OFF;
}
}
......@@ -81,12 +81,8 @@ import org.h2.value.ValueString;
public class PageStore implements CacheWriter {
// TODO test running out of disk space (using a special file system)
// TODO test with recovery being the default method
// TODO test reservedPages does not grow unbound
// TODO utf-x: test if it's faster
// TODO corrupt pages should be freed once in a while
// TODO node row counts are incorrect (not splitting row counts)
// TODO unused pages should be freed once in a while
// TODO node row counts are incorrect (it's not splitting row counts)
// TODO after opening the database, delay writing until required
// TODO optimization: try to avoid allocating a byte array per page
// TODO optimization: check if calling Data.getValueLen slows things down
......@@ -96,28 +92,12 @@ public class PageStore implements CacheWriter {
// TODO detect circles in linked lists
// (input stream, free list, extend pages...)
// at runtime and recovery
// synchronized correctly (on the index?)
// TODO remove trace or use isDebugEnabled
// TODO recover tool: support syntax to delete a row with a key
// TODO don't store default values (store a special value)
// TODO split files (1 GB max size)
// TODO add a setting (that can be changed at runtime) to call fsync
// and delay on each commit
// TODO check for file size (exception if not exact size expected)
// TODO online backup using bsdiff
// TODO when removing DiskFile:
// remove CacheObject.blockCount
// remove Record.getMemorySize
// simplify InDoubtTransaction
// remove parameter in Record.write(DataPage buff)
// remove Record.getByteCount
// remove Database.objectIds
// remove TableData.checkRowCount
// remove Row.setPos
// remove database URL option RECOVER=1 option
// remove old database URL options and documentation
/**
* The smallest possible page size.
*/
......@@ -128,22 +108,26 @@ public class PageStore implements CacheWriter {
*/
public static final int PAGE_SIZE_MAX = 32768;
/**
* This log mode means the transaction log is not used.
*/
public static final int LOG_MODE_OFF = 0;
/**
* This log mode means the transaction log is used and FileDescriptor.sync()
* is called for each checkpoint. This is the default level.
*/
private static final int LOG_MODE_SYNC = 2;
private static final int PAGE_ID_FREE_LIST_ROOT = 3;
private static final int PAGE_ID_META_ROOT = 4;
private static final int MIN_PAGE_COUNT = 6;
private static final int INCREMENT_PAGES = 128;
private static final int READ_VERSION = 3;
private static final int WRITE_VERSION = 3;
private static final int META_TYPE_DATA_INDEX = 0;
private static final int META_TYPE_BTREE_INDEX = 1;
private static final int META_TABLE_ID = -1;
private static final SearchRow[] EMPTY_SEARCH_ROW = { };
private Database database;
private final Trace trace;
private String fileName;
......@@ -153,11 +137,8 @@ public class PageStore implements CacheWriter {
private int pageSizeShift;
private long writeCountBase, writeCount, readCount;
private int logKey, logFirstTrunkPage, logFirstDataPage;
private Cache cache;
private int freeListPagesPerList;
private boolean recoveryRunning;
/**
......@@ -171,7 +152,6 @@ public class PageStore implements CacheWriter {
private int pageCount;
private PageLog log;
private Schema metaSchema;
private RegularTable metaTable;
private PageDataIndex metaIndex;
......@@ -189,7 +169,6 @@ public class PageStore implements CacheWriter {
private long maxLogSize = Constants.DEFAULT_MAX_LOG_SIZE;
private Session systemSession;
private BitSet freed = new BitSet();
private ArrayList<PageFreeList> freeLists = New.arrayList();
/**
......@@ -202,10 +181,9 @@ public class PageStore implements CacheWriter {
private int changeCount = 1;
private Data emptyPage;
private long logSizeBase;
private HashMap<String, Integer> statistics;
private int logMode = LOG_MODE_SYNC;
/**
* Create a new page store object.
......@@ -755,7 +733,9 @@ public class PageStore implements CacheWriter {
private void writeVariableHeader() {
trace.debug("writeVariableHeader");
file.sync();
if (logMode == LOG_MODE_SYNC) {
file.sync();
}
Data page = createData();
page.writeInt(0);
page.writeLong(getWriteCountTotal());
......@@ -1202,9 +1182,11 @@ public class PageStore implements CacheWriter {
* @param add true if the row is added, false if it is removed
*/
public void logAddOrRemoveRow(Session session, int tableId, Row row, boolean add) {
synchronized (database) {
if (logMode != LOG_MODE_OFF) {
if (!recoveryRunning) {
log.logAddOrRemoveRow(session, tableId, row, add);
synchronized (database) {
log.logAddOrRemoveRow(session, tableId, row, add);
}
}
}
}
......@@ -1703,4 +1685,12 @@ public class PageStore implements CacheWriter {
return changeCount;
}
public void setLogMode(int logMode) {
this.logMode = logMode;
}
public int getLogMode() {
return logMode;
}
}
......@@ -859,6 +859,7 @@ public class MetaTable extends Table {
add(rows, "MULTI_THREADED", database.isMultiThreaded() ? "1" : "0");
add(rows, "MVCC", database.isMultiVersion() ? "TRUE" : "FALSE");
add(rows, "QUERY_TIMEOUT", "" + session.getQueryTimeout());
add(rows, "LOG", "" + database.getLogMode());
// the setting for the current database
add(rows, "h2.allowBigDecimalExtensions", "" + SysProperties.ALLOW_BIG_DECIMAL_EXTENSIONS);
add(rows, "h2.baseDir", "" + SysProperties.getBaseDir());
......
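
The LOG row added above makes the current mode visible through INFORMATION_SCHEMA.SETTINGS, using the same query as the new test further below. A minimal sketch of reading it back; the database path is a placeholder.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowLogMode {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:~/test", "sa", "");
        Statement stat = conn.createStatement();
        // the LOG row is produced by the MetaTable change above
        ResultSet rs = stat.executeQuery(
                "select value from information_schema.settings where name = 'LOG'");
        if (rs.next()) {
            System.out.println("LOG=" + rs.getString(1));
        }
        conn.close();
    }
}
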
......@@ -352,7 +352,8 @@ kill -9 `jps -l | grep "org.h2.test." | cut -d " " -f 1`
System.setProperty("h2.check2", "false");
System.setProperty("h2.lobInDatabase", "true");
System.setProperty("h2.analyzeAuto", "100");
// System.setProperty("h2.pageSize", "64");
System.setProperty("h2.pageSize", "64");
// System.setProperty("reopenShift", "9");
RecordingFileSystem.register();
test.record = true;
TestReopen reopen = new TestReopen();
......
......@@ -252,17 +252,20 @@ public abstract class TestBase {
url += ";TRACE_LEVEL_SYSTEM_OUT=2";
}
if (config.traceLevelFile > 0 && admin) {
if (url.indexOf("TRACE_LEVEL_FILE=") < 0) {
if (url.indexOf(";TRACE_LEVEL_FILE=") < 0) {
url += ";TRACE_LEVEL_FILE=" + config.traceLevelFile;
}
if (url.indexOf("TRACE_MAX_FILE_SIZE") < 0) {
if (url.indexOf(";TRACE_MAX_FILE_SIZE=") < 0) {
url += ";TRACE_MAX_FILE_SIZE=8";
}
}
if (url.indexOf(";LOG=") < 0) {
url += ";LOG=1";
}
if (config.throttle > 0) {
url += ";THROTTLE=" + config.throttle;
}
if (url.indexOf("LOCK_TIMEOUT=") < 0) {
if (url.indexOf(";LOCK_TIMEOUT=") < 0) {
url += ";LOCK_TIMEOUT=50";
}
if (config.diskUndo && admin) {
......@@ -272,15 +275,15 @@ public abstract class TestBase {
// force operations to disk
url += ";MAX_OPERATION_MEMORY=1";
}
if (config.mvcc && url.indexOf("MVCC=") < 0) {
if (config.mvcc && url.indexOf(";MVCC=") < 0) {
url += ";MVCC=TRUE";
}
if (config.cacheType != null && admin && url.indexOf("CACHE_TYPE=") < 0) {
if (config.cacheType != null && admin && url.indexOf(";CACHE_TYPE=") < 0) {
url += ";CACHE_TYPE=" + config.cacheType;
}
if (config.diskResult && admin) {
url += ";MAX_MEMORY_ROWS=100";
if (url.indexOf("CACHE_SIZE=") < 0) {
if (url.indexOf(";CACHE_SIZE=") < 0) {
url += ";CACHE_SIZE=0";
}
}
......
......@@ -34,6 +34,7 @@ public class TestTransaction extends TestBase {
}
public void test() throws SQLException {
testLogMode();
testRollback();
testRollback2();
testForUpdate();
......@@ -44,6 +45,47 @@ public class TestTransaction extends TestBase {
deleteDb("transaction");
}
private void testLogMode() throws SQLException {
if (config.memory) {
return;
}
deleteDb("transaction");
testLogMode(0);
testLogMode(1);
testLogMode(2);
}
private void testLogMode(int logMode) throws SQLException {
Connection conn;
Statement stat;
ResultSet rs;
conn = getConnection("transaction");
stat = conn.createStatement();
stat.execute("create table test(id int primary key) as select 1");
stat.execute("set write_delay 0");
stat.execute("set log " + logMode);
rs = stat.executeQuery("select value from information_schema.settings where name = 'LOG'");
rs.next();
assertEquals(logMode, rs.getInt(1));
stat.execute("insert into test values(2)");
stat.execute("shutdown immediately");
try {
conn.close();
} catch (SQLException e) {
// expected
}
conn = getConnection("transaction");
stat = conn.createStatement();
rs = stat.executeQuery("select * from test order by id");
assertTrue(rs.next());
if (logMode != 0) {
assertTrue(rs.next());
}
assertFalse(rs.next());
stat.execute("drop table test");
conn.close();
}
private void testForUpdate() throws SQLException {
deleteDb("transaction");
Connection conn = getConnection("transaction");
......
......@@ -58,6 +58,7 @@ I am very interested in analyzing and solving this problem. Corruption problems
- What is your database URL?
- How many connections does your application use concurrently?
- Do you use temporary tables?
- Did you use LOG=0 or LOG=1?
- With which version of H2 was this database created?
You can find it out using:
select * from information_schema.settings where name='CREATE_BUILD'
......