Commit 768b6d56 authored by Thomas Mueller

Documentation.

Parent 6c5d1ec8
......@@ -746,7 +746,7 @@ TRUNCATE TABLE TEST
"Commands (Other)","CHECKPOINT","
CHECKPOINT
","
Flushes the log and data files and switches to a new log file if possible.
Flushes the data to disk and switches to a new transaction log if possible.
Admin rights are required to execute this command.
","
......@@ -756,7 +756,7 @@ CHECKPOINT
"Commands (Other)","CHECKPOINT SYNC","
CHECKPOINT SYNC
","
Flushes the log, data and index files and forces all system buffers be written
Flushes the data to disk and forces all system buffers to be written
to the underlying device.
Admin rights are required to execute this command.
......@@ -1029,7 +1029,7 @@ SET DEFAULT_TABLE_TYPE { MEMORY | CACHED }
","
Sets the default table storage type that is used when creating new tables.
Memory tables are kept fully in the main memory (including indexes), however
changes to the data are stored in the log file. The size of memory tables is
the data is still stored in the database file. The size of memory tables is
limited by the memory. The default is CACHED.
Admin rights are required to execute this command.
......@@ -1124,10 +1124,9 @@ SET MAX_LENGTH_INPLACE_LOB 128
"Commands (Other)","SET MAX_LOG_SIZE","
SET MAX_LOG_SIZE int
","
Sets the maximum file size of a log file, in megabytes. If the file exceeds the
limit, a new file is created. Old files (that are not used for recovery) are
deleted automatically, but multiple log files may exist for some time. The
default max size is 32 MB.
Sets the maximum size of the transaction log, in megabytes. If the log exceeds the
limit, a new stream is created. Old streams (that are not used for recovery) are
freed automatically. The default max size is 32 MB.
Admin rights are required to execute this command.
This command commits an open transaction.
......
......@@ -737,8 +737,8 @@ consult the source code of the listener and test application.
<h2 id="using_recover_tool">Using the Recover Tool</h2>
<p>
The <code>Recover</code> tool can be used to extract the contents of a data file, even if the database is corrupted.
It also extracts the content of the log file or large objects (CLOB or BLOB).
The <code>Recover</code> tool can be used to extract the contents of a database file, even if the database is corrupted.
It also extracts the content of the transaction log and large objects (CLOB or BLOB).
To run the tool, type on the command line:
</p>
<pre>
......@@ -754,9 +754,7 @@ user, or if there are conflicting users, running the script will fail. Consider
against a database that was created with a user name that is not in the script.
</p>
<p>
The <code>Recover</code> tool creates a SQL script from database files. It also processes the transaction log file(s),
however it does not automatically apply those changes. Usually, many of those changes are already
applied in the database.
The <code>Recover</code> tool creates a SQL script from the database file. It also processes the transaction log.
</p>
<h2 id="file_locking_protocols">File Locking Protocols</h2>
......
......@@ -790,10 +790,10 @@ connecting. For a list of supported settings, see <a href="grammar.html">SQL Gra
<h2 id="custom_access_mode">Custom File Access Mode</h2>
<p>
Usually, the database opens log, data and index files with the access mode
<code>rw</code>, meaning
read-write (except for read only databases, where the mode <code>r</code> is used).
To open a database in read-only mode if the files are not read-only, use
Usually, the database opens the database file with the access mode
<code>rw</code>, meaning read-write (except for read-only databases,
where the mode <code>r</code> is used).
To open a database in read-only mode if the database file is not read-only, use
<code>ACCESS_MODE_DATA=r</code>.
Also supported are <code>rws</code> and <code>rwd</code>.
This setting must be specified in the database URL:
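The <code>rw</code>, <code>rws</code> and <code>rwd</code> values correspond to the open modes of <code>java.io.RandomAccessFile</code> in the JDK (<code>rws</code> also writes file metadata synchronously, <code>rwd</code> only the content). As a minimal sketch, independent of H2 itself, of what these modes mean at the file level:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;

public class AccessModeDemo {

    // Write an int in the given access mode and read it back.
    static int roundTrip(File f, String mode) {
        try (RandomAccessFile raf = new RandomAccessFile(f, mode)) {
            raf.writeInt(42);
            raf.seek(0);
            return raf.readInt();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".db");
        f.deleteOnExit();
        // "rw"  - read-write; writes may be buffered by the OS
        // "rws" - read-write; content and metadata written synchronously
        // "rwd" - read-write; content written synchronously
        for (String mode : new String[] { "rw", "rws", "rwd" }) {
            System.out.println(mode + " -> " + roundTrip(f, mode));
        }
    }
}
```

The synchronous modes trade throughput for durability, which is why they are offered as alternatives to the default <code>rw</code>.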
......@@ -899,39 +899,12 @@ The following files are created for persistent databases:
<tr><td class="notranslate">
test.h2.db
</td><td>
Database file (H2 version 1.2.x).<br />
Database file.<br />
Contains the transaction log, indexes, and data for all tables.<br />
Format: <code>&lt;database&gt;.h2.db</code>
</td><td>
1 per database
</td></tr>
<tr><td class="notranslate">
test.data.db
</td><td>
Data file (H2 version 1.1.x).<br />
Contains the data for all tables.<br />
Format: <code>&lt;database&gt;.data.db</code>
</td><td>
1 per database
</td></tr>
<tr><td class="notranslate">
test.index.db
</td><td>
Index file (H2 version 1.1.x).<br />
Contains the data for all (b-tree) indexes.<br />
Format: <code>&lt;database&gt;.index.db</code>
</td><td>
1 per database
</td></tr>
<tr><td class="notranslate">
test.0.log.db
</td><td>
Transaction log file (H2 version 1.1.x).<br />
The transaction log is used for recovery.<br />
Format: <code>&lt;database&gt;.&lt;id&gt;.log.db</code>
</td><td>
0 or more per database
</td></tr>
<tr><td class="notranslate">
test.lock.db
</td><td>
......@@ -939,7 +912,7 @@ The following files are created for persistent databases:
Automatically (re-)created while the database is in use.<br />
Format: <code>&lt;database&gt;.lock.db</code>
</td><td>
1 per database
1 per database (only if in use)
</td></tr>
<tr><td class="notranslate">
test.trace.db
......@@ -993,26 +966,11 @@ To backup data while the database is running, the SQL command <code>SCRIPT</code
<h2 id="logging_recovery">Logging and Recovery</h2>
<p>
Whenever data is modified in the database and those changes are committed, the changes are logged
to disk (except for in-memory objects). The changes to the data file itself are usually written
later on, to optimize disk access. If there is a power failure, the data and index files are not up-to-date.
But because the changes are in the log file, the next time the database is opened, the changes that are
in the log file are re-applied automatically.
</p><p>
Please note that index file updates are not logged by default. If the database is opened and recovery is required,
the index file is rebuilt from scratch.
</p><p>
There is usually only one log file per database. This file grows until the database is closed successfully,
and is then deleted. Or, if the file gets too big, the database switches to another log file (with a higher id).
It is possible to force the log switching by using the <code>CHECKPOINT</code> command.
</p><p>
If the database file is corrupted, because the checksum of a record does not match (for example, if the
file was edited with another application), the database can be opened in recovery mode. In this case,
errors in the database are logged but not thrown. The database should be backed up to a script
and re-built as soon as possible. To open the database in the recovery mode, use a database URL
must contain <code>;RECOVER=1</code>, as in
<code>jdbc:h2:~/test;RECOVER=1</code>. Indexes are rebuilt in this case, and
the summary (object allocation table) is not read in this case, so opening the database takes longer.
Whenever data is modified in the database and those changes are committed, the changes are written
to the transaction log (except for in-memory objects). The changes to the main data area itself are usually written
later on, to optimize disk access. If there is a power failure, the main data area is not up-to-date,
but because the changes are in the transaction log, the next time the database is opened, the changes
are re-applied automatically.
</p>
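<p>
The recovery scheme described above is write-ahead logging. A conceptual sketch (not H2's actual log format) of why replaying the log restores committed changes after a power failure:
</p>

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WalSketch {

    // Replay committed log records against an empty (or stale) data area.
    static Map<String, String> replay(List<String> logRecords) {
        Map<String, String> data = new HashMap<>();
        for (String record : logRecords) {
            String[] p = record.split(" ");
            if (p[0].equals("PUT")) {
                data.put(p[1], p[2]); // later records win, as in a real replay
            }
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("txn", ".log");
        // 1. On commit, changes are appended to the transaction log first
        //    (write-ahead), before the main data area is updated.
        Files.write(log, List.of("PUT K1 V1", "PUT K2 V2"), StandardOpenOption.APPEND);
        // 2. A power failure at this point leaves the main data area stale,
        //    but the committed changes survive in the log.
        // 3. On the next open, the log is replayed to bring the data up to date.
        Map<String, String> data = replay(Files.readAllLines(log));
        System.out.println(data.get("K1") + " " + data.get("K2"));
        Files.delete(log);
    }
}
```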
<h2 id="compatibility">Compatibility</h2>
......@@ -1303,8 +1261,8 @@ If it does not work, check the file <code>&lt;database&gt;.trace.db</code> for e
If the database files are read-only, then the database is read-only as well.
It is not possible to create new tables, add or modify data in this database.
Only <code>SELECT</code> and <code>CALL</code> statements are allowed.
To create a read-only database, close the database so that the log file gets smaller. Do not delete the log file.
Then, make the database files read-only using the operating system.
To create a read-only database, close the database.
Then, make the database file read-only.
When you open the database now, it is read-only.
There are two ways an application can find out whether the database is read-only:
by calling <code>Connection.isReadOnly()</code>
......@@ -1692,11 +1650,11 @@ The trigger can be used to veto a change by throwing a <code>SQLException</code>
<h2 id="compacting">Compacting a Database</h2>
<p>
Empty space in the database file is re-used automatically.
To re-build the indexes, the simplest way is to delete the <code>.index.db</code> file
while the database is closed. However in some situations (for example after deleting
a lot of data in a database), one sometimes wants to shrink the size of the database
(compact a database). Here is a sample function to do this:
Empty space in the database file is re-used automatically. When closing the database,
the database is automatically compacted for up to 1 second by default. To compact more,
use the SQL statement <code>SHUTDOWN COMPACT</code>. However, re-creating the database may further
reduce the database size, because this will re-build the indexes.
Here is a sample function to do this:
</p>
<pre>
public static void compact(String dir, String dbName,
......@@ -1716,7 +1674,7 @@ of a database and re-build the database from the script.
<h2 id="cache_settings">Cache Settings</h2>
<p>
The database keeps most frequently used data and index pages in the main memory.
The database keeps most frequently used data in the main memory.
The amount of memory used for caching can be changed using the setting
<code>CACHE_SIZE</code>. This setting can be set in the database connection URL
(<code>jdbc:h2:~/test;CACHE_SIZE=131072</code>), or it can be changed at runtime using
......@@ -1731,7 +1689,7 @@ please run your own test cases first.
</p><p>
To get information about page reads and writes, and the current caching algorithm in use,
call <code>SELECT * FROM INFORMATION_SCHEMA.SETTINGS</code>. The number of pages read / written
is listed for the data and index file.
is listed.
</p>
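<p>
Assuming an LRU-style replacement policy for the page cache (one of the algorithms H2 can report via <code>INFORMATION_SCHEMA.SETTINGS</code>), the eviction behavior can be illustrated, purely as a sketch and not H2's implementation, with a <code>LinkedHashMap</code> in access order:
</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny LRU "page cache": reads refresh an entry's position, and the
// least-recently-used entry is evicted when the capacity is exceeded.
public class LruPageCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxPages;

    public LruPageCache(int maxPages) {
        super(16, 0.75f, true); // true = access order, not insertion order
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxPages; // evict the least-recently-used page
    }

    public static void main(String[] args) {
        LruPageCache<Integer, String> cache = new LruPageCache<>(2);
        cache.put(1, "page1");
        cache.put(2, "page2");
        cache.get(1);          // touch page 1: it becomes most recently used
        cache.put(3, "page3"); // evicts page 2, the least recently used
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```

A larger <code>CACHE_SIZE</code> simply raises the capacity at which this kind of eviction starts, trading memory for fewer page reads.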
<!-- [close] { --></div></td></tr></table><!-- } --><!-- analytics --></body></html>
......
......@@ -24,7 +24,7 @@ See also <a href="build.html#providing_patches">Providing Patches</a>.
<h2>Priority 1</h2>
<ul><li>Bugfixes
</li><li>Use the transaction log for rollback.
</li><li>Support very large transactions.
</li><li>More tests with MULTI_THREADED=1
</li><li>Optimization: result set caching (like MySQL); option to disable
</li><li>Server side cursors
......
......@@ -171,7 +171,7 @@ public class SysProperties {
/**
* System property <code>h2.defaultMaxLengthInplaceLob</code>
* (default: 4096).<br />
* The default maximum length of an LOB that is stored in the data file itself.
* The default maximum length of an LOB that is stored in the database file.
*/
public static final int DEFAULT_MAX_LENGTH_INPLACE_LOB = getIntSetting("h2.defaultMaxLengthInplaceLob", 4096);
......
......@@ -113,7 +113,7 @@ public class Constants {
public static final int DEFAULT_HTTP_PORT = 8082;
/**
* The default value for the maximum log file size.
* The default value for the maximum transaction log size.
*/
public static final long DEFAULT_MAX_LOG_SIZE = 32 * 1024 * 1024;
......@@ -135,7 +135,7 @@ public class Constants {
public static final int DEFAULT_TCP_PORT = 9092;
/**
* The default delay in milliseconds before the log file is written.
* The default delay in milliseconds before the transaction log is written.
*/
public static final int DEFAULT_WRITE_DELAY = 500;
......
......@@ -992,7 +992,7 @@ public class Database implements DataHandler {
for (Session s : all) {
try {
// must roll back, otherwise the session is removed and
// the log file that contains its uncommitted operations as well
// the transaction log that contains its uncommitted operations as well
s.rollback();
s.close();
} catch (DbException e) {
......@@ -1671,7 +1671,7 @@ public class Database implements DataHandler {
}
/**
* Flush all pending changes to the transaction log files.
* Flush all pending changes to the transaction log.
*/
public void flush() {
if (readOnly || pageStore == null) {
......@@ -1846,15 +1846,6 @@ public class Database implements DataHandler {
this.lobCompressionAlgorithm = stringValue;
}
/**
* Called when the size if the data or index file has been changed.
*
* @param length the new file size
*/
public void notifyFileSize(long length) {
// ignore
}
public synchronized void setMaxLogSize(long value) {
if (pageStore != null) {
pageStore.setMaxLogSize(value);
......@@ -2130,7 +2121,7 @@ public class Database implements DataHandler {
}
/**
* Flush all changes and open a new log file.
* Flush all changes and open a new transaction log.
*/
public void checkpoint() {
if (persistent) {
......@@ -2142,7 +2133,7 @@ public class Database implements DataHandler {
}
/**
* This method is called before writing to the log file.
* This method is called before writing to the transaction log.
*
* @return true if the call was successful and writing is allowed,
* false if another connection was faster
......
......@@ -477,7 +477,7 @@ public class Session extends SessionWithState implements SessionFactory {
}
}
if (unlinkMap != null && unlinkMap.size() > 0) {
// need to flush the log file, because we can't unlink lobs if the
// need to flush the transaction log, because we can't unlink lobs if the
// commit record is not written
database.flush();
for (Value v : unlinkMap.values()) {
......@@ -696,10 +696,10 @@ public class Session extends SessionWithState implements SessionFactory {
/**
* Called when a log entry for this session is added. The session keeps
* track of the first entry in the log file that is not yet committed.
* track of the first entry in the transaction log that is not yet committed.
*
* @param logId the log file id
* @param pos the position of the log entry in the log file
* @param logId the transaction log id
* @param pos the position of the log entry in the transaction log
*/
public void addLogPos(int logId, int pos) {
if (firstUncommittedLog == Session.LOG_WRITTEN) {
......@@ -713,7 +713,8 @@ public class Session extends SessionWithState implements SessionFactory {
}
/**
* This method is called after the log file has committed this session.
* This method is called after the transaction log has written the commit
* entry for this session.
*/
public void setAllCommitted() {
firstUncommittedLog = Session.LOG_WRITTEN;
......
......@@ -276,7 +276,7 @@ public class PageBtreeIndex extends PageIndex {
}
/**
* Get a row from the data file.
* Get a row from the main index.
*
* @param session the session
* @param key the row key
......
......@@ -24,7 +24,7 @@ import org.h2.util.SmallLRUCache;
* usually one trace system per database. It is called 'trace' because the term
* 'log' is already used in the database domain and means 'transaction log'. It
* is possible to write after close was called, but that means for each write
* the log file will be opened and closed again (which is slower).
* the file will be opened and closed again (which is slower).
*/
public class TraceSystem implements TraceWriter {
......
......@@ -262,11 +262,11 @@ Removes all rows from a table."
"Commands (Other)","CHECKPOINT","
CHECKPOINT
","
Flushes the log and data files and switches to a new log file if possible."
Flushes the data to disk and switches to a new transaction log if possible."
"Commands (Other)","CHECKPOINT SYNC","
CHECKPOINT SYNC
","
Flushes the log, data and index files and forces all system buffers be written
Flushes the data to disk and forces all system buffers to be written
to the underlying device."
"Commands (Other)","COMMIT","
COMMIT [ WORK ]
......@@ -385,7 +385,7 @@ Sets the maximum size of an in-place LOB object."
"Commands (Other)","SET MAX_LOG_SIZE","
SET MAX_LOG_SIZE int
","
Sets the maximum file size of a log file, in megabytes."
Sets the maximum size of the transaction log, in megabytes."
"Commands (Other)","SET MAX_MEMORY_ROWS","
SET MAX_MEMORY_ROWS int
","
......
......@@ -22,10 +22,10 @@ public interface CacheWriter {
void writeBack(CacheObject entry);
/**
* Flush the log file, so that entries can be removed from the cache. This
* is only required if the cache is full and contains data that is not yet
* written to the log file. It is required to write the log entries to the
* log file first, because the log file is 'write ahead'.
* Flush the transaction log, so that entries can be removed from the cache.
* This is only required if the cache is full and contains data that is not
* yet written to the log. It is required to write the log entries to the
* log first, because the log is 'write ahead'.
*/
void flushLog();
......
......@@ -580,7 +580,7 @@ public class ValueLob extends Value {
if (hash == 0) {
if (precision > 4096) {
// TODO: should calculate the hash code when saving, and store
// it in the data file
// it in the database file
return (int) (precision ^ (precision >>> 32));
}
if (type == CLOB) {
......
......@@ -226,8 +226,8 @@ java org.h2.test.TestAll timer
public boolean diskResult;
/**
* If the transaction log files should be kept small (that is, log files
* should be switched early).
* If the transaction log should be kept small (that is, the log should be
* switched early).
*/
boolean smallLog;
......@@ -292,9 +292,14 @@ java org.h2.test.TestAll timer
power failure test: larger binaries and additional index.
recover even if table not found
(PageStore.addMeta, type != META_TYPE_SCAN_INDEX)
rename Page* classes
move classes to the right packages
Fix the default value and documentation for MAX_LOG_SIZE.
// System.setProperty("h2.pageSize", "64");
test with small freeList pages, page size 64
......@@ -545,6 +550,7 @@ kill -9 `jps -l | grep "org.h2.test." | cut -d " " -f 1`
new TestCrashAPI().runTest(this);
new TestFuzzOptimizations().runTest(this);
new TestRandomSQL().runTest(this);
new TestKillRestart().runTest(this);
new TestKillRestartMulti().runTest(this);
new TestMultiThreaded().runTest(this);
......
......@@ -52,7 +52,8 @@ public abstract class TestHalt extends TestBase {
protected static final int OP_SELECT = 8;
/**
* This bit flag means operations should be written to the log file immediately.
* This bit flag means operations should be written to the transaction log
* immediately.
*/
protected static final int FLAG_NO_DELAY = 1;
......
......@@ -52,6 +52,7 @@ This looks like a serious problem. I have a few questions:
- Could you send the full stack trace of the exception including message text?
- What is your database URL?
- Did you use multiple connections?
- Do you use temporary tables?
- A workaround is: use the tool org.h2.tools.Recover to create
the SQL script file, and then re-create the database using this script.
Does it work when you do this?
......@@ -60,14 +61,13 @@ This looks like a serious problem. I have a few questions:
select * from information_schema.settings where name='CREATE_BUILD'
or have a look in the SQL script created by the recover tool.
- Did the application run out of memory (once, or multiple times)?
- Do you use any settings or special features (for example, the setting
LOG=0, or two phase commit, linked tables, cache settings)?
- Do you use any settings or special features (for example cache settings,
two phase commit, linked tables)?
- Do you use any H2-specific system properties?
- Is the application multi-threaded?
- What operating system, file system, and virtual machine
(java -version) do you use?
- How did you start the Java process (java -Xmx... and so on)?
- To you use temporary tables?
- Is it (or was it at some point) a networked file system?
- How big is the database (file sizes)?
- How much heap memory does the Java process have?
......
......@@ -635,4 +635,4 @@ explicitconstructorcall switchstatements members parens alignment declarations
jdt continuation codegen parenthesized tabulation ellipsis imple inits guardian
postfix iconified deiconified deactivated activated worker frequent utilities
workers appender recovers balanced serializing breaking austria wildam
census genealogy scapegoat gov
\ No newline at end of file
census genealogy scapegoat gov compacted migrating
\ No newline at end of file