Commit 249d5f2c authored by Thomas Mueller


MVStore: store the file header also at the end of each chunk, which means even fewer write operations
Parent 8846c1bb
@@ -18,7 +18,9 @@ Change Log
<h1>Change Log</h1>
<h2>Next Version (unreleased)</h2>
<ul><li>MVStore: store the file header also at the end of each chunk,
which further reduces the number of write operations.
</li><li>MVStore: a map implementation that supports concurrent operations.
</li><li>MVStore: unified exception handling; the version is included in the messages.
</li><li>MVStore: old data is now retained for 45 seconds by default.
</li><li>MVStore: compression is now disabled by default, and can be enabled on request.
......
@@ -267,34 +267,35 @@ The plan is to add such a mechanism later when needed.
Currently, <code>store()</code> needs to be called explicitly to save changes.
Changes are buffered in memory, and once enough changes have accumulated
(for example 2 MB), all changes are written in one continuous disk write operation.
(According to a test, the write throughput of a common SSD increases with the block size,
up to a block size of 2 MB, and then does not increase further.)
But of course, if needed, changes can also be persisted even if only a little data was changed.
The estimated amount of unsaved changes is tracked.
The plan is to automatically store changes in a background thread once enough of them have accumulated.
</p><p>
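A minimal usage sketch (the builder and map APIs are taken from the tests in
this commit; the exact builder methods may differ):
</p>
<pre>
MVStore s = MVStoreBuilder.fileBased(fileName).open();
MVMap&lt;Integer, String&gt; m = s.openMap("data", Integer.class, String.class);
m.put(1, "Hello");
// nothing is written to disk until store() is called
s.store();
s.close();
</pre><p>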
When storing, all changed pages are serialized,
optionally compressed using the LZF algorithm,
and written sequentially to a free area of the file.
Each such change set is called a chunk.
All parent pages of the changed B-trees are stored in this chunk as well,
so that each chunk also contains the root of each changed map
(which is the entry point for reading this version of the data).
There is no separate index: all data is stored as a list of pages.
Per store, there is one additional map that contains the metadata (the list of
maps, where the root page of each map is stored, and the list of chunks).
</p><p>
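Roughly, based on the code in this commit, a chunk looks as follows in the file
(the field names follow <code>MVStoreTool</code>; this is an illustration, not a specification):
</p>
<pre>
'c' chunkLength chunkId pageCount metaRootPos maxLength maxLengthLive
[ serialized pages, including the changed parent pages and the meta root ]
[ copy of the file header, zero-padded to one block ]
</pre><p>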
There are usually two write operations per chunk:
one to store the chunk data (the pages), and one to update the file header
(so it points to the latest chunk).
If the chunk is appended at the end of the file, the file header is only
written at the end of the chunk.
</p><p>
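In code (from <code>MVStore.store()</code> in this commit), the decision looks like this:
</p>
<pre>
long filePos = reuseSpace ? allocateChunk(length) : fileLength;
boolean atEnd = filePos + length >= fileLength;
...
// overwrite the header at the beginning of the file only if required
if (!atEnd) {
    writeFileHeader();
    shrinkFileIfPossible(1);
}
</pre><p>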
There is currently no transaction log, no undo log,
and there are no in-place updates (however, unused chunks are overwritten).
To save space when persisting very small transactions, the plan is to use a transaction log
where only the deltas are stored, until enough changes have accumulated to persist a chunk.
Old versions are kept and remain readable until they are no longer needed.
</p><p>
Old data is kept for at least 45 seconds (configurable),
so that there are no explicit sync operations required to guarantee data consistency,
but an application can also sync explicitly when needed.
To reuse disk space, the chunks with the lowest amount of live data are compacted
(the live data is simply stored again in the next chunk).
To improve data locality and disk space usage, the plan is to automatically defragment and compact data.
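For example, the concurrent online backup test in this commit relies on this
behavior: it temporarily disables reuse of disk space, so that a copy of the
file taken while the store is in use still contains every referenced chunk:
</p>
<pre>
s.setReuseSpace(false);
byte[] backup = readFileSlowly(fileName, s.getFile().size());
s.setReuseSpace(true);
</pre>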
@@ -343,8 +344,8 @@ Instead, unchecked exceptions are thrown if needed.
The error message always contains the version of the tool.
The following exceptions can occur:
</p>
<ul><li><code>IllegalStateException</code> if a map was already closed or
an IO exception occurred, for example if the file was locked, is already closed,
could not be opened or closed, if reading or writing failed,
if the file is corrupt, or if there is an internal error in the tool.
</li><li><code>IllegalArgumentException</code> if a method was called with an illegal argument.
@@ -361,7 +362,7 @@ The MVStore is somewhat similar to the Berkeley DB Java Edition because it is al
and is also a log-structured storage, but the H2 license is more liberal.
</p><p>
Like SQLite, the MVStore keeps all data in one file.
The plan is to make the MVStore easier to use than SQLite and faster, including faster on Android
(this has not been tested recently; however, an initial test was successful).
</p><p>
The API of the MVStore is similar to MapDB (previously known as JDBM) from Jan Kotek,
@@ -387,4 +388,3 @@ The MVStore should run on any JVM as well as on Android
</p>
<!-- [close] { --></div></td></tr></table><!-- } --><!-- analytics --></body></html>
@@ -299,7 +299,7 @@ public class DataUtils {
/**
 * Read from a file channel until the buffer is full, or end-of-file
 * has been reached. The buffer is rewound after reading.
 *
 * @param file the file channel
 * @param pos the absolute position within the file
......
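A sketch of what this contract implies (the method body is not part of this
diff, so this is an assumption based on the javadoc alone):

    static void readFully(FileChannel file, long pos, ByteBuffer dst)
            throws IOException {
        // keep reading until the buffer is full or end-of-file is reached
        while (dst.remaining() > 0) {
            int len = file.read(dst, pos);
            if (len < 0) {
                break;
            }
            pos += len;
        }
        // the buffer is rewound after reading
        dst.rewind();
    }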
@@ -27,6 +27,7 @@ import org.h2.mvstore.type.ObjectDataTypeFactory;
import org.h2.mvstore.type.StringDataType;
import org.h2.store.fs.FilePath;
import org.h2.store.fs.FileUtils;
import org.h2.util.MathUtils;
import org.h2.util.New;
import org.h2.util.StringUtils;
@@ -36,20 +37,18 @@ File format:
header: (blockSize) bytes
header: (blockSize) bytes
[ chunk ] *
(there are two headers for security at the beginning of the file,
and there is a header after each chunk)
header:
H:3,...
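(for illustration, a header line might look as follows; the keys are the ones
read back in readFileHeader below, and the values are placeholders:
H:3,creationTime:...,lastMapId:...,rootChunk:...,version:...,fletcher:...)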
TODO:
- automated 'kill process' and 'power failure' test
- implement table engine for H2
- maybe split database into multiple files, to speed up compact
- auto-compact from time to time and on close
- test and possibly improve compact operation (for large dbs)
- limited support for writing to old versions (branches)
- on insert, if the child page is already full, don't load and modify it
-- split directly (for leaves with 1 entry)
- performance test with encrypting file system
@@ -93,6 +92,9 @@ TODO:
-- and for MVStore on Windows, auto-detect renamed file
- ensure data is overwritten eventually if the system doesn't have a timer
- SSD-friendly write (always in blocks of 128 or 256 KB?)
- close the file on out of memory or disk write error (out of disk space or so)
- implement a sharded map (in one store, multiple stores)
-- to support concurrent updates and writes, and very large maps
*/
@@ -448,29 +450,52 @@ public class MVStore {
}

private void readFileHeader() {
    // we don't have a valid header yet
    currentVersion = -1;
    // read the last block of the file, and then the first two blocks
    ByteBuffer buff = ByteBuffer.allocate(3 * BLOCK_SIZE);
    buff.limit(BLOCK_SIZE);
    fileReadCount++;
    DataUtils.readFully(file, fileSize - BLOCK_SIZE, buff);
    buff.limit(3 * BLOCK_SIZE);
    buff.position(BLOCK_SIZE);
    fileReadCount++;
    DataUtils.readFully(file, 0, buff);
    for (int i = 0; i < 3 * BLOCK_SIZE; i += BLOCK_SIZE) {
        String s = StringUtils.utf8Decode(buff.array(), i, BLOCK_SIZE)
                .trim();
        HashMap<String, String> m = DataUtils.parseMap(s);
        String f = m.remove("fletcher");
        if (f == null) {
            continue;
        }
        int check;
        try {
            check = (int) Long.parseLong(f, 16);
        } catch (NumberFormatException e) {
            check = -1;
        }
        s = s.substring(0, s.lastIndexOf("fletcher") - 1) + " ";
        byte[] bytes = StringUtils.utf8Encode(s);
        int checksum = DataUtils.getFletcher32(bytes, bytes.length / 2 * 2);
        if (check != checksum) {
            continue;
        }
        // use the valid header with the highest version
        long version = Long.parseLong(m.get("version"));
        if (version > currentVersion) {
            fileHeader = m;
            rootChunkStart = Long.parseLong(m.get("rootChunk"));
            creationTime = Long.parseLong(m.get("creationTime"));
            currentVersion = version;
            lastMapId = Integer.parseInt(m.get("lastMapId"));
        }
    }
    if (currentVersion < 0) {
        throw DataUtils.illegalStateException("File header is corrupt");
    }
}
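// The validity check above uses DataUtils.getFletcher32, whose body is not
// part of this diff. The following is only a sketch of a Fletcher-32 checksum
// of the kind that check assumes; the real implementation may differ.
// The caller always passes an even length (bytes.length / 2 * 2).
private static int fletcher32Sketch(byte[] bytes, int length) {
    int s1 = 0xffff, s2 = 0xffff;
    int i = 0;
    while (i < length) {
        // reduce the sums often enough (every 359 byte pairs) that
        // the 32-bit accumulators cannot overflow
        for (int end = Math.min(i + 718, length); i < end;) {
            int x = ((bytes[i++] & 0xff) << 8) | (bytes[i++] & 0xff);
            s2 += s1 += x;
        }
        s1 = (s1 & 0xffff) + (s1 >>> 16);
        s2 = (s2 & 0xffff) + (s2 >>> 16);
    }
    s1 = (s1 & 0xffff) + (s1 >>> 16);
    s2 = (s2 & 0xffff) + (s2 >>> 16);
    return (s2 << 16) | s1;
}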
private byte[] getFileHeaderBytes() {
    StringBuilder buff = new StringBuilder();
    fileHeader.put("lastMapId", "" + lastMapId);
    fileHeader.put("rootChunk", "" + rootChunkStart);
@@ -483,6 +508,11 @@ public class MVStore {
    if (bytes.length > BLOCK_SIZE) {
        throw DataUtils.illegalArgumentException("File header too large: " + buff);
    }
    return bytes;
}

private void writeFileHeader() {
    byte[] bytes = getFileHeaderBytes();
    ByteBuffer header = ByteBuffer.allocate(2 * BLOCK_SIZE);
    header.put(bytes);
    header.position(BLOCK_SIZE);
@@ -614,7 +644,7 @@ public class MVStore {
        buff = writeBuffer;
        buff.clear();
    } else {
        writeBuffer = buff = ByteBuffer.allocate(maxLength + 128 * 1024);
    }
}
// need to patch the header later
@@ -644,9 +674,14 @@ public class MVStore {
// the correct value is written in the chunk header
meta.getRoot().writeUnsavedRecursive(c, buff);
int chunkLength = buff.position();
// round up to the next block, plus one block for the end header
int length = MathUtils.roundUpInt(chunkLength, BLOCK_SIZE) + BLOCK_SIZE;
buff.limit(length);
long fileLength = getFileLengthUsed();
long filePos = reuseSpace ? allocateChunk(length) : fileLength;
boolean atEnd = filePos + length >= fileLength;
// need to keep old chunks
// until they are no longer referenced
@@ -656,22 +691,30 @@ public class MVStore {
    chunks.remove(x);
}
c.start = filePos;
c.length = chunkLength;
c.metaRootPos = meta.getRoot().getPos();
buff.position(0);
c.writeHeader(buff);
rootChunkStart = filePos;
revertTemp();
// write a copy of the file header at the end of the chunk
buff.position(buff.limit() - BLOCK_SIZE);
byte[] header = getFileHeaderBytes();
buff.put(header);
// fill the rest of the header block with zeroes
buff.put(new byte[BLOCK_SIZE - header.length]);
buff.position(0);
fileWriteCount++;
DataUtils.writeFully(file, filePos, buff);
fileSize = Math.max(fileSize, filePos + buff.position());
// overwrite the header at the beginning of the file only if required
if (!atEnd) {
    writeFileHeader();
    shrinkFileIfPossible(1);
}
// some pages might have been changed in the meantime (in the newest version)
unsavedPageCount = Math.max(0, unsavedPageCount - currentUnsavedPageCount);
return version;
@@ -725,21 +768,18 @@ public class MVStore {
}

private long getFileLengthUsed() {
    long size = 2 * BLOCK_SIZE;
    for (Chunk c : chunks.values()) {
        if (c.start == Long.MAX_VALUE) {
            continue;
        }
        long x = c.start + c.length;
        size = Math.max(size, MathUtils.roundUpLong(x, BLOCK_SIZE) + BLOCK_SIZE);
    }
    return size;
}
private long allocateChunk(long length) {
    BitSet set = new BitSet();
    set.set(0);
    set.set(1);
@@ -749,7 +789,7 @@ public class MVStore {
        }
        int first = (int) (c.start / BLOCK_SIZE);
        int last = (int) ((c.start + c.length) / BLOCK_SIZE);
        // also mark the block that holds the chunk's trailing header copy
        set.set(first, last + 2);
    }
    int required = (int) (length / BLOCK_SIZE) + 1;
    for (int i = 0; i < set.size(); i++) {
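        // the rest of the loop is outside this diff; presumably it performs
        // a first-fit search, e.g. (a sketch, not the actual code):
        //   int start = set.nextClearBit(i);
        //   int end = set.nextSetBit(start);
        //   if (end < 0 || end - start >= required) {
        //       return (long) start * BLOCK_SIZE;
        //   }
        //   i = end;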
@@ -1197,6 +1237,15 @@ public class MVStore {
} while (last.version > version && chunks.size() > 0);
rootChunkStart = last.start;
writeFileHeader();
// need to write the header at the end of the file as well,
// so that the old end header is not used
byte[] bytes = getFileHeaderBytes();
ByteBuffer header = ByteBuffer.allocate(BLOCK_SIZE);
header.put(bytes);
header.rewind();
fileWriteCount++;
DataUtils.writeFully(file, fileSize, header);
fileSize += BLOCK_SIZE;
readFileHeader();
readMeta();
}
......
@@ -6,13 +6,11 @@
 */
package org.h2.mvstore;

import java.io.IOException;
import java.io.PrintWriter;
import java.io.Writer;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import org.h2.store.fs.FilePath;
import org.h2.store.fs.FileUtils;
@@ -47,9 +45,10 @@ public class MVStoreTool {
 * @param fileName the name of the file
 * @param writer the writer
 */
public static void dump(String fileName, Writer writer) throws IOException {
    PrintWriter pw = new PrintWriter(writer, true);
    if (!FileUtils.exists(fileName)) {
        pw.println("File not found: " + fileName);
        return;
    }
    FileChannel file = null;
@@ -57,20 +56,21 @@ public class MVStoreTool {
try {
    file = FilePath.get(fileName).open("r");
    long fileLength = file.size();
    pw.println("file " + fileName);
    pw.println("    length " + fileLength);
    ByteBuffer block = ByteBuffer.allocate(4096);
    for (long pos = 0; pos < fileLength;) {
        block.rewind();
        DataUtils.readFully(file, pos, block);
        block.rewind();
        int tag = block.get();
        if (tag == 'H') {
            pw.println("    header at " + pos);
            pw.println("    " + new String(block.array(), "UTF-8").trim());
            pos += blockSize;
            continue;
        }
        if (tag != 'c') {
            pos += blockSize;
            continue;
        }
@@ -80,7 +80,7 @@ public class MVStoreTool {
long metaRootPos = block.getLong();
long maxLength = block.getLong();
long maxLengthLive = block.getLong();
pw.println("    chunk " + chunkId +
        " at " + pos +
        " length " + chunkLength +
        " pageCount " + pageCount +
@@ -102,7 +102,7 @@ public class MVStoreTool {
int type = chunk.get();
boolean compressed = (type & 2) != 0;
boolean node = (type & 1) != 0;
pw.println("        map " + mapId + " at " + p + " " +
        (node ? "node" : "leaf") + " " +
        (compressed ? "compressed " : "") +
        "len: " + pageLength + " entries: " + len);
@@ -111,7 +111,7 @@ public class MVStoreTool {
        }
    }
} catch (IOException e) {
    pw.println("ERROR: " + e);
    throw e;
} finally {
    if (file != null) {
@@ -122,8 +122,8 @@ public class MVStoreTool {
                }
            }
        }
        pw.println();
        pw.flush();
    }
}
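With the signature change above, dump can now write to any Writer, not only a
PrintWriter. A hypothetical usage ("test.h3" is a placeholder file name):

    StringWriter w = new StringWriter();
    MVStoreTool.dump("test.h3", w);
    System.out.println(w);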
@@ -7,10 +7,9 @@ package org.h2.test.store;
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.Iterator;
import java.util.Map;
import java.util.Random;
@@ -39,7 +38,7 @@ public class TestConcurrent extends TestMVStore {
public void test() throws Exception {
    FileUtils.deleteRecursive(getBaseDir(), true);
    testConcurrentOnlineBackup();
    testConcurrentMap();
    testConcurrentIterate();
    testConcurrentWrite();
@@ -100,9 +99,7 @@ public class TestConcurrent extends TestMVStore {
}

private void testConcurrentOnlineBackup() throws Exception {
    String fileName = getBaseDir() + "/onlineBackup.h3";
    String fileNameRestore = getBaseDir() + "/onlineRestore.h3";
    final MVStore s = openStore(fileName);
    final MVMap<Integer, byte[]> map = s.openMap("test");
@@ -111,49 +108,28 @@ public class TestConcurrent extends TestMVStore {
@Override
public void call() throws Exception {
    while (!stop) {
        for (int i = 0; i < 20; i++) {
            map.put(i, new byte[100 * r.nextInt(100)]);
        }
        s.store();
        map.clear();
        s.store();
        long len = s.getFile().size();
        if (len > 1024 * 1024) {
            // slow down writing
            Thread.sleep(20);
        }
    }
}
};
t.execute();
for (int i = 0; i < 10; i++) {
    // System.out.println("test " + i);
    s.setReuseSpace(false);
    byte[] buff = readFileSlowly(fileName, s.getFile().size());
    s.setReuseSpace(true);
    FileOutputStream out = new FileOutputStream(fileNameRestore);
    out.write(buff);
@@ -170,22 +146,17 @@ public class TestConcurrent extends TestMVStore {
    s.close();
}

private static byte[] readFileSlowly(String fileName, long length) throws Exception {
    InputStream in = new BufferedInputStream(new FileInputStream(fileName));
    ByteArrayOutputStream buff = new ByteArrayOutputStream();
    for (int j = 0; j < length; j++) {
        int x = in.read();
        if (x < 0) {
            break;
        }
        buff.write(x);
    }
    in.close();
    return buff.toByteArray();
}
......
@@ -265,10 +265,16 @@ public class TestMVRTree extends TestMVStore {
}

private void testRandom() {
    testRandom(true);
    testRandom(false);
}

private void testRandom(boolean quadraticSplit) {
    String fileName = getBaseDir() + "/testRandom.h3";
    FileUtils.delete(fileName);
    MVStore s = openStore(fileName);
    MVRTreeMap<String> m = MVRTreeMap.create(2, StringDataType.INSTANCE);
    m.setQuadraticSplit(quadraticSplit);
    m = s.openMap("data", m);
    HashMap<SpatialKey, String> map = new HashMap<SpatialKey, String>();
    Random rand = new Random(1);
......
@@ -103,7 +103,7 @@ public class TestMVStore extends TestBase {
s.store();
s.close();
int[] expectedReadsForCacheSize = {
        3408, 2590, 1924, 1440, 1102, 956, 918
};
for (int cacheSize = 0; cacheSize <= 6; cacheSize += 4) {
    s = MVStoreBuilder.fileBased(fileName).
@@ -172,6 +172,10 @@ public class TestMVStore extends TestBase {
// test corrupt file headers
for (int i = 0; i <= blockSize; i += blockSize) {
    FileChannel fc = f.open("rw");
    if (i == 0) {
        // corrupt the last block (the end header)
        fc.truncate(fc.size() - 4096);
    }
    ByteBuffer buff = ByteBuffer.allocate(4 * 1024);
    fc.read(buff, i);
    String h = new String(buff.array(), "UTF-8").trim();
@@ -577,7 +581,7 @@ public class TestMVStore extends TestBase {
assertEquals(1000, m.size());
assertEquals(284, s.getUnsavedPageCount());
s.store();
assertEquals(2, s.getFileWriteCount());
s.close();
s = openStore(fileName);
@@ -586,8 +590,8 @@ public class TestMVStore extends TestBase {
assertEquals(0, m.size());
s.store();
// ensure only nodes are read, but not leaves
assertEquals(42, s.getFileReadCount());
assertEquals(1, s.getFileWriteCount());
s.close();
}
@@ -832,6 +836,7 @@ public class TestMVStore extends TestBase {
FileUtils.delete(fileName);
MVStore s = openStore(fileName);
s.close();

s = openStore(fileName);
MVMap<Integer, String> m = s.openMap("data", Integer.class, String.class);
int count = 2000;
@@ -857,6 +862,7 @@ public class TestMVStore extends TestBase {
}
s.store();
s.close();

s = openStore(fileName);
m = s.openMap("data", Integer.class, String.class);
assertNull(m.get(0));
......