Administrator / h2database / Commits / 289e69de

Commit 289e69de, authored 12 years ago by Thomas Mueller

A persistent multi-version map: move to the main source tree

Parent: bd50328e

Showing 6 changed files with 0 additions and 3725 deletions (+0 −3725)
MVStore.java      h2/src/tools/org/h2/dev/store/btree/MVStore.java      +0 −1265
MVRTreeMap.java   h2/src/tools/org/h2/dev/store/rtree/MVRTreeMap.java   +0 −540
SpatialKey.java   h2/src/tools/org/h2/dev/store/rtree/SpatialKey.java   +0 −98
SpatialType.java  h2/src/tools/org/h2/dev/store/rtree/SpatialType.java  +0 −306
package.html      h2/src/tools/org/h2/dev/store/rtree/package.html      +0 −15
ObjectType.java   h2/src/tools/org/h2/dev/store/type/ObjectType.java    +0 −1501
h2/src/tools/org/h2/dev/store/btree/MVStore.java (deleted, mode 100644 → 0)
/*
* Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
* Version 1.0, and under the Eclipse Public License, Version 1.0
* (http://h2database.com/html/license.html).
* Initial Developer: H2 Group
*/
package org.h2.dev.store.btree;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.h2.compress.CompressLZF;
import org.h2.compress.Compressor;
import org.h2.dev.store.cache.CacheLongKeyLIRS;
import org.h2.dev.store.cache.FilePathCache;
import org.h2.dev.store.type.DataType;
import org.h2.dev.store.type.DataTypeFactory;
import org.h2.dev.store.type.ObjectTypeFactory;
import org.h2.dev.store.type.StringType;
import org.h2.store.fs.FilePath;
import org.h2.store.fs.FileUtils;
import org.h2.util.New;
/*
File format:
header: (blockSize) bytes
header: (blockSize) bytes
[ chunk ] *
(there are two headers for security)
header:
H:3,...
TODO:
- support serialization by default
- build script
- test concurrent storing in a background thread
- store creation time in the file header, and seconds since creation
-- in chunk header (plus a counter)
- recovery: keep some old chunks; don't overwrite them
-- for 5 minutes (configurable)
- allocate memory with Utils.newBytes and so on
- unified exception handling
- concurrent map; avoid locking during IO (pre-load pages)
- maybe split database into multiple files, to speed up compact
- automated 'kill process' and 'power failure' test
- implement table engine for H2
- auto-compact from time to time and on close
- test and possibly improve compact operation (for large dbs)
- support background writes (concurrent modification & store)
- limited support for writing to old versions (branches)
- support concurrent operations (including file I/O)
- on insert, if the child page is already full, don't load and modify it
-- split directly (for leaves with 1 entry)
- performance test with encrypting file system
- possibly split chunk data into immutable and mutable
- compact: avoid processing pages using a counting bloom filter
- defragment (re-creating maps, specially those with small pages)
- write using ByteArrayOutputStream; remove DataType.getMaxLength
- file header: check formatRead and format (is formatRead
-- needed if equal to format?)
- chunk header: store changed chunk data as row; maybe after the root
- chunk checksum (header, last page, 2 bytes per page?)
- allow renaming maps
- file locking: solve problem that locks are shared for a VM
- online backup
- MapFactory is the wrong name (StorePlugin?) or is too flexible: remove?
- store file "header" at the end of each chunk; at the end of the file
- is there a better name for the file header,
-- if it's no longer always at the beginning of a file?
- maybe let a chunk point to possible next chunks
-- (so no fixed location header is needed)
- support stores that span multiple files (chunks stored in other files)
- triggers (can be implemented with a custom map)
- store write operations per page (maybe defragment
-- if much different than count)
- r-tree: nearest neighbor search
- use FileChannel by default (nio file system), but:
-- an interrupt closes the FileChannel
- auto-save temporary data if it uses too much memory,
-- but revert it on startup if needed.
- map and chunk metadata: do not store default values
- support maps without values (non-unique indexes),
- and maps without keys (counted b-tree)
- use a small object cache (StringCache)
- dump values
- tool to import / manipulate CSV files
*/
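The header sketched above (`H:3,...`) is a flat comma-separated list of `key:value` pairs. A minimal standalone sketch of parsing such a header line, assuming no quoting or escaping (the real code uses `DataUtils.parseMap`, whose rules may differ):

```java
import java.util.HashMap;

public class HeaderParseSketch {

    // Parse a simple "key:value,key:value" header line into a map.
    // This ignores quoting/escaping, which the real parser handles.
    static HashMap<String, String> parse(String s) {
        HashMap<String, String> map = new HashMap<>();
        for (String pair : s.split(",")) {
            int i = pair.indexOf(':');
            map.put(pair.substring(0, i), pair.substring(i + 1));
        }
        return map;
    }

    public static void main(String[] args) {
        HashMap<String, String> h = parse("H:3,blockSize:4096,format:1");
        System.out.println(h.get("blockSize")); // prints 4096
    }
}
```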
/**
 * A persistent storage for maps.
 */
public class MVStore {

    /**
     * Whether assertions are enabled.
     */
    public static final boolean ASSERT = false;

    /**
     * The block size (physical sector size) of the disk. The file header is
     * written twice, one copy in each block, to ensure it survives a crash.
     */
    static final int BLOCK_SIZE = 4 * 1024;
    private final HashMap<String, Object> config;
    private final String fileName;
    private final DataTypeFactory dataTypeFactory;

    private int pageSize = 6 * 1024;

    private FileChannel file;
    private FileLock fileLock;
    private long fileSize;
    private long rootChunkStart;

    /**
     * The cache. The default size is 16 MB, and the average size is 2 KB. It is
     * split in 16 segments. The stack move distance is 2% of the expected
     * number of entries.
     */
    private final CacheLongKeyLIRS<Page> cache;

    private int lastChunkId;
    private final HashMap<Integer, Chunk> chunks = New.hashMap();

    /**
     * The map of temporarily freed entries in the chunks. The key is the
     * unsaved version, the value is the map of chunks. The maps of chunks
     * contain the number of freed entries per chunk.
     */
    private final HashMap<Long, HashMap<Integer, Chunk>> freedChunks = New.hashMap();

    private MVMap<String, String> meta;
    private final HashMap<String, MVMap<?, ?>> maps = New.hashMap();

    /**
     * The set of maps with potentially unsaved changes.
     */
    private final HashMap<Integer, MVMap<?, ?>> mapsChanged = New.hashMap();

    private HashMap<String, String> fileHeader = New.hashMap();

    private final boolean readOnly;

    private int lastMapId;

    private volatile boolean reuseSpace = true;
    private long retainVersion = -1;
    private int retainChunk = -1;

    private Compressor compressor;

    private long currentVersion;
    private int fileReadCount;
    private int fileWriteCount;
    private int unsavedPageCount;
    MVStore(HashMap<String, Object> config) {
        this.config = config;
        this.fileName = (String) config.get("fileName");
        DataTypeFactory parent = new ObjectTypeFactory();
        DataTypeFactory f = (DataTypeFactory) config.get("dataTypeFactory");
        if (f == null) {
            f = parent;
        } else {
            f.setParent(parent);
        }
        this.dataTypeFactory = f;
        this.readOnly = "r".equals(config.get("openMode"));
        this.compressor = "0".equals(config.get("compression")) ? null : new CompressLZF();
        if (fileName != null) {
            Object s = config.get("cacheSize");
            int mb = s == null ? 16 : Integer.parseInt(s.toString());
            cache = CacheLongKeyLIRS.newInstance(
                    mb * 1024 * 1024, 2048, 16,
                    mb * 1024 * 1024 / 2048 * 2 / 100);
        } else {
            cache = null;
        }
    }
    /**
     * Open a store in exclusive mode.
     *
     * @param fileName the file name (null for in-memory)
     * @return the store
     */
    public static MVStore open(String fileName) {
        HashMap<String, Object> config = New.hashMap();
        config.put("fileName", fileName);
        MVStore s = new MVStore(config);
        s.open();
        return s;
    }
    /**
     * Open an old, stored version of a map.
     *
     * @param version the version
     * @param name the map name
     * @param template the template map
     * @return the read-only map
     */
    @SuppressWarnings("unchecked")
    <T extends MVMap<?, ?>> T openMapVersion(long version, String name, MVMap<?, ?> template) {
        MVMap<String, String> oldMeta = getMetaMap(version);
        String r = oldMeta.get("root." + template.getId());
        long rootPos = r == null ? 0 : Long.parseLong(r);
        MVMap<?, ?> m = template.openReadOnly();
        m.setRootPos(rootPos);
        return (T) m;
    }
    /**
     * Open a map with the previous key and value type (if the map already
     * exists), or Object if not.
     *
     * @param <K> the key type
     * @param <V> the value type
     * @param name the name of the map
     * @return the map
     */
    @SuppressWarnings("unchecked")
    public <K, V> MVMap<K, V> openMap(String name) {
        return (MVMap<K, V>) openMap(name, Object.class, Object.class);
    }
    /**
     * Open a map.
     *
     * @param <K> the key type
     * @param <V> the value type
     * @param name the name of the map
     * @param keyClass the key class
     * @param valueClass the value class
     * @return the map
     */
    public <K, V> MVMap<K, V> openMap(String name, Class<K> keyClass, Class<V> valueClass) {
        DataType keyType = getDataType(keyClass);
        DataType valueType = getDataType(valueClass);
        MVMap<K, V> m = new MVMap<K, V>(keyType, valueType);
        return openMap(name, m);
    }
    /**
     * Open a map using the given template. The returned map is of the same type
     * as the template, and contains the same key and value types. If a map with
     * this name is already open, this map is returned. If it is not open,
     * the template object is opened with the applicable configuration.
     *
     * @param <T> the map type
     * @param name the name of the map
     * @param template the template map
     * @return the opened map
     */
    @SuppressWarnings("unchecked")
    public <T extends MVMap<K, V>, K, V> T openMap(String name, T template) {
        MVMap<K, V> m = (MVMap<K, V>) maps.get(name);
        if (m != null) {
            return (T) m;
        }
        m = template;
        String config = meta.get("map." + name);
        long root;
        HashMap<String, String> c;
        if (config == null) {
            c = New.hashMap();
            c.put("id", Integer.toString(++lastMapId));
            c.put("name", name);
            c.put("createVersion", Long.toString(currentVersion));
            m.open(this, c);
            meta.put("map." + name, m.asString());
            root = 0;
        } else {
            c = DataUtils.parseMap(config);
            String r = meta.get("root." + c.get("id"));
            root = r == null ? 0 : Long.parseLong(r);
        }
        m.open(this, c);
        m.setRootPos(root);
        maps.put(name, m);
        return (T) m;
    }
    /**
     * Get the metadata map. This data is for informational purposes only. The
     * data is subject to change in future versions. The data should not be
     * modified (doing so may corrupt the store).
     * <p>
     * It contains the following entries:
     *
     * <pre>
     * map.{name} = {map metadata}
     * root.{mapId} = {root position}
     * chunk.{chunkId} = {chunk metadata}
     * </pre>
     *
     * @return the metadata map
     */
    public MVMap<String, String> getMetaMap() {
        return meta;
    }

    private MVMap<String, String> getMetaMap(long version) {
        Chunk c = getChunkForVersion(version);
        if (c == null) {
            throw new IllegalArgumentException("Unknown version: " + version);
        }
        c = readChunkHeader(c.start);
        MVMap<String, String> oldMeta = meta.openReadOnly();
        oldMeta.setRootPos(c.metaRootPos);
        return oldMeta;
    }
    private Chunk getChunkForVersion(long version) {
        for (int chunkId = lastChunkId;; chunkId--) {
            Chunk x = chunks.get(chunkId);
            if (x == null || x.version < version) {
                return null;
            } else if (x.version == version) {
                return x;
            }
        }
    }
    /**
     * Remove a map.
     *
     * @param name the map name
     */
    void removeMap(String name) {
        MVMap<?, ?> map = maps.remove(name);
        mapsChanged.remove(map.getId());
    }

    private DataType getDataType(Class<?> clazz) {
        if (clazz == String.class) {
            return StringType.INSTANCE;
        }
        if (dataTypeFactory == null) {
            throw new RuntimeException("No data type factory set " +
                    "and don't know how to serialize " + clazz);
        }
        String s = dataTypeFactory.getDataType(clazz);
        return dataTypeFactory.buildDataType(s);
    }
    /**
     * Mark a map as changed (containing unsaved changes).
     *
     * @param map the map
     */
    void markChanged(MVMap<?, ?> map) {
        mapsChanged.put(map.getId(), map);
    }
    /**
     * Open the store.
     */
    void open() {
        meta = new MVMap<String, String>(StringType.INSTANCE, StringType.INSTANCE);
        HashMap<String, String> c = New.hashMap();
        c.put("id", "0");
        c.put("name", "meta");
        c.put("createVersion", Long.toString(currentVersion));
        meta.open(this, c);
        if (fileName == null) {
            return;
        }
        FileUtils.createDirectories(FileUtils.getParent(fileName));
        try {
            log("file open");
            file = FilePathCache.wrap(FilePath.get(fileName).open("rw"));
            if (readOnly) {
                fileLock = file.tryLock(0, Long.MAX_VALUE, true);
                if (fileLock == null) {
                    throw new RuntimeException("The file is locked: " + fileName);
                }
            } else {
                fileLock = file.tryLock();
                if (fileLock == null) {
                    throw new RuntimeException("The file is locked: " + fileName);
                }
            }
            fileSize = file.size();
            if (fileSize == 0) {
                fileHeader.put("H", "3");
                fileHeader.put("blockSize", "" + BLOCK_SIZE);
                fileHeader.put("format", "1");
                fileHeader.put("formatRead", "1");
                writeFileHeader();
            } else {
                readFileHeader();
                if (rootChunkStart > 0) {
                    readMeta();
                }
            }
        } catch (Exception e) {
            close();
            throw convert(e);
        }
    }
    private void readMeta() {
        Chunk header = readChunkHeader(rootChunkStart);
        lastChunkId = header.id;
        chunks.put(header.id, header);
        meta.setRootPos(header.metaRootPos);
        Iterator<String> it = meta.keyIterator("chunk.");
        while (it.hasNext()) {
            String s = it.next();
            if (!s.startsWith("chunk.")) {
                break;
            }
            Chunk c = Chunk.fromString(meta.get(s));
            if (c.id == header.id) {
                c.start = header.start;
                c.length = header.length;
                c.metaRootPos = header.metaRootPos;
                c.maxLengthLive = header.maxLengthLive;
                c.pageCount = header.pageCount;
                c.maxLength = header.maxLength;
            }
            lastChunkId = Math.max(c.id, lastChunkId);
            chunks.put(c.id, c);
        }
    }
    private void readFileHeader() {
        try {
            byte[] headers = new byte[2 * BLOCK_SIZE];
            fileReadCount++;
            DataUtils.readFully(file, 0, ByteBuffer.wrap(headers));
            for (int i = 0; i <= BLOCK_SIZE; i += BLOCK_SIZE) {
                String s = new String(headers, i, BLOCK_SIZE, "UTF-8").trim();
                fileHeader = DataUtils.parseMap(s);
                rootChunkStart = Long.parseLong(fileHeader.get("rootChunk"));
                currentVersion = Long.parseLong(fileHeader.get("version"));
                lastMapId = Integer.parseInt(fileHeader.get("lastMapId"));
                int check = (int) Long.parseLong(fileHeader.get("fletcher"), 16);
                s = s.substring(0, s.lastIndexOf("fletcher") - 1) + " ";
                byte[] bytes = s.getBytes("UTF-8");
                int checksum = DataUtils.getFletcher32(bytes, bytes.length / 2 * 2);
                if (check == checksum) {
                    return;
                }
            }
            throw new RuntimeException("File header is corrupt");
        } catch (Exception e) {
            throw convert(e);
        }
    }
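readFileHeader validates each header copy with a Fletcher-32 checksum computed over an even number of bytes (`bytes.length / 2 * 2`). A minimal standalone sketch of such a checksum, under the assumption that 16-bit words are built from byte pairs (the real `DataUtils.getFletcher32` may pair and reduce differently):

```java
public class FletcherSketch {

    // Fletcher-32 over 16-bit words assembled from pairs of bytes.
    // The caller rounds 'length' down to an even number of bytes.
    static int fletcher32(byte[] bytes, int length) {
        int s1 = 0xffff, s2 = 0xffff;
        for (int i = 0; i < length;) {
            // combine two bytes into one 16-bit word
            int x = ((bytes[i++] & 0xff) << 8) | (bytes[i++] & 0xff);
            s1 = (s1 + x) % 0xffff;
            s2 = (s2 + s1) % 0xffff;
        }
        return (s2 << 16) | s1;
    }

    public static void main(String[] args) {
        byte[] data = "blockSize:4096".getBytes();
        // round down to an even number of bytes, as readFileHeader does
        System.out.println(Integer.toHexString(fletcher32(data, data.length / 2 * 2)));
    }
}
```

Because the checksum is stored inside the header itself, verification first strips the `fletcher` entry from the parsed text, exactly as the substring step above does.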
    private void writeFileHeader() {
        try {
            StringBuilder buff = new StringBuilder();
            fileHeader.put("lastMapId", "" + lastMapId);
            fileHeader.put("rootChunk", "" + rootChunkStart);
            fileHeader.put("version", "" + currentVersion);
            DataUtils.appendMap(buff, fileHeader);
            byte[] bytes = (buff.toString() + " ").getBytes("UTF-8");
            int checksum = DataUtils.getFletcher32(bytes, bytes.length / 2 * 2);
            DataUtils.appendMap(buff, "fletcher", Integer.toHexString(checksum));
            bytes = buff.toString().getBytes("UTF-8");
            if (bytes.length > BLOCK_SIZE) {
                throw new IllegalArgumentException("File header too large: " + buff);
            }
            ByteBuffer header = ByteBuffer.allocate(2 * BLOCK_SIZE);
            header.put(bytes);
            header.position(BLOCK_SIZE);
            header.put(bytes);
            header.rewind();
            fileWriteCount++;
            DataUtils.writeFully(file, 0, header);
            fileSize = Math.max(fileSize, 2 * BLOCK_SIZE);
        } catch (Exception e) {
            throw convert(e);
        }
    }
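writeFileHeader stores the same header bytes at the start of each of the two header blocks, so a crash while writing one copy still leaves a valid copy in the other block. A minimal standalone sketch of that buffer layout (hypothetical helper name):

```java
import java.nio.ByteBuffer;

public class DoubleHeaderSketch {

    static final int BLOCK_SIZE = 4 * 1024;

    // Place the same header bytes at the start of each of the two
    // header blocks, as writeFileHeader does.
    static ByteBuffer buildHeader(byte[] bytes) {
        ByteBuffer header = ByteBuffer.allocate(2 * BLOCK_SIZE);
        header.put(bytes);
        header.position(BLOCK_SIZE);
        header.put(bytes);
        header.rewind();
        return header;
    }

    public static void main(String[] args) {
        byte[] h = "H:3,blockSize:4096".getBytes();
        ByteBuffer b = buildHeader(h);
        // both copies start with the same bytes
        System.out.println(b.get(0) == b.get(BLOCK_SIZE)); // prints true
    }
}
```

On read, each copy is checksummed independently, and the first valid copy wins.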
    private static RuntimeException convert(Exception e) {
        throw new RuntimeException("Exception: " + e, e);
    }
    /**
     * Close the file. Uncommitted changes are ignored, and all open maps are closed.
     */
    public void close() {
        if (file != null) {
            try {
                shrinkFileIfPossible(0);
                log("file close");
                if (fileLock != null) {
                    fileLock.release();
                    fileLock = null;
                }
                file.close();
                for (MVMap<?, ?> m : New.arrayList(maps.values())) {
                    m.close();
                }
                meta = null;
                compressor = null;
                chunks.clear();
                cache.clear();
                maps.clear();
                mapsChanged.clear();
            } catch (Exception e) {
                throw convert(e);
            } finally {
                file = null;
            }
        }
    }
    /**
     * Get the chunk for the given position.
     *
     * @param pos the position
     * @return the chunk
     */
    Chunk getChunk(long pos) {
        return chunks.get(DataUtils.getPageChunkId(pos));
    }

    /**
     * Increment the current version.
     *
     * @return the new version
     */
    public long incrementVersion() {
        return ++currentVersion;
    }
    /**
     * Commit all changes and persist them to disk. This method does nothing if
     * there are no unsaved changes, otherwise it stores the data and increments
     * the current version.
     *
     * @return the new version (incremented if there were changes)
     */
    public long store() {
        if (!hasUnsavedChanges()) {
            return currentVersion;
        }

        // the last chunk was not completely correct in the last store()
        // this needs to be updated now (it's better not to update right after
        // storing, because that would modify the meta map again)
        Chunk c = chunks.get(lastChunkId);
        if (c != null) {
            meta.put("chunk." + c.id, c.asString());
        }
        c = new Chunk(++lastChunkId);
        c.maxLength = Long.MAX_VALUE;
        c.maxLengthLive = Long.MAX_VALUE;
        c.start = Long.MAX_VALUE;
        c.length = Integer.MAX_VALUE;
        c.version = currentVersion + 1;
        chunks.put(c.id, c);
        meta.put("chunk." + c.id, c.asString());

        int maxLength = 1 + 4 + 4 + 8;
        for (MVMap<?, ?> m : mapsChanged.values()) {
            if (m == meta || !m.hasUnsavedChanges()) {
                continue;
            }
            Page p = m.getRoot();
            if (p.getTotalCount() == 0) {
                meta.put("root." + m.getId(), "0");
            } else {
                maxLength += p.getMaxLengthTempRecursive();
                meta.put("root." + m.getId(), String.valueOf(Long.MAX_VALUE));
            }
        }
        applyFreedChunks();
        ArrayList<Integer> removedChunks = New.arrayList();
        do {
            for (Chunk x : chunks.values()) {
                if (x.maxLengthLive == 0 && (retainChunk == -1 || x.id < retainChunk)) {
                    meta.remove("chunk." + x.id);
                    removedChunks.add(x.id);
                } else {
                    meta.put("chunk." + x.id, x.asString());
                }
                applyFreedChunks();
            }
        } while (freedChunks.size() > 0);
        maxLength += meta.getRoot().getMaxLengthTempRecursive();

        ByteBuffer buff = ByteBuffer.allocate(maxLength);
        // need to patch the header later
        c.writeHeader(buff);
        c.maxLength = 0;
        c.maxLengthLive = 0;
        for (MVMap<?, ?> m : mapsChanged.values()) {
            if (m == meta || !m.hasUnsavedChanges()) {
                continue;
            }
            Page p = m.getRoot();
            if (p.getTotalCount() > 0) {
                long root = p.writeUnsavedRecursive(c, buff);
                meta.put("root." + m.getId(), "" + root);
            }
        }
        meta.put("chunk." + c.id, c.asString());
        if (ASSERT) {
            if (freedChunks.size() > 0) {
                throw new RuntimeException("Temporary freed chunks");
            }
        }

        // this will modify maxLengthLive, but
        // the correct value is written in the chunk header
        meta.getRoot().writeUnsavedRecursive(c, buff);

        buff.flip();
        int length = buff.limit();
        long filePos = allocateChunk(length);

        // need to keep old chunks
        // until they are no longer referenced
        // by an old version
        // so empty space is not reused too early
        for (int x : removedChunks) {
            chunks.remove(x);
        }

        buff.rewind();
        c.start = filePos;
        c.length = length;
        c.metaRootPos = meta.getRoot().getPos();
        c.writeHeader(buff);
        buff.rewind();
        try {
            fileWriteCount++;
            DataUtils.writeFully(file, filePos, buff);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        fileSize = Math.max(fileSize, filePos + buff.position());
        rootChunkStart = filePos;
        revertTemp();

        long version = incrementVersion();
        // write the new version (after the commit)
        writeFileHeader();
        shrinkFileIfPossible(1);
        unsavedPageCount = 0;
        return version;
    }
    private void applyFreedChunks() {
        for (HashMap<Integer, Chunk> freed : freedChunks.values()) {
            for (Chunk f : freed.values()) {
                Chunk c = chunks.get(f.id);
                c.maxLengthLive += f.maxLengthLive;
                if (c.maxLengthLive < 0) {
                    throw new RuntimeException("Corrupt max length");
                }
            }
        }
        freedChunks.clear();
    }
    /**
     * Shrink the file if possible, and if at least a given percentage can be
     * saved.
     *
     * @param minPercent the minimum percentage to save
     */
    private void shrinkFileIfPossible(int minPercent) {
        long used = getFileLengthUsed();
        try {
            if (used >= fileSize) {
                return;
            }
            if (minPercent > 0 && fileSize - used < BLOCK_SIZE) {
                return;
            }
            int savedPercent = (int) (100 - (used * 100 / fileSize));
            if (savedPercent < minPercent) {
                return;
            }
            file.truncate(used);
            fileSize = used;
        } catch (Exception e) {
            throw convert(e);
        }
    }
    private long getFileLengthUsed() {
        int min = 2;
        for (Chunk c : chunks.values()) {
            if (c.start == Long.MAX_VALUE) {
                continue;
            }
            int last = (int) ((c.start + c.length) / BLOCK_SIZE);
            min = Math.max(min, last + 1);
        }
        return min * BLOCK_SIZE;
    }
    private long allocateChunk(long length) {
        if (!reuseSpace) {
            return getFileLengthUsed();
        }
        BitSet set = new BitSet();
        set.set(0);
        set.set(1);
        for (Chunk c : chunks.values()) {
            if (c.start == Long.MAX_VALUE) {
                continue;
            }
            int first = (int) (c.start / BLOCK_SIZE);
            int last = (int) ((c.start + c.length) / BLOCK_SIZE);
            set.set(first, last + 1);
        }
        int required = (int) (length / BLOCK_SIZE) + 1;
        for (int i = 0; i < set.size(); i++) {
            if (!set.get(i)) {
                boolean ok = true;
                for (int j = 0; j < required; j++) {
                    if (set.get(i + j)) {
                        ok = false;
                        break;
                    }
                }
                if (ok) {
                    return i * BLOCK_SIZE;
                }
            }
        }
        return set.size() * BLOCK_SIZE;
    }
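allocateChunk is a first-fit allocator: it marks every block covered by an existing chunk in a BitSet (blocks 0 and 1 are always reserved for the two file header copies), then scans for the first run of enough free blocks. A minimal standalone sketch of the scan (hypothetical helper name):

```java
import java.util.BitSet;

public class FirstFitSketch {

    static final int BLOCK_SIZE = 4 * 1024;

    // Find the first run of 'required' free blocks; blocks 0 and 1
    // are reserved for the two file header copies.
    static long allocate(BitSet used, int required) {
        for (int i = 0; i < used.size(); i++) {
            if (!used.get(i)) {
                boolean ok = true;
                for (int j = 0; j < required; j++) {
                    if (used.get(i + j)) {
                        ok = false;
                        break;
                    }
                }
                if (ok) {
                    return (long) i * BLOCK_SIZE;
                }
            }
        }
        // no gap found: append at the end
        return (long) used.size() * BLOCK_SIZE;
    }

    public static void main(String[] args) {
        BitSet used = new BitSet();
        used.set(0, 2);   // header blocks
        used.set(3);      // an existing chunk at block 3
        System.out.println(allocate(used, 1) / BLOCK_SIZE); // prints 2
    }
}
```

When space reuse is disabled (the online-backup case), this scan is skipped and chunks are simply appended at the current end of the file.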
    /**
     * Check whether there are any unsaved changes.
     *
     * @return if there are any changes
     */
    public boolean hasUnsavedChanges() {
        if (mapsChanged.size() == 0) {
            return false;
        }
        for (MVMap<?, ?> m : mapsChanged.values()) {
            if (m.hasUnsavedChanges()) {
                return true;
            }
        }
        return false;
    }
    private Chunk readChunkHeader(long start) {
        try {
            fileReadCount++;
            ByteBuffer buff = ByteBuffer.allocate(40);
            DataUtils.readFully(file, start, buff);
            buff.rewind();
            return Chunk.fromHeader(buff, start);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
    /**
     * Try to reduce the file size. Chunks with a low number of live items will
     * be re-written. If the current fill rate is higher than the target fill
     * rate, no optimization is done.
     *
     * @param fillRate the minimum percentage of live entries
     * @return if anything was written
     */
    public boolean compact(int fillRate) {
        if (chunks.size() == 0) {
            // avoid division by 0
            return false;
        }
        long maxLengthSum = 0, maxLengthLiveSum = 0;
        for (Chunk c : chunks.values()) {
            maxLengthSum += c.maxLength;
            maxLengthLiveSum += c.maxLengthLive;
        }
        if (maxLengthSum <= 0) {
            // avoid division by 0
            maxLengthSum = 1;
        }
        int percentTotal = (int) (100 * maxLengthLiveSum / maxLengthSum);
        if (percentTotal > fillRate) {
            return false;
        }

        // calculate the average max length
        int averageMaxLength = (int) (maxLengthSum / chunks.size());

        // the 'old' list contains the chunks we want to free up
        ArrayList<Chunk> old = New.arrayList();
        for (Chunk c : chunks.values()) {
            if (retainChunk == -1 || c.id < retainChunk) {
                int age = lastChunkId - c.id + 1;
                c.collectPriority = c.getFillRate() / age;
                old.add(c);
            }
        }
        if (old.size() == 0) {
            return false;
        }

        // sort the list, so the first entry should be collected first
        Collections.sort(old, new Comparator<Chunk>() {
            public int compare(Chunk o1, Chunk o2) {
                return new Integer(o1.collectPriority).compareTo(o2.collectPriority);
            }
        });

        // find out up to where in the old list we need to move
        // try to move one (average sized) chunk
        long moved = 0;
        Chunk move = null;
        for (Chunk c : old) {
            if (move != null && moved + c.maxLengthLive > averageMaxLength) {
                break;
            }
            log(" chunk " + c.id + " " + c.getFillRate() + "% full; prio=" + c.collectPriority);
            moved += c.maxLengthLive;
            move = c;
        }

        // remove the chunks we want to keep from this list
        boolean remove = false;
        for (Iterator<Chunk> it = old.iterator(); it.hasNext();) {
            Chunk c = it.next();
            if (move == c) {
                remove = true;
            } else if (remove) {
                it.remove();
            }
        }

        // iterate over all the pages in the old chunks
        for (Chunk c : old) {
            copyLive(c, old);
        }

        store();
        return true;
    }
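The compaction order above is driven by `collectPriority = fillRate / age`: mostly-dead, older chunks sort first. A minimal standalone sketch of that ordering with a stand-in record type (hypothetical names; the real code sorts `Chunk` objects):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;

public class CompactOrderSketch {

    // A stand-in for Chunk with just the fields compact() looks at.
    static class C {
        final int id, fillRate;
        int collectPriority;
        C(int id, int fillRate) { this.id = id; this.fillRate = fillRate; }
    }

    // Priority = fill rate divided by age; lower values are collected
    // first, mirroring the comparator in compact().
    static ArrayList<C> order(ArrayList<C> chunks, int lastChunkId) {
        for (C c : chunks) {
            int age = lastChunkId - c.id + 1;
            c.collectPriority = c.fillRate / age;
        }
        Collections.sort(chunks, new Comparator<C>() {
            public int compare(C o1, C o2) {
                return Integer.compare(o1.collectPriority, o2.collectPriority);
            }
        });
        return chunks;
    }

    public static void main(String[] args) {
        ArrayList<C> list = new ArrayList<>();
        list.add(new C(5, 80));  // young and fairly full
        list.add(new C(1, 20));  // old and mostly garbage
        System.out.println(order(list, 5).get(0).id); // prints 1
    }
}
```

Dividing by age biases the collector toward chunks that have had time to accumulate garbage, rather than churning recently written data.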
    private void copyLive(Chunk chunk, ArrayList<Chunk> old) {
        ByteBuffer buff = ByteBuffer.allocate(chunk.length);
        try {
            DataUtils.readFully(file, chunk.start, buff);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        Chunk.fromHeader(buff, chunk.start);
        int chunkLength = chunk.length;
        while (buff.position() < chunkLength) {
            int start = buff.position();
            int pageLength = buff.getInt();
            buff.getShort();
            int mapId = DataUtils.readVarInt(buff);
            @SuppressWarnings("unchecked")
            MVMap<Object, Object> map = (MVMap<Object, Object>) getMap(mapId);
            if (map == null) {
                buff.position(start + pageLength);
                continue;
            }
            buff.position(start);
            Page page = new Page(map, 0);
            page.read(buff, chunk.id, buff.position(), chunk.length);
            for (int i = 0; i < page.getKeyCount(); i++) {
                Object k = page.getKey(i);
                Page p = map.getPage(k);
                if (p == null) {
                    // was removed later - ignore
                    // or the chunk no longer exists
                } else if (p.getPos() < 0) {
                    // temporarily changed - ok
                    // TODO move old data if there is an uncommitted change?
                } else {
                    Chunk c = getChunk(p.getPos());
                    if (old.contains(c)) {
                        log(" move key:" + k + " chunk:" + c.id);
                        Object value = map.remove(k);
                        map.put(k, value);
                    }
                }
            }
        }
    }
    private MVMap<?, ?> getMap(int mapId) {
        if (mapId == 0) {
            return meta;
        }
        for (MVMap<?, ?> m : maps.values()) {
            if (m.getId() == mapId) {
                return m;
            }
        }
        return null;
    }
    /**
     * Read a page.
     *
     * @param map the map
     * @param pos the page position
     * @return the page
     */
    Page readPage(MVMap<?, ?> map, long pos) {
        Page p = cache.get(pos);
        if (p == null) {
            Chunk c = getChunk(pos);
            if (c == null) {
                throw new RuntimeException("Chunk " + DataUtils.getPageChunkId(pos) + " not found");
            }
            long filePos = c.start;
            filePos += DataUtils.getPageOffset(pos);
            fileReadCount++;
            p = Page.read(file, map, pos, filePos, fileSize);
            cache.put(pos, p, p.getMemory());
        }
        return p;
    }
    /**
     * Remove a page.
     *
     * @param pos the position of the page
     */
    void removePage(long pos) {
        // we need to keep temporary pages,
        // to support reading old versions and rollback
        if (pos == 0) {
            unsavedPageCount--;
            return;
        }
        // this could result in a cache miss
        // if the operation is rolled back,
        // but we don't optimize for rollback
        cache.remove(pos);
        Chunk c = getChunk(pos);
        HashMap<Integer, Chunk> freed = freedChunks.get(currentVersion);
        if (freed == null) {
            freed = New.hashMap();
            freedChunks.put(currentVersion, freed);
        }
        Chunk f = freed.get(c.id);
        if (f == null) {
            f = new Chunk(c.id);
            freed.put(c.id, f);
        }
        f.maxLengthLive -= DataUtils.getPageMaxLength(pos);
    }
    /**
     * Log the string, if logging is enabled.
     *
     * @param string the string to log
     */
    void log(String string) {
        // TODO logging
        // System.out.println(string);
    }

    /**
     * Set the amount of memory a page should contain at most, in bytes. Larger
     * pages are split. The default is 6 KB. This is not a limit on the page
     * size (pages with one entry can get larger), it is just the point where
     * pages are split.
     *
     * @param pageSize the page size
     */
    public void setPageSize(int pageSize) {
        this.pageSize = pageSize;
    }

    /**
     * Get the page size, in bytes.
     *
     * @return the page size
     */
    public int getPageSize() {
        return pageSize;
    }
    Compressor getCompressor() {
        return compressor;
    }

    public boolean getReuseSpace() {
        return reuseSpace;
    }

    /**
     * Whether empty space in the file should be re-used. If enabled, old data
     * is overwritten (default). If disabled, writes are appended at the end of
     * the file.
     * <p>
     * This setting is especially useful for online backup. To create an online
     * backup, disable this setting, then copy the file (starting at the
     * beginning of the file). In this case, concurrent backup and write
     * operations are possible (obviously the backup process needs to be faster
     * than the write operations).
     *
     * @param reuseSpace the new value
     */
    public void setReuseSpace(boolean reuseSpace) {
        this.reuseSpace = reuseSpace;
    }
    public int getRetainChunk() {
        return retainChunk;
    }

    /**
     * Which chunk to retain. If not set, old chunks are re-used as soon as
     * possible, which may make it impossible to roll back beyond a save
     * operation, or to read an older version.
     * <p>
     * This setting is not persisted.
     *
     * @param retainChunk the earliest chunk to retain (0 to retain all chunks,
     *            -1 to re-use space as early as possible)
     */
    public void setRetainChunk(int retainChunk) {
        this.retainChunk = retainChunk;
    }
    /**
     * Which version to retain. If not set, all versions up to the last stored
     * version are retained.
     *
     * @param retainVersion the oldest version to retain
     */
    public void setRetainVersion(long retainVersion) {
        this.retainVersion = retainVersion;
    }

    public long getRetainVersion() {
        return retainVersion;
    }
    /**
     * Check whether all data can be read from this version. This requires that
     * all chunks referenced by this version are still available (not
     * overwritten).
     *
     * @param version the version
     * @return true if all data can be read
     */
    private boolean isKnownVersion(long version) {
        if (version > currentVersion || version < 0) {
            return false;
        }
        if (version == currentVersion || chunks.size() == 0) {
            // no stored data
            return true;
        }
        // need to check if a chunk for this version exists
        Chunk c = getChunkForVersion(version);
        if (c == null) {
            return false;
        }
        // also, all chunks referenced by this version
        // need to be available in the file
        MVMap<String, String> oldMeta = getMetaMap(version);
        if (oldMeta == null) {
            return false;
        }
        for (Iterator<String> it = oldMeta.keyIterator("chunk."); it.hasNext();) {
            String chunkKey = it.next();
            if (!chunkKey.startsWith("chunk.")) {
                break;
            }
            if (!meta.containsKey(chunkKey)) {
                return false;
            }
        }
        return true;
    }
    /**
     * Get the estimated number of unsaved pages. The returned value is not
     * accurate, especially after rollbacks, but can be used to estimate the
     * memory usage for unsaved data.
     *
     * @return the number of unsaved pages
     */
    public int getUnsavedPageCount() {
        return unsavedPageCount;
    }

    /**
     * Increment the number of unsaved pages.
     */
    void registerUnsavedPage() {
        unsavedPageCount++;
    }

    /**
     * Get the store version. The store version is usually used to upgrade the
     * structure of the store after upgrading the application. Initially the
     * store version is 0, until it is changed.
     *
     * @return the store version
     */
    public int getStoreVersion() {
        String x = meta.get("setting.storeVersion");
        return x == null ? 0 : Integer.parseInt(x);
    }
    /**
     * Update the store version.
     *
     * @param version the new store version
     */
    public void setStoreVersion(int version) {
        meta.put("setting.storeVersion", Integer.toString(version));
    }
    /**
     * Revert to the beginning of the given version. All later changes (stored
     * or not) are forgotten. All maps that were created later are closed. A
     * rollback to a version before the last stored version is immediately
     * persisted.
     *
     * @param version the version to revert to
     */
    public void rollbackTo(long version) {
        if (!isKnownVersion(version)) {
            throw new IllegalArgumentException("Unknown version: " + version);
        }
        // TODO could remove newer temporary pages on rollback
        for (MVMap<?, ?> m : mapsChanged.values()) {
            m.rollbackTo(version);
        }
        for (long v = currentVersion; v >= version; v--) {
            if (freedChunks.size() == 0) {
                break;
            }
            freedChunks.remove(v);
        }
        meta.rollbackTo(version);
        boolean loadFromFile = false;
        Chunk last = chunks.get(lastChunkId);
        if (last != null) {
            if (last.version >= version) {
                revertTemp();
            }
            if (last.version > version) {
                loadFromFile = true;
                while (last != null && last.version > version) {
                    chunks.remove(lastChunkId);
                    lastChunkId--;
                    last = chunks.get(lastChunkId);
                }
                rootChunkStart = last.start;
                writeFileHeader();
                readFileHeader();
                readMeta();
            }
        }
        for (MVMap<?, ?> m : maps.values()) {
            if (m.getCreateVersion() >= version) {
                m.close();
                removeMap(m.getName());
            } else {
                if (loadFromFile) {
                    String r = meta.get("root." + m.getId());
                    long root = r == null ? 0 : Long.parseLong(r);
                    m.setRootPos(root);
                }
            }
        }
        this.currentVersion = version;
    }
private
void
revertTemp
()
{
freedChunks
.
clear
();
for
(
MVMap
<?,
?>
m
:
mapsChanged
.
values
())
{
m
.
removeAllOldVersions
();
}
mapsChanged
.
clear
();
}
/**
* Get the current version of the data. When a new store is created, the
* version is 0.
*
* @return the version
*/
public
long
getCurrentVersion
()
{
return
currentVersion
;
}
/**
* Get the number of file write operations since this store was opened.
*
* @return the number of write operations
*/
public
int
getFileWriteCount
()
{
return
fileWriteCount
;
}
/**
* Get the number of file read operations since this store was opened.
*
* @return the number of read operations
*/
public
int
getFileReadCount
()
{
return
fileReadCount
;
}
/**
* Get the file name, or null for in-memory stores.
*
* @return the file name
*/
public
String
getFileName
()
{
return
fileName
;
}
/**
* Get the file header. This data is for informational purposes only. The
* data is subject to change in future versions. The data should not be
* modified (doing so may corrupt the store).
*
* @return the file header
*/
public
Map
<
String
,
String
>
getFileHeader
()
{
return
fileHeader
;
}
/**
* Get the file instance in use, if a file is used. The application may read
* from the file (for example for online backup), but not write to it or
* truncate it.
*
* @return the file, or null
*/
public
FileChannel
getFile
()
{
return
file
;
}
public
String
toString
()
{
return
DataUtils
.
appendMap
(
new
StringBuilder
(),
config
).
toString
();
}
}
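The prefix scan in isKnownVersion (keyIterator("chunk.") followed by a startsWith check that breaks out of the loop) relies on the meta map being sorted, so all "chunk." entries are contiguous. A minimal standalone sketch of the same pattern over a plain java.util.TreeMap (the map contents here are made up for illustration, not real store metadata):

```java
import java.util.TreeMap;

public class PrefixScanDemo {

    // Collect all keys with the given prefix from a sorted map, stopping at
    // the first key that no longer matches (mirrors keyIterator + startsWith).
    static int countPrefix(TreeMap<String, String> meta, String prefix) {
        int n = 0;
        for (String key : meta.tailMap(prefix).keySet()) {
            if (!key.startsWith(prefix)) {
                break; // sorted order: no later key can match either
            }
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        TreeMap<String, String> meta = new TreeMap<>();
        meta.put("chunk.1", "...");
        meta.put("chunk.2", "...");
        meta.put("root.0", "...");
        meta.put("setting.storeVersion", "0");
        System.out.println(countPrefix(meta, "chunk.")); // prints 2
    }
}
```

The early break is what makes the check cheap: only the chunk entries of the old version are visited, not the whole meta map.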
h2/src/tools/org/h2/dev/store/rtree/MVRTreeMap.java (deleted, 100644 → 0, view file @ bd50328e)
/*
 * Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
 * Version 1.0, and under the Eclipse Public License, Version 1.0
 * (http://h2database.com/html/license.html).
 * Initial Developer: H2 Group
 */
package org.h2.dev.store.rtree;

import java.util.ArrayList;
import java.util.Iterator;

import org.h2.dev.store.btree.Cursor;
import org.h2.dev.store.btree.CursorPos;
import org.h2.dev.store.btree.MVMap;
import org.h2.dev.store.btree.Page;
import org.h2.dev.store.type.DataType;
import org.h2.util.New;

/**
 * An r-tree implementation. It uses the quadratic split algorithm.
 *
 * @param <V> the value class
 */
public class MVRTreeMap<V> extends MVMap<SpatialKey, V> {

    /**
     * The spatial key type.
     */
    final SpatialType keyType;

    private boolean quadraticSplit;

    public MVRTreeMap(int dimensions, DataType valueType) {
        super(new SpatialType(dimensions), valueType);
        this.keyType = (SpatialType) getKeyType();
    }

    /**
     * Create a new map with the given dimensions and value type.
     *
     * @param <V> the value type
     * @param dimensions the number of dimensions
     * @param valueType the value type
     * @return the map
     */
    public static <V> MVRTreeMap<V> create(int dimensions, DataType valueType) {
        return new MVRTreeMap<V>(dimensions, valueType);
    }

    @SuppressWarnings("unchecked")
    public V get(Object key) {
        checkOpen();
        return (V) get(root, key);
    }

    /**
     * Iterate over all keys that have an intersection with the given
     * rectangle.
     *
     * @param x the rectangle
     * @return the iterator
     */
    public Iterator<SpatialKey> findIntersectingKeys(SpatialKey x) {
        checkOpen();
        return new RTreeCursor(this, root, x) {
            protected boolean check(boolean leaf, SpatialKey key, SpatialKey test) {
                return keyType.isOverlap(key, test);
            }
        };
    }

    /**
     * Iterate over all keys that are fully contained within the given
     * rectangle.
     *
     * @param x the rectangle
     * @return the iterator
     */
    public Iterator<SpatialKey> findContainedKeys(SpatialKey x) {
        checkOpen();
        return new RTreeCursor(this, root, x) {
            protected boolean check(boolean leaf, SpatialKey key, SpatialKey test) {
                if (leaf) {
                    return keyType.isInside(key, test);
                }
                return keyType.isOverlap(key, test);
            }
        };
    }

    private boolean contains(Page p, int index, Object key) {
        return keyType.contains(p.getKey(index), key);
    }

    /**
     * Get the object for the given key. An exact match is required.
     *
     * @param p the page
     * @param key the key
     * @return the value, or null if not found
     */
    protected Object get(Page p, Object key) {
        if (!p.isLeaf()) {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (contains(p, i, key)) {
                    Object o = get(p.getChildPage(i), key);
                    if (o != null) {
                        return o;
                    }
                }
            }
        } else {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (keyType.equals(p.getKey(i), key)) {
                    return p.getValue(i);
                }
            }
        }
        return null;
    }

    protected Page getPage(SpatialKey key) {
        return getPage(root, key);
    }

    private Page getPage(Page p, Object key) {
        if (!p.isLeaf()) {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (contains(p, i, key)) {
                    Page x = getPage(p.getChildPage(i), key);
                    if (x != null) {
                        return x;
                    }
                }
            }
        } else {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (keyType.equals(p.getKey(i), key)) {
                    return p;
                }
            }
        }
        return null;
    }

    protected Object remove(Page p, long writeVersion, Object key) {
        Object result = null;
        if (p.isLeaf()) {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (keyType.equals(p.getKey(i), key)) {
                    result = p.getValue(i);
                    p.remove(i);
                    if (p.getKeyCount() == 0) {
                        removePage(p);
                    }
                    break;
                }
            }
            return result;
        }
        for (int i = 0; i < p.getKeyCount(); i++) {
            if (contains(p, i, key)) {
                Page cOld = p.getChildPage(i);
                Page c = cOld.copyOnWrite(writeVersion);
                long oldSize = c.getTotalCount();
                result = remove(c, writeVersion, key);
                if (oldSize == c.getTotalCount()) {
                    continue;
                }
                if (c.getTotalCount() == 0) {
                    // this child was deleted
                    p.remove(i);
                    if (p.getKeyCount() == 0) {
                        removePage(p);
                    }
                    break;
                }
                Object oldBounds = p.getKey(i);
                if (!keyType.isInside(key, oldBounds)) {
                    p.setKey(i, getBounds(c));
                }
                p.setChild(i, c);
                break;
            }
        }
        return result;
    }

    private Object getBounds(Page x) {
        Object bounds = keyType.createBoundingBox(x.getKey(0));
        for (int i = 1; i < x.getKeyCount(); i++) {
            keyType.increaseBounds(bounds, x.getKey(i));
        }
        return bounds;
    }

    @SuppressWarnings("unchecked")
    public V put(SpatialKey key, V value) {
        return (V) putOrAdd(key, value, false);
    }

    /**
     * Add a given key-value pair. The key should not exist (if it exists, the
     * result is undefined).
     *
     * @param key the key
     * @param value the value
     */
    public void add(SpatialKey key, V value) {
        putOrAdd(key, value, true);
    }

    private Object putOrAdd(SpatialKey key, V value, boolean alwaysAdd) {
        checkWrite();
        long writeVersion = store.getCurrentVersion();
        Page p = root.copyOnWrite(writeVersion);
        Object result;
        if (alwaysAdd || get(key) == null) {
            if (p.getMemory() > store.getPageSize() && p.getKeyCount() > 1) {
                // only possible if this is the root, else we would have split
                // earlier (this requires maxPageSize is fixed)
                long totalCount = p.getTotalCount();
                Page split = split(p, writeVersion);
                Object k1 = getBounds(p);
                Object k2 = getBounds(split);
                Object[] keys = { k1, k2 };
                long[] children = { p.getPos(), split.getPos(), 0 };
                Page[] childrenPages = { p, split, null };
                long[] counts = { p.getTotalCount(), split.getTotalCount(), 0 };
                p = Page.create(this, writeVersion, 2, keys, null, children,
                        childrenPages, counts, totalCount, 0, 0);
                // now p is a node; continue below
            }
            add(p, writeVersion, key, value);
            result = null;
        } else {
            result = set(p, writeVersion, key, value);
        }
        newRoot(p);
        return result;
    }

    /**
     * Update the value for the given key. The key must exist.
     *
     * @param p the page
     * @param writeVersion the write version
     * @param key the key
     * @param value the value
     * @return the old value
     */
    private Object set(Page p, long writeVersion, Object key, Object value) {
        if (!p.isLeaf()) {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (contains(p, i, key)) {
                    Page c = p.getChildPage(i).copyOnWrite(writeVersion);
                    Object result = set(c, writeVersion, key, value);
                    if (result != null) {
                        p.setChild(i, c);
                        return result;
                    }
                }
            }
        } else {
            for (int i = 0; i < p.getKeyCount(); i++) {
                if (keyType.equals(p.getKey(i), key)) {
                    return p.setValue(i, value);
                }
            }
        }
        return null;
    }

    private void add(Page p, long writeVersion, Object key, Object value) {
        if (p.isLeaf()) {
            p.insertLeaf(p.getKeyCount(), key, value);
            return;
        }
        // p is a node
        int index = -1;
        for (int i = 0; i < p.getKeyCount(); i++) {
            if (contains(p, i, key)) {
                index = i;
                break;
            }
        }
        if (index < 0) {
            // a new entry, we don't know where to add yet
            float min = Float.MAX_VALUE;
            for (int i = 0; i < p.getKeyCount(); i++) {
                Object k = p.getKey(i);
                float areaIncrease = keyType.getAreaIncrease(k, key);
                if (areaIncrease < min) {
                    index = i;
                    min = areaIncrease;
                }
            }
        }
        Page c = p.getChildPage(index).copyOnWrite(writeVersion);
        if (c.getMemory() > store.getPageSize() && c.getKeyCount() > 1) {
            // split on the way down
            Page split = split(c, writeVersion);
            p = p.copyOnWrite(writeVersion);
            p.setKey(index, getBounds(c));
            p.setChild(index, c);
            p.insertNode(index, getBounds(split), split);
            // now we are not sure where to add
            add(p, writeVersion, key, value);
            return;
        }
        add(c, writeVersion, key, value);
        Object bounds = p.getKey(index);
        keyType.increaseBounds(bounds, key);
        p.setKey(index, bounds);
        p.setChild(index, c);
    }

    private Page split(Page p, long writeVersion) {
        return quadraticSplit ?
                splitQuadratic(p, writeVersion) :
                splitLinear(p, writeVersion);
    }

    private Page splitLinear(Page p, long writeVersion) {
        ArrayList<Object> keys = New.arrayList();
        for (int i = 0; i < p.getKeyCount(); i++) {
            keys.add(p.getKey(i));
        }
        int[] extremes = keyType.getExtremes(keys);
        if (extremes == null) {
            return splitQuadratic(p, writeVersion);
        }
        Page splitA = newPage(p.isLeaf(), writeVersion);
        Page splitB = newPage(p.isLeaf(), writeVersion);
        move(p, splitA, extremes[0]);
        if (extremes[1] > extremes[0]) {
            extremes[1]--;
        }
        move(p, splitB, extremes[1]);
        Object boundsA = keyType.createBoundingBox(splitA.getKey(0));
        Object boundsB = keyType.createBoundingBox(splitB.getKey(0));
        while (p.getKeyCount() > 0) {
            Object o = p.getKey(0);
            float a = keyType.getAreaIncrease(boundsA, o);
            float b = keyType.getAreaIncrease(boundsB, o);
            if (a < b) {
                keyType.increaseBounds(boundsA, o);
                move(p, splitA, 0);
            } else {
                keyType.increaseBounds(boundsB, o);
                move(p, splitB, 0);
            }
        }
        while (splitB.getKeyCount() > 0) {
            move(splitB, p, 0);
        }
        return splitA;
    }

    private Page splitQuadratic(Page p, long writeVersion) {
        Page splitA = newPage(p.isLeaf(), writeVersion);
        Page splitB = newPage(p.isLeaf(), writeVersion);
        float largest = Float.MIN_VALUE;
        int ia = 0, ib = 0;
        for (int a = 0; a < p.getKeyCount(); a++) {
            Object objA = p.getKey(a);
            for (int b = 0; b < p.getKeyCount(); b++) {
                if (a == b) {
                    continue;
                }
                Object objB = p.getKey(b);
                float area = keyType.getCombinedArea(objA, objB);
                if (area > largest) {
                    largest = area;
                    ia = a;
                    ib = b;
                }
            }
        }
        move(p, splitA, ia);
        if (ia < ib) {
            ib--;
        }
        move(p, splitB, ib);
        Object boundsA = keyType.createBoundingBox(splitA.getKey(0));
        Object boundsB = keyType.createBoundingBox(splitB.getKey(0));
        while (p.getKeyCount() > 0) {
            float diff = 0, bestA = 0, bestB = 0;
            int best = 0;
            for (int i = 0; i < p.getKeyCount(); i++) {
                Object o = p.getKey(i);
                float incA = keyType.getAreaIncrease(boundsA, o);
                float incB = keyType.getAreaIncrease(boundsB, o);
                float d = Math.abs(incA - incB);
                if (d > diff) {
                    diff = d;
                    bestA = incA;
                    bestB = incB;
                    best = i;
                }
            }
            if (bestA < bestB) {
                keyType.increaseBounds(boundsA, p.getKey(best));
                move(p, splitA, best);
            } else {
                keyType.increaseBounds(boundsB, p.getKey(best));
                move(p, splitB, best);
            }
        }
        while (splitB.getKeyCount() > 0) {
            move(splitB, p, 0);
        }
        return splitA;
    }

    private Page newPage(boolean leaf, long writeVersion) {
        Object[] values = leaf ? new Object[4] : null;
        long[] c = leaf ? null : new long[1];
        Page[] cp = leaf ? null : new Page[1];
        return Page.create(this, writeVersion, 0,
                new Object[4], values, c, cp, c, 0, 0, 0);
    }

    private static void move(Page source, Page target, int sourceIndex) {
        Object k = source.getKey(sourceIndex);
        if (source.isLeaf()) {
            Object v = source.getValue(sourceIndex);
            target.insertLeaf(0, k, v);
        } else {
            Page c = source.getChildPage(sourceIndex);
            target.insertNode(0, k, c);
        }
        source.remove(sourceIndex);
    }

    /**
     * Add all node keys (including internal bounds) to the given list.
     * This is mainly used to visualize the internal splits.
     *
     * @param list the list
     * @param p the root page
     */
    public void addNodeKeys(ArrayList<SpatialKey> list, Page p) {
        if (p != null && !p.isLeaf()) {
            for (int i = 0; i < p.getKeyCount(); i++) {
                list.add((SpatialKey) p.getKey(i));
                addNodeKeys(list, p.getChildPage(i));
            }
        }
    }

    public boolean isQuadraticSplit() {
        return quadraticSplit;
    }

    public void setQuadraticSplit(boolean quadraticSplit) {
        this.quadraticSplit = quadraticSplit;
    }

    protected int getChildPageCount(Page p) {
        return p.getChildPageCount() - 1;
    }

    /**
     * A cursor to iterate over a subset of the keys.
     */
    static class RTreeCursor extends Cursor<SpatialKey> {

        protected RTreeCursor(MVRTreeMap<?> map, Page root, SpatialKey from) {
            super(map, root, from);
        }

        public void skip(long n) {
            if (!hasNext()) {
                return;
            }
            while (n-- > 0) {
                fetchNext();
            }
        }

        /**
         * Check a given key.
         *
         * @param leaf if the key is from a leaf page
         * @param key the stored key
         * @param test the user-supplied test key
         * @return true if there is a match
         */
        protected boolean check(boolean leaf, SpatialKey key, SpatialKey test) {
            return true;
        }

        protected void min(Page p, SpatialKey x) {
            while (true) {
                if (p.isLeaf()) {
                    pos = new CursorPos(p, 0, pos);
                    return;
                }
                boolean found = false;
                for (int i = 0; i < p.getKeyCount(); i++) {
                    if (check(false, (SpatialKey) p.getKey(i), x)) {
                        pos = new CursorPos(p, i + 1, pos);
                        p = p.getChildPage(i);
                        found = true;
                        break;
                    }
                }
                if (!found) {
                    break;
                }
            }
            fetchNext();
        }

        protected void fetchNext() {
            while (pos != null) {
                while (pos.index < pos.page.getKeyCount()) {
                    SpatialKey k = (SpatialKey) pos.page.getKey(pos.index++);
                    if (check(true, k, from)) {
                        current = k;
                        return;
                    }
                }
                pos = pos.parent;
                if (pos == null) {
                    break;
                }
                MVRTreeMap<?> m = (MVRTreeMap<?>) map;
                if (pos.index < m.getChildPageCount(pos.page)) {
                    min(pos.page.getChildPage(pos.index++), from);
                }
            }
            current = null;
        }

    }

    public String getType() {
        return "rtree";
    }

}
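The quadratic split in splitQuadratic first scans all pairs of keys on the overflowing page and picks as seeds the two whose combined bounding box has the largest area, on the theory that the most wasteful pair should end up on different pages. A minimal standalone sketch of that seed-picking loop for 2D boxes (the box layout and names here are illustrative, not the H2 API):

```java
public class QuadraticSeedDemo {

    // A 2D box stored as {minX, maxX, minY, maxY}, matching the interleaved
    // layout used by SpatialKey.
    static float combinedArea(float[] a, float[] b) {
        float w = Math.max(a[1], b[1]) - Math.min(a[0], b[0]);
        float h = Math.max(a[3], b[3]) - Math.min(a[2], b[2]);
        return w * h;
    }

    // Return the indexes of the pair with the largest combined area,
    // mirroring the seed-picking loop in splitQuadratic.
    static int[] pickSeeds(float[][] boxes) {
        float largest = Float.MIN_VALUE;
        int ia = 0, ib = 0;
        for (int a = 0; a < boxes.length; a++) {
            for (int b = 0; b < boxes.length; b++) {
                if (a == b) {
                    continue;
                }
                float area = combinedArea(boxes[a], boxes[b]);
                if (area > largest) {
                    largest = area;
                    ia = a;
                    ib = b;
                }
            }
        }
        return new int[] { ia, ib };
    }

    public static void main(String[] args) {
        float[][] boxes = {
            { 0, 1, 0, 1 },             // bottom-left
            { 0.4f, 0.6f, 0.4f, 0.6f }, // center
            { 9, 10, 9, 10 },           // top-right, farthest from box 0
        };
        int[] seeds = pickSeeds(boxes);
        System.out.println(seeds[0] + " " + seeds[1]); // prints 0 2
    }
}
```

The remaining keys are then distributed to whichever seed's bounding box grows the least, which is exactly what the area-increase loop after the seed selection does.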
h2/src/tools/org/h2/dev/store/rtree/SpatialKey.java (deleted, 100644 → 0, view file @ bd50328e)
/*
 * Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
 * Version 1.0, and under the Eclipse Public License, Version 1.0
 * (http://h2database.com/html/license.html).
 * Initial Developer: H2 Group
 */
package org.h2.dev.store.rtree;

import java.util.Arrays;

/**
 * A unique spatial key.
 */
public class SpatialKey {

    private final long id;
    private final float[] minMax;

    /**
     * Create a new key.
     *
     * @param id the id
     * @param minMax min x, max x, min y, max y, and so on
     */
    public SpatialKey(long id, float... minMax) {
        this.id = id;
        this.minMax = minMax;
    }

    /**
     * Get the minimum value for the given dimension.
     *
     * @param dim the dimension
     * @return the value
     */
    public float min(int dim) {
        return minMax[dim + dim];
    }

    /**
     * Set the minimum value for the given dimension.
     *
     * @param dim the dimension
     * @param x the value
     */
    public void setMin(int dim, float x) {
        minMax[dim + dim] = x;
    }

    /**
     * Get the maximum value for the given dimension.
     *
     * @param dim the dimension
     * @return the value
     */
    public float max(int dim) {
        return minMax[dim + dim + 1];
    }

    /**
     * Set the maximum value for the given dimension.
     *
     * @param dim the dimension
     * @param x the value
     */
    public void setMax(int dim, float x) {
        minMax[dim + dim + 1] = x;
    }

    public long getId() {
        return id;
    }

    public String toString() {
        StringBuilder buff = new StringBuilder();
        buff.append(id).append(": (");
        for (int i = 0; i < minMax.length; i += 2) {
            if (i > 0) {
                buff.append(", ");
            }
            buff.append(minMax[i]).append('/').append(minMax[i + 1]);
        }
        return buff.append(")").toString();
    }

    public int hashCode() {
        return (int) ((id >>> 32) ^ id);
    }

    public boolean equals(Object other) {
        if (!(other instanceof SpatialKey)) {
            return false;
        }
        SpatialKey o = (SpatialKey) other;
        return Arrays.equals(minMax, o.minMax);
    }

}
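SpatialKey stores the bounds of all dimensions interleaved in one float array: for dimension d, the minimum sits at index 2*d and the maximum at 2*d + 1 (the code writes this as dim + dim and dim + dim + 1). A minimal standalone sketch of that indexing, using a bare float array rather than the class itself:

```java
public class MinMaxLayoutDemo {

    // minMax holds {min0, max0, min1, max1, ...}, as in SpatialKey.
    static float min(float[] minMax, int dim) {
        return minMax[dim + dim];       // same as minMax[2 * dim]
    }

    static float max(float[] minMax, int dim) {
        return minMax[dim + dim + 1];   // same as minMax[2 * dim + 1]
    }

    public static void main(String[] args) {
        // a 2D key covering x in [0, 10] and y in [2, 5]
        float[] minMax = { 0, 10, 2, 5 };
        System.out.println(min(minMax, 0) + ".." + max(minMax, 0)); // prints 0.0..10.0
        System.out.println(min(minMax, 1) + ".." + max(minMax, 1)); // prints 2.0..5.0
    }
}
```

Keeping both bounds of a dimension adjacent means a per-dimension overlap or containment test touches two neighboring array slots, which is cache-friendly for the tight loops in SpatialType.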
h2/src/tools/org/h2/dev/store/rtree/SpatialType.java (deleted, 100644 → 0, view file @ bd50328e)
/*
 * Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
 * Version 1.0, and under the Eclipse Public License, Version 1.0
 * (http://h2database.com/html/license.html).
 * Initial Developer: H2 Group
 */
package org.h2.dev.store.rtree;

import java.nio.ByteBuffer;
import java.util.ArrayList;

import org.h2.dev.store.btree.DataUtils;
import org.h2.dev.store.type.DataType;

/**
 * A spatial data type. This class supports up to 255 dimensions. Each dimension
 * can have a minimum and a maximum value of type float. For each dimension, the
 * maximum value is only stored when it is not the same as the minimum.
 */
public class SpatialType implements DataType {

    private final int dimensions;

    public SpatialType(int dimensions) {
        if (dimensions <= 0 || dimensions > 255) {
            throw new IllegalArgumentException("Dimensions: " + dimensions);
        }
        this.dimensions = dimensions;
    }

    /**
     * Read a value from a string.
     *
     * @param s the string
     * @return the value
     */
    public static SpatialType fromString(String s) {
        return new SpatialType(Integer.parseInt(s.substring(1)));
    }

    @Override
    public int compare(Object a, Object b) {
        long la = ((SpatialKey) a).getId();
        long lb = ((SpatialKey) b).getId();
        return la < lb ? -1 : la > lb ? 1 : 0;
    }

    /**
     * Check whether two spatial values are equal.
     *
     * @param a the first value
     * @param b the second value
     * @return true if they are equal
     */
    public boolean equals(Object a, Object b) {
        long la = ((SpatialKey) a).getId();
        long lb = ((SpatialKey) b).getId();
        return la == lb;
    }

    @Override
    public int getMaxLength(Object obj) {
        return 1 + dimensions * 8 + DataUtils.MAX_VAR_LONG_LEN;
    }

    @Override
    public int getMemory(Object obj) {
        return 40 + dimensions * 4;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        SpatialKey k = (SpatialKey) obj;
        int flags = 0;
        for (int i = 0; i < dimensions; i++) {
            if (k.min(i) == k.max(i)) {
                flags |= 1 << i;
            }
        }
        DataUtils.writeVarInt(buff, flags);
        for (int i = 0; i < dimensions; i++) {
            buff.putFloat(k.min(i));
            if ((flags & (1 << i)) == 0) {
                buff.putFloat(k.max(i));
            }
        }
        DataUtils.writeVarLong(buff, k.getId());
    }

    @Override
    public Object read(ByteBuffer buff) {
        int flags = DataUtils.readVarInt(buff);
        float[] minMax = new float[dimensions * 2];
        for (int i = 0; i < dimensions; i++) {
            float min = buff.getFloat();
            float max;
            if ((flags & (1 << i)) != 0) {
                max = min;
            } else {
                max = buff.getFloat();
            }
            minMax[i + i] = min;
            minMax[i + i + 1] = max;
        }
        long id = DataUtils.readVarLong(buff);
        return new SpatialKey(id, minMax);
    }

    @Override
    public String asString() {
        return "s" + dimensions;
    }

    /**
     * Check whether the two objects overlap.
     *
     * @param objA the first object
     * @param objB the second object
     * @return true if they overlap
     */
    public boolean isOverlap(Object objA, Object objB) {
        SpatialKey a = (SpatialKey) objA;
        SpatialKey b = (SpatialKey) objB;
        for (int i = 0; i < dimensions; i++) {
            if (a.max(i) < b.min(i) || a.min(i) > b.max(i)) {
                return false;
            }
        }
        return true;
    }

    /**
     * Increase the bounds in the given spatial object.
     *
     * @param bounds the bounds (may be modified)
     * @param add the value
     */
    public void increaseBounds(Object bounds, Object add) {
        SpatialKey b = (SpatialKey) bounds;
        SpatialKey a = (SpatialKey) add;
        for (int i = 0; i < dimensions; i++) {
            b.setMin(i, Math.min(b.min(i), a.min(i)));
            b.setMax(i, Math.max(b.max(i), a.max(i)));
        }
    }

    /**
     * Get the area increase by extending a to contain b.
     *
     * @param objA the bounding box
     * @param objB the object
     * @return the area
     */
    public float getAreaIncrease(Object objA, Object objB) {
        SpatialKey a = (SpatialKey) objA;
        SpatialKey b = (SpatialKey) objB;
        float min = a.min(0);
        float max = a.max(0);
        float areaOld = max - min;
        min = Math.min(min, b.min(0));
        max = Math.max(max, b.max(0));
        float areaNew = max - min;
        for (int i = 1; i < dimensions; i++) {
            min = a.min(i);
            max = a.max(i);
            areaOld *= max - min;
            min = Math.min(min, b.min(i));
            max = Math.max(max, b.max(i));
            areaNew *= max - min;
        }
        return areaNew - areaOld;
    }

    /**
     * Get the combined area of both objects.
     *
     * @param objA the first object
     * @param objB the second object
     * @return the area
     */
    float getCombinedArea(Object objA, Object objB) {
        SpatialKey a = (SpatialKey) objA;
        SpatialKey b = (SpatialKey) objB;
        float area = 1;
        for (int i = 0; i < dimensions; i++) {
            float min = Math.min(a.min(i), b.min(i));
            float max = Math.max(a.max(i), b.max(i));
            area *= max - min;
        }
        return area;
    }

    /**
     * Check whether a contains b.
     *
     * @param objA the bounding box
     * @param objB the object
     * @return true if a contains b
     */
    public boolean contains(Object objA, Object objB) {
        SpatialKey a = (SpatialKey) objA;
        SpatialKey b = (SpatialKey) objB;
        for (int i = 0; i < dimensions; i++) {
            if (a.min(i) > b.min(i) || a.max(i) < b.max(i)) {
                return false;
            }
        }
        return true;
    }

    /**
     * Check whether a is completely inside b and does not touch the
     * given bound.
     *
     * @param objA the object to check
     * @param objB the bounds
     * @return true if a is completely inside b
     */
    public boolean isInside(Object objA, Object objB) {
        SpatialKey a = (SpatialKey) objA;
        SpatialKey b = (SpatialKey) objB;
        for (int i = 0; i < dimensions; i++) {
            if (a.min(i) <= b.min(i) || a.max(i) >= b.max(i)) {
                return false;
            }
        }
        return true;
    }

    /**
     * Create a bounding box starting with the given object.
     *
     * @param objA the object
     * @return the bounding box
     */
    Object createBoundingBox(Object objA) {
        float[] minMax = new float[dimensions * 2];
        SpatialKey a = (SpatialKey) objA;
        for (int i = 0; i < dimensions; i++) {
            minMax[i + i] = a.min(i);
            minMax[i + i + 1] = a.max(i);
        }
        return new SpatialKey(0, minMax);
    }

    /**
     * Get the most extreme pair (elements that are as far apart as possible).
     * This method is used to split a page (linear split). If no extreme objects
     * could be found, this method returns null.
     *
     * @param list the objects
     * @return the indexes of the extremes
     */
    public int[] getExtremes(ArrayList<Object> list) {
        SpatialKey bounds = (SpatialKey) createBoundingBox(list.get(0));
        SpatialKey boundsInner = (SpatialKey) createBoundingBox(bounds);
        for (int i = 0; i < dimensions; i++) {
            float t = boundsInner.min(i);
            boundsInner.setMin(i, boundsInner.max(i));
            boundsInner.setMax(i, t);
        }
        for (int i = 0; i < list.size(); i++) {
            Object o = list.get(i);
            increaseBounds(bounds, o);
            increaseMaxInnerBounds(boundsInner, o);
        }
        double best = 0;
        int bestDim = 0;
        for (int i = 0; i < dimensions; i++) {
            float inner = boundsInner.max(i) - boundsInner.min(i);
            if (inner < 0) {
                continue;
            }
            float outer = bounds.max(i) - bounds.min(i);
            float d = inner / outer;
            if (d > best) {
                best = d;
                bestDim = i;
            }
        }
        if (best <= 0) {
            return null;
        }
        float min = boundsInner.min(bestDim);
        float max = boundsInner.max(bestDim);
        int firstIndex = -1, lastIndex = -1;
        for (int i = 0; i < list.size() && (firstIndex < 0 || lastIndex < 0); i++) {
            SpatialKey o = (SpatialKey) list.get(i);
            if (firstIndex < 0 && o.max(bestDim) == min) {
                firstIndex = i;
            } else if (lastIndex < 0 && o.min(bestDim) == max) {
                lastIndex = i;
            }
        }
        return new int[] { firstIndex, lastIndex };
    }

    private void increaseMaxInnerBounds(Object bounds, Object add) {
        SpatialKey b = (SpatialKey) bounds;
        SpatialKey a = (SpatialKey) add;
        for (int i = 0; i < dimensions; i++) {
            b.setMin(i, Math.min(b.min(i), a.max(i)));
            b.setMax(i, Math.max(b.max(i), a.min(i)));
        }
    }

}
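SpatialType.write saves space for point data: a varint bitmask marks every dimension whose minimum equals its maximum, and for those dimensions the maximum is simply not written. A standalone sketch of the flag computation and the resulting number of floats on disk (names are illustrative; the varint itself is assumed to take one byte here, which holds for up to 7 dimensions):

```java
public class PointFlagsDemo {

    // Bit i is set when dimension i is degenerate (min == max),
    // as in SpatialType.write. minMax is {min0, max0, min1, max1, ...}.
    static int flags(float[] minMax) {
        int flags = 0;
        for (int i = 0; i < minMax.length / 2; i++) {
            if (minMax[i + i] == minMax[i + i + 1]) {
                flags |= 1 << i;
            }
        }
        return flags;
    }

    // Number of floats actually written: one min per dimension, plus a max
    // only for dimensions where the flag bit is clear.
    static int floatsWritten(float[] minMax) {
        int dims = minMax.length / 2;
        int n = dims;
        int f = flags(minMax);
        for (int i = 0; i < dims; i++) {
            if ((f & (1 << i)) == 0) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        float[] point = { 3, 3, 7, 7 };  // a 2D point: both dimensions degenerate
        float[] box = { 0, 10, 2, 5 };   // a proper 2D rectangle
        System.out.println(flags(point) + " " + floatsWritten(point)); // prints 3 2
        System.out.println(flags(box) + " " + floatsWritten(box));     // prints 0 4
    }
}
```

For a point, this halves the float payload (2 floats instead of 4 in 2D), and SpatialType.read reverses the trick by copying the minimum into the maximum for every set bit.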
h2/src/tools/org/h2/dev/store/rtree/package.html (deleted, 100644 → 0, view file @ bd50328e)
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<!--
Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License, Version 1.0,
and under the Eclipse Public License, Version 1.0
(http://h2database.com/html/license.html).
Initial Developer: H2 Group
-->
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head><meta http-equiv="Content-Type" content="text/html;charset=utf-8" /><title>
Javadoc package documentation
</title></head><body style="font: 9pt/130% Tahoma, Arial, Helvetica, sans-serif; font-weight: normal;"><p>
An R-tree implementation
</p></body></html>
\ No newline at end of file
h2/src/tools/org/h2/dev/store/type/ObjectType.java (deleted, 100644 → 0, view file @ bd50328e)
/*
* Copyright 2004-2011 H2 Group. Multiple-Licensed under the H2 License,
* Version 1.0, and under the Eclipse Public License, Version 1.0
* (http://h2database.com/html/license.html).
* Initial Developer: H2 Group
*/
package
org
.
h2
.
dev
.
store
.
type
;
import
java.math.BigDecimal
;
import
java.math.BigInteger
;
import
java.nio.ByteBuffer
;
import
java.util.UUID
;
import
org.h2.dev.store.btree.DataUtils
;
import
org.h2.util.Utils
;
/**
* A data type implementation for the most common data types, including
* serializable objects.
*/
public
class
ObjectType
implements
DataType
{
// TODO maybe support InputStream, Reader
// TODO maybe support ResultSet, Date, Time, Timestamp
// TODO maybe support boolean[], short[],...
/**
* The type constants are also used as tag values.
*/
static
final
int
TYPE_BOOLEAN
=
1
;
static
final
int
TYPE_BYTE
=
2
;
static
final
int
TYPE_SHORT
=
3
;
static
final
int
TYPE_INTEGER
=
4
;
static
final
int
TYPE_LONG
=
5
;
static
final
int
TYPE_BIG_INTEGER
=
6
;
static
final
int
TYPE_FLOAT
=
7
;
static
final
int
TYPE_DOUBLE
=
8
;
static
final
int
TYPE_BIG_DECIMAL
=
9
;
static
final
int
TYPE_CHARACTER
=
10
;
static
final
int
TYPE_STRING
=
11
;
static
final
int
TYPE_UUID
=
12
;
static
final
int
TYPE_BYTE_ARRAY
=
13
;
static
final
int
TYPE_INT_ARRAY
=
14
;
static
final
int
TYPE_LONG_ARRAY
=
15
;
static
final
int
TYPE_CHAR_ARRAY
=
16
;
static
final
int
TYPE_SERIALIZED_OBJECT
=
17
;
/**
* For very common values (e.g. 0 and 1) we save space by encoding the value in the tag.
* e.g. TAG_BOOLEAN_TRUE and TAG_FLOAT_0.
*/
static
final
int
TAG_BOOLEAN_TRUE
=
32
;
static
final
int
TAG_INTEGER_NEGATIVE
=
33
;
static
final
int
TAG_INTEGER_FIXED
=
34
;
static
final
int
TAG_LONG_NEGATIVE
=
35
;
static
final
int
TAG_LONG_FIXED
=
36
;
static
final
int
TAG_BIG_INTEGER_0
=
37
;
static
final
int
TAG_BIG_INTEGER_1
=
38
;
static
final
int
TAG_BIG_INTEGER_SMALL
=
39
;
static
final
int
TAG_FLOAT_0
=
40
;
static
final
int
TAG_FLOAT_1
=
41
;
static
final
int
TAG_FLOAT_FIXED
=
42
;
static
final
int
TAG_DOUBLE_0
=
43
;
static
final
int
TAG_DOUBLE_1
=
44
;
static
final
int
TAG_DOUBLE_FIXED
=
45
;
static
final
int
TAG_BIG_DECIMAL_0
=
46
;
static
final
int
TAG_BIG_DECIMAL_1
=
47
;
static
final
int
TAG_BIG_DECIMAL_SMALL
=
48
;
static
final
int
TAG_BIG_DECIMAL_SMALL_SCALED
=
49
;
/** For small-values/small-arrays, we encode the value/array-length in the tag. */
static
final
int
TAG_INTEGER_0_15
=
64
;
static
final
int
TAG_LONG_0_7
=
80
;
static
final
int
TAG_STRING_0_15
=
88
;
static
final
int
TAG_BYTE_ARRAY_0_15
=
104
;
static
final
int
FLOAT_ZERO_BITS
=
Float
.
floatToIntBits
(
0.0f
);
static
final
int
FLOAT_ONE_BITS
=
Float
.
floatToIntBits
(
1.0f
);
static
final
long
DOUBLE_ZERO_BITS
=
Double
.
doubleToLongBits
(
0.0d
);
static
final
long
DOUBLE_ONE_BITS
=
Double
.
doubleToLongBits
(
1.0d
);
private
AutoDetectDataType
last
=
new
StringType
(
this
);
@Override
public
int
compare
(
Object
a
,
Object
b
)
{
return
last
.
compare
(
a
,
b
);
}
@Override
public
int
getMaxLength
(
Object
obj
)
{
return
last
.
getMaxLength
(
obj
);
}
@Override
public
int
getMemory
(
Object
obj
)
{
return
last
.
getMemory
(
obj
);
}
@Override
public
void
write
(
ByteBuffer
buff
,
Object
obj
)
{
last
.
write
(
buff
,
obj
);
}
private
AutoDetectDataType
newType
(
int
typeId
)
{
switch
(
typeId
)
{
case
TYPE_BOOLEAN:
return
new
BooleanType
(
this
);
case
TYPE_BYTE:
return
new
ByteType
(
this
);
case
TYPE_SHORT:
return
new
ShortType
(
this
);
case
TYPE_CHARACTER:
return
new
CharacterType
(
this
);
case
TYPE_INTEGER:
return
new
IntegerType
(
this
);
case
TYPE_LONG:
return
new
LongType
(
this
);
case
TYPE_FLOAT:
return
new
FloatType
(
this
);
case
TYPE_DOUBLE:
return
new
DoubleType
(
this
);
case
TYPE_BIG_INTEGER:
return
new
BigIntegerType
(
this
);
case
TYPE_BIG_DECIMAL:
return
new
BigDecimalType
(
this
);
case
TYPE_BYTE_ARRAY:
return
new
ByteArrayType
(
this
);
case
TYPE_CHAR_ARRAY:
return
new
CharArrayType
(
this
);
case
TYPE_INT_ARRAY:
return
new
IntArrayType
(
this
);
case
TYPE_LONG_ARRAY:
return
new
LongArrayType
(
this
);
case
TYPE_STRING:
return
new
StringType
(
this
);
case
TYPE_UUID:
return
new
UUIDType
(
this
);
case
TYPE_SERIALIZED_OBJECT:
return
new
SerializedObjectType
(
this
);
}
throw
new
RuntimeException
(
"Unsupported type: "
+
typeId
);
}
@Override
public Object read(ByteBuffer buff) {
    int tag = buff.get();
    int typeId;
    if (tag <= TYPE_SERIALIZED_OBJECT) {
        typeId = tag;
    } else {
        switch (tag) {
        case TAG_BOOLEAN_TRUE:
            typeId = TYPE_BOOLEAN;
            break;
        case TAG_INTEGER_NEGATIVE:
        case TAG_INTEGER_FIXED:
            typeId = TYPE_INTEGER;
            break;
        case TAG_LONG_NEGATIVE:
        case TAG_LONG_FIXED:
            typeId = TYPE_LONG;
            break;
        case TAG_BIG_INTEGER_0:
        case TAG_BIG_INTEGER_1:
        case TAG_BIG_INTEGER_SMALL:
            typeId = TYPE_BIG_INTEGER;
            break;
        case TAG_FLOAT_0:
        case TAG_FLOAT_1:
        case TAG_FLOAT_FIXED:
            typeId = TYPE_FLOAT;
            break;
        case TAG_DOUBLE_0:
        case TAG_DOUBLE_1:
        case TAG_DOUBLE_FIXED:
            typeId = TYPE_DOUBLE;
            break;
        case TAG_BIG_DECIMAL_0:
        case TAG_BIG_DECIMAL_1:
        case TAG_BIG_DECIMAL_SMALL:
        case TAG_BIG_DECIMAL_SMALL_SCALED:
            typeId = TYPE_BIG_DECIMAL;
            break;
        default:
            if (tag >= TAG_INTEGER_0_15 && tag <= TAG_INTEGER_0_15 + 15) {
                typeId = TYPE_INTEGER;
            } else if (tag >= TAG_STRING_0_15 && tag <= TAG_STRING_0_15 + 15) {
                typeId = TYPE_STRING;
            } else if (tag >= TAG_LONG_0_7 && tag <= TAG_LONG_0_7 + 7) {
                typeId = TYPE_LONG;
            } else if (tag >= TAG_BYTE_ARRAY_0_15 && tag <= TAG_BYTE_ARRAY_0_15 + 15) {
                typeId = TYPE_BYTE_ARRAY;
            } else {
                throw new RuntimeException("Unknown tag: " + tag);
            }
        }
    }
    if (typeId != last.typeId) {
        last = newType(typeId);
    }
    return last.read(buff, tag);
}

@Override
public String asString() {
    return "o";
}
private static int getTypeId(Object obj) {
    if (obj instanceof Integer) {
        return TYPE_INTEGER;
    } else if (obj instanceof String) {
        return TYPE_STRING;
    } else if (obj instanceof Long) {
        return TYPE_LONG;
    } else if (obj instanceof BigDecimal) {
        if (obj.getClass() == BigDecimal.class) {
            return TYPE_BIG_DECIMAL;
        }
    } else if (obj instanceof byte[]) {
        return TYPE_BYTE_ARRAY;
    } else if (obj instanceof Double) {
        return TYPE_DOUBLE;
    } else if (obj instanceof Float) {
        return TYPE_FLOAT;
    } else if (obj instanceof Boolean) {
        return TYPE_BOOLEAN;
    } else if (obj instanceof UUID) {
        return TYPE_UUID;
    } else if (obj instanceof Byte) {
        return TYPE_BYTE;
    } else if (obj instanceof int[]) {
        return TYPE_INT_ARRAY;
    } else if (obj instanceof long[]) {
        return TYPE_LONG_ARRAY;
    } else if (obj instanceof char[]) {
        return TYPE_CHAR_ARRAY;
    } else if (obj instanceof Short) {
        return TYPE_SHORT;
    } else if (obj instanceof BigInteger) {
        if (obj.getClass() == BigInteger.class) {
            return TYPE_BIG_INTEGER;
        }
    } else if (obj instanceof Character) {
        return TYPE_CHARACTER;
    }
    if (obj == null) {
        throw new NullPointerException();
    }
    return TYPE_SERIALIZED_OBJECT;
}
/**
 * Switch the last remembered type to match the type of the given object.
 *
 * @param obj the object
 * @return the auto-detected type used
 */
AutoDetectDataType switchType(Object obj) {
    int typeId = getTypeId(obj);
    AutoDetectDataType l = last;
    if (typeId != l.typeId) {
        l = last = newType(typeId);
    }
    return l;
}
/**
 * Compare the contents of two arrays.
 *
 * @param data1 the first array (must not be null)
 * @param data2 the second array (must not be null)
 * @return the result of the comparison (-1, 1 or 0)
 */
public static int compareNotNull(char[] data1, char[] data2) {
    if (data1 == data2) {
        return 0;
    }
    int len = Math.min(data1.length, data2.length);
    for (int i = 0; i < len; i++) {
        char x = data1[i];
        char x2 = data2[i];
        if (x != x2) {
            return x > x2 ? 1 : -1;
        }
    }
    return Integer.signum(data1.length - data2.length);
}

/**
 * Compare the contents of two arrays.
 *
 * @param data1 the first array (must not be null)
 * @param data2 the second array (must not be null)
 * @return the result of the comparison (-1, 1 or 0)
 */
public static int compareNotNull(int[] data1, int[] data2) {
    if (data1 == data2) {
        return 0;
    }
    int len = Math.min(data1.length, data2.length);
    for (int i = 0; i < len; i++) {
        int x = data1[i];
        int x2 = data2[i];
        if (x != x2) {
            return x > x2 ? 1 : -1;
        }
    }
    return Integer.signum(data1.length - data2.length);
}

/**
 * Compare the contents of two arrays.
 *
 * @param data1 the first array (must not be null)
 * @param data2 the second array (must not be null)
 * @return the result of the comparison (-1, 1 or 0)
 */
public static int compareNotNull(long[] data1, long[] data2) {
    if (data1 == data2) {
        return 0;
    }
    int len = Math.min(data1.length, data2.length);
    for (int i = 0; i < len; i++) {
        long x = data1[i];
        long x2 = data2[i];
        if (x != x2) {
            return x > x2 ? 1 : -1;
        }
    }
    return Integer.signum(data1.length - data2.length);
}
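The three overloads above all implement the same ordering: compare element-wise up to the shorter length, and if one array is a prefix of the other, the shorter one sorts first. The `int[]` overload can be exercised standalone (the method body is copied from the source; the wrapper class is illustrative):

```java
// Standalone check of the lexicographic array comparison used above.
public class CompareDemo {
    static int compareNotNull(int[] data1, int[] data2) {
        if (data1 == data2) {
            return 0;
        }
        int len = Math.min(data1.length, data2.length);
        for (int i = 0; i < len; i++) {
            int x = data1[i];
            int x2 = data2[i];
            if (x != x2) {
                return x > x2 ? 1 : -1;
            }
        }
        // Equal prefix: the shorter array sorts first.
        return Integer.signum(data1.length - data2.length);
    }

    public static void main(String[] args) {
        System.out.println(compareNotNull(new int[] {1, 2}, new int[] {1, 3}));    // -1
        System.out.println(compareNotNull(new int[] {1, 2}, new int[] {1, 2, 0})); // -1 (prefix)
        System.out.println(compareNotNull(new int[] {5}, new int[] {5}));          // 0
    }
}
```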
/**
 * The base class for auto-detect data types.
 */
abstract class AutoDetectDataType implements DataType {

    protected final ObjectType base;
    protected final int typeId;

    AutoDetectDataType(ObjectType base, int typeId) {
        this.base = base;
        this.typeId = typeId;
    }

    @Override
    public int getMemory(Object o) {
        return getType(o).getMemory(o);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        AutoDetectDataType aType = getType(aObj);
        AutoDetectDataType bType = getType(bObj);
        if (aType == bType) {
            return aType.compare(aObj, bObj);
        }
        int typeDiff = aType.typeId - bType.typeId;
        return Integer.signum(typeDiff);
    }

    @Override
    public int getMaxLength(Object o) {
        return getType(o).getMaxLength(o);
    }

    @Override
    public void write(ByteBuffer buff, Object o) {
        getType(o).write(buff, o);
    }

    @Override
    public final Object read(ByteBuffer buff) {
        throw new RuntimeException();
    }

    /**
     * Get the type for the given object.
     *
     * @param o the object
     * @return the type
     */
    AutoDetectDataType getType(Object o) {
        return base.switchType(o);
    }

    /**
     * Read an object from the buffer.
     *
     * @param buff the buffer
     * @param tag the first byte of the object (usually the type)
     * @return the read object
     */
    abstract Object read(ByteBuffer buff, int tag);

    @Override
    public String asString() {
        return "o" + typeId;
    }

}
/**
 * The type for boolean true and false.
 */
class BooleanType extends AutoDetectDataType {

    BooleanType(ObjectType base) {
        super(base, TYPE_BOOLEAN);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Boolean && bObj instanceof Boolean) {
            Boolean a = (Boolean) aObj;
            Boolean b = (Boolean) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Boolean ? 0 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Boolean ? 1 : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Boolean) {
            int tag = ((Boolean) obj) ? TAG_BOOLEAN_TRUE : TYPE_BOOLEAN;
            buff.put((byte) tag);
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        return tag == TYPE_BOOLEAN ? Boolean.FALSE : Boolean.TRUE;
    }

}
/**
 * The type for byte objects.
 */
class ByteType extends AutoDetectDataType {

    ByteType(ObjectType base) {
        super(base, TYPE_BYTE);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Byte && bObj instanceof Byte) {
            Byte a = (Byte) aObj;
            Byte b = (Byte) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Byte ? 0 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Byte ? 2 : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Byte) {
            buff.put((byte) TYPE_BYTE);
            buff.put(((Byte) obj).byteValue());
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        return Byte.valueOf(buff.get());
    }

}
/**
 * The type for character objects.
 */
class CharacterType extends AutoDetectDataType {

    CharacterType(ObjectType base) {
        super(base, TYPE_CHARACTER);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Character && bObj instanceof Character) {
            Character a = (Character) aObj;
            Character b = (Character) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Character ? 24 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Character ? 3 : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Character) {
            buff.put((byte) TYPE_CHARACTER);
            buff.putChar(((Character) obj).charValue());
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        return Character.valueOf(buff.getChar());
    }

}
/**
 * The type for short objects.
 */
class ShortType extends AutoDetectDataType {

    ShortType(ObjectType base) {
        super(base, TYPE_SHORT);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Short && bObj instanceof Short) {
            Short a = (Short) aObj;
            Short b = (Short) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Short ? 24 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Short ? 3 : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Short) {
            buff.put((byte) TYPE_SHORT);
            buff.putShort(((Short) obj).shortValue());
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        return Short.valueOf(buff.getShort());
    }

}
/**
 * The type for integer objects.
 */
class IntegerType extends AutoDetectDataType {

    IntegerType(ObjectType base) {
        super(base, TYPE_INTEGER);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Integer && bObj instanceof Integer) {
            Integer a = (Integer) aObj;
            Integer b = (Integer) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Integer ? 24 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Integer ?
                1 + DataUtils.MAX_VAR_INT_LEN : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Integer) {
            int x = (Integer) obj;
            if (x < 0) {
                // -Integer.MIN_VALUE is smaller than 0
                if (-x < 0 || -x > DataUtils.COMPRESSED_VAR_INT_MAX) {
                    buff.put((byte) TAG_INTEGER_FIXED);
                    buff.putInt(x);
                } else {
                    buff.put((byte) TAG_INTEGER_NEGATIVE);
                    DataUtils.writeVarInt(buff, -x);
                }
            } else if (x <= 15) {
                buff.put((byte) (TAG_INTEGER_0_15 + x));
            } else if (x <= DataUtils.COMPRESSED_VAR_INT_MAX) {
                buff.put((byte) TYPE_INTEGER);
                DataUtils.writeVarInt(buff, x);
            } else {
                buff.put((byte) TAG_INTEGER_FIXED);
                buff.putInt(x);
            }
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        switch (tag) {
        case TYPE_INTEGER:
            return DataUtils.readVarInt(buff);
        case TAG_INTEGER_NEGATIVE:
            return -DataUtils.readVarInt(buff);
        case TAG_INTEGER_FIXED:
            return buff.getInt();
        }
        return tag - TAG_INTEGER_0_15;
    }

}
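The guard `-x < 0` in the write path above is not redundant: in two's-complement arithmetic, negating `Integer.MIN_VALUE` overflows back to `Integer.MIN_VALUE`, which is still negative and far outside the var-int range, so that one value must take the fixed 4-byte encoding. A quick standalone check of the overflow (the class name is illustrative):

```java
// Demonstrates why the write path checks "-x < 0": negating
// Integer.MIN_VALUE overflows and yields Integer.MIN_VALUE again.
public class MinValueDemo {
    public static void main(String[] args) {
        int x = Integer.MIN_VALUE;
        System.out.println(-x == Integer.MIN_VALUE); // true: negation overflowed
        System.out.println(-x < 0);                  // true: fixed encoding is chosen
    }
}
```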
/**
 * The type for long objects.
 */
public class LongType extends AutoDetectDataType {

    LongType(ObjectType base) {
        super(base, TYPE_LONG);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Long && bObj instanceof Long) {
            Long a = (Long) aObj;
            Long b = (Long) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Long ? 30 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Long ?
                1 + DataUtils.MAX_VAR_LONG_LEN : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Long) {
            long x = (Long) obj;
            if (x < 0) {
                // -Long.MIN_VALUE is smaller than 0
                if (-x < 0 || -x > DataUtils.COMPRESSED_VAR_LONG_MAX) {
                    buff.put((byte) TAG_LONG_FIXED);
                    buff.putLong(x);
                } else {
                    buff.put((byte) TAG_LONG_NEGATIVE);
                    DataUtils.writeVarLong(buff, -x);
                }
            } else if (x <= 7) {
                buff.put((byte) (TAG_LONG_0_7 + x));
            } else if (x <= DataUtils.COMPRESSED_VAR_LONG_MAX) {
                buff.put((byte) TYPE_LONG);
                DataUtils.writeVarLong(buff, x);
            } else {
                buff.put((byte) TAG_LONG_FIXED);
                buff.putLong(x);
            }
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        switch (tag) {
        case TYPE_LONG:
            return DataUtils.readVarLong(buff);
        case TAG_LONG_NEGATIVE:
            return -DataUtils.readVarLong(buff);
        case TAG_LONG_FIXED:
            return buff.getLong();
        }
        return Long.valueOf(tag - TAG_LONG_0_7);
    }

}
/**
 * The type for float objects.
 */
class FloatType extends AutoDetectDataType {

    FloatType(ObjectType base) {
        super(base, TYPE_FLOAT);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Float && bObj instanceof Float) {
            Float a = (Float) aObj;
            Float b = (Float) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Float ? 24 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Float ?
                1 + DataUtils.MAX_VAR_INT_LEN : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Float) {
            float x = (Float) obj;
            int f = Float.floatToIntBits(x);
            if (f == ObjectType.FLOAT_ZERO_BITS) {
                buff.put((byte) TAG_FLOAT_0);
            } else if (f == ObjectType.FLOAT_ONE_BITS) {
                buff.put((byte) TAG_FLOAT_1);
            } else {
                int value = Integer.reverse(f);
                if (value >= 0 && value <= DataUtils.COMPRESSED_VAR_INT_MAX) {
                    buff.put((byte) TYPE_FLOAT);
                    DataUtils.writeVarInt(buff, value);
                } else {
                    buff.put((byte) TAG_FLOAT_FIXED);
                    buff.putFloat(x);
                }
            }
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        switch (tag) {
        case TAG_FLOAT_0:
            return 0f;
        case TAG_FLOAT_1:
            return 1f;
        case TAG_FLOAT_FIXED:
            return buff.getFloat();
        }
        return Float.intBitsToFloat(Integer.reverse(DataUtils.readVarInt(buff)));
    }

}
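The `Integer.reverse` call in FloatType above may look odd. The idea is that floats with few significant mantissa bits (small powers of two and similar "round" values) have their set bits near the top of the IEEE 754 bit pattern; reversing the bits moves them to the bottom, producing a small non-negative int that the variable-length encoding stores in fewer bytes. A small standalone illustration (the class name is ours, not part of the source):

```java
// Shows that bit-reversal turns the IEEE 754 patterns of "round" floats
// into small ints, which a var-int encoding stores compactly.
public class FloatBitsDemo {
    public static void main(String[] args) {
        for (float f : new float[] {2.0f, 0.5f, 3.141593f}) {
            int bits = Float.floatToIntBits(f);
            int reversed = Integer.reverse(bits);
            System.out.println(f + ": bits=0x" + Integer.toHexString(bits)
                    + " reversed=" + reversed);
        }
        // 2.0f has only bit 30 set (0x40000000), so its reversal is just 2:
        // one var-int byte instead of a fixed 4-byte float.
    }
}
```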
/**
 * The type for double objects.
 */
class DoubleType extends AutoDetectDataType {

    DoubleType(ObjectType base) {
        super(base, TYPE_DOUBLE);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof Double && bObj instanceof Double) {
            Double a = (Double) aObj;
            Double b = (Double) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof Double ? 30 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        return obj instanceof Double ?
                1 + DataUtils.MAX_VAR_LONG_LEN : super.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof Double) {
            double x = (Double) obj;
            long d = Double.doubleToLongBits(x);
            if (d == ObjectType.DOUBLE_ZERO_BITS) {
                buff.put((byte) TAG_DOUBLE_0);
            } else if (d == ObjectType.DOUBLE_ONE_BITS) {
                buff.put((byte) TAG_DOUBLE_1);
            } else {
                long value = Long.reverse(d);
                if (value >= 0 && value <= DataUtils.COMPRESSED_VAR_LONG_MAX) {
                    buff.put((byte) TYPE_DOUBLE);
                    DataUtils.writeVarLong(buff, value);
                } else {
                    buff.put((byte) TAG_DOUBLE_FIXED);
                    buff.putDouble(x);
                }
            }
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        switch (tag) {
        case TAG_DOUBLE_0:
            return 0d;
        case TAG_DOUBLE_1:
            return 1d;
        case TAG_DOUBLE_FIXED:
            return buff.getDouble();
        }
        return Double.longBitsToDouble(Long.reverse(DataUtils.readVarLong(buff)));
    }

}
/**
 * The type for BigInteger objects.
 */
class BigIntegerType extends AutoDetectDataType {

    BigIntegerType(ObjectType base) {
        super(base, TYPE_BIG_INTEGER);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof BigInteger && bObj instanceof BigInteger) {
            BigInteger a = (BigInteger) aObj;
            BigInteger b = (BigInteger) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof BigInteger ? 100 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof BigInteger)) {
            return super.getMaxLength(obj);
        }
        BigInteger x = (BigInteger) obj;
        if (BigInteger.ZERO.equals(x) || BigInteger.ONE.equals(x)) {
            return 1;
        }
        int bits = x.bitLength();
        if (bits <= 63) {
            return 1 + DataUtils.MAX_VAR_LONG_LEN;
        }
        byte[] bytes = x.toByteArray();
        return 1 + DataUtils.MAX_VAR_INT_LEN + bytes.length;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof BigInteger) {
            BigInteger x = (BigInteger) obj;
            if (BigInteger.ZERO.equals(x)) {
                buff.put((byte) TAG_BIG_INTEGER_0);
            } else if (BigInteger.ONE.equals(x)) {
                buff.put((byte) TAG_BIG_INTEGER_1);
            } else {
                int bits = x.bitLength();
                if (bits <= 63) {
                    buff.put((byte) TAG_BIG_INTEGER_SMALL);
                    DataUtils.writeVarLong(buff, x.longValue());
                } else {
                    buff.put((byte) TYPE_BIG_INTEGER);
                    byte[] bytes = x.toByteArray();
                    DataUtils.writeVarInt(buff, bytes.length);
                    buff.put(bytes);
                }
            }
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        switch (tag) {
        case TAG_BIG_INTEGER_0:
            return BigInteger.ZERO;
        case TAG_BIG_INTEGER_1:
            return BigInteger.ONE;
        case TAG_BIG_INTEGER_SMALL:
            return BigInteger.valueOf(DataUtils.readVarLong(buff));
        }
        int len = DataUtils.readVarInt(buff);
        byte[] bytes = Utils.newBytes(len);
        buff.get(bytes);
        return new BigInteger(bytes);
    }

}
/**
 * The type for BigDecimal objects.
 */
class BigDecimalType extends AutoDetectDataType {

    BigDecimalType(ObjectType base) {
        super(base, TYPE_BIG_DECIMAL);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof BigDecimal && bObj instanceof BigDecimal) {
            BigDecimal a = (BigDecimal) aObj;
            BigDecimal b = (BigDecimal) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof BigDecimal ? 150 : super.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof BigDecimal)) {
            return super.getMaxLength(obj);
        }
        BigDecimal x = (BigDecimal) obj;
        if (BigDecimal.ZERO.equals(x) || BigDecimal.ONE.equals(x)) {
            return 1;
        }
        int scale = x.scale();
        BigInteger b = x.unscaledValue();
        int bits = b.bitLength();
        if (bits <= 63) {
            if (scale == 0) {
                return 1 + DataUtils.MAX_VAR_LONG_LEN;
            }
            return 1 + DataUtils.MAX_VAR_INT_LEN + DataUtils.MAX_VAR_LONG_LEN;
        }
        byte[] bytes = b.toByteArray();
        return 1 + DataUtils.MAX_VAR_INT_LEN +
                DataUtils.MAX_VAR_INT_LEN + bytes.length;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof BigDecimal) {
            BigDecimal x = (BigDecimal) obj;
            if (BigDecimal.ZERO.equals(x)) {
                buff.put((byte) TAG_BIG_DECIMAL_0);
            } else if (BigDecimal.ONE.equals(x)) {
                buff.put((byte) TAG_BIG_DECIMAL_1);
            } else {
                int scale = x.scale();
                BigInteger b = x.unscaledValue();
                int bits = b.bitLength();
                if (bits < 64) {
                    if (scale == 0) {
                        buff.put((byte) TAG_BIG_DECIMAL_SMALL);
                    } else {
                        buff.put((byte) TAG_BIG_DECIMAL_SMALL_SCALED);
                        DataUtils.writeVarInt(buff, scale);
                    }
                    DataUtils.writeVarLong(buff, b.longValue());
                } else {
                    buff.put((byte) TYPE_BIG_DECIMAL);
                    DataUtils.writeVarInt(buff, scale);
                    byte[] bytes = b.toByteArray();
                    DataUtils.writeVarInt(buff, bytes.length);
                    buff.put(bytes);
                }
            }
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        switch (tag) {
        case TAG_BIG_DECIMAL_0:
            return BigDecimal.ZERO;
        case TAG_BIG_DECIMAL_1:
            return BigDecimal.ONE;
        case TAG_BIG_DECIMAL_SMALL:
            return BigDecimal.valueOf(DataUtils.readVarLong(buff));
        case TAG_BIG_DECIMAL_SMALL_SCALED:
            int scale = DataUtils.readVarInt(buff);
            return BigDecimal.valueOf(DataUtils.readVarLong(buff), scale);
        }
        int scale = DataUtils.readVarInt(buff);
        int len = DataUtils.readVarInt(buff);
        byte[] bytes = Utils.newBytes(len);
        buff.get(bytes);
        BigInteger b = new BigInteger(bytes);
        return new BigDecimal(b, scale);
    }

}
/**
 * The type for string objects.
 */
class StringType extends AutoDetectDataType {

    StringType(ObjectType base) {
        super(base, TYPE_STRING);
    }

    @Override
    public int getMemory(Object obj) {
        if (!(obj instanceof String)) {
            return super.getMemory(obj);
        }
        return 24 + 2 * obj.toString().length();
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof String && bObj instanceof String) {
            return aObj.toString().compareTo(bObj.toString());
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof String)) {
            return super.getMaxLength(obj);
        }
        return 1 + DataUtils.MAX_VAR_INT_LEN + 3 * obj.toString().length();
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (!(obj instanceof String)) {
            super.write(buff, obj);
            return;
        }
        String s = (String) obj;
        int len = s.length();
        if (len <= 15) {
            buff.put((byte) (TAG_STRING_0_15 + len));
        } else {
            buff.put((byte) TYPE_STRING);
            DataUtils.writeVarInt(buff, len);
        }
        DataUtils.writeStringData(buff, s, len);
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        int len;
        if (tag == TYPE_STRING) {
            len = DataUtils.readVarInt(buff);
        } else {
            len = tag - TAG_STRING_0_15;
        }
        return DataUtils.readString(buff, len);
    }

}
/**
 * The type for UUID objects.
 */
class UUIDType extends AutoDetectDataType {

    UUIDType(ObjectType base) {
        super(base, TYPE_UUID);
    }

    @Override
    public int getMemory(Object obj) {
        return obj instanceof UUID ? 40 : super.getMemory(obj);
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof UUID && bObj instanceof UUID) {
            UUID a = (UUID) aObj;
            UUID b = (UUID) bObj;
            return a.compareTo(b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof UUID)) {
            return super.getMaxLength(obj);
        }
        return 17;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (!(obj instanceof UUID)) {
            super.write(buff, obj);
            return;
        }
        buff.put((byte) TYPE_UUID);
        UUID a = (UUID) obj;
        buff.putLong(a.getMostSignificantBits());
        buff.putLong(a.getLeastSignificantBits());
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        long a = buff.getLong(), b = buff.getLong();
        return new UUID(a, b);
    }

}
/**
 * The type for byte arrays.
 */
class ByteArrayType extends AutoDetectDataType {

    ByteArrayType(ObjectType base) {
        super(base, TYPE_BYTE_ARRAY);
    }

    @Override
    public int getMemory(Object obj) {
        if (!(obj instanceof byte[])) {
            return super.getMemory(obj);
        }
        return 24 + ((byte[]) obj).length;
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof byte[] && bObj instanceof byte[]) {
            byte[] a = (byte[]) aObj;
            byte[] b = (byte[]) bObj;
            return Utils.compareNotNull(a, b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof byte[])) {
            return super.getMaxLength(obj);
        }
        return 1 + DataUtils.MAX_VAR_INT_LEN + ((byte[]) obj).length;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof byte[]) {
            byte[] data = (byte[]) obj;
            int len = data.length;
            if (len <= 15) {
                buff.put((byte) (TAG_BYTE_ARRAY_0_15 + len));
            } else {
                buff.put((byte) TYPE_BYTE_ARRAY);
                DataUtils.writeVarInt(buff, data.length);
            }
            buff.put(data);
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        int len;
        if (tag == TYPE_BYTE_ARRAY) {
            len = DataUtils.readVarInt(buff);
        } else {
            len = tag - TAG_BYTE_ARRAY_0_15;
        }
        byte[] data = new byte[len];
        buff.get(data);
        return data;
    }

}
/**
 * The type for char arrays.
 */
class CharArrayType extends AutoDetectDataType {

    CharArrayType(ObjectType base) {
        super(base, TYPE_CHAR_ARRAY);
    }

    @Override
    public int getMemory(Object obj) {
        if (!(obj instanceof char[])) {
            return super.getMemory(obj);
        }
        return 24 + 2 * ((char[]) obj).length;
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof char[] && bObj instanceof char[]) {
            char[] a = (char[]) aObj;
            char[] b = (char[]) bObj;
            return compareNotNull(a, b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof char[])) {
            return super.getMaxLength(obj);
        }
        return 1 + DataUtils.MAX_VAR_INT_LEN + 2 * ((char[]) obj).length;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof char[]) {
            buff.put((byte) TYPE_CHAR_ARRAY);
            char[] data = (char[]) obj;
            int len = data.length;
            DataUtils.writeVarInt(buff, len);
            buff.asCharBuffer().put(data);
            buff.position(buff.position() + len + len);
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        int len = DataUtils.readVarInt(buff);
        char[] data = new char[len];
        buff.asCharBuffer().get(data);
        buff.position(buff.position() + len + len);
        return data;
    }

}
/**
 * The type for int arrays.
 */
class IntArrayType extends AutoDetectDataType {

    IntArrayType(ObjectType base) {
        super(base, TYPE_INT_ARRAY);
    }

    @Override
    public int getMemory(Object obj) {
        if (!(obj instanceof int[])) {
            return super.getMemory(obj);
        }
        return 24 + 4 * ((int[]) obj).length;
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof int[] && bObj instanceof int[]) {
            int[] a = (int[]) aObj;
            int[] b = (int[]) bObj;
            return compareNotNull(a, b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof int[])) {
            return super.getMaxLength(obj);
        }
        return 1 + DataUtils.MAX_VAR_INT_LEN + 4 * ((int[]) obj).length;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof int[]) {
            buff.put((byte) TYPE_INT_ARRAY);
            int[] data = (int[]) obj;
            int len = data.length;
            DataUtils.writeVarInt(buff, len);
            buff.asIntBuffer().put(data);
            buff.position(buff.position() + 4 * len);
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        int len = DataUtils.readVarInt(buff);
        int[] data = new int[len];
        buff.asIntBuffer().get(data);
        buff.position(buff.position() + 4 * len);
        return data;
    }

}
/**
 * The type for long arrays.
 */
class LongArrayType extends AutoDetectDataType {

    LongArrayType(ObjectType base) {
        super(base, TYPE_LONG_ARRAY);
    }

    @Override
    public int getMemory(Object obj) {
        if (!(obj instanceof long[])) {
            return super.getMemory(obj);
        }
        return 24 + 8 * ((long[]) obj).length;
    }

    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj instanceof long[] && bObj instanceof long[]) {
            long[] a = (long[]) aObj;
            long[] b = (long[]) bObj;
            return compareNotNull(a, b);
        }
        return super.compare(aObj, bObj);
    }

    @Override
    public int getMaxLength(Object obj) {
        if (!(obj instanceof long[])) {
            return super.getMaxLength(obj);
        }
        return 1 + DataUtils.MAX_VAR_INT_LEN + 8 * ((long[]) obj).length;
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        if (obj instanceof long[]) {
            buff.put((byte) TYPE_LONG_ARRAY);
            long[] data = (long[]) obj;
            int len = data.length;
            DataUtils.writeVarInt(buff, len);
            buff.asLongBuffer().put(data);
            buff.position(buff.position() + 8 * len);
        } else {
            super.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        int len = DataUtils.readVarInt(buff);
        long[] data = new long[len];
        buff.asLongBuffer().get(data);
        buff.position(buff.position() + 8 * len);
        return data;
    }

}
/**
 * The type for serialized objects.
 */
class SerializedObjectType extends AutoDetectDataType {

    SerializedObjectType(ObjectType base) {
        super(base, TYPE_SERIALIZED_OBJECT);
    }

    @SuppressWarnings("unchecked")
    @Override
    public int compare(Object aObj, Object bObj) {
        if (aObj == bObj) {
            return 0;
        }
        DataType ta = getType(aObj);
        DataType tb = getType(bObj);
        if (ta != this && ta == tb) {
            return ta.compare(aObj, bObj);
        }
        // TODO ensure comparable type (both may be comparable but not
        // with each other)
        if (aObj instanceof Comparable) {
            if (aObj.getClass().isAssignableFrom(bObj.getClass())) {
                return ((Comparable<Object>) aObj).compareTo(bObj);
            }
        }
        if (bObj instanceof Comparable) {
            if (bObj.getClass().isAssignableFrom(aObj.getClass())) {
                return -((Comparable<Object>) bObj).compareTo(aObj);
            }
        }
        byte[] a = Utils.serialize(aObj);
        byte[] b = Utils.serialize(bObj);
        return Utils.compareNotNull(a, b);
    }

    @Override
    public int getMemory(Object obj) {
        DataType t = getType(obj);
        if (t == this) {
            return 1000;
        }
        return t.getMemory(obj);
    }

    @Override
    public int getMaxLength(Object obj) {
        DataType t = getType(obj);
        if (t == this) {
            byte[] data = Utils.serialize(obj);
            return 1 + DataUtils.MAX_VAR_INT_LEN + data.length;
        }
        return t.getMaxLength(obj);
    }

    @Override
    public void write(ByteBuffer buff, Object obj) {
        DataType t = getType(obj);
        if (t == this) {
            buff.put((byte) TYPE_SERIALIZED_OBJECT);
            byte[] data = Utils.serialize(obj);
            DataUtils.writeVarInt(buff, data.length);
            buff.put(data);
        } else {
            t.write(buff, obj);
        }
    }

    @Override
    public Object read(ByteBuffer buff, int tag) {
        int len = DataUtils.readVarInt(buff);
        byte[] data = new byte[len];
        buff.get(data);
        return Utils.deserialize(data);
    }

}

}