h2database — Commit cb967665

Authored Sep 28, 2007 by Thomas Mueller

--no commit message

Parent: 5ff0818d

Showing 16 changed files with 704 additions and 535 deletions (+704 -535)
h2/src/docsrc/html/features.html  +3 -0
h2/src/docsrc/html/history.html  +3 -1
h2/src/docsrc/html/performance.html  +2 -0
h2/src/main/org/h2/engine/Database.java  +39 -51
h2/src/main/org/h2/engine/Session.java  +20 -17
h2/src/main/org/h2/expression/Function.java  +46 -0
h2/src/main/org/h2/expression/Operation.java  +6 -1
h2/src/main/org/h2/jdbc/JdbcConnection.java  +1 -1
h2/src/main/org/h2/jdbc/JdbcStatement.java  +4 -0
h2/src/main/org/h2/res/help.csv  +1 -1
h2/src/main/org/h2/store/DiskFile.java  +455 -414
h2/src/main/org/h2/store/Storage.java  +30 -26
h2/src/main/org/h2/table/TableData.java  +5 -5
h2/src/test/org/h2/test/TestAll.java  +2 -3
h2/src/test/org/h2/test/jdbc/TestResultSet.java  +21 -15
h2/src/test/org/h2/test/unit/TestMultiThreadedKernel.java  +66 -0
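Most of the Java changes below follow one pattern: methods that used to declare `synchronized` on themselves (locking each helper object individually) now take a `synchronized (database)` block on the one shared Database instance, giving the multi-threaded kernel a single coarse lock. A minimal sketch of that pattern, with illustrative names rather than H2's actual classes:

```java
// Sketch of the lock-coarsening pattern applied in this commit:
// helpers synchronize on the shared owner instead of on themselves,
// so all kernel state is guarded by one monitor.
public class CoarseLockSketch {
    static final Object database = new Object(); // stand-in for the Database instance
    static int counter;

    // before: "static synchronized void increment()" would lock this class;
    // after: lock the shared owner object instead
    static void increment() {
        synchronized (database) {
            counter++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    increment();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            t.join(); // wait for all writers before reading the counter
        }
        System.out.println(counter); // 4000
    }
}
```

With a single shared monitor, two helpers can no longer deadlock by locking themselves in opposite orders, which is the kind of MULTI_THREADED=1 synchronization problem the release notes in this commit mention.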
h2/src/docsrc/html/features.html

@@ -271,6 +271,9 @@ It looks like the development of this database has stopped. The last release was
 </tr><tr>
 <td><a href="http://mywebpage.netscape.com/davidlbarron/javaplayer.html">JavaPlayer</a></td>
 <td>Pure Java MP3 player.</td>
+</tr><tr>
+<td><a href="http://jmatter.org/">JMatter</a></td>
+<td>Framework for constructing workgroup business applications based on the Naked Objects Architectural Pattern.</td>
 </tr><tr>
 <td><a href="http://www.jpox.org">JPOX</a></td>
 <td>Java persistent objects.</td>
...
h2/src/docsrc/html/history.html

@@ -40,7 +40,8 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
 <h3>Version 1.0 (Current)</h3>
 <h3>Version 1.0.59 (2007-09-TODO)</h3><ul>
-<li>A PreparedStatement that was cancelled could not be reused. Fixed.
+<li>Multi-threaded kernel (MULTI_THREADED=1): A synchronization problem has been fixed.
+</li><li>A PreparedStatement that was cancelled could not be reused. Fixed.
 </li><li>H2 Console: Progress information when logging into a H2 embedded database (useful when opening a database is slow).
 </li><li>When the database was closed while logging was disabled (LOG 0), re-opening the database was slow. Fixed.
 </li><li>Fulltext search is now documented (in the Tutorial).
...
@@ -782,6 +783,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
 </li><li>NATURAL JOIN: MySQL and PostgreSQL don't repeat columns when using SELECT * ...
 </li><li>Optimize SELECT MIN(ID), MAX(ID), COUNT(*) FROM TEST WHERE ID BETWEEN 100 AND 200
 </li><li>Support Oracle functions: TRUNC, NVL2, TO_CHAR, TO_DATE, TO_NUMBER
 </li><li>Support setQueryTimeout (using System.currentTimeMillis in a loop; not using a thread)
 </li></ul>
 <h3>Not Planned</h3>
...
h2/src/docsrc/html/performance.html

@@ -111,6 +111,8 @@ The reason is, a backup of the database is created whenever the database is opened
 Version 10.3.1.4 was used for the test. Derby is clearly the slowest embedded database in this test.
 This seems to be a structural problem, because all operations are really slow.
 It will not be easy for the developers of Derby to improve the performance to a reasonable level.
+A few problems have been identified: Leaving autocommit on is a problem for Derby.
+If it is switched off during the whole test, the results are about 20% better for Derby.
 </p>
 <h4>PostgreSQL</h4>
...
h2/src/main/org/h2/engine/Database.java

@@ -181,9 +181,7 @@ public class Database implements DataHandler {
                 TraceSystem.DEFAULT_TRACE_LEVEL_SYSTEM_OUT);
         this.cacheType = StringUtils.toUpperEnglish(ci.removeProperty("CACHE_TYPE", CacheLRU.TYPE_NAME));
         try {
-            synchronized (this) {
-                open(traceLevelFile, traceLevelSystemOut);
-            }
+            open(traceLevelFile, traceLevelSystemOut);
             if (closeAtVmShutdown) {
                 closeOnExit = new DatabaseCloser(this, 0, true);
                 try {
...
@@ -199,9 +197,7 @@ public class Database implements DataHandler {
                 traceSystem.getTrace(Trace.DATABASE).error("opening " + databaseName, e);
                 traceSystem.close();
             }
-            synchronized (this) {
-                closeOpenFilesAndUnlock();
-            }
+            closeOpenFilesAndUnlock();
             throw Message.convert(e);
         }
     }
...
@@ -413,7 +409,7 @@ public class Database implements DataHandler {
         return StringUtils.toUpperEnglish(n);
     }

-    private void open(int traceLevelFile, int traceLevelSystemOut) throws SQLException {
+    private synchronized void open(int traceLevelFile, int traceLevelSystemOut) throws SQLException {
         if (persistent) {
             String dataFileName = databaseName + Constants.SUFFIX_DATA_FILE;
             if (FileUtils.exists(dataFileName)) {
...
@@ -626,7 +622,7 @@ public class Database implements DataHandler {
         infoSchema.add(m);
     }

-    private void addMeta(Session session, DbObject obj) throws SQLException {
+    private synchronized void addMeta(Session session, DbObject obj) throws SQLException {
         if (obj.getTemporary()) {
             return;
         }
...
@@ -642,7 +638,7 @@ public class Database implements DataHandler {
         }
     }

-    private void removeMeta(Session session, int id) throws SQLException {
+    private synchronized void removeMeta(Session session, int id) throws SQLException {
         SearchRow r = meta.getTemplateSimpleRow(false);
         r.setValue(0, ValueInt.get(id));
         Cursor cursor = metaIdIndex.find(session, r, r);
...
@@ -685,7 +681,7 @@ public class Database implements DataHandler {
         }
     }

-    public void addSchemaObject(Session session, SchemaObject obj) throws SQLException {
+    public synchronized void addSchemaObject(Session session, SchemaObject obj) throws SQLException {
         obj.getSchema().add(obj);
         int id = obj.getId();
         if (id > 0 && !starting) {
...
@@ -693,7 +689,7 @@ public class Database implements DataHandler {
         }
     }

-    public void addDatabaseObject(Session session, DbObject obj) throws SQLException {
+    public synchronized void addDatabaseObject(Session session, DbObject obj) throws SQLException {
         HashMap map = getMap(obj.getType());
         if (obj.getType() == DbObject.USER) {
             User user = (User) obj;
...
@@ -781,23 +777,21 @@ public class Database implements DataHandler {
         }
     }

-    void close(boolean fromShutdownHook) {
-        synchronized (this) {
-            closing = true;
+    synchronized void close(boolean fromShutdownHook) {
+        closing = true;
         if (sessions.size() > 0) {
             if (!fromShutdownHook) {
                 return;
             }
             traceSystem.getTrace(Trace.DATABASE).info("closing " + databaseName + " from shutdown hook");
             Session[] all = new Session[sessions.size()];
             sessions.toArray(all);
             for (int i = 0; i < all.length; i++) {
                 Session s = all[i];
                 try {
                     s.close();
                 } catch (SQLException e) {
                     traceSystem.getTrace(Trace.SESSION).error("disconnecting #" + s.getId(), e);
                 }
             }
         }
...
@@ -872,7 +866,7 @@ public class Database implements DataHandler {
         }
     }

-    private void closeOpenFilesAndUnlock() throws SQLException {
+    private synchronized void closeOpenFilesAndUnlock() throws SQLException {
         if (log != null) {
             stopWriter();
             log.close();
...
@@ -923,7 +917,7 @@ public class Database implements DataHandler {
         }
     }

-    public int allocateObjectId(boolean needFresh, boolean dataFile) {
+    public synchronized int allocateObjectId(boolean needFresh, boolean dataFile) {
         // TODO refactor: use hash map instead of bit field for object ids
         needFresh = true;
         int i;
...
@@ -1008,18 +1002,18 @@ public class Database implements DataHandler {
         return list;
     }

-    public void update(Session session, DbObject obj) throws SQLException {
+    public synchronized void update(Session session, DbObject obj) throws SQLException {
         int id = obj.getId();
         removeMeta(session, id);
         addMeta(session, obj);
     }

-    public void renameSchemaObject(Session session, SchemaObject obj, String newName) throws SQLException {
+    public synchronized void renameSchemaObject(Session session, SchemaObject obj, String newName) throws SQLException {
         obj.getSchema().rename(obj, newName);
         updateWithChildren(session, obj);
     }

-    private void updateWithChildren(Session session, DbObject obj) throws SQLException {
+    private synchronized void updateWithChildren(Session session, DbObject obj) throws SQLException {
         ObjectArray list = obj.getChildren();
         Comment comment = findComment(obj);
         if (comment != null) {
...
@@ -1035,7 +1029,7 @@ public class Database implements DataHandler {
         }
     }

-    public void renameDatabaseObject(Session session, DbObject obj, String newName) throws SQLException {
+    public synchronized void renameDatabaseObject(Session session, DbObject obj, String newName) throws SQLException {
         int type = obj.getType();
         HashMap map = getMap(type);
         if (SysProperties.CHECK) {
...
@@ -1131,7 +1125,7 @@ public class Database implements DataHandler {
         return schema;
     }

-    public void removeDatabaseObject(Session session, DbObject obj) throws SQLException {
+    public synchronized void removeDatabaseObject(Session session, DbObject obj) throws SQLException {
         String objName = obj.getName();
         int type = obj.getType();
         HashMap map = getMap(type);
...
@@ -1163,7 +1157,7 @@ public class Database implements DataHandler {
         return null;
     }

-    public void removeSchemaObject(Session session, SchemaObject obj) throws SQLException {
+    public synchronized void removeSchemaObject(Session session, SchemaObject obj) throws SQLException {
         if (obj.getType() == DbObject.TABLE_OR_VIEW) {
             Table table = (Table) obj;
             if (table.getTemporary() && !table.getGlobalTemporary()) {
...
@@ -1202,24 +1196,18 @@ public class Database implements DataHandler {
         return fileIndex;
     }

-    public void setCacheSize(int kb) throws SQLException {
+    public synchronized void setCacheSize(int kb) throws SQLException {
         if (fileData != null) {
-            synchronized (fileData) {
-                fileData.getCache().setMaxSize(kb);
-            }
+            fileData.getCache().setMaxSize(kb);
             int valueIndex = kb <= 32 ? kb : (kb >>> SysProperties.CACHE_SIZE_INDEX_SHIFT);
-            synchronized (fileIndex) {
-                fileIndex.getCache().setMaxSize(valueIndex);
-            }
+            fileIndex.getCache().setMaxSize(valueIndex);
             cacheSize = kb;
         }
     }

-    public void setMasterUser(User user) throws SQLException {
-        synchronized (this) {
-            addDatabaseObject(systemSession, user);
-            systemSession.commit(true);
-        }
+    public synchronized void setMasterUser(User user) throws SQLException {
+        addDatabaseObject(systemSession, user);
+        systemSession.commit(true);
     }

     public Role getPublicRole() {
...
@@ -1295,7 +1283,7 @@ public class Database implements DataHandler {
         }
     }

-    public void freeUpDiskSpace() throws SQLException {
+    public synchronized void freeUpDiskSpace() throws SQLException {
         long sizeAvailable = 0;
         if (emergencyReserve != null) {
             sizeAvailable = emergencyReserve.length();
...
@@ -1383,7 +1371,7 @@ public class Database implements DataHandler {
         return logIndexChanges;
     }

-    public void setLog(int level) throws SQLException {
+    public synchronized void setLog(int level) throws SQLException {
         if (logLevel == level) {
             return;
         }
...
@@ -1495,7 +1483,7 @@ public class Database implements DataHandler {
         }
     }

-    public void setMaxLogSize(long value) {
+    public synchronized void setMaxLogSize(long value) {
         long minLogSize = biggestFileSize / Constants.LOG_SIZE_DIVIDER;
         minLogSize = Math.max(value, minLogSize);
         long currentLogSize = getLog().getMaxLogSize();
...
h2/src/main/org/h2/engine/Session.java

@@ -70,6 +70,7 @@ public class Session implements SessionInterface {
     private boolean undoLogEnabled = true;
     private boolean autoCommitAtTransactionEnd;
     private String currentTransactionName;
+    private boolean isClosed;

     public Session() {
     }
...
@@ -109,7 +110,7 @@ public class Session implements SessionInterface {
         if (!SysProperties.runFinalize) {
             return;
         }
-        if (database != null) {
+        if (!isClosed) {
             throw Message.getInternalError("not closed", stackTrace);
         }
     }
...
@@ -164,7 +165,7 @@ public class Session implements SessionInterface {
     }

     public Command prepareLocal(String sql) throws SQLException {
-        if (database == null) {
+        if (isClosed) {
             throw Message.getSQLException(ErrorCode.CONNECTION_BROKEN);
         }
         Parser parser = new Parser(this);
...
@@ -176,13 +177,11 @@ public class Session implements SessionInterface {
     }

     public int getPowerOffCount() {
-        return database == null ? 0 : database.getPowerOffCount();
+        return database.getPowerOffCount();
     }

     public void setPowerOffCount(int count) {
-        if (database != null) {
-            database.setPowerOffCount(count);
-        }
+        database.setPowerOffCount(count);
     }

     public void commit(boolean ddl) throws SQLException {
...
@@ -277,12 +276,12 @@ public class Session implements SessionInterface {
     }

     public void close() throws SQLException {
-        if (database != null) {
+        if (!isClosed) {
             try {
                 cleanTempTables(true);
                 database.removeSession(this);
             } finally {
-                database = null;
+                isClosed = true;
             }
         }
     }
...
@@ -320,24 +319,28 @@ public class Session implements SessionInterface {
         for (int i = 0; i < locks.size(); i++) {
             Table t = (Table) locks.get(i);
             if (!t.isLockedExclusively()) {
-                t.unlock(this);
-                locks.remove(i);
+                synchronized (database) {
+                    t.unlock(this);
+                    locks.remove(i);
+                }
                 i--;
             }
         }
     }

     private void unlockAll() throws SQLException {
         if (SysProperties.CHECK) {
             if (undoLog.size() > 0) {
                 throw Message.getInternalError();
             }
         }
-        for (int i = 0; i < locks.size(); i++) {
-            Table t = (Table) locks.get(i);
-            t.unlock(this);
-        }
-        locks.clear();
+        synchronized (database) {
+            for (int i = 0; i < locks.size(); i++) {
+                Table t = (Table) locks.get(i);
+                t.unlock(this);
+            }
+            locks.clear();
+        }
         savepoints = null;
     }
...
@@ -368,7 +371,7 @@ public class Session implements SessionInterface {
         if (traceModuleName == null) {
             traceModuleName = Trace.JDBC + "[" + id + "]";
         }
-        if (database == null) {
+        if (isClosed) {
             return new TraceSystem(null, false).getTrace(traceModuleName);
         }
         return database.getTrace(traceModuleName);
...
@@ -460,7 +463,7 @@ public class Session implements SessionInterface {
     }

     public boolean isClosed() {
-        return database == null;
+        return isClosed;
     }

     public void setThrottle(int throttle) {
...
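The Session change replaces the "database reference is null" convention with an explicit `isClosed` flag, so the reference stays usable after close and late callers get a clear closed/open answer instead of a NullPointerException. A minimal sketch of the idea (illustrative names, not H2's Session class):

```java
// Sketch: track closure with a boolean flag rather than by nulling the
// owning reference. Callers that still need the reference after close
// (e.g. for tracing) can keep using it safely.
public class SessionSketch {
    private final Object database = new Object(); // stand-in for the Database reference
    private boolean isClosed;

    void close() {
        if (!isClosed) {
            isClosed = true; // keep `database` non-null; just mark the state
        }
    }

    boolean isClosed() {
        return isClosed; // before the change this was: database == null
    }
}
```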
h2/src/main/org/h2/expression/Function.java

@@ -1419,6 +1419,52 @@ public class Function extends Expression implements FunctionCall {
     }

+    public long getPrecision() {
+        switch (info.type) {
+        case ENCRYPT:
+        case DECRYPT:
+            precision = args[2].getPrecision();
+            break;
+        case COMPRESS:
+            precision = args[0].getPrecision();
+            break;
+        case CHAR:
+            precision = 1;
+            break;
+        case CONCAT:
+            precision = 0;
+            for (int i = 0; i < args.length; i++) {
+                precision += args[i].getPrecision();
+                if (precision < 0) {
+                    precision = Long.MAX_VALUE;
+                }
+            }
+            break;
+        case HEXTORAW:
+            precision = (args[0].getPrecision() + 3) / 4;
+            break;
+        case LCASE:
+        case LTRIM:
+        case RIGHT:
+        case RTRIM:
+        case UCASE:
+        case LOWER:
+        case UPPER:
+        case TRIM:
+        case STRINGDECODE:
+        case UTF8TOSTRING:
+            precision = args[0].getPrecision();
+            break;
+        case RAWTOHEX:
+            precision = args[0].getPrecision() * 4;
+            break;
+        case SOUNDEX:
+            precision = 4;
+            break;
+        case DAYNAME:
+        case MONTHNAME:
+            precision = 20; // day and month names may be long in some languages
+            break;
+        }
+        return precision;
+    }
...
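The HEXTORAW/RAWTOHEX cases above use a common ceiling-division idiom: if each character corresponds to four hex digits, the raw precision is the hex precision divided by 4, rounded up, written as `(n + 3) / 4`. A small sketch of just that arithmetic (helper names are illustrative, not H2 code):

```java
// Sketch of the precision arithmetic in Function.getPrecision():
// (n + 3) / 4 is integer ceiling division by 4, and the inverse
// direction simply multiplies by 4.
public class CeilDivSketch {
    static long hexToRawPrecision(long hexLen) {
        return (hexLen + 3) / 4; // ceil(hexLen / 4) using integer math
    }

    static long rawToHexPrecision(long rawLen) {
        return rawLen * 4;
    }

    public static void main(String[] args) {
        System.out.println(hexToRawPrecision(8)); // 2 (exact multiple)
        System.out.println(hexToRawPrecision(9)); // 3 (rounds up)
        System.out.println(rawToHexPrecision(2)); // 8
    }
}
```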
h2/src/main/org/h2/expression/Operation.java

@@ -188,7 +188,12 @@ public class Operation extends Expression {
     public long getPrecision() {
         if (right != null) {
-            return Math.max(left.getPrecision(), right.getPrecision());
+            switch (opType) {
+            case CONCAT:
+                return left.getPrecision() + right.getPrecision();
+            default:
+                return Math.max(left.getPrecision(), right.getPrecision());
+            }
         }
         return left.getPrecision();
     }
...
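The Operation.java change special-cases string concatenation: the result of `a || b` can be as long as both operands together, so its precision must be the sum, while other binary operations keep the maximum. A small sketch of why (stand-in methods, not H2's Expression API):

```java
// Sketch: concatenation precision is additive, not max.
// "ab" || "cde" can produce up to 5 characters, so max(2, 3) = 3
// would under-report the result length.
public class PrecisionSketch {
    static long concatPrecision(long left, long right) {
        return left + right;
    }

    static long defaultPrecision(long left, long right) {
        return Math.max(left, right);
    }

    public static void main(String[] args) {
        System.out.println(concatPrecision(2, 3));  // 5
        System.out.println(defaultPrecision(2, 3)); // 3
    }
}
```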
h2/src/main/org/h2/jdbc/JdbcConnection.java

@@ -1237,7 +1237,7 @@ public class JdbcConnection extends TraceObject implements Connection {
         int id = getNextId(TraceObject.RESULT_SET);
         if (debug()) {
             debugCodeAssign("ResultSet", TraceObject.RESULT_SET, id);
-            debugCodeCall("executeQuery", "CALL IDENTITY()");
+            statement.debugCodeCallMe("executeQuery", "CALL IDENTITY()");
         }
         ResultSet rs = new JdbcResultSet(session, this, statement, result, id, false, true);
         return rs;
...
h2/src/main/org/h2/jdbc/JdbcStatement.java

@@ -888,6 +888,10 @@ public class JdbcStatement extends TraceObject implements Statement {
             debugCode("setPoolable(" + poolable + ");");
         }
     }

+    void debugCodeCallMe(String text, String param) {
+        debugCodeCall(text, param);
+    }
 }
h2/src/main/org/h2/res/help.csv

@@ -2090,7 +2090,7 @@ If a start position is used, the characters before it are ignored.
 INSTR(EMAIL,'@')
 "
 "Functions (String)","INSERT Function","
-INSERT(originalString, startInt, lengthInt, addInt): string
+INSERT(originalString, startInt, lengthInt, addString): string
 ","
 Inserts a additional string into the original string at a specified start position.
 The length specifies the number of characters that are removed at the start position
...
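The corrected signature documents INSERT's fourth argument as a string: remove `lengthInt` characters at the 1-based `startInt` position and splice in `addString`. A sketch of those documented semantics as a plain Java re-implementation (illustrative only, not H2's actual function code):

```java
// Sketch of the documented INSERT(originalString, startInt, lengthInt,
// addString) behavior: delete `length` characters at 1-based `start`,
// then insert `add` at that position.
public class InsertFunctionSketch {
    static String insert(String original, int start, int length, String add) {
        int i = start - 1; // SQL string positions are 1-based
        return original.substring(0, i) + add + original.substring(i + length);
    }

    public static void main(String[] args) {
        System.out.println(insert("Hello world", 7, 5, "there")); // Hello there
    }
}
```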
h2/src/main/org/h2/store/DiskFile.java

@@ -145,50 +145,52 @@ public class DiskFile implements CacheWriter {
         }
     }

-    public synchronized byte[] getSummary() throws SQLException {
+    public byte[] getSummary() throws SQLException {
+        synchronized (database) {
             try {
                 ByteArrayOutputStream buff = new ByteArrayOutputStream();
                 DataOutputStream out = new DataOutputStream(buff);
                 int blocks = (int) ((file.length() - OFFSET) / BLOCK_SIZE);
                 out.writeInt(blocks);
                 for (int i = 0, x = 0; i < blocks / 8; i++) {
                     int mask = 0;
                     for (int j = 0; j < 8; j++) {
                         if (used.get(x)) {
                             mask |= 1 << j;
                         }
                         x++;
                     }
                     out.write(mask);
                 }
                 out.writeInt(pageOwners.size());
                 ObjectArray storages = new ObjectArray();
                 for (int i = 0; i < pageOwners.size(); i++) {
                     int s = pageOwners.get(i);
                     out.writeInt(s);
                     if (s >= 0 && (s >= storages.size() || storages.get(s) == null)) {
                         Storage storage = database.getStorage(s, this);
                         while (storages.size() <= s) {
                             storages.add(null);
                         }
                         storages.set(s, storage);
                     }
                 }
                 for (int i = 0; i < storages.size(); i++) {
                     Storage storage = (Storage) storages.get(i);
                     if (storage != null) {
                         out.writeInt(i);
                         out.writeInt(storage.getRecordCount());
                     }
                 }
                 out.writeInt(-1);
                 out.close();
                 byte[] b2 = buff.toByteArray();
                 return b2;
             } catch (IOException e) {
                 // will probably never happen, because only in-memory structures are used
                 return null;
             }
+        }
     }
...
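The getSummary() loop above packs the per-block "used" flags eight at a time into one byte, setting bit j of the mask when block x is in use. That encoding can be sketched in isolation (illustrative class, not H2 code):

```java
// Sketch of the used-block bitmap encoding in DiskFile.getSummary():
// eight boolean flags become one byte, least significant bit first.
import java.util.BitSet;

public class BitmapSketch {
    static int packByte(BitSet used, int start) {
        int mask = 0;
        for (int j = 0; j < 8; j++) {
            if (used.get(start + j)) {
                mask |= 1 << j; // bit j <-> block (start + j)
            }
        }
        return mask;
    }

    public static void main(String[] args) {
        BitSet used = new BitSet();
        used.set(0); // block 0 in use -> bit 0
        used.set(3); // block 3 in use -> bit 3
        System.out.println(packByte(used, 0)); // 9 (binary 0000_1001)
    }
}
```

The matching decoder in initFromSummary() reverses this with `(mask & (1 << j)) != 0`, so the summary written on close can rebuild the in-memory bitmap on open.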
@@ -201,167 +203,176 @@ public class DiskFile implements CacheWriter {
         return true;
     }

-    public synchronized void initFromSummary(byte[] summary) {
+    public void initFromSummary(byte[] summary) {
+        synchronized (database) {
             if (summary == null || summary.length == 0) {
                 ObjectArray list = database.getAllStorages();
                 for (int i = 0; i < list.size(); i++) {
                     Storage s = (Storage) list.get(i);
                     if (s != null && s.getDiskFile() == this) {
                         database.removeStorage(s.getId(), this);
                     }
                 }
                 reset();
                 initAlreadyTried = false;
                 init = false;
                 return;
             }
             if (database.getRecovery() || initAlreadyTried) {
                 return;
             }
             initAlreadyTried = true;
             int stage = 0;
             try {
                 DataInputStream in = new DataInputStream(new ByteArrayInputStream(summary));
                 int b2 = in.readInt();
                 if (b2 > fileBlockCount) {
                     database.getTrace(Trace.DATABASE).info(
                             "unexpected size " + b2 + " when initializing summary for " + fileName
                             + " expected:" + fileBlockCount);
                     return;
                 }
                 stage++;
                 for (int i = 0, x = 0; i < b2 / 8; i++) {
                     int mask = in.read();
                     for (int j = 0; j < 8; j++) {
                         if ((mask & (1 << j)) != 0) {
                             used.set(x);
                         }
                         x++;
                     }
                 }
                 stage++;
                 int len = in.readInt();
                 ObjectArray storages = new ObjectArray();
                 for (int i = 0; i < len; i++) {
                     int s = in.readInt();
                     if (s >= 0) {
                         Storage storage = database.getStorage(s, this);
                         while (storages.size() <= s) {
                             storages.add(null);
                         }
                         storages.set(s, storage);
                         storage.addPage(i);
                     }
                     setPageOwner(i, s);
                 }
                 stage++;
                 while (true) {
                     int s = in.readInt();
                     if (s < 0) {
                         break;
                     }
                     int recordCount = in.readInt();
                     Storage storage = (Storage) storages.get(s);
                     storage.setRecordCount(recordCount);
                 }
                 stage++;
                 freeUnusedPages();
                 init = true;
             } catch (Exception e) {
                 database.getTrace(Trace.DATABASE).error(
                         "error initializing summary for " + fileName + " size:" + summary.length
                         + " stage:" + stage, e);
                 // ignore - init is still false in this case
             }
+        }
     }

-    public synchronized void init() throws SQLException {
+    public void init() throws SQLException {
+        synchronized (database) {
             if (init) {
                 return;
             }
             ObjectArray storages = database.getAllStorages();
             for (int i = 0; i < storages.size(); i++) {
                 Storage s = (Storage) storages.get(i);
                 if (s != null && s.getDiskFile() == this) {
                     s.setRecordCount(0);
                 }
             }
             int blockHeaderLen = Math.max(Constants.FILE_BLOCK_SIZE, 2 * rowBuff.getIntLen());
             byte[] buff = new byte[blockHeaderLen];
             DataPage s = DataPage.create(database, buff);
             long time = 0;
             for (int i = 0; i < fileBlockCount;) {
                 long t2 = System.currentTimeMillis();
                 if (t2 > time + 10) {
                     time = t2;
                     database.setProgress(DatabaseEventListener.STATE_SCAN_FILE, this.fileName, i, fileBlockCount);
                 }
                 go(i);
                 file.readFully(buff, 0, blockHeaderLen);
                 s.reset();
                 int blockCount = s.readInt();
                 if (SysProperties.CHECK && blockCount < 0) {
                     throw Message.getInternalError();
                 }
                 if (blockCount == 0) {
                     setUnused(i, 1);
                     i++;
                 } else {
                     int id = s.readInt();
                     if (SysProperties.CHECK && id < 0) {
                         throw Message.getInternalError();
                     }
                     Storage storage = database.getStorage(id, this);
                     setBlockOwner(storage, i, blockCount, true);
                     storage.incrementRecordCount();
                     i += blockCount;
                 }
             }
             database.setProgress(DatabaseEventListener.STATE_SCAN_FILE, this.fileName, fileBlockCount, fileBlockCount);
             init = true;
+        }
     }

-    public synchronized void flush() throws SQLException {
+    public void flush() throws SQLException {
+        synchronized (database) {
             database.checkPowerOff();
             ObjectArray list = cache.getAllChanged();
             CacheObject.sort(list);
             for (int i = 0; i < list.size(); i++) {
                 Record rec = (Record) list.get(i);
                 writeBack(rec);
             }
             // TODO flush performance: maybe it would be faster to write records in
             // the same loop
             for (int i = 0; i < fileBlockCount; i++) {
                 i = deleted.nextSetBit(i);
                 if (i < 0) {
                     break;
                 }
                 if (deleted.get(i)) {
                     writeDirectDeleted(i, 1);
                     deleted.clear(i);
                 }
             }
+        }
     }

-    public synchronized void close() throws SQLException {
+    public void close() throws SQLException {
+        synchronized (database) {
             SQLException closeException = null;
             if (!database.getReadOnly()) {
                 try {
                     flush();
                 } catch (SQLException e) {
                     closeException = e;
                 }
             }
             cache.clear();
             // continue with close even if flush was not possible (file storage
             // problem)
             if (file != null) {
                 file.closeSilently();
                 file = null;
             }
             if (closeException != null) {
                 throw closeException;
             }
             readCount = writeCount = 0;
+        }
     }

     private void go(int block) throws SQLException {
...
@@ -372,34 +383,36 @@ public class DiskFile implements CacheWriter {
         return ((long) block * BLOCK_SIZE) + OFFSET;
     }

-    synchronized Record getRecordIfStored(Session session, int pos, RecordReader reader, int storageId)
+    Record getRecordIfStored(Session session, int pos, RecordReader reader, int storageId)
             throws SQLException {
+        synchronized (database) {
             try {
                 int owner = getPageOwner(getPage(pos));
                 if (owner != storageId) {
                     return null;
                 }
                 go(pos);
                 rowBuff.reset();
                 byte[] buff = rowBuff.getBytes();
                 file.readFully(buff, 0, BLOCK_SIZE);
                 DataPage s = DataPage.create(database, buff);
                 s.readInt(); // blockCount
                 int id = s.readInt();
                 if (id != storageId) {
                     return null;
                 }
             } catch (Exception e) {
                 return null;
             }
             return getRecord(session, pos, reader, storageId);
+        }
     }

-    synchronized Record getRecord(Session session, int pos, RecordReader reader, int storageId) throws SQLException {
-        if (file == null) {
-            throw Message.getSQLException(ErrorCode.SIMULATED_POWER_OFF);
-        }
-        synchronized (this) {
+    Record getRecord(Session session, int pos, RecordReader reader, int storageId) throws SQLException {
+        synchronized (database) {
+            if (file == null) {
+                throw Message.getSQLException(ErrorCode.SIMULATED_POWER_OFF);
+            }
             Record record = (Record) cache.get(pos);
             if (record != null) {
                 return record;
...
@@ -438,50 +451,52 @@ public class DiskFile implements CacheWriter {
         }
     }
 
-    synchronized int allocate(Storage storage, int blockCount) throws SQLException {
-        if (file == null) {
-            throw Message.getSQLException(ErrorCode.SIMULATED_POWER_OFF);
-        }
-        blockCount = getPage(blockCount + BLOCKS_PER_PAGE - 1) * BLOCKS_PER_PAGE;
-        int lastPage = getPage(getBlockCount());
-        int pageCount = getPage(blockCount);
-        int pos = -1;
-        boolean found = false;
-        for (int i = 0; i < lastPage; i++) {
-            found = true;
-            for (int j = i; j < i + pageCount; j++) {
-                if (j >= lastPage || getPageOwner(j) != -1) {
-                    found = false;
-                    break;
-                }
-            }
-            if (found) {
-                pos = i * BLOCKS_PER_PAGE;
-                break;
-            }
-        }
-        if (!found) {
-            int max = getBlockCount();
-            pos = MathUtils.roundUp(max, BLOCKS_PER_PAGE);
-            if (rowBuff instanceof DataPageText) {
-                if (pos > max) {
-                    writeDirectDeleted(max, pos - max);
-                }
-                writeDirectDeleted(pos, blockCount);
-            } else {
-                long min = ((long) pos + blockCount) * BLOCK_SIZE;
-                min = MathUtils.scaleUp50Percent(Constants.FILE_MIN_SIZE, min, Constants.FILE_PAGE_SIZE, Constants.FILE_MAX_INCREMENT) + OFFSET;
-                if (min > file.length()) {
-                    file.setLength(min);
-                    database.notifyFileSize(min);
-                }
-            }
-        }
-        setBlockOwner(storage, pos, blockCount, false);
-        for (int i = 0; i < blockCount; i++) {
-            storage.free(i + pos, 1);
-        }
-        return pos;
-    }
+    int allocate(Storage storage, int blockCount) throws SQLException {
+        synchronized (database) {
+            if (file == null) {
+                throw Message.getSQLException(ErrorCode.SIMULATED_POWER_OFF);
+            }
+            blockCount = getPage(blockCount + BLOCKS_PER_PAGE - 1) * BLOCKS_PER_PAGE;
+            int lastPage = getPage(getBlockCount());
+            int pageCount = getPage(blockCount);
+            int pos = -1;
+            boolean found = false;
+            for (int i = 0; i < lastPage; i++) {
+                found = true;
+                for (int j = i; j < i + pageCount; j++) {
+                    if (j >= lastPage || getPageOwner(j) != -1) {
+                        found = false;
+                        break;
+                    }
+                }
+                if (found) {
+                    pos = i * BLOCKS_PER_PAGE;
+                    break;
+                }
+            }
+            if (!found) {
+                int max = getBlockCount();
+                pos = MathUtils.roundUp(max, BLOCKS_PER_PAGE);
+                if (rowBuff instanceof DataPageText) {
+                    if (pos > max) {
+                        writeDirectDeleted(max, pos - max);
+                    }
+                    writeDirectDeleted(pos, blockCount);
+                } else {
+                    long min = ((long) pos + blockCount) * BLOCK_SIZE;
+                    min = MathUtils.scaleUp50Percent(Constants.FILE_MIN_SIZE, min, Constants.FILE_PAGE_SIZE, Constants.FILE_MAX_INCREMENT) + OFFSET;
+                    if (min > file.length()) {
+                        file.setLength(min);
+                        database.notifyFileSize(min);
+                    }
+                }
+            }
+            setBlockOwner(storage, pos, blockCount, false);
+            for (int i = 0; i < blockCount; i++) {
+                storage.free(i + pos, 1);
+            }
+            return pos;
+        }
+    }
 
     private void setBlockOwner(Storage storage, int pos, int blockCount, boolean inUse) throws SQLException {
...
@@ -550,24 +565,28 @@ public class DiskFile implements CacheWriter {
         pageOwners.set(page, storageId);
     }
 
-    synchronized void setUsed(int pos, int blockCount) {
-        if (pos + blockCount > fileBlockCount) {
-            setBlockCount(pos + blockCount);
-        }
-        used.setRange(pos, blockCount, true);
-        deleted.setRange(pos, blockCount, false);
-    }
+    void setUsed(int pos, int blockCount) {
+        synchronized (database) {
+            if (pos + blockCount > fileBlockCount) {
+                setBlockCount(pos + blockCount);
+            }
+            used.setRange(pos, blockCount, true);
+            deleted.setRange(pos, blockCount, false);
+        }
+    }
 
-    public synchronized void delete() throws SQLException {
-        try {
-            cache.clear();
-            file.close();
-            FileUtils.delete(fileName);
-        } catch (IOException e) {
-            throw Message.convertIOException(e, fileName);
-        } finally {
-            file = null;
-            fileName = null;
-        }
-    }
+    public void delete() throws SQLException {
+        synchronized (database) {
+            try {
+                cache.clear();
+                file.close();
+                FileUtils.delete(fileName);
+            } catch (IOException e) {
+                throw Message.convertIOException(e, fileName);
+            } finally {
+                file = null;
+                fileName = null;
+            }
+        }
+    }
...
@@ -584,10 +603,10 @@ public class DiskFile implements CacheWriter {
     // return start;
     // }
 
-    public synchronized void writeBack(CacheObject obj) throws SQLException {
-        writeCount++;
-        Record record = (Record) obj;
-        synchronized (this) {
+    public void writeBack(CacheObject obj) throws SQLException {
+        synchronized (database) {
+            writeCount++;
+            Record record = (Record) obj;
             int blockCount = record.getBlockCount();
             record.prepareWrite();
             go(record.getPos());
...
@@ -600,8 +619,8 @@ public class DiskFile implements CacheWriter {
             buff.fill(blockCount * BLOCK_SIZE);
             buff.updateChecksum();
             file.write(buff.getBytes(), 0, buff.length());
-        record.setChanged(false);
-    }
+            record.setChanged(false);
+        }
+    }
 
     /*
...
@@ -611,31 +630,33 @@ public class DiskFile implements CacheWriter {
         return used;
     }
 
-    synchronized void updateRecord(Session session, Record record) throws SQLException {
-        record.setChanged(true);
-        int pos = record.getPos();
-        Record old = (Record) cache.update(pos, record);
-        if (SysProperties.CHECK) {
-            if (old != null) {
-                if (old != record) {
-                    database.checkPowerOff();
-                    throw Message.getInternalError("old != record old=" + old + " new=" + record);
-                }
-                int blockCount = record.getBlockCount();
-                for (int i = 0; i < blockCount; i++) {
-                    if (deleted.get(i + pos)) {
-                        throw Message.getInternalError("update marked as deleted: " + (i + pos));
-                    }
-                }
-            }
-        }
-        if (logChanges) {
-            log.add(session, this, record);
-        }
-    }
+    void updateRecord(Session session, Record record) throws SQLException {
+        synchronized (database) {
+            record.setChanged(true);
+            int pos = record.getPos();
+            Record old = (Record) cache.update(pos, record);
+            if (SysProperties.CHECK) {
+                if (old != null) {
+                    if (old != record) {
+                        database.checkPowerOff();
+                        throw Message.getInternalError("old != record old=" + old + " new=" + record);
+                    }
+                    int blockCount = record.getBlockCount();
+                    for (int i = 0; i < blockCount; i++) {
+                        if (deleted.get(i + pos)) {
+                            throw Message.getInternalError("update marked as deleted: " + (i + pos));
+                        }
+                    }
+                }
+            }
+            if (logChanges) {
+                log.add(session, this, record);
+            }
+        }
+    }
 
-    synchronized void writeDirectDeleted(int recordId, int blockCount) throws SQLException {
-        synchronized (this) {
+    void writeDirectDeleted(int recordId, int blockCount) throws SQLException {
+        synchronized (database) {
             go(recordId);
             for (int i = 0; i < blockCount; i++) {
                 file.write(freeBlock.getBytes(), 0, freeBlock.length());
...
@@ -644,73 +665,81 @@ public class DiskFile implements CacheWriter {
         }
     }
 
-    synchronized void writeDirect(Storage storage, int pos, byte[] data, int offset) throws SQLException {
-        go(pos);
-        file.write(data, offset, BLOCK_SIZE);
-        setBlockOwner(storage, pos, 1, true);
-    }
+    void writeDirect(Storage storage, int pos, byte[] data, int offset) throws SQLException {
+        synchronized (database) {
+            go(pos);
+            file.write(data, offset, BLOCK_SIZE);
+            setBlockOwner(storage, pos, 1, true);
+        }
+    }
 
-    public synchronized int copyDirect(int pos, OutputStream out) throws SQLException {
-        try {
+    public int copyDirect(int pos, OutputStream out) throws SQLException {
+        synchronized (database) {
+            try {
                 if (pos < 0) {
                     // read the header
                     byte[] buffer = new byte[OFFSET];
                     file.seek(0);
                     file.readFullyDirect(buffer, 0, OFFSET);
                     out.write(buffer);
                     return 0;
                 }
                 if (pos >= fileBlockCount) {
                     return -1;
                 }
                 int blockSize = DiskFile.BLOCK_SIZE;
                 byte[] buff = new byte[blockSize];
                 DataPage s = DataPage.create(database, buff);
                 database.setProgress(DatabaseEventListener.STATE_BACKUP_FILE, this.fileName, pos, fileBlockCount);
                 go(pos);
                 file.readFully(buff, 0, blockSize);
                 s.reset();
                 int blockCount = s.readInt();
                 if (SysProperties.CHECK && blockCount < 0) {
                     throw Message.getInternalError();
                 }
                 if (blockCount == 0) {
                     blockCount = 1;
                 }
                 int id = s.readInt();
                 if (SysProperties.CHECK && id < 0) {
                     throw Message.getInternalError();
                 }
                 s.checkCapacity(blockCount * blockSize);
                 if (blockCount > 1) {
                     file.readFully(s.getBytes(), blockSize, blockCount * blockSize - blockSize);
                 }
                 if (file.isEncrypted()) {
                     s.reset();
                     go(pos);
                     file.readFullyDirect(s.getBytes(), 0, blockCount * blockSize);
                 }
                 out.write(s.getBytes(), 0, blockCount * blockSize);
                 return pos + blockCount;
-        } catch (IOException e) {
-            throw Message.convertIOException(e, fileName);
-        }
-    }
+            } catch (IOException e) {
+                throw Message.convertIOException(e, fileName);
+            }
+        }
+    }
 
-    synchronized void removeRecord(Session session, int pos, Record record, int blockCount) throws SQLException {
-        if (logChanges) {
-            log.add(session, this, record);
-        }
-        cache.remove(pos);
-        deleted.setRange(pos, blockCount, true);
-        setUnused(pos, blockCount);
-    }
+    void removeRecord(Session session, int pos, Record record, int blockCount) throws SQLException {
+        synchronized (database) {
+            if (logChanges) {
+                log.add(session, this, record);
+            }
+            cache.remove(pos);
+            deleted.setRange(pos, blockCount, true);
+            setUnused(pos, blockCount);
+        }
+    }
 
-    synchronized void addRecord(Session session, Record record) throws SQLException {
-        if (logChanges) {
-            log.add(session, this, record);
-        }
-        cache.put(record);
-    }
+    void addRecord(Session session, Record record) throws SQLException {
+        synchronized (database) {
+            if (logChanges) {
+                log.add(session, this, record);
+            }
+            cache.put(record);
+        }
+    }
 
     /*
...
@@ -720,47 +749,53 @@ public class DiskFile implements CacheWriter {
         return cache;
     }
 
-    synchronized void free(int pos, int blockCount) {
-        used.setRange(pos, blockCount, false);
-    }
+    void free(int pos, int blockCount) {
+        synchronized (database) {
+            used.setRange(pos, blockCount, false);
+        }
+    }
 
     public int getRecordOverhead() {
         return recordOverhead;
     }
 
-    public synchronized void truncateStorage(Session session, Storage storage, IntArray pages) throws SQLException {
-        int storageId = storage.getId();
-        // make sure the cache records of this storage are not flushed to disk
-        // afterwards
-        ObjectArray list = cache.getAllChanged();
-        for (int i = 0; i < list.size(); i++) {
-            Record r = (Record) list.get(i);
-            if (r.getStorageId() == storageId) {
-                r.setChanged(false);
-            }
-        }
-        int[] pagesCopy = new int[pages.size()];
-        // can not use pages directly, because setUnused removes rows from there
-        pages.toArray(pagesCopy);
-        for (int i = 0; i < pagesCopy.length; i++) {
-            int page = pagesCopy[i];
-            if (logChanges) {
-                log.addTruncate(session, this, storageId, page * BLOCKS_PER_PAGE, BLOCKS_PER_PAGE);
-            }
-            for (int j = 0; j < BLOCKS_PER_PAGE; j++) {
-                Record r = (Record) cache.find(page * BLOCKS_PER_PAGE + j);
-                if (r != null) {
-                    cache.remove(r.getPos());
-                }
-            }
-            deleted.setRange(page * BLOCKS_PER_PAGE, BLOCKS_PER_PAGE, true);
-            setUnused(page * BLOCKS_PER_PAGE, BLOCKS_PER_PAGE);
-        }
-    }
+    public void truncateStorage(Session session, Storage storage, IntArray pages) throws SQLException {
+        synchronized (database) {
+            int storageId = storage.getId();
+            // make sure the cache records of this storage are not flushed to disk
+            // afterwards
+            ObjectArray list = cache.getAllChanged();
+            for (int i = 0; i < list.size(); i++) {
+                Record r = (Record) list.get(i);
+                if (r.getStorageId() == storageId) {
+                    r.setChanged(false);
+                }
+            }
+            int[] pagesCopy = new int[pages.size()];
+            // can not use pages directly, because setUnused removes rows from there
+            pages.toArray(pagesCopy);
+            for (int i = 0; i < pagesCopy.length; i++) {
+                int page = pagesCopy[i];
+                if (logChanges) {
+                    log.addTruncate(session, this, storageId, page * BLOCKS_PER_PAGE, BLOCKS_PER_PAGE);
+                }
+                for (int j = 0; j < BLOCKS_PER_PAGE; j++) {
+                    Record r = (Record) cache.find(page * BLOCKS_PER_PAGE + j);
+                    if (r != null) {
+                        cache.remove(r.getPos());
+                    }
+                }
+                deleted.setRange(page * BLOCKS_PER_PAGE, BLOCKS_PER_PAGE, true);
+                setUnused(page * BLOCKS_PER_PAGE, BLOCKS_PER_PAGE);
+            }
+        }
+    }
 
-    public synchronized void sync() {
-        if (file != null) {
-            file.sync();
-        }
-    }
+    public void sync() {
+        synchronized (database) {
+            if (file != null) {
+                file.sync();
+            }
+        }
+    }
...
@@ -768,70 +803,76 @@ public class DiskFile implements CacheWriter {
         return dataFile;
     }
 
-    public synchronized void setLogChanges(boolean b) {
-        this.logChanges = b;
-    }
+    public void setLogChanges(boolean b) {
+        synchronized (database) {
+            this.logChanges = b;
+        }
+    }
 
-    public synchronized void addRedoLog(Storage storage, int recordId, int blockCount, DataPage rec) throws SQLException {
-        byte[] data = null;
-        if (rec != null) {
-            DataPage all = rowBuff;
-            all.reset();
-            all.writeInt(blockCount);
-            all.writeInt(storage.getId());
-            all.writeDataPageNoSize(rec);
-            // the buffer may have some additional fillers - just ignore them
-            all.fill(blockCount * BLOCK_SIZE);
-            all.updateChecksum();
-            if (SysProperties.CHECK && all.length() != BLOCK_SIZE * blockCount) {
-                throw Message.getInternalError("blockCount:" + blockCount + " length: " + all.length() * BLOCK_SIZE);
-            }
-            data = new byte[all.length()];
-            System.arraycopy(all.getBytes(), 0, data, 0, all.length());
-        }
-        for (int i = 0; i < blockCount; i++) {
-            RedoLogRecord log = new RedoLogRecord();
-            log.recordId = recordId + i;
-            log.offset = i * BLOCK_SIZE;
-            log.storage = storage;
-            log.data = data;
-            log.sequenceId = redoBuffer.size();
-            redoBuffer.add(log);
-            redoBufferSize += log.getSize();
-        }
-        if (redoBufferSize > SysProperties.REDO_BUFFER_SIZE) {
-            flushRedoLog();
-        }
-    }
+    public void addRedoLog(Storage storage, int recordId, int blockCount, DataPage rec) throws SQLException {
+        synchronized (database) {
+            byte[] data = null;
+            if (rec != null) {
+                DataPage all = rowBuff;
+                all.reset();
+                all.writeInt(blockCount);
+                all.writeInt(storage.getId());
+                all.writeDataPageNoSize(rec);
+                // the buffer may have some additional fillers - just ignore them
+                all.fill(blockCount * BLOCK_SIZE);
+                all.updateChecksum();
+                if (SysProperties.CHECK && all.length() != BLOCK_SIZE * blockCount) {
+                    throw Message.getInternalError("blockCount:" + blockCount + " length: " + all.length() * BLOCK_SIZE);
+                }
+                data = new byte[all.length()];
+                System.arraycopy(all.getBytes(), 0, data, 0, all.length());
+            }
+            for (int i = 0; i < blockCount; i++) {
+                RedoLogRecord log = new RedoLogRecord();
+                log.recordId = recordId + i;
+                log.offset = i * BLOCK_SIZE;
+                log.storage = storage;
+                log.data = data;
+                log.sequenceId = redoBuffer.size();
+                redoBuffer.add(log);
+                redoBufferSize += log.getSize();
+            }
+            if (redoBufferSize > SysProperties.REDO_BUFFER_SIZE) {
+                flushRedoLog();
+            }
+        }
+    }
 
-    public synchronized void flushRedoLog() throws SQLException {
-        if (redoBuffer.size() == 0) {
-            return;
-        }
-        redoBuffer.sort(new Comparator() {
-            public int compare(Object o1, Object o2) {
-                RedoLogRecord e1 = (RedoLogRecord) o1;
-                RedoLogRecord e2 = (RedoLogRecord) o2;
-                int comp = e1.recordId - e2.recordId;
-                if (comp == 0) {
-                    comp = e1.sequenceId - e2.sequenceId;
-                }
-                return comp;
-            }
-        });
-        RedoLogRecord last = null;
-        for (int i = 0; i < redoBuffer.size(); i++) {
-            RedoLogRecord entry = (RedoLogRecord) redoBuffer.get(i);
-            if (last != null && entry.recordId != last.recordId) {
-                writeRedoLog(last);
-            }
-            last = entry;
-        }
-        if (last != null) {
-            writeRedoLog(last);
-        }
-        redoBuffer.clear();
-        redoBufferSize = 0;
-    }
+    public void flushRedoLog() throws SQLException {
+        synchronized (database) {
+            if (redoBuffer.size() == 0) {
+                return;
+            }
+            redoBuffer.sort(new Comparator() {
+                public int compare(Object o1, Object o2) {
+                    RedoLogRecord e1 = (RedoLogRecord) o1;
+                    RedoLogRecord e2 = (RedoLogRecord) o2;
+                    int comp = e1.recordId - e2.recordId;
+                    if (comp == 0) {
+                        comp = e1.sequenceId - e2.sequenceId;
+                    }
+                    return comp;
+                }
+            });
+            RedoLogRecord last = null;
+            for (int i = 0; i < redoBuffer.size(); i++) {
+                RedoLogRecord entry = (RedoLogRecord) redoBuffer.get(i);
+                if (last != null && entry.recordId != last.recordId) {
+                    writeRedoLog(last);
+                }
+                last = entry;
+            }
+            if (last != null) {
+                writeRedoLog(last);
+            }
+            redoBuffer.clear();
+            redoBufferSize = 0;
+        }
+    }
 
     private void writeRedoLog(RedoLogRecord entry) throws SQLException {
...
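Every DiskFile method in this diff changes the same way: the method-level `synchronized` modifier, which locks the DiskFile instance itself, is replaced by a `synchronized (database)` block, so all storage-layer calls serialize on one shared monitor and two components can no longer deadlock by acquiring their instance locks in opposite orders. A standalone sketch of that pattern (the `Database` and `DiskFile` names here only mirror H2's; this is an illustration, not the actual H2 code):

```java
// Coarse-grained locking sketch: all methods lock ONE shared monitor
// (the Database object) instead of each object's own monitor.
public class CoarseLockSketch {

    static class Database {
        // serves purely as the shared monitor object
    }

    static class DiskFile {
        private final Database database;
        private int writeCount;

        DiskFile(Database database) {
            this.database = database;
        }

        // before: public synchronized void writeBack() { writeCount++; }
        // after: lock the shared Database monitor, same order everywhere
        public void writeBack() {
            synchronized (database) {
                writeCount++;
            }
        }

        public int getWriteCount() {
            synchronized (database) {
                return writeCount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Database db = new Database();
        DiskFile file = new DiskFile(db);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    file.writeBack();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // 4 threads x 1000 increments, all serialized on the database monitor
        System.out.println(file.getWriteCount());
    }
}
```

The trade-off is throughput: with a single monitor there is no concurrency inside the storage layer, but lock ordering becomes trivially consistent, which is what the new MULTI_THREADED kernel mode needs.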
h2/src/main/org/h2/store/Storage.java
View file @ cb967665
...
@@ -90,28 +90,30 @@ public class Storage {
             lastCheckedPage = file.getPage(record.getPos());
             next = record.getPos() + blockCount;
         }
-        BitField used = file.getUsed();
-        while (true) {
-            int page = file.getPage(next);
-            if (lastCheckedPage != page) {
-                if (pageIndex < 0) {
-                    pageIndex = pages.findNextValueIndex(page);
-                } else {
-                    pageIndex++;
-                }
-                if (pageIndex >= pages.size()) {
-                    return -1;
-                }
-                lastCheckedPage = pages.get(pageIndex);
-                next = Math.max(next, DiskFile.BLOCKS_PER_PAGE * lastCheckedPage);
-            }
-            if (used.get(next)) {
-                return next;
-            }
-            if (used.getLong(next) == 0) {
-                next = MathUtils.roundUp(next + 1, 64);
-            } else {
-                next++;
-            }
-        }
-    }
+        synchronized (database) {
+            BitField used = file.getUsed();
+            while (true) {
+                int page = file.getPage(next);
+                if (lastCheckedPage != page) {
+                    if (pageIndex < 0) {
+                        pageIndex = pages.findNextValueIndex(page);
+                    } else {
+                        pageIndex++;
+                    }
+                    if (pageIndex >= pages.size()) {
+                        return -1;
+                    }
+                    lastCheckedPage = pages.get(pageIndex);
+                    next = Math.max(next, DiskFile.BLOCKS_PER_PAGE * lastCheckedPage);
+                }
+                if (used.get(next)) {
+                    return next;
+                }
+                if (used.getLong(next) == 0) {
+                    next = MathUtils.roundUp(next + 1, 64);
+                } else {
+                    next++;
+                }
+            }
+        }
+    }
...
@@ -153,18 +155,20 @@ public class Storage {
     }
 
     private boolean isFreeAndMine(int pos, int blocks) {
-        BitField used = file.getUsed();
-        for (int i = blocks + pos - 1; i >= pos; i--) {
-            if (file.getPageOwner(file.getPage(i)) != id || used.get(i)) {
-                return false;
-            }
-        }
-        return true;
+        synchronized (database) {
+            BitField used = file.getUsed();
+            for (int i = blocks + pos - 1; i >= pos; i--) {
+                if (file.getPageOwner(file.getPage(i)) != id || used.get(i)) {
+                    return false;
+                }
+            }
+            return true;
+        }
     }
 
     public int allocate(int blockCount) throws SQLException {
         if (freeList.size() > 0) {
-            synchronized (file) {
+            synchronized (database) {
                 BitField used = file.getUsed();
                 for (int i = 0; i < freeList.size(); i++) {
                     int px = freeList.get(i);
...
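The scan loop in Storage above relies on `BitField.getLong(next)`: when the whole 64-bit word holding bit `next` is zero, the loop jumps straight to the next 64-bit boundary with `MathUtils.roundUp(next + 1, 64)` instead of testing the remaining bits one by one. A minimal self-contained sketch of that skip (assuming, as H2's BitField does, that the bits are packed into `long` words):

```java
// Word-skipping bitmap scan: find the next set bit, jumping over
// all-zero 64-bit words in one step instead of bit by bit.
public class BitScanSketch {

    // round x up to the next multiple of blockSize (like MathUtils.roundUp)
    static int roundUp(int x, int blockSize) {
        return (x + blockSize - 1) / blockSize * blockSize;
    }

    static int findNextSetBit(long[] words, int from) {
        int bitCount = words.length * 64;
        int next = from;
        while (next < bitCount) {
            long word = words[next / 64];     // the word containing bit 'next'
            if (word == 0) {
                next = roundUp(next + 1, 64); // whole word empty: skip to the next word
            } else if ((word & (1L << (next % 64))) != 0) {
                return next;                  // found a set bit
            } else {
                next++;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        long[] used = new long[3];            // 192 bits, all clear
        used[2] |= 1L << 5;                   // set bit 2*64 + 5 = 133
        System.out.println(findNextSetBit(used, 0));
    }
}
```

Starting from bit 0, the scan skips words 0 and 1 in two iterations and only tests individual bits inside word 2, printing 133.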
h2/src/main/org/h2/table/TableData.java
View file @ cb967665
...
@@ -322,12 +322,12 @@ public class TableData extends Table implements RecordReader {
                 return;
             }
             if (lockShared.isEmpty()) {
-                traceLock(session, exclusive, "ok");
+                traceLock(session, exclusive, "added for");
                 session.addLock(this);
                 lockExclusive = session;
                 return;
             } else if (lockShared.size() == 1 && lockShared.contains(session)) {
-                traceLock(session, exclusive, "ok (upgrade)");
+                traceLock(session, exclusive, "add (upgraded) for");
                 lockExclusive = session;
                 return;
             }
...
@@ -353,11 +353,11 @@ public class TableData extends Table implements RecordReader {
             }
             long now = System.currentTimeMillis();
             if (now >= max) {
-                traceLock(session, exclusive, "timeout " + session.getLockTimeout());
+                traceLock(session, exclusive, "timeout after " + session.getLockTimeout());
                 throw Message.getSQLException(ErrorCode.LOCK_TIMEOUT_1, getName());
             }
             try {
-                traceLock(session, exclusive, "waiting");
+                traceLock(session, exclusive, "waiting for");
                 if (database.getLockMode() == Constants.LOCK_MODE_TABLE_GC) {
                     for (int i = 0; i < 20; i++) {
                         long free = Runtime.getRuntime().freeMemory();
...
@@ -378,7 +378,7 @@ public class TableData extends Table implements RecordReader {
     private void traceLock(Session session, boolean exclusive, String s) {
         if (traceLock.debug()) {
-            traceLock.debug(session.getId() + " " + (exclusive ? "xlock" : "slock") + " " + s + " " + getName());
+            traceLock.debug(session.getId() + " " + (exclusive ? "exclusive write lock" : "shared read lock") + " " + s + " " + getName());
         }
     }
...
h2/src/test/org/h2/test/TestAll.java
View file @ cb967665
...
@@ -4,7 +4,6 @@
  */
 package org.h2.test;
 
-import java.sql.PreparedStatement;
 import java.sql.SQLException;
 import java.util.Properties;
...
@@ -79,6 +78,7 @@ import org.h2.test.unit.TestFile;
 import org.h2.test.unit.TestFileLock;
 import org.h2.test.unit.TestIntArray;
 import org.h2.test.unit.TestIntIntHashMap;
+import org.h2.test.unit.TestMultiThreadedKernel;
 import org.h2.test.unit.TestOverflow;
 import org.h2.test.unit.TestPattern;
 import org.h2.test.unit.TestReader;
...
@@ -147,8 +147,6 @@ java org.h2.test.TestAll timer
 web page translation
-TestMultiThreadedKernel and integrate in unit tests; use also in-memory and so on
 At startup, when corrupted, say if LOG=0 was used before
 add MVCC
...
@@ -479,6 +477,7 @@ write tests using the PostgreSQL JDBC driver
         new TestFileLock().runTest(this);
         new TestIntArray().runTest(this);
         new TestIntIntHashMap().runTest(this);
+        new TestMultiThreadedKernel().runTest(this);
         new TestOverflow().runTest(this);
         new TestPattern().runTest(this);
         new TestReader().runTest(this);
...
h2/src/test/org/h2/test/jdbc/TestResultSet.java
View file @ cb967665
...
@@ -35,6 +35,7 @@ public class TestResultSet extends TestBase {
         stat = conn.createStatement();
+        testColumnLength();
         testArray();
         testLimitMaxRows();
...
@@ -57,24 +58,29 @@
     }
 
+    private void testColumnLength() throws Exception {
+        trace("Test ColumnLength");
+    }
+
     private void testLimitMaxRows() throws Exception {
         trace("Test LimitMaxRows");
         ResultSet rs;
         stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)");
         stat.execute("INSERT INTO TEST VALUES(1), (2), (3), (4)");
         rs = stat.executeQuery("SELECT * FROM TEST");
         checkResultRowCount(rs, 4);
         rs = stat.executeQuery("SELECT * FROM TEST LIMIT 2");
         checkResultRowCount(rs, 2);
         stat.setMaxRows(2);
         rs = stat.executeQuery("SELECT * FROM TEST");
         checkResultRowCount(rs, 2);
         rs = stat.executeQuery("SELECT * FROM TEST LIMIT 1");
         checkResultRowCount(rs, 1);
         rs = stat.executeQuery("SELECT * FROM TEST LIMIT 3");
         checkResultRowCount(rs, 2);
         stat.setMaxRows(0);
         stat.execute("DROP TABLE TEST");
         stat.execute("CREATE TABLE one (C CHARACTER(10))");
         rs = stat.executeQuery("SELECT C || C FROM one;");
         ResultSetMetaData md = rs.getMetaData();
         check(20, md.getPrecision(1));
         ResultSet rs2 = stat.executeQuery("SELECT UPPER (C) FROM one;");
         ResultSetMetaData md2 = rs2.getMetaData();
         check(10, md2.getPrecision(1));
         rs = stat.executeQuery("SELECT UPPER (C), CHAR(10), CONCAT(C,C,C), HEXTORAW(C), RAWTOHEX(C) FROM one");
         ResultSetMetaData meta = rs.getMetaData();
         check(10, meta.getPrecision(1));
         check(1, meta.getPrecision(2));
         check(30, meta.getPrecision(3));
         check(3, meta.getPrecision(4));
         check(40, meta.getPrecision(5));
         stat.execute("DROP TABLE one");
     }
 
     void testAutoIncrement() throws Exception {
...
h2/src/test/org/h2/test/unit/TestMultiThreadedKernel.java
0 → 100644
View file @ cb967665
/*
 * Copyright 2004-2007 H2 Group. Licensed under the H2 License, Version 1.0 (http://h2database.com/html/license.html).
 * Initial Developer: H2 Group
 */
package org.h2.test.unit;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.h2.constant.SysProperties;
import org.h2.test.TestBase;

public class TestMultiThreadedKernel extends TestBase implements Runnable {

    private String url, user, password;
    private int id;
    private TestMultiThreadedKernel master;
    private volatile boolean stop;

    public void test() throws Exception {
        if (config.networked) {
            return;
        }
        deleteDb("multiThreadedKernel");
        int count = getSize(2, 5);
        Thread[] list = new Thread[count];
        for (int i = 0; i < count; i++) {
            TestMultiThreadedKernel r = new TestMultiThreadedKernel();
            r.url = getURL("multiThreadedKernel", true);
            r.user = getUser();
            r.password = getPassword();
            r.master = this;
            r.id = i;
            Thread thread = new Thread(r);
            thread.setName("Thread " + i);
            thread.start();
            list[i] = thread;
        }
        Thread.sleep(getSize(2000, 5000));
        stop = true;
        for (int i = 0; i < count; i++) {
            list[i].join();
        }
        SysProperties.multiThreadedKernel = false;
    }

    public void run() {
        try {
            org.h2.Driver.load();
            Connection conn = DriverManager.getConnection(url + ";MULTI_THREADED=1;LOCK_MODE=3;WRITE_DELAY=0", user, password);
            conn.createStatement().execute("CREATE TABLE TEST" + id + "(COL1 BIGINT AUTO_INCREMENT PRIMARY KEY, COL2 BIGINT)");
            PreparedStatement prep = conn.prepareStatement("insert into TEST" + id + "(col2) values (?)");
            for (int i = 0; !master.stop; i++) {
                prep.setLong(1, i);
                prep.execute();
            }
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
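TestMultiThreadedKernel follows a common stress-test shape: start N worker threads, let them run for a fixed time, flip a volatile stop flag, then join every thread before checking the result. Stripped of the JDBC calls, the skeleton looks like this (plain Java with no H2 dependency; the atomic counter stands in for the INSERT loop):

```java
import java.util.concurrent.atomic.AtomicLong;

// Skeleton of a timed multi-threaded stress test: workers spin until a
// volatile flag is set, then the main thread joins them all.
public class StressHarness implements Runnable {

    private volatile boolean stop;
    private final AtomicLong ops = new AtomicLong();

    public void run() {
        while (!stop) {
            ops.incrementAndGet(); // stands in for prep.execute()
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StressHarness h = new StressHarness();
        Thread[] list = new Thread[4];
        for (int i = 0; i < list.length; i++) {
            list[i] = new Thread(h);
            list[i].setName("Thread " + i);
            list[i].start();
        }
        Thread.sleep(200);  // let the workers run briefly
        h.stop = true;      // signal shutdown, like master.stop in the test
        for (Thread t : list) {
            t.join();       // wait for every worker before asserting anything
        }
        System.out.println(h.ops.get() > 0);
    }
}
```

The volatile flag guarantees the workers see the shutdown signal, and joining before the final check avoids racing against threads that are still writing.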