h2database — commit cb967665
Authored Sep 28, 2007 by Thomas Mueller

--no commit message

Parent: 5ff0818d

Showing 16 changed files with 704 additions and 535 deletions (+704 -535)
h2/src/docsrc/html/features.html                            +3    -0
h2/src/docsrc/html/history.html                             +3    -1
h2/src/docsrc/html/performance.html                         +2    -0
h2/src/main/org/h2/engine/Database.java                     +39   -51
h2/src/main/org/h2/engine/Session.java                      +20   -17
h2/src/main/org/h2/expression/Function.java                 +46   -0
h2/src/main/org/h2/expression/Operation.java                +6    -1
h2/src/main/org/h2/jdbc/JdbcConnection.java                 +1    -1
h2/src/main/org/h2/jdbc/JdbcStatement.java                  +4    -0
h2/src/main/org/h2/res/help.csv                             +1    -1
h2/src/main/org/h2/store/DiskFile.java                      +455  -414
h2/src/main/org/h2/store/Storage.java                       +30   -26
h2/src/main/org/h2/table/TableData.java                     +5    -5
h2/src/test/org/h2/test/TestAll.java                        +2    -3
h2/src/test/org/h2/test/jdbc/TestResultSet.java             +21   -15
h2/src/test/org/h2/test/unit/TestMultiThreadedKernel.java   +66   -0
h2/src/docsrc/html/features.html
@@ -271,6 +271,9 @@ It looks like the development of this database has stopped. The last release was
 </tr><tr>
 <td><a href="http://mywebpage.netscape.com/davidlbarron/javaplayer.html">JavaPlayer</a></td>
 <td>Pure Java MP3 player.</td>
+</tr><tr>
+<td><a href="http://jmatter.org/">JMatter</a></td>
+<td>Framework for constructing workgroup business applications based on the Naked Objects Architectural Pattern.</td>
 </tr><tr>
 <td><a href="http://www.jpox.org">JPOX</a></td>
 <td>Java persistent objects.</td>
h2/src/docsrc/html/history.html
@@ -40,7 +40,8 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
 <h3>Version 1.0 (Current)</h3>
 <h3>Version 1.0.59 (2007-09-TODO)</h3><ul>
-<li>A PreparedStatement that was cancelled could not be reused. Fixed.
+<li>Multi-threaded kernel (MULTI_THREADED=1): A synchronization problem has been fixed.
+</li><li>A PreparedStatement that was cancelled could not be reused. Fixed.
 </li><li>H2 Console: Progress information when logging into a H2 embedded database (useful when opening a database is slow).
 </li><li>When the database was closed while logging was disabled (LOG 0), re-opening the database was slow. Fixed.
 </li><li>Fulltext search is now documented (in the Tutorial).
@@ -782,6 +783,7 @@ Hypersonic SQL or HSQLDB. H2 is built from scratch.
 </li><li>NATURAL JOIN: MySQL and PostgreSQL don't repeat columns when using SELECT * ...
 </li><li>Optimize SELECT MIN(ID), MAX(ID), COUNT(*) FROM TEST WHERE ID BETWEEN 100 AND 200
 </li><li>Support Oracle functions: TRUNC, NVL2, TO_CHAR, TO_DATE, TO_NUMBER
+</li><li>Support setQueryTimeout (using System.currentTimeMillis in a loop; not using a thread)
 </li></ul>
 <h3>Not Planned</h3>
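The first new changelog entry above concerns the multi-threaded kernel. As a rough illustration of how that mode is switched on (the database path and credentials below are placeholders, not taken from this commit), the MULTI_THREADED setting named in the entry is normally appended to the JDBC URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MultiThreadedKernelExample {
    public static void main(String[] args) throws Exception {
        // URL with the multi-threaded kernel enabled; "~/test" is only a
        // placeholder database path, and the H2 jar must be on the classpath.
        String url = "jdbc:h2:~/test;MULTI_THREADED=1";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            try (Statement stat = conn.createStatement()) {
                stat.execute("CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY)");
            }
        }
    }
}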
h2/src/docsrc/html/performance.html
@@ -111,6 +111,8 @@ The reason is, a backup of the database is created whenever the database is open
 Version 10.3.1.4 was used for the test. Derby is clearly the slowest embedded database in this test.
 This seems to be a structural problem, because all operations are really slow.
 It will not be easy for the developers of Derby to improve the performance to a reasonable level.
+A few problems have been identified: Leaving autocommit on is a problem for Derby.
+If it is switched off during the whole test, the results are about 20% better for Derby.
 </p>
 <h4>PostgreSQL</h4>
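The added sentences describe a benchmark observation rather than H2 code, but the autocommit point is easy to reproduce with plain JDBC. The sketch below is illustrative only (the table name, row count, and embedded Derby URL are assumptions, and the Derby driver must be on the classpath); it shows the pattern of switching autocommit off for the whole run and committing once at the end:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class AutoCommitSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative embedded Derby URL; any JDBC database behaves the same way.
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
        Statement stat = conn.createStatement();
        stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)");
        conn.setAutoCommit(false); // commit explicitly instead of after every statement
        PreparedStatement prep = conn.prepareStatement("INSERT INTO TEST VALUES(?)");
        for (int i = 0; i < 10000; i++) {
            prep.setInt(1, i);
            prep.execute();
        }
        conn.commit(); // one commit at the end of the batch
        conn.close();
    }
}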
h2/src/main/org/h2/engine/Database.java
@@ -181,9 +181,7 @@ public class Database implements DataHandler {
                 TraceSystem.DEFAULT_TRACE_LEVEL_SYSTEM_OUT);
         this.cacheType = StringUtils.toUpperEnglish(ci.removeProperty("CACHE_TYPE", CacheLRU.TYPE_NAME));
         try {
-            synchronized (this) {
-                open(traceLevelFile, traceLevelSystemOut);
-            }
+            open(traceLevelFile, traceLevelSystemOut);
             if (closeAtVmShutdown) {
                 closeOnExit = new DatabaseCloser(this, 0, true);
                 try {
@@ -199,9 +197,7 @@ public class Database implements DataHandler {
                 traceSystem.getTrace(Trace.DATABASE).error("opening " + databaseName, e);
                 traceSystem.close();
             }
-            synchronized (this) {
-                closeOpenFilesAndUnlock();
-            }
+            closeOpenFilesAndUnlock();
             throw Message.convert(e);
         }
     }
@@ -413,7 +409,7 @@ public class Database implements DataHandler {
         return StringUtils.toUpperEnglish(n);
     }

-    private void open(int traceLevelFile, int traceLevelSystemOut) throws SQLException {
+    private synchronized void open(int traceLevelFile, int traceLevelSystemOut) throws SQLException {
         if (persistent) {
             String dataFileName = databaseName + Constants.SUFFIX_DATA_FILE;
             if (FileUtils.exists(dataFileName)) {
@@ -626,7 +622,7 @@ public class Database implements DataHandler {
         infoSchema.add(m);
     }

-    private void addMeta(Session session, DbObject obj) throws SQLException {
+    private synchronized void addMeta(Session session, DbObject obj) throws SQLException {
         if (obj.getTemporary()) {
             return;
         }
@@ -642,7 +638,7 @@ public class Database implements DataHandler {
         }
     }

-    private void removeMeta(Session session, int id) throws SQLException {
+    private synchronized void removeMeta(Session session, int id) throws SQLException {
         SearchRow r = meta.getTemplateSimpleRow(false);
         r.setValue(0, ValueInt.get(id));
         Cursor cursor = metaIdIndex.find(session, r, r);
@@ -685,7 +681,7 @@ public class Database implements DataHandler {
         }
     }

-    public void addSchemaObject(Session session, SchemaObject obj) throws SQLException {
+    public synchronized void addSchemaObject(Session session, SchemaObject obj) throws SQLException {
         obj.getSchema().add(obj);
         int id = obj.getId();
         if (id > 0 && !starting) {
@@ -693,7 +689,7 @@ public class Database implements DataHandler {
         }
     }

-    public void addDatabaseObject(Session session, DbObject obj) throws SQLException {
+    public synchronized void addDatabaseObject(Session session, DbObject obj) throws SQLException {
         HashMap map = getMap(obj.getType());
         if (obj.getType() == DbObject.USER) {
             User user = (User) obj;
@@ -781,23 +777,21 @@ public class Database implements DataHandler {
         }
     }

-    void close(boolean fromShutdownHook) {
-        synchronized (this) {
+    synchronized void close(boolean fromShutdownHook) {
         closing = true;
         if (sessions.size() > 0) {
             if (!fromShutdownHook) {
                 return;
             }
             traceSystem.getTrace(Trace.DATABASE).info("closing " + databaseName + " from shutdown hook");
             Session[] all = new Session[sessions.size()];
             sessions.toArray(all);
             for (int i = 0; i < all.length; i++) {
                 Session s = all[i];
                 try {
                     s.close();
                 } catch (SQLException e) {
                     traceSystem.getTrace(Trace.SESSION).error("disconnecting #" + s.getId(), e);
                 }
             }
         }
-        }
@@ -872,7 +866,7 @@ public class Database implements DataHandler {
         }
     }

-    private void closeOpenFilesAndUnlock() throws SQLException {
+    private synchronized void closeOpenFilesAndUnlock() throws SQLException {
         if (log != null) {
             stopWriter();
             log.close();
@@ -923,7 +917,7 @@ public class Database implements DataHandler {
         }
     }

-    public int allocateObjectId(boolean needFresh, boolean dataFile) {
+    public synchronized int allocateObjectId(boolean needFresh, boolean dataFile) {
         // TODO refactor: use hash map instead of bit field for object ids
         needFresh = true;
         int i;
@@ -1008,18 +1002,18 @@ public class Database implements DataHandler {
         return list;
     }

-    public void update(Session session, DbObject obj) throws SQLException {
+    public synchronized void update(Session session, DbObject obj) throws SQLException {
         int id = obj.getId();
         removeMeta(session, id);
         addMeta(session, obj);
     }

-    public void renameSchemaObject(Session session, SchemaObject obj, String newName) throws SQLException {
+    public synchronized void renameSchemaObject(Session session, SchemaObject obj, String newName) throws SQLException {
         obj.getSchema().rename(obj, newName);
         updateWithChildren(session, obj);
     }

-    private void updateWithChildren(Session session, DbObject obj) throws SQLException {
+    private synchronized void updateWithChildren(Session session, DbObject obj) throws SQLException {
         ObjectArray list = obj.getChildren();
         Comment comment = findComment(obj);
         if (comment != null) {
@@ -1035,7 +1029,7 @@ public class Database implements DataHandler {
         }
     }

-    public void renameDatabaseObject(Session session, DbObject obj, String newName) throws SQLException {
+    public synchronized void renameDatabaseObject(Session session, DbObject obj, String newName) throws SQLException {
         int type = obj.getType();
         HashMap map = getMap(type);
         if (SysProperties.CHECK) {
@@ -1131,7 +1125,7 @@ public class Database implements DataHandler {
         return schema;
     }

-    public void removeDatabaseObject(Session session, DbObject obj) throws SQLException {
+    public synchronized void removeDatabaseObject(Session session, DbObject obj) throws SQLException {
         String objName = obj.getName();
         int type = obj.getType();
         HashMap map = getMap(type);
@@ -1163,7 +1157,7 @@ public class Database implements DataHandler {
         return null;
     }

-    public void removeSchemaObject(Session session, SchemaObject obj) throws SQLException {
+    public synchronized void removeSchemaObject(Session session, SchemaObject obj) throws SQLException {
         if (obj.getType() == DbObject.TABLE_OR_VIEW) {
             Table table = (Table) obj;
             if (table.getTemporary() && !table.getGlobalTemporary()) {
@@ -1202,24 +1196,18 @@ public class Database implements DataHandler {
         return fileIndex;
     }

-    public void setCacheSize(int kb) throws SQLException {
+    public synchronized void setCacheSize(int kb) throws SQLException {
         if (fileData != null) {
-            synchronized (fileData) {
-                fileData.getCache().setMaxSize(kb);
-            }
+            fileData.getCache().setMaxSize(kb);
             int valueIndex = kb <= 32 ? kb : (kb >>> SysProperties.CACHE_SIZE_INDEX_SHIFT);
-            synchronized (fileIndex) {
-                fileIndex.getCache().setMaxSize(valueIndex);
-            }
+            fileIndex.getCache().setMaxSize(valueIndex);
             cacheSize = kb;
         }
     }

-    public void setMasterUser(User user) throws SQLException {
-        synchronized (this) {
-            addDatabaseObject(systemSession, user);
-            systemSession.commit(true);
-        }
+    public synchronized void setMasterUser(User user) throws SQLException {
+        addDatabaseObject(systemSession, user);
+        systemSession.commit(true);
     }

     public Role getPublicRole() {
@@ -1295,7 +1283,7 @@ public class Database implements DataHandler {
         }
     }

-    public void freeUpDiskSpace() throws SQLException {
+    public synchronized void freeUpDiskSpace() throws SQLException {
         long sizeAvailable = 0;
         if (emergencyReserve != null) {
             sizeAvailable = emergencyReserve.length();
@@ -1383,7 +1371,7 @@ public class Database implements DataHandler {
         return logIndexChanges;
     }

-    public void setLog(int level) throws SQLException {
+    public synchronized void setLog(int level) throws SQLException {
         if (logLevel == level) {
             return;
         }
@@ -1495,7 +1483,7 @@ public class Database implements DataHandler {
         }
     }

-    public void setMaxLogSize(long value) {
+    public synchronized void setMaxLogSize(long value) {
         long minLogSize = biggestFileSize / Constants.LOG_SIZE_DIVIDER;
         minLogSize = Math.max(value, minLogSize);
         long currentLogSize = getLog().getMaxLogSize();
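Most of the hunks above replace a synchronized (this) block that spans a whole method body with the synchronized keyword on the method declaration. For an instance method the two forms acquire the same monitor, so this part of the change is a simplification rather than a change in locking behavior. A minimal sketch of the equivalence (generic illustration, not code from this commit):

public class SyncEquivalence {
    private int counter;

    // Locking the whole method...
    public synchronized void incrementA() {
        counter++;
    }

    // ...acquires the same monitor (this) as an explicit block around the body.
    public void incrementB() {
        synchronized (this) {
            counter++;
        }
    }
}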
h2/src/main/org/h2/engine/Session.java
@@ -70,6 +70,7 @@ public class Session implements SessionInterface {
     private boolean undoLogEnabled = true;
     private boolean autoCommitAtTransactionEnd;
     private String currentTransactionName;
+    private boolean isClosed;

     public Session() {
     }
@@ -109,7 +110,7 @@ public class Session implements SessionInterface {
         if (!SysProperties.runFinalize) {
             return;
         }
-        if (database != null) {
+        if (!isClosed) {
             throw Message.getInternalError("not closed", stackTrace);
         }
     }
@@ -164,7 +165,7 @@ public class Session implements SessionInterface {
     }

     public Command prepareLocal(String sql) throws SQLException {
-        if (database == null) {
+        if (isClosed) {
             throw Message.getSQLException(ErrorCode.CONNECTION_BROKEN);
         }
         Parser parser = new Parser(this);
@@ -176,13 +177,11 @@ public class Session implements SessionInterface {
     }

     public int getPowerOffCount() {
-        return database == null ? 0 : database.getPowerOffCount();
+        return database.getPowerOffCount();
     }

     public void setPowerOffCount(int count) {
-        if (database != null) {
-            database.setPowerOffCount(count);
-        }
+        database.setPowerOffCount(count);
     }

     public void commit(boolean ddl) throws SQLException {
@@ -277,12 +276,12 @@ public class Session implements SessionInterface {
     }

     public void close() throws SQLException {
-        if (database != null) {
+        if (!isClosed) {
             try {
                 cleanTempTables(true);
                 database.removeSession(this);
             } finally {
-                database = null;
+                isClosed = true;
             }
         }
     }
@@ -320,24 +319,28 @@ public class Session implements SessionInterface {
         for (int i = 0; i < locks.size(); i++) {
             Table t = (Table) locks.get(i);
             if (!t.isLockedExclusively()) {
-                t.unlock(this);
-                locks.remove(i);
+                synchronized (database) {
+                    t.unlock(this);
+                    locks.remove(i);
+                }
                 i--;
             }
         }
     }

     private void unlockAll() throws SQLException {
         if (SysProperties.CHECK) {
             if (undoLog.size() > 0) {
                 throw Message.getInternalError();
             }
         }
-        for (int i = 0; i < locks.size(); i++) {
-            Table t = (Table) locks.get(i);
-            t.unlock(this);
-        }
-        locks.clear();
+        synchronized (database) {
+            for (int i = 0; i < locks.size(); i++) {
+                Table t = (Table) locks.get(i);
+                t.unlock(this);
+            }
+            locks.clear();
+        }
         savepoints = null;
     }
@@ -368,7 +371,7 @@ public class Session implements SessionInterface {
         if (traceModuleName == null) {
             traceModuleName = Trace.JDBC + "[" + id + "]";
         }
-        if (database == null) {
+        if (isClosed) {
             return new TraceSystem(null, false).getTrace(traceModuleName);
         }
         return database.getTrace(traceModuleName);
@@ -460,7 +463,7 @@ public class Session implements SessionInterface {
     }

     public boolean isClosed() {
-        return database == null;
+        return isClosed;
     }

     public void setThrottle(int throttle) {
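The Session changes swap the database == null checks for an explicit isClosed flag, so a closed session keeps its Database reference (which getPowerOffCount() and getTrace() still need) instead of nulling it out. A minimal sketch of that flag pattern, with a placeholder field standing in for the real Database, looks like this (illustration only, not code from this commit):

public class ClosableSession {
    // Placeholder for the real Database reference; it is intentionally
    // kept after close() so late callers still have a target.
    private final Object database = new Object();
    private boolean isClosed;

    public void close() {
        if (!isClosed) {
            // release locks and temporary tables here ...
            isClosed = true;
        }
    }

    public boolean isClosed() {
        return isClosed;
    }

    public Object getDatabase() {
        return database;
    }
}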
h2/src/main/org/h2/expression/Function.java
@@ -1419,6 +1419,52 @@ public class Function extends Expression implements FunctionCall {
     }

     public long getPrecision() {
+        switch (info.type) {
+        case ENCRYPT:
+        case DECRYPT:
+            precision = args[2].getPrecision();
+            break;
+        case COMPRESS:
+            precision = args[0].getPrecision();
+            break;
+        case CHAR:
+            precision = 1;
+            break;
+        case CONCAT:
+            precision = 0;
+            for (int i = 0; i < args.length; i++) {
+                precision += args[i].getPrecision();
+                if (precision < 0) {
+                    precision = Long.MAX_VALUE;
+                }
+            }
+            break;
+        case HEXTORAW:
+            precision = (args[0].getPrecision() + 3) / 4;
+            break;
+        case LCASE:
+        case LTRIM:
+        case RIGHT:
+        case RTRIM:
+        case UCASE:
+        case LOWER:
+        case UPPER:
+        case TRIM:
+        case STRINGDECODE:
+        case UTF8TOSTRING:
+            precision = args[0].getPrecision();
+            break;
+        case RAWTOHEX:
+            precision = args[0].getPrecision() * 4;
+            break;
+        case SOUNDEX:
+            precision = 4;
+            break;
+        case DAYNAME:
+        case MONTHNAME:
+            precision = 20; // day and month names may be long in some languages
+            break;
+        }
         return precision;
     }
h2/src/main/org/h2/expression/Operation.java
@@ -188,7 +188,12 @@ public class Operation extends Expression {
     public long getPrecision() {
         if (right != null) {
-            return Math.max(left.getPrecision(), right.getPrecision());
+            switch (opType) {
+            case CONCAT:
+                return left.getPrecision() + right.getPrecision();
+            default:
+                return Math.max(left.getPrecision(), right.getPrecision());
+            }
         }
         return left.getPrecision();
     }
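The new CONCAT case sums the operand precisions instead of taking the maximum, because a concatenation can produce as many characters as both operands combined. A tiny worked example with assumed precisions of 10 and 20 (an illustration, not code from this commit):

public class ConcatPrecisionExample {
    public static void main(String[] args) {
        long left = 10;   // e.g. a VARCHAR(10) operand
        long right = 20;  // e.g. a VARCHAR(20) operand
        long concat = left + right;          // CONCAT may produce 30 characters
        long other = Math.max(left, right);  // other binary operators keep the larger precision
        System.out.println(concat + " vs " + other);
    }
}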
h2/src/main/org/h2/jdbc/JdbcConnection.java
@@ -1237,7 +1237,7 @@ public class JdbcConnection extends TraceObject implements Connection {
         int id = getNextId(TraceObject.RESULT_SET);
         if (debug()) {
             debugCodeAssign("ResultSet", TraceObject.RESULT_SET, id);
-            debugCodeCall("executeQuery", "CALL IDENTITY()");
+            statement.debugCodeCallMe("executeQuery", "CALL IDENTITY()");
         }
         ResultSet rs = new JdbcResultSet(session, this, statement, result, id, false, true);
         return rs;
h2/src/main/org/h2/jdbc/JdbcStatement.java
@@ -888,6 +888,10 @@ public class JdbcStatement extends TraceObject implements Statement {
             debugCode("setPoolable(" + poolable + ");");
         }
     }
+
+    void debugCodeCallMe(String text, String param) {
+        debugCodeCall(text, param);
+    }

 }
h2/src/main/org/h2/res/help.csv
@@ -2090,7 +2090,7 @@ If a start position is used, the characters before it are ignored.
 INSTR(EMAIL,'@')
 "

 "Functions (String)","INSERT Function","
-INSERT(originalString, startInt, lengthInt, addInt): string
+INSERT(originalString, startInt, lengthInt, addString): string
 ","
 Inserts a additional string into the original string at a specified start position.
 The length specifies the number of characters that are removed at the start position
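The corrected signature documents that the fourth argument of the INSERT function is a string, not an int. As a usage illustration (run against an assumed in-memory H2 database; the expected output is inferred from the description above, not from this commit):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InsertFunctionExample {
    public static void main(String[] args) throws Exception {
        // In-memory H2 database used only for this illustration.
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", "");
        Statement stat = conn.createStatement();
        // Remove 5 characters starting at position 7 and insert 'there' instead.
        ResultSet rs = stat.executeQuery("SELECT INSERT('Hello World', 7, 5, 'there')");
        rs.next();
        System.out.println(rs.getString(1)); // expected per the description: Hello there
        conn.close();
    }
}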
h2/src/main/org/h2/store/DiskFile.java
浏览文件 @
cb967665
...
@@ -145,50 +145,52 @@ public class DiskFile implements CacheWriter {
...
@@ -145,50 +145,52 @@ public class DiskFile implements CacheWriter {
}
}
}
}
public
synchronized
byte
[]
getSummary
()
throws
SQLException
{
public
byte
[]
getSummary
()
throws
SQLException
{
try
{
synchronized
(
database
)
{
ByteArrayOutputStream
buff
=
new
ByteArrayOutputStream
();
try
{
DataOutputStream
out
=
new
DataOutputStream
(
buff
);
ByteArrayOutputStream
buff
=
new
ByteArrayOutputStream
();
int
blocks
=
(
int
)
((
file
.
length
()
-
OFFSET
)
/
BLOCK_SIZE
);
DataOutputStream
out
=
new
DataOutputStream
(
buff
);
out
.
writeInt
(
blocks
);
int
blocks
=
(
int
)
((
file
.
length
()
-
OFFSET
)
/
BLOCK_SIZE
);
for
(
int
i
=
0
,
x
=
0
;
i
<
blocks
/
8
;
i
++)
{
out
.
writeInt
(
blocks
);
int
mask
=
0
;
for
(
int
i
=
0
,
x
=
0
;
i
<
blocks
/
8
;
i
++)
{
for
(
int
j
=
0
;
j
<
8
;
j
++)
{
int
mask
=
0
;
if
(
used
.
get
(
x
))
{
for
(
int
j
=
0
;
j
<
8
;
j
++)
{
mask
|=
1
<<
j
;
if
(
used
.
get
(
x
))
{
mask
|=
1
<<
j
;
}
x
++;
}
}
x
++
;
out
.
write
(
mask
)
;
}
}
out
.
write
(
mask
);
out
.
write
Int
(
pageOwners
.
size
()
);
}
ObjectArray
storages
=
new
ObjectArray
();
out
.
writeInt
(
pageOwners
.
size
());
for
(
int
i
=
0
;
i
<
pageOwners
.
size
();
i
++)
{
ObjectArray
storages
=
new
ObjectArray
(
);
int
s
=
pageOwners
.
get
(
i
);
for
(
int
i
=
0
;
i
<
pageOwners
.
size
();
i
++)
{
out
.
writeInt
(
s
);
int
s
=
pageOwners
.
get
(
i
);
if
(
s
>=
0
&&
(
s
>=
storages
.
size
()
||
storages
.
get
(
s
)
==
null
))
{
out
.
writeInt
(
s
);
Storage
storage
=
database
.
getStorage
(
s
,
thi
s
);
if
(
s
>=
0
&&
(
s
>=
storages
.
size
()
||
storages
.
get
(
s
)
==
null
)
)
{
while
(
storages
.
size
()
<=
s
)
{
Storage
storage
=
database
.
getStorage
(
s
,
this
);
storages
.
add
(
null
);
while
(
storages
.
size
()
<=
s
)
{
}
storages
.
add
(
null
);
storages
.
set
(
s
,
storage
);
}
}
storages
.
set
(
s
,
storage
);
}
}
}
for
(
int
i
=
0
;
i
<
storages
.
size
();
i
++)
{
for
(
int
i
=
0
;
i
<
storages
.
size
();
i
++)
{
Storage
storage
=
(
Storage
)
storages
.
get
(
i
);
Storage
storage
=
(
Storage
)
storages
.
get
(
i
);
if
(
storage
!=
null
)
{
if
(
storage
!=
null
)
{
out
.
writeInt
(
i
);
out
.
writeInt
(
i
);
out
.
writeInt
(
storage
.
getRecordCount
()
);
out
.
writeInt
(
storage
.
getRecordCount
());
}
}
}
out
.
writeInt
(-
1
);
out
.
close
();
byte
[]
b2
=
buff
.
toByteArray
();
return
b2
;
}
catch
(
IOException
e
)
{
// will probably never happen, because only in-memory structures are
// used
return
null
;
}
}
out
.
writeInt
(-
1
);
out
.
close
();
byte
[]
b2
=
buff
.
toByteArray
();
return
b2
;
}
catch
(
IOException
e
)
{
// will probably never happen, because only in-memory structures are
// used
return
null
;
}
}
}
}
...
@@ -201,167 +203,176 @@ public class DiskFile implements CacheWriter {
...
@@ -201,167 +203,176 @@ public class DiskFile implements CacheWriter {
return
true
;
return
true
;
}
}
public
synchronized
void
initFromSummary
(
byte
[]
summary
)
{
public
void
initFromSummary
(
byte
[]
summary
)
{
if
(
summary
==
null
||
summary
.
length
==
0
)
{
synchronized
(
database
)
{
ObjectArray
list
=
database
.
getAllStorages
();
if
(
summary
==
null
||
summary
.
length
==
0
)
{
for
(
int
i
=
0
;
i
<
list
.
size
();
i
++)
{
ObjectArray
list
=
database
.
getAllStorages
();
Storage
s
=
(
Storage
)
list
.
get
(
i
);
for
(
int
i
=
0
;
i
<
list
.
size
();
i
++)
{
if
(
s
!=
null
&&
s
.
getDiskFile
()
==
this
)
{
Storage
s
=
(
Storage
)
list
.
get
(
i
);
database
.
removeStorage
(
s
.
getId
(),
this
);
if
(
s
!=
null
&&
s
.
getDiskFile
()
==
this
)
{
database
.
removeStorage
(
s
.
getId
(),
this
);
}
}
}
reset
();
initAlreadyTried
=
false
;
init
=
false
;
return
;
}
}
reset
();
if
(
database
.
getRecovery
()
||
initAlreadyTried
)
{
initAlreadyTried
=
false
;
init
=
false
;
return
;
}
if
(
database
.
getRecovery
()
||
initAlreadyTried
)
{
return
;
}
initAlreadyTried
=
true
;
int
stage
=
0
;
try
{
DataInputStream
in
=
new
DataInputStream
(
new
ByteArrayInputStream
(
summary
));
int
b2
=
in
.
readInt
();
if
(
b2
>
fileBlockCount
)
{
database
.
getTrace
(
Trace
.
DATABASE
).
info
(
"unexpected size "
+
b2
+
" when initializing summary for "
+
fileName
+
" expected:"
+
fileBlockCount
);
return
;
return
;
}
}
stage
++;
initAlreadyTried
=
true
;
for
(
int
i
=
0
,
x
=
0
;
i
<
b2
/
8
;
i
++)
{
int
stage
=
0
;
int
mask
=
in
.
read
();
try
{
for
(
int
j
=
0
;
j
<
8
;
j
++)
{
DataInputStream
in
=
new
DataInputStream
(
new
ByteArrayInputStream
(
summary
));
if
((
mask
&
(
1
<<
j
))
!=
0
)
{
int
b2
=
in
.
readInt
();
used
.
set
(
x
);
if
(
b2
>
fileBlockCount
)
{
database
.
getTrace
(
Trace
.
DATABASE
).
info
(
"unexpected size "
+
b2
+
" when initializing summary for "
+
fileName
+
" expected:"
+
fileBlockCount
);
return
;
}
stage
++;
for
(
int
i
=
0
,
x
=
0
;
i
<
b2
/
8
;
i
++)
{
int
mask
=
in
.
read
();
for
(
int
j
=
0
;
j
<
8
;
j
++)
{
if
((
mask
&
(
1
<<
j
))
!=
0
)
{
used
.
set
(
x
);
}
x
++;
}
}
x
++;
}
}
stage
++;
}
int
len
=
in
.
readInt
();
stage
++;
ObjectArray
storages
=
new
ObjectArray
();
int
len
=
in
.
readInt
();
for
(
int
i
=
0
;
i
<
len
;
i
++)
{
ObjectArray
storages
=
new
ObjectArray
();
int
s
=
in
.
readInt
();
for
(
int
i
=
0
;
i
<
len
;
i
++)
{
if
(
s
>=
0
)
{
int
s
=
in
.
readInt
();
Storage
storage
=
database
.
getStorage
(
s
,
this
);
if
(
s
>=
0
)
{
while
(
storages
.
size
()
<=
s
)
{
Storage
storage
=
database
.
getStorage
(
s
,
this
);
storages
.
add
(
null
);
while
(
storages
.
size
()
<=
s
)
{
}
storages
.
add
(
null
);
storages
.
set
(
s
,
storage
);
storage
.
addPage
(
i
);
}
}
storages
.
set
(
s
,
storage
);
setPageOwner
(
i
,
s
);
storage
.
addPage
(
i
);
}
}
setPageOwner
(
i
,
s
);
stage
++;
}
while
(
true
)
{
stage
++;
int
s
=
in
.
readInt
();
while
(
true
)
{
if
(
s
<
0
)
{
int
s
=
in
.
readInt
();
break
;
if
(
s
<
0
)
{
}
break
;
int
recordCount
=
in
.
readInt
();
Storage
storage
=
(
Storage
)
storages
.
get
(
s
);
storage
.
setRecordCount
(
recordCount
);
}
}
int
recordCount
=
in
.
readInt
();
stage
++;
Storage
storage
=
(
Storage
)
storages
.
get
(
s
);
freeUnusedPages
();
storage
.
setRecordCount
(
recordCount
);
init
=
true
;
}
catch
(
Exception
e
)
{
database
.
getTrace
(
Trace
.
DATABASE
).
error
(
"error initializing summary for "
+
fileName
+
" size:"
+
summary
.
length
+
" stage:"
+
stage
,
e
);
// ignore - init is still false in this case
}
}
stage
++;
freeUnusedPages
();
init
=
true
;
}
catch
(
Exception
e
)
{
database
.
getTrace
(
Trace
.
DATABASE
).
error
(
"error initializing summary for "
+
fileName
+
" size:"
+
summary
.
length
+
" stage:"
+
stage
,
e
);
// ignore - init is still false in this case
}
}
}
}
public
synchronized
void
init
()
throws
SQLException
{
public
void
init
()
throws
SQLException
{
if
(
init
)
{
synchronized
(
database
)
{
return
;
if
(
init
)
{
}
return
;
ObjectArray
storages
=
database
.
getAllStorages
();
for
(
int
i
=
0
;
i
<
storages
.
size
();
i
++)
{
Storage
s
=
(
Storage
)
storages
.
get
(
i
);
if
(
s
!=
null
&&
s
.
getDiskFile
()
==
this
)
{
s
.
setRecordCount
(
0
);
}
}
int
blockHeaderLen
=
Math
.
max
(
Constants
.
FILE_BLOCK_SIZE
,
2
*
rowBuff
.
getIntLen
());
byte
[]
buff
=
new
byte
[
blockHeaderLen
];
DataPage
s
=
DataPage
.
create
(
database
,
buff
);
long
time
=
0
;
for
(
int
i
=
0
;
i
<
fileBlockCount
;)
{
long
t2
=
System
.
currentTimeMillis
();
if
(
t2
>
time
+
10
)
{
time
=
t2
;
database
.
setProgress
(
DatabaseEventListener
.
STATE_SCAN_FILE
,
this
.
fileName
,
i
,
fileBlockCount
);
}
go
(
i
);
file
.
readFully
(
buff
,
0
,
blockHeaderLen
);
s
.
reset
();
int
blockCount
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
blockCount
<
0
)
{
throw
Message
.
getInternalError
();
}
}
if
(
blockCount
==
0
)
{
ObjectArray
storages
=
database
.
getAllStorages
();
setUnused
(
i
,
1
);
for
(
int
i
=
0
;
i
<
storages
.
size
();
i
++)
{
i
++;
Storage
s
=
(
Storage
)
storages
.
get
(
i
);
}
else
{
if
(
s
!=
null
&&
s
.
getDiskFile
()
==
this
)
{
int
id
=
s
.
readInt
();
s
.
setRecordCount
(
0
);
if
(
SysProperties
.
CHECK
&&
id
<
0
)
{
}
}
int
blockHeaderLen
=
Math
.
max
(
Constants
.
FILE_BLOCK_SIZE
,
2
*
rowBuff
.
getIntLen
());
byte
[]
buff
=
new
byte
[
blockHeaderLen
];
DataPage
s
=
DataPage
.
create
(
database
,
buff
);
long
time
=
0
;
for
(
int
i
=
0
;
i
<
fileBlockCount
;)
{
long
t2
=
System
.
currentTimeMillis
();
if
(
t2
>
time
+
10
)
{
time
=
t2
;
database
.
setProgress
(
DatabaseEventListener
.
STATE_SCAN_FILE
,
this
.
fileName
,
i
,
fileBlockCount
);
}
go
(
i
);
file
.
readFully
(
buff
,
0
,
blockHeaderLen
);
s
.
reset
();
int
blockCount
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
blockCount
<
0
)
{
throw
Message
.
getInternalError
();
throw
Message
.
getInternalError
();
}
}
Storage
storage
=
database
.
getStorage
(
id
,
this
);
if
(
blockCount
==
0
)
{
setBlockOwner
(
storage
,
i
,
blockCount
,
true
);
setUnused
(
i
,
1
);
storage
.
incrementRecordCount
();
i
++;
i
+=
blockCount
;
}
else
{
int
id
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
id
<
0
)
{
throw
Message
.
getInternalError
();
}
Storage
storage
=
database
.
getStorage
(
id
,
this
);
setBlockOwner
(
storage
,
i
,
blockCount
,
true
);
storage
.
incrementRecordCount
();
i
+=
blockCount
;
}
}
}
database
.
setProgress
(
DatabaseEventListener
.
STATE_SCAN_FILE
,
this
.
fileName
,
fileBlockCount
,
fileBlockCount
);
init
=
true
;
}
}
database
.
setProgress
(
DatabaseEventListener
.
STATE_SCAN_FILE
,
this
.
fileName
,
fileBlockCount
,
fileBlockCount
);
init
=
true
;
}
}
public
synchronized
void
flush
()
throws
SQLException
{
public
void
flush
()
throws
SQLException
{
database
.
checkPowerOff
();
synchronized
(
database
)
{
ObjectArray
list
=
cache
.
getAllChanged
();
database
.
checkPowerOff
();
CacheObject
.
sort
(
list
);
ObjectArray
list
=
cache
.
getAllChanged
();
for
(
int
i
=
0
;
i
<
list
.
size
();
i
++)
{
CacheObject
.
sort
(
list
);
Record
rec
=
(
Record
)
list
.
get
(
i
);
for
(
int
i
=
0
;
i
<
list
.
size
();
i
++)
{
writeBack
(
rec
);
Record
rec
=
(
Record
)
list
.
get
(
i
);
}
writeBack
(
rec
);
// TODO flush performance: maybe it would be faster to write records in
}
// the same loop
// TODO flush performance: maybe it would be faster to write records in
for
(
int
i
=
0
;
i
<
fileBlockCount
;
i
++)
{
// the same loop
i
=
deleted
.
nextSetBit
(
i
);
for
(
int
i
=
0
;
i
<
fileBlockCount
;
i
++)
{
if
(
i
<
0
)
{
i
=
deleted
.
nextSetBit
(
i
);
break
;
if
(
i
<
0
)
{
}
break
;
if
(
deleted
.
get
(
i
))
{
}
writeDirectDeleted
(
i
,
1
);
if
(
deleted
.
get
(
i
))
{
deleted
.
clear
(
i
);
writeDirectDeleted
(
i
,
1
);
deleted
.
clear
(
i
);
}
}
}
}
}
}
}
public
synchronized
void
close
()
throws
SQLException
{
public
void
close
()
throws
SQLException
{
SQLException
closeException
=
null
;
synchronized
(
database
)
{
if
(!
database
.
getReadOnly
())
{
SQLException
closeException
=
null
;
try
{
if
(!
database
.
getReadOnly
())
{
flush
();
try
{
}
catch
(
SQLException
e
)
{
flush
();
closeException
=
e
;
}
catch
(
SQLException
e
)
{
closeException
=
e
;
}
}
}
cache
.
clear
();
// continue with close even if flush was not possible (file storage
// problem)
if
(
file
!=
null
)
{
file
.
closeSilently
();
file
=
null
;
}
if
(
closeException
!=
null
)
{
throw
closeException
;
}
readCount
=
writeCount
=
0
;
}
}
cache
.
clear
();
// continue with close even if flush was not possible (file storage
// problem)
if
(
file
!=
null
)
{
file
.
closeSilently
();
file
=
null
;
}
if
(
closeException
!=
null
)
{
throw
closeException
;
}
readCount
=
writeCount
=
0
;
}
}
private
void
go
(
int
block
)
throws
SQLException
{
private
void
go
(
int
block
)
throws
SQLException
{
...
@@ -372,34 +383,36 @@ public class DiskFile implements CacheWriter {
...
@@ -372,34 +383,36 @@ public class DiskFile implements CacheWriter {
return
((
long
)
block
*
BLOCK_SIZE
)
+
OFFSET
;
return
((
long
)
block
*
BLOCK_SIZE
)
+
OFFSET
;
}
}
synchronized
Record
getRecordIfStored
(
Session
session
,
int
pos
,
RecordReader
reader
,
int
storageId
)
Record
getRecordIfStored
(
Session
session
,
int
pos
,
RecordReader
reader
,
int
storageId
)
throws
SQLException
{
throws
SQLException
{
try
{
synchronized
(
database
)
{
int
owner
=
getPageOwner
(
getPage
(
pos
));
try
{
if
(
owner
!=
storageId
)
{
int
owner
=
getPageOwner
(
getPage
(
pos
));
return
null
;
if
(
owner
!=
storageId
)
{
}
return
null
;
go
(
pos
);
}
rowBuff
.
reset
();
go
(
pos
);
byte
[]
buff
=
rowBuff
.
getBytes
();
rowBuff
.
reset
();
file
.
readFully
(
buff
,
0
,
BLOCK_SIZE
);
byte
[]
buff
=
rowBuff
.
getBytes
();
DataPage
s
=
DataPage
.
create
(
database
,
buff
);
file
.
readFully
(
buff
,
0
,
BLOCK_SIZE
);
s
.
readInt
();
// blockCount
DataPage
s
=
DataPage
.
create
(
database
,
buff
);
int
id
=
s
.
readInt
();
s
.
readInt
();
// blockCount
if
(
id
!=
storageId
)
{
int
id
=
s
.
readInt
();
if
(
id
!=
storageId
)
{
return
null
;
}
}
catch
(
Exception
e
)
{
return
null
;
return
null
;
}
}
}
catch
(
Exception
e
)
{
return
getRecord
(
session
,
pos
,
reader
,
storageId
);
return
null
;
}
}
return
getRecord
(
session
,
pos
,
reader
,
storageId
);
}
}
synchronized
Record
getRecord
(
Session
session
,
int
pos
,
RecordReader
reader
,
int
storageId
)
throws
SQLException
{
Record
getRecord
(
Session
session
,
int
pos
,
RecordReader
reader
,
int
storageId
)
throws
SQLException
{
if
(
file
==
null
)
{
synchronized
(
database
)
{
throw
Message
.
getSQLException
(
ErrorCode
.
SIMULATED_POWER_OFF
);
if
(
file
==
null
)
{
}
throw
Message
.
getSQLException
(
ErrorCode
.
SIMULATED_POWER_OFF
);
synchronized
(
this
)
{
}
Record
record
=
(
Record
)
cache
.
get
(
pos
);
Record
record
=
(
Record
)
cache
.
get
(
pos
);
if
(
record
!=
null
)
{
if
(
record
!=
null
)
{
return
record
;
return
record
;
...
@@ -438,50 +451,52 @@ public class DiskFile implements CacheWriter {
...
@@ -438,50 +451,52 @@ public class DiskFile implements CacheWriter {
}
}
}
}
synchronized
int
allocate
(
Storage
storage
,
int
blockCount
)
throws
SQLException
{
int
allocate
(
Storage
storage
,
int
blockCount
)
throws
SQLException
{
if
(
file
==
null
)
{
synchronized
(
database
)
{
throw
Message
.
getSQLException
(
ErrorCode
.
SIMULATED_POWER_OFF
);
if
(
file
==
null
)
{
}
throw
Message
.
getSQLException
(
ErrorCode
.
SIMULATED_POWER_OFF
);
blockCount
=
getPage
(
blockCount
+
BLOCKS_PER_PAGE
-
1
)
*
BLOCKS_PER_PAGE
;
}
int
lastPage
=
getPage
(
getBlockCount
());
blockCount
=
getPage
(
blockCount
+
BLOCKS_PER_PAGE
-
1
)
*
BLOCKS_PER_PAGE
;
int
pageCount
=
getPage
(
blockCount
);
int
lastPage
=
getPage
(
getBlockCount
());
int
pos
=
-
1
;
int
pageCount
=
getPage
(
blockCount
);
boolean
found
=
false
;
int
pos
=
-
1
;
for
(
int
i
=
0
;
i
<
lastPage
;
i
++)
{
boolean
found
=
false
;
found
=
true
;
for
(
int
i
=
0
;
i
<
lastPage
;
i
++)
{
for
(
int
j
=
i
;
j
<
i
+
pageCount
;
j
++)
{
found
=
true
;
if
(
j
>=
lastPage
||
getPageOwner
(
j
)
!=
-
1
)
{
for
(
int
j
=
i
;
j
<
i
+
pageCount
;
j
++)
{
found
=
false
;
if
(
j
>=
lastPage
||
getPageOwner
(
j
)
!=
-
1
)
{
found
=
false
;
break
;
}
}
if
(
found
)
{
pos
=
i
*
BLOCKS_PER_PAGE
;
break
;
break
;
}
}
}
}
if
(
found
)
{
if
(!
found
)
{
pos
=
i
*
BLOCKS_PER_PAGE
;
int
max
=
getBlockCount
();
break
;
pos
=
MathUtils
.
roundUp
(
max
,
BLOCKS_PER_PAGE
);
}
if
(
rowBuff
instanceof
DataPageText
)
{
}
if
(
pos
>
max
)
{
if
(!
found
)
{
writeDirectDeleted
(
max
,
pos
-
max
);
int
max
=
getBlockCount
();
}
pos
=
MathUtils
.
roundUp
(
max
,
BLOCKS_PER_PAGE
);
writeDirectDeleted
(
pos
,
blockCount
);
if
(
rowBuff
instanceof
DataPageText
)
{
}
else
{
if
(
pos
>
max
)
{
long
min
=
((
long
)
pos
+
blockCount
)
*
BLOCK_SIZE
;
writeDirectDeleted
(
max
,
pos
-
max
);
min
=
MathUtils
.
scaleUp50Percent
(
Constants
.
FILE_MIN_SIZE
,
min
,
Constants
.
FILE_PAGE_SIZE
,
Constants
.
FILE_MAX_INCREMENT
)
+
OFFSET
;
}
if
(
min
>
file
.
length
())
{
writeDirectDeleted
(
pos
,
blockCount
);
file
.
setLength
(
min
);
}
else
{
database
.
notifyFileSize
(
min
);
long
min
=
((
long
)
pos
+
blockCount
)
*
BLOCK_SIZE
;
}
min
=
MathUtils
.
scaleUp50Percent
(
Constants
.
FILE_MIN_SIZE
,
min
,
Constants
.
FILE_PAGE_SIZE
,
Constants
.
FILE_MAX_INCREMENT
)
+
OFFSET
;
if
(
min
>
file
.
length
())
{
file
.
setLength
(
min
);
database
.
notifyFileSize
(
min
);
}
}
}
}
setBlockOwner
(
storage
,
pos
,
blockCount
,
false
);
for
(
int
i
=
0
;
i
<
blockCount
;
i
++)
{
storage
.
free
(
i
+
pos
,
1
);
}
return
pos
;
}
}
setBlockOwner
(
storage
,
pos
,
blockCount
,
false
);
for
(
int
i
=
0
;
i
<
blockCount
;
i
++)
{
storage
.
free
(
i
+
pos
,
1
);
}
return
pos
;
}
}
private
void
setBlockOwner
(
Storage
storage
,
int
pos
,
int
blockCount
,
boolean
inUse
)
throws
SQLException
{
private
void
setBlockOwner
(
Storage
storage
,
int
pos
,
int
blockCount
,
boolean
inUse
)
throws
SQLException
{
...
@@ -550,24 +565,28 @@ public class DiskFile implements CacheWriter {
...
@@ -550,24 +565,28 @@ public class DiskFile implements CacheWriter {
pageOwners
.
set
(
page
,
storageId
);
pageOwners
.
set
(
page
,
storageId
);
}
}
synchronized
void
setUsed
(
int
pos
,
int
blockCount
)
{
void
setUsed
(
int
pos
,
int
blockCount
)
{
if
(
pos
+
blockCount
>
fileBlockCount
)
{
synchronized
(
database
)
{
setBlockCount
(
pos
+
blockCount
);
if
(
pos
+
blockCount
>
fileBlockCount
)
{
setBlockCount
(
pos
+
blockCount
);
}
used
.
setRange
(
pos
,
blockCount
,
true
);
deleted
.
setRange
(
pos
,
blockCount
,
false
);
}
}
used
.
setRange
(
pos
,
blockCount
,
true
);
deleted
.
setRange
(
pos
,
blockCount
,
false
);
}
}
public
synchronized
void
delete
()
throws
SQLException
{
public
void
delete
()
throws
SQLException
{
try
{
synchronized
(
database
)
{
cache
.
clear
();
try
{
file
.
close
();
cache
.
clear
();
FileUtils
.
delete
(
fileName
);
file
.
close
();
}
catch
(
IOException
e
)
{
FileUtils
.
delete
(
fileName
);
throw
Message
.
convertIOException
(
e
,
fileName
);
}
catch
(
IOException
e
)
{
}
finally
{
throw
Message
.
convertIOException
(
e
,
fileName
);
file
=
null
;
}
finally
{
fileName
=
null
;
file
=
null
;
fileName
=
null
;
}
}
}
}
}
...
@@ -584,10 +603,10 @@ public class DiskFile implements CacheWriter {
...
@@ -584,10 +603,10 @@ public class DiskFile implements CacheWriter {
// return start;
// return start;
// }
// }
public
synchronized
void
writeBack
(
CacheObject
obj
)
throws
SQLException
{
public
void
writeBack
(
CacheObject
obj
)
throws
SQLException
{
writeCount
++;
synchronized
(
database
)
{
Record
record
=
(
Record
)
obj
;
writeCount
++
;
synchronized
(
this
)
{
Record
record
=
(
Record
)
obj
;
int
blockCount
=
record
.
getBlockCount
();
int
blockCount
=
record
.
getBlockCount
();
record
.
prepareWrite
();
record
.
prepareWrite
();
go
(
record
.
getPos
());
go
(
record
.
getPos
());
...
@@ -600,8 +619,8 @@ public class DiskFile implements CacheWriter {
...
@@ -600,8 +619,8 @@ public class DiskFile implements CacheWriter {
buff
.
fill
(
blockCount
*
BLOCK_SIZE
);
buff
.
fill
(
blockCount
*
BLOCK_SIZE
);
buff
.
updateChecksum
();
buff
.
updateChecksum
();
file
.
write
(
buff
.
getBytes
(),
0
,
buff
.
length
());
file
.
write
(
buff
.
getBytes
(),
0
,
buff
.
length
());
record
.
setChanged
(
false
);
}
}
record
.
setChanged
(
false
);
}
}
/*
/*
...
@@ -611,31 +630,33 @@ public class DiskFile implements CacheWriter {
...
@@ -611,31 +630,33 @@ public class DiskFile implements CacheWriter {
return
used
;
return
used
;
}
}
synchronized
void
updateRecord
(
Session
session
,
Record
record
)
throws
SQLException
{
void
updateRecord
(
Session
session
,
Record
record
)
throws
SQLException
{
record
.
setChanged
(
true
);
synchronized
(
database
)
{
int
pos
=
record
.
getPos
();
record
.
setChanged
(
true
);
Record
old
=
(
Record
)
cache
.
update
(
pos
,
record
);
int
pos
=
record
.
getPos
();
if
(
SysProperties
.
CHECK
)
{
Record
old
=
(
Record
)
cache
.
update
(
pos
,
record
);
if
(
old
!=
null
)
{
if
(
SysProperties
.
CHECK
)
{
if
(
old
!=
record
)
{
if
(
old
!=
null
)
{
database
.
checkPowerOff
();
if
(
old
!=
record
)
{
throw
Message
.
getInternalError
(
"old != record old="
+
old
+
" new="
+
record
);
database
.
checkPowerOff
();
}
throw
Message
.
getInternalError
(
"old != record old="
+
old
+
" new="
+
record
);
int
blockCount
=
record
.
getBlockCount
();
}
for
(
int
i
=
0
;
i
<
blockCount
;
i
++)
{
int
blockCount
=
record
.
getBlockCount
();
if
(
deleted
.
get
(
i
+
pos
))
{
for
(
int
i
=
0
;
i
<
blockCount
;
i
++)
{
throw
Message
.
getInternalError
(
"update marked as deleted: "
+
(
i
+
pos
));
if
(
deleted
.
get
(
i
+
pos
))
{
throw
Message
.
getInternalError
(
"update marked as deleted: "
+
(
i
+
pos
));
}
}
}
}
}
}
}
}
if
(
logChanges
)
{
if
(
logChanges
)
{
log
.
add
(
session
,
this
,
record
);
log
.
add
(
session
,
this
,
record
);
}
}
}
}
}
synchronized
void
writeDirectDeleted
(
int
recordId
,
int
blockCount
)
throws
SQLException
{
void
writeDirectDeleted
(
int
recordId
,
int
blockCount
)
throws
SQLException
{
synchronized
(
this
)
{
synchronized
(
database
)
{
go
(
recordId
);
go
(
recordId
);
for
(
int
i
=
0
;
i
<
blockCount
;
i
++)
{
for
(
int
i
=
0
;
i
<
blockCount
;
i
++)
{
file
.
write
(
freeBlock
.
getBytes
(),
0
,
freeBlock
.
length
());
file
.
write
(
freeBlock
.
getBytes
(),
0
,
freeBlock
.
length
());
...
@@ -644,73 +665,81 @@ public class DiskFile implements CacheWriter {
...
@@ -644,73 +665,81 @@ public class DiskFile implements CacheWriter {
}
}
}
}
synchronized
void
writeDirect
(
Storage
storage
,
int
pos
,
byte
[]
data
,
int
offset
)
throws
SQLException
{
void
writeDirect
(
Storage
storage
,
int
pos
,
byte
[]
data
,
int
offset
)
throws
SQLException
{
go
(
pos
);
synchronized
(
database
)
{
file
.
write
(
data
,
offset
,
BLOCK_SIZE
);
go
(
pos
);
setBlockOwner
(
storage
,
pos
,
1
,
true
);
file
.
write
(
data
,
offset
,
BLOCK_SIZE
);
setBlockOwner
(
storage
,
pos
,
1
,
true
);
}
}
}
public
synchronized
int
copyDirect
(
int
pos
,
OutputStream
out
)
throws
SQLException
{
public
int
copyDirect
(
int
pos
,
OutputStream
out
)
throws
SQLException
{
try
{
synchronized
(
database
)
{
if
(
pos
<
0
)
{
try
{
// read the header
if
(
pos
<
0
)
{
byte
[]
buffer
=
new
byte
[
OFFSET
];
// read the header
file
.
seek
(
0
);
byte
[]
buffer
=
new
byte
[
OFFSET
];
file
.
readFullyDirect
(
buffer
,
0
,
OFFSET
);
file
.
seek
(
0
);
out
.
write
(
buffer
);
file
.
readFullyDirect
(
buffer
,
0
,
OFFSET
);
return
0
;
out
.
write
(
buffer
);
}
return
0
;
if
(
pos
>=
fileBlockCount
)
{
}
return
-
1
;
if
(
pos
>=
fileBlockCount
)
{
}
return
-
1
;
int
blockSize
=
DiskFile
.
BLOCK_SIZE
;
}
byte
[]
buff
=
new
byte
[
blockSize
];
int
blockSize
=
DiskFile
.
BLOCK_SIZE
;
DataPage
s
=
DataPage
.
create
(
database
,
buff
);
byte
[]
buff
=
new
byte
[
blockSize
];
database
.
setProgress
(
DatabaseEventListener
.
STATE_BACKUP_FILE
,
this
.
fileName
,
pos
,
fileBlockCount
);
DataPage
s
=
DataPage
.
create
(
database
,
buff
);
go
(
pos
);
database
.
setProgress
(
DatabaseEventListener
.
STATE_BACKUP_FILE
,
this
.
fileName
,
pos
,
fileBlockCount
);
file
.
readFully
(
buff
,
0
,
blockSize
);
s
.
reset
();
int
blockCount
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
blockCount
<
0
)
{
throw
Message
.
getInternalError
();
}
if
(
blockCount
==
0
)
{
blockCount
=
1
;
}
int
id
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
id
<
0
)
{
throw
Message
.
getInternalError
();
}
s
.
checkCapacity
(
blockCount
*
blockSize
);
if
(
blockCount
>
1
)
{
file
.
readFully
(
s
.
getBytes
(),
blockSize
,
blockCount
*
blockSize
-
blockSize
);
}
if
(
file
.
isEncrypted
())
{
s
.
reset
();
go
(
pos
);
go
(
pos
);
file
.
readFullyDirect
(
s
.
getBytes
(),
0
,
blockCount
*
blockSize
);
file
.
readFully
(
buff
,
0
,
blockSize
);
s
.
reset
();
int
blockCount
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
blockCount
<
0
)
{
throw
Message
.
getInternalError
();
}
if
(
blockCount
==
0
)
{
blockCount
=
1
;
}
int
id
=
s
.
readInt
();
if
(
SysProperties
.
CHECK
&&
id
<
0
)
{
throw
Message
.
getInternalError
();
}
s
.
checkCapacity
(
blockCount
*
blockSize
);
if
(
blockCount
>
1
)
{
file
.
readFully
(
s
.
getBytes
(),
blockSize
,
blockCount
*
blockSize
-
blockSize
);
}
if
(
file
.
isEncrypted
())
{
s
.
reset
();
go
(
pos
);
file
.
readFullyDirect
(
s
.
getBytes
(),
0
,
blockCount
*
blockSize
);
}
out
.
write
(
s
.
getBytes
(),
0
,
blockCount
*
blockSize
);
return
pos
+
blockCount
;
}
catch
(
IOException
e
)
{
throw
Message
.
convertIOException
(
e
,
fileName
);
}
}
out
.
write
(
s
.
getBytes
(),
0
,
blockCount
*
blockSize
);
return
pos
+
blockCount
;
}
catch
(
IOException
e
)
{
throw
Message
.
convertIOException
(
e
,
fileName
);
}
}
}
}
synchronized
void
removeRecord
(
Session
session
,
int
pos
,
Record
record
,
int
blockCount
)
throws
SQLException
{
void
removeRecord
(
Session
session
,
int
pos
,
Record
record
,
int
blockCount
)
throws
SQLException
{
if
(
logChanges
)
{
synchronized
(
database
)
{
log
.
add
(
session
,
this
,
record
);
if
(
logChanges
)
{
log
.
add
(
session
,
this
,
record
);
}
cache
.
remove
(
pos
);
deleted
.
setRange
(
pos
,
blockCount
,
true
);
setUnused
(
pos
,
blockCount
);
}
}
cache
.
remove
(
pos
);
deleted
.
setRange
(
pos
,
blockCount
,
true
);
setUnused
(
pos
,
blockCount
);
}
}
synchronized
void
addRecord
(
Session
session
,
Record
record
)
throws
SQLException
{
void
addRecord
(
Session
session
,
Record
record
)
throws
SQLException
{
if
(
logChanges
)
{
synchronized
(
database
)
{
log
.
add
(
session
,
this
,
record
);
if
(
logChanges
)
{
log
.
add
(
session
,
this
,
record
);
}
cache
.
put
(
record
);
}
}
cache
.
put
(
record
);
}
}
/*
/*
...
@@ -720,47 +749,53 @@ public class DiskFile implements CacheWriter {
...
@@ -720,47 +749,53 @@ public class DiskFile implements CacheWriter {
return
cache
;
return
cache
;
}
}
synchronized
void
free
(
int
pos
,
int
blockCount
)
{
void
free
(
int
pos
,
int
blockCount
)
{
used
.
setRange
(
pos
,
blockCount
,
false
);
synchronized
(
database
)
{
used
.
setRange
(
pos
,
blockCount
,
false
);
}
}
}
public
int
getRecordOverhead
()
{
public
int
getRecordOverhead
()
{
return
recordOverhead
;
return
recordOverhead
;
}
}
public
synchronized
void
truncateStorage
(
Session
session
,
Storage
storage
,
IntArray
pages
)
throws
SQLException
{
public
void
truncateStorage
(
Session
session
,
Storage
storage
,
IntArray
pages
)
throws
SQLException
{
int
storageId
=
storage
.
getId
();
synchronized
(
database
)
{
// make sure the cache records of this storage are not flushed to disk
int
storageId
=
storage
.
getId
();
// afterwards
// make sure the cache records of this storage are not flushed to disk
ObjectArray
list
=
cache
.
getAllChanged
();
// afterwards
for
(
int
i
=
0
;
i
<
list
.
size
();
i
++)
{
ObjectArray
list
=
cache
.
getAllChanged
();
Record
r
=
(
Record
)
list
.
get
(
i
);
for
(
int
i
=
0
;
i
<
list
.
size
();
i
++)
{
if
(
r
.
getStorageId
()
==
storageId
)
{
Record
r
=
(
Record
)
list
.
get
(
i
);
r
.
setChanged
(
false
);
if
(
r
.
getStorageId
()
==
storageId
)
{
}
r
.
setChanged
(
false
);
}
}
int
[]
pagesCopy
=
new
int
[
pages
.
size
()];
// can not use pages directly, because setUnused removes rows from there
pages
.
toArray
(
pagesCopy
);
for
(
int
i
=
0
;
i
<
pagesCopy
.
length
;
i
++)
{
int
page
=
pagesCopy
[
i
];
if
(
logChanges
)
{
log
.
addTruncate
(
session
,
this
,
storageId
,
page
*
BLOCKS_PER_PAGE
,
BLOCKS_PER_PAGE
);
}
}
for
(
int
j
=
0
;
j
<
BLOCKS_PER_PAGE
;
j
++)
{
int
[]
pagesCopy
=
new
int
[
pages
.
size
()];
Record
r
=
(
Record
)
cache
.
find
(
page
*
BLOCKS_PER_PAGE
+
j
);
// can not use pages directly, because setUnused removes rows from there
if
(
r
!=
null
)
{
pages
.
toArray
(
pagesCopy
);
cache
.
remove
(
r
.
getPos
());
for
(
int
i
=
0
;
i
<
pagesCopy
.
length
;
i
++)
{
int
page
=
pagesCopy
[
i
];
if
(
logChanges
)
{
log
.
addTruncate
(
session
,
this
,
storageId
,
page
*
BLOCKS_PER_PAGE
,
BLOCKS_PER_PAGE
);
}
}
for
(
int
j
=
0
;
j
<
BLOCKS_PER_PAGE
;
j
++)
{
Record
r
=
(
Record
)
cache
.
find
(
page
*
BLOCKS_PER_PAGE
+
j
);
if
(
r
!=
null
)
{
cache
.
remove
(
r
.
getPos
());
}
}
deleted
.
setRange
(
page
*
BLOCKS_PER_PAGE
,
BLOCKS_PER_PAGE
,
true
);
setUnused
(
page
*
BLOCKS_PER_PAGE
,
BLOCKS_PER_PAGE
);
}
}
deleted
.
setRange
(
page
*
BLOCKS_PER_PAGE
,
BLOCKS_PER_PAGE
,
true
);
setUnused
(
page
*
BLOCKS_PER_PAGE
,
BLOCKS_PER_PAGE
);
}
}
}
}
public
synchronized
void
sync
()
{
public
void
sync
()
{
if
(
file
!=
null
)
{
synchronized
(
database
)
{
file
.
sync
();
if
(
file
!=
null
)
{
file
.
sync
();
}
}
}
}
}
...
@@ -768,70 +803,76 @@ public class DiskFile implements CacheWriter {
         return dataFile;
     }

-    public synchronized void setLogChanges(boolean b) {
-        this.logChanges = b;
-    }
+    public void setLogChanges(boolean b) {
+        synchronized (database) {
+            this.logChanges = b;
+        }
+    }

-    public synchronized void addRedoLog(Storage storage, int recordId, int blockCount, DataPage rec) throws SQLException {
-        byte[] data = null;
-        if (rec != null) {
-            DataPage all = rowBuff;
-            all.reset();
-            all.writeInt(blockCount);
-            all.writeInt(storage.getId());
-            all.writeDataPageNoSize(rec);
-            // the buffer may have some additional fillers - just ignore them
-            all.fill(blockCount * BLOCK_SIZE);
-            all.updateChecksum();
-            if (SysProperties.CHECK && all.length() != BLOCK_SIZE * blockCount) {
-                throw Message.getInternalError("blockCount:" + blockCount + " length: " + all.length() * BLOCK_SIZE);
-            }
-            data = new byte[all.length()];
-            System.arraycopy(all.getBytes(), 0, data, 0, all.length());
-        }
-        for (int i = 0; i < blockCount; i++) {
-            RedoLogRecord log = new RedoLogRecord();
-            log.recordId = recordId + i;
-            log.offset = i * BLOCK_SIZE;
-            log.storage = storage;
-            log.data = data;
-            log.sequenceId = redoBuffer.size();
-            redoBuffer.add(log);
-            redoBufferSize += log.getSize();
-        }
-        if (redoBufferSize > SysProperties.REDO_BUFFER_SIZE) {
-            flushRedoLog();
-        }
-    }
+    public void addRedoLog(Storage storage, int recordId, int blockCount, DataPage rec) throws SQLException {
+        synchronized (database) {
+            byte[] data = null;
+            if (rec != null) {
+                DataPage all = rowBuff;
+                all.reset();
+                all.writeInt(blockCount);
+                all.writeInt(storage.getId());
+                all.writeDataPageNoSize(rec);
+                // the buffer may have some additional fillers - just ignore them
+                all.fill(blockCount * BLOCK_SIZE);
+                all.updateChecksum();
+                if (SysProperties.CHECK && all.length() != BLOCK_SIZE * blockCount) {
+                    throw Message.getInternalError("blockCount:" + blockCount + " length: " + all.length() * BLOCK_SIZE);
+                }
+                data = new byte[all.length()];
+                System.arraycopy(all.getBytes(), 0, data, 0, all.length());
+            }
+            for (int i = 0; i < blockCount; i++) {
+                RedoLogRecord log = new RedoLogRecord();
+                log.recordId = recordId + i;
+                log.offset = i * BLOCK_SIZE;
+                log.storage = storage;
+                log.data = data;
+                log.sequenceId = redoBuffer.size();
+                redoBuffer.add(log);
+                redoBufferSize += log.getSize();
+            }
+            if (redoBufferSize > SysProperties.REDO_BUFFER_SIZE) {
+                flushRedoLog();
+            }
+        }
+    }

-    public synchronized void flushRedoLog() throws SQLException {
-        if (redoBuffer.size() == 0) {
-            return;
-        }
-        redoBuffer.sort(new Comparator() {
-            public int compare(Object o1, Object o2) {
-                RedoLogRecord e1 = (RedoLogRecord) o1;
-                RedoLogRecord e2 = (RedoLogRecord) o2;
-                int comp = e1.recordId - e2.recordId;
-                if (comp == 0) {
-                    comp = e1.sequenceId - e2.sequenceId;
-                }
-                return comp;
-            }
-        });
-        RedoLogRecord last = null;
-        for (int i = 0; i < redoBuffer.size(); i++) {
-            RedoLogRecord entry = (RedoLogRecord) redoBuffer.get(i);
-            if (last != null && entry.recordId != last.recordId) {
-                writeRedoLog(last);
-            }
-            last = entry;
-        }
-        if (last != null) {
-            writeRedoLog(last);
-        }
-        redoBuffer.clear();
-        redoBufferSize = 0;
-    }
+    public void flushRedoLog() throws SQLException {
+        synchronized (database) {
+            if (redoBuffer.size() == 0) {
+                return;
+            }
+            redoBuffer.sort(new Comparator() {
+                public int compare(Object o1, Object o2) {
+                    RedoLogRecord e1 = (RedoLogRecord) o1;
+                    RedoLogRecord e2 = (RedoLogRecord) o2;
+                    int comp = e1.recordId - e2.recordId;
+                    if (comp == 0) {
+                        comp = e1.sequenceId - e2.sequenceId;
+                    }
+                    return comp;
+                }
+            });
+            RedoLogRecord last = null;
+            for (int i = 0; i < redoBuffer.size(); i++) {
+                RedoLogRecord entry = (RedoLogRecord) redoBuffer.get(i);
+                if (last != null && entry.recordId != last.recordId) {
+                    writeRedoLog(last);
+                }
+                last = entry;
+            }
+            if (last != null) {
+                writeRedoLog(last);
+            }
+            redoBuffer.clear();
+            redoBufferSize = 0;
+        }
+    }

     private void writeRedoLog(RedoLogRecord entry) throws SQLException {
...
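addRedoLog buffers one RedoLogRecord per block and triggers a flush once redoBufferSize exceeds SysProperties.REDO_BUFFER_SIZE; flushRedoLog then sorts the buffered entries by record id (and by buffering order within a record id) and writes only the newest entry per record id. The following is a small self-contained sketch of that coalescing step; Entry and coalesce are illustrative names, not part of H2:

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of the flushRedoLog coalescing: sort by record id, keep the latest entry per id.
public class RedoCoalesceSketch {

    static class Entry {
        int recordId;
        int sequenceId;
        String data;
        Entry(int recordId, int sequenceId, String data) {
            this.recordId = recordId;
            this.sequenceId = sequenceId;
            this.data = data;
        }
    }

    static List<Entry> coalesce(List<Entry> buffer) {
        Collections.sort(buffer, new Comparator<Entry>() {
            public int compare(Entry e1, Entry e2) {
                int comp = e1.recordId - e2.recordId;
                if (comp == 0) {
                    comp = e1.sequenceId - e2.sequenceId;
                }
                return comp;
            }
        });
        List<Entry> result = new ArrayList<Entry>();
        Entry last = null;
        for (Entry entry : buffer) {
            // when the record id changes, the previous entry was the newest for its id
            if (last != null && entry.recordId != last.recordId) {
                result.add(last);
            }
            last = entry;
        }
        if (last != null) {
            result.add(last);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Entry> buffer = new ArrayList<Entry>();
        buffer.add(new Entry(5, 0, "old"));
        buffer.add(new Entry(3, 1, "a"));
        buffer.add(new Entry(5, 2, "new"));
        for (Entry e : coalesce(buffer)) {
            System.out.println(e.recordId + " -> " + e.data);
        }
        // prints: 3 -> a, then 5 -> new
    }
}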
h2/src/main/org/h2/store/Storage.java  View file @ cb967665
...
@@ -90,28 +90,30 @@ public class Storage {
             lastCheckedPage = file.getPage(record.getPos());
             next = record.getPos() + blockCount;
         }
-        BitField used = file.getUsed();
-        while (true) {
-            int page = file.getPage(next);
-            if (lastCheckedPage != page) {
-                if (pageIndex < 0) {
-                    pageIndex = pages.findNextValueIndex(page);
-                } else {
-                    pageIndex++;
-                }
-                if (pageIndex >= pages.size()) {
-                    return -1;
-                }
-                lastCheckedPage = pages.get(pageIndex);
-                next = Math.max(next, DiskFile.BLOCKS_PER_PAGE * lastCheckedPage);
-            }
-            if (used.get(next)) {
-                return next;
-            }
-            if (used.getLong(next) == 0) {
-                next = MathUtils.roundUp(next + 1, 64);
-            } else {
-                next++;
-            }
-        }
-    }
+        synchronized (database) {
+            BitField used = file.getUsed();
+            while (true) {
+                int page = file.getPage(next);
+                if (lastCheckedPage != page) {
+                    if (pageIndex < 0) {
+                        pageIndex = pages.findNextValueIndex(page);
+                    } else {
+                        pageIndex++;
+                    }
+                    if (pageIndex >= pages.size()) {
+                        return -1;
+                    }
+                    lastCheckedPage = pages.get(pageIndex);
+                    next = Math.max(next, DiskFile.BLOCKS_PER_PAGE * lastCheckedPage);
+                }
+                if (used.get(next)) {
+                    return next;
+                }
+                if (used.getLong(next) == 0) {
+                    next = MathUtils.roundUp(next + 1, 64);
+                } else {
+                    next++;
+                }
+            }
+        }
+    }
...
@@ -153,18 +155,20 @@ public class Storage {
     }

     private boolean isFreeAndMine(int pos, int blocks) {
-        BitField used = file.getUsed();
-        for (int i = blocks + pos - 1; i >= pos; i--) {
-            if (file.getPageOwner(file.getPage(i)) != id || used.get(i)) {
-                return false;
-            }
-        }
-        return true;
+        synchronized (database) {
+            BitField used = file.getUsed();
+            for (int i = blocks + pos - 1; i >= pos; i--) {
+                if (file.getPageOwner(file.getPage(i)) != id || used.get(i)) {
+                    return false;
+                }
+            }
+            return true;
+        }
     }

     public int allocate(int blockCount) throws SQLException {
         if (freeList.size() > 0) {
-            synchronized (file) {
+            synchronized (database) {
                 BitField used = file.getUsed();
                 for (int i = 0; i < freeList.size(); i++) {
                     int px = freeList.get(i);
...
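In Storage, allocate() previously synchronized on the file object and now synchronizes on the database object like the rest of the storage code, so scanning the free list and marking blocks as used can no longer interleave with another thread doing the same. A rough sketch of why a single shared monitor makes such check-then-act sequences safe; AllocatorSketch and its BitSet-based free map are hypothetical, not the H2 classes:

import java.util.BitSet;

// Sketch: with one shared monitor, "find a free block, then mark it used" is atomic
// with respect to every other thread that uses the same monitor.
public class AllocatorSketch {

    private final Object database = new Object(); // shared monitor, as in the commit
    private final BitSet used = new BitSet();

    public int allocate() {
        synchronized (database) {
            int block = used.nextClearBit(0);
            used.set(block);
            return block;
        }
    }

    public void free(int block) {
        synchronized (database) {
            used.clear(block);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final AllocatorSketch alloc = new AllocatorSketch();
        Runnable r = new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    alloc.allocate();
                }
            }
        };
        Thread t1 = new Thread(r);
        Thread t2 = new Thread(r);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // 2000 distinct blocks were handed out, so no block was allocated twice
        System.out.println(alloc.used.cardinality());
    }
}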
h2/src/main/org/h2/table/TableData.java  View file @ cb967665
...
@@ -322,12 +322,12 @@ public class TableData extends Table implements RecordReader {
             return;
         }
         if (lockShared.isEmpty()) {
-            traceLock(session, exclusive, "ok");
+            traceLock(session, exclusive, "added for");
             session.addLock(this);
             lockExclusive = session;
             return;
         } else if (lockShared.size() == 1 && lockShared.contains(session)) {
-            traceLock(session, exclusive, "ok (upgrade)");
+            traceLock(session, exclusive, "add (upgraded) for");
             lockExclusive = session;
             return;
         }
...
@@ -353,11 +353,11 @@ public class TableData extends Table implements RecordReader {
         }
         long now = System.currentTimeMillis();
         if (now >= max) {
-            traceLock(session, exclusive, "timeout " + session.getLockTimeout());
+            traceLock(session, exclusive, "timeout after " + session.getLockTimeout());
             throw Message.getSQLException(ErrorCode.LOCK_TIMEOUT_1, getName());
         }
         try {
-            traceLock(session, exclusive, "waiting");
+            traceLock(session, exclusive, "waiting for");
             if (database.getLockMode() == Constants.LOCK_MODE_TABLE_GC) {
                 for (int i = 0; i < 20; i++) {
                     long free = Runtime.getRuntime().freeMemory();
...
@@ -378,7 +378,7 @@ public class TableData extends Table implements RecordReader {
     private void traceLock(Session session, boolean exclusive, String s) {
         if (traceLock.debug()) {
-            traceLock.debug(session.getId() + " " + (exclusive ? "xlock" : "slock") + " " + s + " " + getName());
+            traceLock.debug(session.getId() + " " + (exclusive ? "exclusive write lock" : "shared read lock") + " " + s + " " + getName());
         }
     }
...
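The TableData change only rewords the lock trace messages. A small illustration of how the new message text is assembled; the session id and table name below are invented for the example:

// Sketch of the reworded lock trace message; the real values come from the session and table.
public class TraceLockMessageSketch {
    public static void main(String[] args) {
        int sessionId = 3;
        boolean exclusive = true;
        String s = "waiting for";
        String tableName = "TEST";
        String message = sessionId + " " + (exclusive ? "exclusive write lock" : "shared read lock")
                + " " + s + " " + tableName;
        // prints: 3 exclusive write lock waiting for TEST
        System.out.println(message);
    }
}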
h2/src/test/org/h2/test/TestAll.java  View file @ cb967665
...
@@ -4,7 +4,6 @@
  */
 package org.h2.test;

-import java.sql.PreparedStatement;
 import java.sql.SQLException;
 import java.util.Properties;
...
@@ -79,6 +78,7 @@ import org.h2.test.unit.TestFile;
 import org.h2.test.unit.TestFileLock;
 import org.h2.test.unit.TestIntArray;
 import org.h2.test.unit.TestIntIntHashMap;
+import org.h2.test.unit.TestMultiThreadedKernel;
 import org.h2.test.unit.TestOverflow;
 import org.h2.test.unit.TestPattern;
 import org.h2.test.unit.TestReader;
...
@@ -147,8 +147,6 @@ java org.h2.test.TestAll timer
 web page translation
-TestMultiThreadedKernel and integrate in unit tests; use also in-memory and so on
 At startup, when corrupted, say if LOG=0 was used before
 add MVCC
...
@@ -479,6 +477,7 @@ write tests using the PostgreSQL JDBC driver
         new TestFileLock().runTest(this);
         new TestIntArray().runTest(this);
         new TestIntIntHashMap().runTest(this);
+        new TestMultiThreadedKernel().runTest(this);
         new TestOverflow().runTest(this);
         new TestPattern().runTest(this);
         new TestReader().runTest(this);
...
h2/src/test/org/h2/test/jdbc/TestResultSet.java  View file @ cb967665
...
@@ -35,6 +35,7 @@ public class TestResultSet extends TestBase {
         stat = conn.createStatement();

+        testColumnLength();
         testArray();
         testLimitMaxRows();
...
@@ -57,24 +58,29 @@ public class TestResultSet extends TestBase {
     }

+    private void testColumnLength() throws Exception {
+        trace("Test ColumnLength");
+        ResultSet rs;
+        stat.execute("CREATE TABLE one (C CHARACTER(10))");
+        rs = stat.executeQuery("SELECT C || C FROM one;");
+        ResultSetMetaData md = rs.getMetaData();
+        check(20, md.getPrecision(1));
+        ResultSet rs2 = stat.executeQuery("SELECT UPPER (C) FROM one;");
+        ResultSetMetaData md2 = rs2.getMetaData();
+        check(10, md2.getPrecision(1));
+        rs = stat.executeQuery("SELECT UPPER (C), CHAR(10), CONCAT(C,C,C), HEXTORAW(C), RAWTOHEX(C) FROM one");
+        ResultSetMetaData meta = rs.getMetaData();
+        check(10, meta.getPrecision(1));
+        check(1, meta.getPrecision(2));
+        check(30, meta.getPrecision(3));
+        check(3, meta.getPrecision(4));
+        check(40, meta.getPrecision(5));
+        stat.execute("DROP TABLE one");
+    }
+
     private void testLimitMaxRows() throws Exception {
         trace("Test LimitMaxRows");
         ResultSet rs;
         stat.execute("CREATE TABLE TEST(ID INT PRIMARY KEY)");
         stat.execute("INSERT INTO TEST VALUES(1), (2), (3), (4)");
         rs = stat.executeQuery("SELECT * FROM TEST");
         checkResultRowCount(rs, 4);
         rs = stat.executeQuery("SELECT * FROM TEST LIMIT 2");
         checkResultRowCount(rs, 2);
         stat.setMaxRows(2);
         rs = stat.executeQuery("SELECT * FROM TEST");
         checkResultRowCount(rs, 2);
         rs = stat.executeQuery("SELECT * FROM TEST LIMIT 1");
         checkResultRowCount(rs, 1);
         rs = stat.executeQuery("SELECT * FROM TEST LIMIT 3");
         checkResultRowCount(rs, 2);
         stat.setMaxRows(0);
         stat.execute("DROP TABLE TEST");
     }

     void testAutoIncrement() throws Exception {
...
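The new testColumnLength() verifies the precision H2 reports for computed columns through ResultSetMetaData. A standalone sketch of the same check against an in-memory database; the URL and credentials below are just example values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

// Sketch of the metadata precision check outside the test harness.
public class ColumnLengthExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
        Statement stat = conn.createStatement();
        stat.execute("CREATE TABLE one (C CHARACTER(10))");
        ResultSet rs = stat.executeQuery("SELECT C || C FROM one");
        ResultSetMetaData meta = rs.getMetaData();
        // concatenating two CHAR(10) columns is expected to report precision 20
        System.out.println(meta.getPrecision(1));
        conn.close();
    }
}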
h2/src/test/org/h2/test/unit/TestMultiThreadedKernel.java  0 → 100644 (new file)  View file @ cb967665

/*
 * Copyright 2004-2007 H2 Group. Licensed under the H2 License, Version 1.0 (http://h2database.com/html/license.html).
 * Initial Developer: H2 Group
 */
package org.h2.test.unit;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.h2.constant.SysProperties;
import org.h2.test.TestBase;

public class TestMultiThreadedKernel extends TestBase implements Runnable {

    private String url, user, password;
    private int id;
    private TestMultiThreadedKernel master;
    private volatile boolean stop;

    public void test() throws Exception {
        if (config.networked) {
            return;
        }
        deleteDb("multiThreadedKernel");
        int count = getSize(2, 5);
        Thread[] list = new Thread[count];
        for (int i = 0; i < count; i++) {
            TestMultiThreadedKernel r = new TestMultiThreadedKernel();
            r.url = getURL("multiThreadedKernel", true);
            r.user = getUser();
            r.password = getPassword();
            r.master = this;
            r.id = i;
            Thread thread = new Thread(r);
            thread.setName("Thread " + i);
            thread.start();
            list[i] = thread;
        }
        Thread.sleep(getSize(2000, 5000));
        stop = true;
        for (int i = 0; i < count; i++) {
            list[i].join();
        }
        SysProperties.multiThreadedKernel = false;
    }

    public void run() {
        try {
            org.h2.Driver.load();
            Connection conn = DriverManager.getConnection(url + ";MULTI_THREADED=1;LOCK_MODE=3;WRITE_DELAY=0", user, password);
            conn.createStatement().execute("CREATE TABLE TEST" + id + "(COL1 BIGINT AUTO_INCREMENT PRIMARY KEY, COL2 BIGINT)");
            PreparedStatement prep = conn.prepareStatement("insert into TEST" + id + "(col2) values (?)");
            for (int i = 0; !master.stop; i++) {
                prep.setLong(1, i);
                prep.execute();
            }
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
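The test enables the multi-threaded kernel purely through the connection URL. A minimal sketch of such a connection outside the test harness; the database path is an example, and LOCK_MODE=3 / WRITE_DELAY=0 are simply the settings this test happens to use, not requirements of MULTI_THREADED=1:

import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: open a connection with the multi-threaded kernel enabled via URL settings.
public class MultiThreadedConnectExample {
    public static void main(String[] args) throws Exception {
        org.h2.Driver.load();
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;MULTI_THREADED=1;LOCK_MODE=3;WRITE_DELAY=0", "sa", "");
        conn.createStatement().execute("CREATE TABLE IF NOT EXISTS T(ID INT)");
        conn.close();
    }
}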