Compare commits

26 Commits
v0.8.4 ... main

Author SHA1 Message Date
Donny
74074dfa2f
Update update_abis.sh: fixing bug with incorrect argument handling in copy_abis function (#342)
2025-03-24 17:03:58 +08:00
Maximilian Hubert
12538e4b6c
Typo Fixes in Documentation (#340)
* Update contributing.md

* Update mining-reward.md

* Update proof-of-random-access.md

* Update run.md
2025-03-24 17:02:54 +08:00
Alex Pikme
2fd8ffc2ea
fix spelling error and correct minor error (#343)
* Update network_behaviour.rs

* Update client.rs

---------

Co-authored-by: XxAlex74xX <30472093+XxAlex74xX@users.noreply.github.com>
2025-03-24 16:56:12 +08:00
PixelPilot
4cf45149cb
Update file_location_cache.rs (#351) 2025-03-24 16:55:28 +08:00
0g-peterzhb
cfe4b45c41
add api of getting available file info by root (#357)
* add api of getting available file info by root
2025-03-24 16:51:53 +08:00
0g-peterzhb
d43a616b56
fix shard config init (#354)
2025-03-12 22:27:34 +08:00
Eric Norberg
898350e271
fix: errors in code comments (#333)
* lib.rs

* architecture.md

* chunk_write_control.rs
2025-02-18 16:47:01 +08:00
comfsrt
6a26c336e7
Update config.rs (#331)
2025-02-18 09:19:10 +08:00
Tronica
a915766840
Fix spelling errors and correct minor errors (#337)
* fix typo lib.rs

* fix typos mod.rs

* fix typo config.rs
2025-02-18 09:11:13 +08:00
SamiAlHassan
3d9aa8c940
docs: fix broken figure image paths across docs (#335)
* patch figure links on proof-of-random-access.md

* patch figure links on transaction-processing.md

* patch figure links on k-v-store.md

* patch figure links on architecture.md
2025-02-18 09:10:51 +08:00
0g-peterzhb
538afb00e1
add gas auto adjustment (#330)
* add gas auto adjustment

* refactor config
2025-02-18 09:09:50 +08:00
0g-peterzhb
7ad3f717b4
refactor submit pora loop (#325)
2025-02-11 18:23:19 +08:00
Helen Grachtz
26cc19b92d
Fix code comments and bug report errors (#319)
fix typos
2025-02-11 16:56:40 +08:00
Brawn
a3335eed82
fix: Fix incorrect -in-place='' syntax Update update_config.sh (#320)
Fix incorrect -in-place='' syntax in sed command
2025-02-11 16:55:18 +08:00
Dmitry
2272b5dbfd
fix: Fix path handling in script execution (#318) 2025-02-11 16:53:28 +08:00
witty
760d4b4a53
docs: Fix imperative mood in documentation instructions Update onebox-test.md (#316) 2025-02-11 16:50:39 +08:00
Ocenka
91680f2e33
Fix spelling errors (#314)
* fix spelling lib.rs

* fix spelling error environment.rs
2025-02-11 16:48:24 +08:00
Radovenchyk
c9bca86add
Update README.md (#312) 2025-02-11 16:46:40 +08:00
udhaykumarbala
93f587c407
replaced broken links (#311) 2025-02-11 16:46:00 +08:00
Akaonetwo
1f71aadeec
Typo fixed in: Update README.md (#310)
fix typos
2025-02-11 16:43:32 +08:00
dashangcun
656a092cf8
chore: fix some comments (#313)
Signed-off-by: dashangcun <907225865@qq.com>
2025-02-11 16:40:55 +08:00
crStiv
8014f51b6d
Update introduction.md (#309) 2025-02-11 16:40:31 +08:00
Dmitry
b0a9a415f7
docs: Fixing Documentation Typos (#308)
* docs: typo fix Update transaction-processing.md

* docs: typo fix Update run.md

* docs: typo fix Update onebox-test.md
2025-02-11 16:37:33 +08:00
Maxim Evtush
bc6bcf857c
Update LICENSE (#321) 2025-02-11 16:36:50 +08:00
peilun-conflux
d15ef5ba3d
Add tokio console for debug. (#317)
It requires `tokio_unstable` in rust flags, so a `tokio-console`
feature is added to be compatible with the default config.
To compile with the tokio console enabled, compile with
``RUSTFLAGS="--cfg tokio_unstable" cargo build --release --features tokio-console``
The usage of the tokio console can be found in https://github.com/tokio-rs/console.
2025-02-11 16:36:08 +08:00
0g-peterzhb
9ce215b919
add dynamic gas price adjustment when submitting pora (#324) 2025-02-11 16:34:58 +08:00
90 changed files with 1401 additions and 501 deletions

View File

@ -45,6 +45,11 @@ jobs:
python-version: '3.9'
cache: 'pip'
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.22'
- name: Install dependencies
run: |
python -m pip install --upgrade pip

Cargo.lock generated
View File

@ -453,6 +453,28 @@ dependencies = [
"trust-dns-resolver", "trust-dns-resolver",
] ]
[[package]]
name = "async-stream"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b5a71a6f37880a80d1d7f19efd781e4b5de42c88f0722cc13bcb6cc2cfe8476"
dependencies = [
"async-stream-impl",
"futures-core",
"pin-project-lite 0.2.14",
]
[[package]]
name = "async-stream-impl"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7c24de15d275a1ecfd47a380fb4d5ec9bfe0933f309ed5e705b775596a3574d"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.68",
]
[[package]]
name = "async-task"
version = "4.7.1"
@ -506,7 +528,7 @@ version = "0.16.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fdb8867f378f33f78a811a8eb9bf108ad99430d7aad43315dd9319c827ef6247"
dependencies = [
-"http",
+"http 0.2.12",
"log",
"url",
"wildmatch",
@ -529,6 +551,53 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0"
[[package]]
name = "axum"
version = "0.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a6c9af12842a67734c9a2e355436e5d03b22383ed60cf13cd0c18fbfe3dcbcf"
dependencies = [
"async-trait",
"axum-core",
"bytes",
"futures-util",
"http 1.2.0",
"http-body 1.0.1",
"http-body-util",
"itoa",
"matchit",
"memchr",
"mime",
"percent-encoding",
"pin-project-lite 0.2.14",
"rustversion",
"serde",
"sync_wrapper 1.0.2",
"tower",
"tower-layer",
"tower-service",
]
[[package]]
name = "axum-core"
version = "0.4.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09f2bd6146b97ae3359fa0cc6d6b376d9539582c7b4220f041a33ec24c226199"
dependencies = [
"async-trait",
"bytes",
"futures-util",
"http 1.2.0",
"http-body 1.0.1",
"http-body-util",
"mime",
"pin-project-lite 0.2.14",
"rustversion",
"sync_wrapper 1.0.2",
"tower-layer",
"tower-service",
]
[[package]]
name = "backtrace"
version = "0.3.73"
@ -568,6 +637,12 @@ version = "0.21.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567"
[[package]]
name = "base64"
version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "base64ct"
version = "1.6.0"
@ -1104,6 +1179,45 @@ dependencies = [
"yaml-rust",
]
[[package]]
name = "console-api"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8030735ecb0d128428b64cd379809817e620a40e5001c54465b99ec5feec2857"
dependencies = [
"futures-core",
"prost 0.13.4",
"prost-types 0.13.4",
"tonic",
"tracing-core",
]
[[package]]
name = "console-subscriber"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6539aa9c6a4cd31f4b1c040f860a1eac9aa80e7df6b05d506a6e7179936d6a01"
dependencies = [
"console-api",
"crossbeam-channel",
"crossbeam-utils",
"futures-task",
"hdrhistogram",
"humantime",
"hyper-util",
"prost 0.13.4",
"prost-types 0.13.4",
"serde",
"serde_json",
"thread_local",
"tokio",
"tokio-stream",
"tonic",
"tracing",
"tracing-core",
"tracing-subscriber",
]
[[package]]
name = "const-hex"
version = "1.12.0"
@ -1157,6 +1271,17 @@ dependencies = [
"serde_json",
]
[[package]]
name = "contract-wrapper"
version = "0.1.0"
dependencies = [
"ethers",
"serde",
"serde_json",
"tokio",
"tracing",
]
[[package]]
name = "convert_case"
version = "0.6.0"
@ -2286,7 +2411,7 @@ dependencies = [
"futures-timer",
"futures-util",
"hashers",
-"http",
+"http 0.2.12",
"instant",
"jsonwebtoken",
"once_cell",
@ -2905,7 +3030,26 @@ dependencies = [
"futures-core",
"futures-sink",
"futures-util",
-"http",
+"http 0.2.12",
"indexmap 2.2.6",
"slab",
"tokio",
"tokio-util 0.7.11",
"tracing",
]
[[package]]
name = "h2"
version = "0.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ccae279728d634d083c00f6099cb58f01cc99c145b84b8be2f6c74618d79922e"
dependencies = [
"atomic-waker",
"bytes",
"fnv",
"futures-core",
"futures-sink",
"http 1.2.0",
"indexmap 2.2.6", "indexmap 2.2.6",
"slab", "slab",
"tokio", "tokio",
@ -3004,6 +3148,19 @@ dependencies = [
"tokio-util 0.6.10", "tokio-util 0.6.10",
] ]
[[package]]
name = "hdrhistogram"
version = "7.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "765c9198f173dd59ce26ff9f95ef0aafd0a0fe01fb9d72841bc5066a4c06511d"
dependencies = [
"base64 0.21.7",
"byteorder",
"flate2",
"nom",
"num-traits",
]
[[package]]
name = "heck"
version = "0.3.3"
@ -3125,6 +3282,17 @@ dependencies = [
"itoa",
]
[[package]]
name = "http"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f16ca2af56261c99fba8bac40a10251ce8188205a4c448fbb745a2e4daa76fea"
dependencies = [
"bytes",
"fnv",
"itoa",
]
[[package]]
name = "http-body"
version = "0.4.6"
@ -3132,7 +3300,30 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ceab25649e9960c0311ea418d17bee82c0dcec1bd053b5f9a66e265a693bed2"
dependencies = [
"bytes",
-"http",
+"http 0.2.12",
"pin-project-lite 0.2.14",
]
[[package]]
name = "http-body"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1efedce1fb8e6913f23e0c92de8e62cd5b772a67e7b3946df930a62566c93184"
dependencies = [
"bytes",
"http 1.2.0",
]
[[package]]
name = "http-body-util"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "793429d76616a256bcb62c2a2ec2bed781c8307e797e2598c50010f2bee2544f"
dependencies = [
"bytes",
"futures-util",
"http 1.2.0",
"http-body 1.0.1",
"pin-project-lite 0.2.14", "pin-project-lite 0.2.14",
] ]
@ -3164,9 +3355,9 @@ dependencies = [
"futures-channel", "futures-channel",
"futures-core", "futures-core",
"futures-util", "futures-util",
"h2", "h2 0.3.26",
"http", "http 0.2.12",
"http-body", "http-body 0.4.6",
"httparse", "httparse",
"httpdate", "httpdate",
"itoa", "itoa",
@ -3178,14 +3369,35 @@ dependencies = [
"want", "want",
] ]
[[package]]
name = "hyper"
version = "1.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "256fb8d4bd6413123cc9d91832d78325c48ff41677595be797d90f42969beae0"
dependencies = [
"bytes",
"futures-channel",
"futures-util",
"h2 0.4.7",
"http 1.2.0",
"http-body 1.0.1",
"httparse",
"httpdate",
"itoa",
"pin-project-lite 0.2.14",
"smallvec",
"tokio",
"want",
]
[[package]]
name = "hyper-rustls"
version = "0.23.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1788965e61b367cd03a62950836d5cd41560c3577d90e40e0819373194d1661c"
dependencies = [
-"http",
+"http 0.2.12",
-"hyper",
+"hyper 0.14.29",
"log",
"rustls 0.20.9",
"rustls-native-certs",
@ -3201,8 +3413,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec3efd23720e2049821a693cbc7e65ea87c72f1c58ff2f9522ff332b1491e590" checksum = "ec3efd23720e2049821a693cbc7e65ea87c72f1c58ff2f9522ff332b1491e590"
dependencies = [ dependencies = [
"futures-util", "futures-util",
"http", "http 0.2.12",
"hyper", "hyper 0.14.29",
"rustls 0.21.12", "rustls 0.21.12",
"tokio", "tokio",
"tokio-rustls 0.24.1", "tokio-rustls 0.24.1",
@ -3216,12 +3428,25 @@ checksum = "6eea26c5d0b6ab9d72219f65000af310f042a740926f7b2fa3553e774036e2e7"
dependencies = [ dependencies = [
"derive_builder", "derive_builder",
"dns-lookup", "dns-lookup",
"hyper", "hyper 0.14.29",
"tokio", "tokio",
"tower-service", "tower-service",
"tracing", "tracing",
] ]
[[package]]
name = "hyper-timeout"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b90d566bffbce6a75bd8b09a05aa8c2cb1fabb6cb348f8840c9e4c90a0d83b0"
dependencies = [
"hyper 1.5.2",
"hyper-util",
"pin-project-lite 0.2.14",
"tokio",
"tower-service",
]
[[package]]
name = "hyper-tls"
version = "0.5.0"
@ -3229,12 +3454,31 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d6183ddfa99b85da61a140bea0efc93fdf56ceaa041b37d553518030827f9905"
dependencies = [
"bytes",
-"hyper",
+"hyper 0.14.29",
"native-tls",
"tokio",
"tokio-native-tls",
]
[[package]]
name = "hyper-util"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df2dcfbe0677734ab2f3ffa7fa7bfd4706bfdc1ef393f2ee30184aed67e631b4"
dependencies = [
"bytes",
"futures-channel",
"futures-util",
"http 1.2.0",
"http-body 1.0.1",
"hyper 1.5.2",
"pin-project-lite 0.2.14",
"socket2 0.5.7",
"tokio",
"tower-service",
"tracing",
]
[[package]]
name = "iana-time-zone"
version = "0.1.60"
@ -3586,7 +3830,7 @@ dependencies = [
"futures-timer",
"futures-util",
"gloo-net",
-"http",
+"http 0.2.12",
"jsonrpsee-core",
"jsonrpsee-types",
"pin-project 1.1.5",
@ -3615,7 +3859,7 @@ dependencies = [
"futures-timer",
"futures-util",
"globset",
-"hyper",
+"hyper 0.14.29",
"jsonrpsee-types",
"lazy_static",
"parking_lot 0.12.3",
@ -3638,7 +3882,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5fc1d8c0e4f455c47df21f8a29f4bbbcb75eb71bfee919b92e92502b48358392"
dependencies = [
"async-trait",
-"hyper",
+"hyper 0.14.29",
"hyper-rustls 0.23.2",
"jsonrpsee-core",
"jsonrpsee-types",
@ -3658,7 +3902,7 @@ checksum = "bdd69efeb3ce2cba767f126872f4eeb4624038a29098e75d77608b2b4345ad03"
dependencies = [
"futures-channel",
"futures-util",
-"hyper",
+"hyper 0.14.29",
"jsonrpsee-core",
"jsonrpsee-types",
"serde",
@ -4713,6 +4957,12 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2532096657941c2fea9c289d370a250971c689d4f143798ff67113ec042024a5"
[[package]]
name = "matchit"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e7465ac9959cc2b1404e8e2367b43684a6d13790fe23056cc8c6c5a6b7bcb94"
[[package]]
name = "md-5"
version = "0.10.6"
@ -4778,6 +5028,7 @@ dependencies = [
"async-trait",
"blake2",
"contract-interface",
"contract-wrapper",
"ethereum-types 0.14.1",
"ethers",
"hex",
@ -6025,6 +6276,16 @@ dependencies = [
"prost-derive 0.10.1",
]
[[package]]
name = "prost"
version = "0.13.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c0fef6c4230e4ccf618a35c59d7ede15dea37de8427500f50aff708806e42ec"
dependencies = [
"bytes",
"prost-derive 0.13.4",
]
[[package]]
name = "prost-build"
version = "0.9.0"
@ -6106,6 +6367,19 @@ dependencies = [
"syn 1.0.109",
]
[[package]]
name = "prost-derive"
version = "0.13.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "157c5a9d7ea5c2ed2d9fb8f495b64759f7816c7eaea54ba3978f0d63000162e3"
dependencies = [
"anyhow",
"itertools 0.13.0",
"proc-macro2",
"quote",
"syn 2.0.68",
]
[[package]]
name = "prost-types"
version = "0.9.0"
@ -6126,6 +6400,15 @@ dependencies = [
"prost 0.10.4",
]
[[package]]
name = "prost-types"
version = "0.13.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc2f1e56baa61e93533aebc21af4d2134b70f66275e0fcdf3cbe43d77ff7e8fc"
dependencies = [
"prost 0.13.4",
]
[[package]]
name = "protobuf"
version = "2.28.0"
@ -6159,8 +6442,8 @@ dependencies = [
"dns-lookup",
"futures-core",
"futures-util",
-"http",
+"http 0.2.12",
-"hyper",
+"hyper 0.14.29",
"hyper-system-resolver",
"pin-project-lite 0.2.14",
"thiserror",
@ -6403,10 +6686,10 @@ dependencies = [
"encoding_rs", "encoding_rs",
"futures-core", "futures-core",
"futures-util", "futures-util",
"h2", "h2 0.3.26",
"http", "http 0.2.12",
"http-body", "http-body 0.4.6",
"hyper", "hyper 0.14.29",
"hyper-rustls 0.24.2", "hyper-rustls 0.24.2",
"hyper-tls", "hyper-tls",
"ipnet", "ipnet",
@ -6422,7 +6705,7 @@ dependencies = [
"serde", "serde",
"serde_json", "serde_json",
"serde_urlencoded", "serde_urlencoded",
"sync_wrapper", "sync_wrapper 0.1.2",
"system-configuration", "system-configuration",
"tokio", "tokio",
"tokio-native-tls", "tokio-native-tls",
@ -7497,6 +7780,12 @@ version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2047c6ded9c721764247e62cd3b03c09ffc529b2ba5b10ec482ae507a4a70160" checksum = "2047c6ded9c721764247e62cd3b03c09ffc529b2ba5b10ec482ae507a4a70160"
[[package]]
name = "sync_wrapper"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263"
[[package]]
name = "synstructure"
version = "0.12.6"
@ -7729,6 +8018,7 @@ dependencies = [
"signal-hook-registry",
"socket2 0.5.7",
"tokio-macros",
"tracing",
"windows-sys 0.48.0",
]
@ -7786,9 +8076,9 @@ dependencies = [
[[package]]
name = "tokio-stream"
-version = "0.1.15"
+version = "0.1.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "267ac89e0bec6e691e5813911606935d77c476ff49024f98abcea3e7b15e37af"
+checksum = "eca58d7bba4a75707817a2c44174253f9236b2d5fbd055602e9d5c07c139a047"
dependencies = [
"futures-core",
"pin-project-lite 0.2.14",
@ -7895,6 +8185,62 @@ dependencies = [
"winnow 0.6.13",
]
[[package]]
name = "tonic"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877c5b330756d856ffcc4553ab34a5684481ade925ecc54bcd1bf02b1d0d4d52"
dependencies = [
"async-stream",
"async-trait",
"axum",
"base64 0.22.1",
"bytes",
"h2 0.4.7",
"http 1.2.0",
"http-body 1.0.1",
"http-body-util",
"hyper 1.5.2",
"hyper-timeout",
"hyper-util",
"percent-encoding",
"pin-project 1.1.5",
"prost 0.13.4",
"socket2 0.5.7",
"tokio",
"tokio-stream",
"tower",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
name = "tower"
version = "0.4.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c"
dependencies = [
"futures-core",
"futures-util",
"indexmap 1.9.3",
"pin-project 1.1.5",
"pin-project-lite 0.2.14",
"rand 0.8.5",
"slab",
"tokio",
"tokio-util 0.7.11",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
name = "tower-layer"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "121c2a6cda46980bb0fcd1647ffaf6cd3fc79a013de288782836f6df9c48780e"
[[package]]
name = "tower-service"
version = "0.3.2"
@ -8124,7 +8470,7 @@ dependencies = [
"byteorder",
"bytes",
"data-encoding",
-"http",
+"http 0.2.12",
"httparse",
"log",
"rand 0.8.5",
@ -8922,6 +9268,8 @@ dependencies = [
"chunk_pool", "chunk_pool",
"clap", "clap",
"config", "config",
"console-subscriber",
"contract-wrapper",
"ctrlc", "ctrlc",
"duration-str", "duration-str",
"error-chain", "error-chain",

View File

@ -16,7 +16,7 @@ Across the two lanes, 0G Storage supports the following features:
* **General Purpose Design**: Supports atomic transactions, mutable key-value stores, and archive log systems, enabling a wide range of applications with various data types.
* **Validated Incentivization**: Utilizes the PoRA (Proof of Random Access) mining algorithm to mitigate the data outsourcing issue and to ensure rewards are distributed to nodes who contribute to the storage network.
-For in-depth technical details about 0G Storage, please read our [Intro to 0G Storage](https://docs.0g.ai/og-storage).
+For in-depth technical details about 0G Storage, please read our [Intro to 0G Storage](https://docs.0g.ai/0g-storage).
## Documentation

View File

@ -0,0 +1,17 @@
[package]
name = "contract-wrapper"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = { version = "1.28", features = ["macros"] }
ethers = "2.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tracing = "0.1.35"
# or `tracing` if you prefer
[features]
dev = []

View File

@ -0,0 +1,204 @@
use ethers::{
abi::Detokenize,
contract::ContractCall,
providers::{Middleware, ProviderError},
types::{TransactionReceipt, U256},
};
use serde::Deserialize;
use std::{sync::Arc, time::Duration};
use tokio::time::sleep;
use tracing::{debug, info};
/// The result of a single submission attempt.
#[derive(Debug)]
pub enum SubmissionAction {
Success(TransactionReceipt),
/// Generic "retry" signal, but we still need to know if it's "mempool/timeout" or something else.
/// We'll parse the error string or have a separate reason in a real app.
Retry(String),
Error(String),
}
/// Configuration for submission retries, gas price, etc.
#[derive(Clone, Copy, Debug, Deserialize)]
pub struct SubmitConfig {
/// If `Some`, use this gas price for the first attempt.
/// If `None`, fetch the current network gas price.
pub(crate) initial_gas_price: Option<U256>,
/// If `Some`, clamp increased gas price to this limit.
/// If `None`, do not bump gas for mempool/timeout errors.
pub(crate) max_gas_price: Option<U256>,
/// Gas limit of the transaction
pub(crate) max_gas: Option<U256>,
/// Factor by which to multiply the gas price on each mempool/timeout error.
/// E.g. if factor=11 => a 10% bump => newGas = (gas * factor) / 10
pub(crate) gas_increase_factor: Option<u64>,
/// The maximum number of gas bumps (for mempool/timeout). If `max_gas_price` is set,
/// we typically rely on clamping. But you can still cap the number of bumps if you want.
pub(crate) max_retries: Option<usize>,
/// Seconds to wait between attempts.
pub(crate) interval_secs: Option<u64>,
}
const DEFAULT_INTERVAL_SECS: u64 = 2;
const DEFAULT_MAX_RETRIES: usize = 5;
impl Default for SubmitConfig {
fn default() -> Self {
Self {
initial_gas_price: None,
max_gas_price: None,
max_gas: None,
gas_increase_factor: Some(11), // implies 10% bump if we do (gas*11)/10
max_retries: Some(DEFAULT_MAX_RETRIES),
interval_secs: Some(DEFAULT_INTERVAL_SECS),
}
}
}
/// A simple function to detect if the retry is from a mempool or timeout error.
/// Right now, we rely on `submit_once` returning `SubmissionAction::Retry` for ANY error
/// that is "retryable," so we must parse the error string from `submit_once`, or
/// store that string. Another approach is to return an enum with a reason from `submit_once`.
fn is_mempool_or_timeout_error(error_str: String) -> bool {
let lower = error_str.to_lowercase();
lower.contains("mempool") || lower.contains("timeout")
}
/// A function that performs a single submission attempt:
/// - Sends the transaction
/// - Awaits the receipt with limited internal retries
/// - Returns a `SubmissionAction` indicating success, retry, or error.
pub async fn submit_once<M, T>(call: ContractCall<M, T>) -> SubmissionAction
where
M: Middleware + 'static,
T: Detokenize,
{
let pending_tx = match call.send().await {
Ok(tx) => tx,
Err(e) => {
let msg = e.to_string();
if is_mempool_or_timeout_error(msg.clone()) {
return SubmissionAction::Retry(format!("mempool/timeout: {:?}", e));
}
debug!("Error sending transaction: {:?}", msg);
return SubmissionAction::Error(format!("Transaction failed: {}", msg));
}
};
debug!("Signed tx hash: {:?}", pending_tx.tx_hash());
let receipt_result = pending_tx.await;
match receipt_result {
Ok(Some(receipt)) => {
info!("Transaction mined, receipt: {:?}", receipt);
SubmissionAction::Success(receipt)
}
Ok(None) => {
debug!("Transaction probably timed out; retrying");
SubmissionAction::Retry("timeout, receipt is none".to_string())
}
Err(ProviderError::HTTPError(e)) => {
debug!("HTTP error retrieving receipt: {:?}", e);
SubmissionAction::Retry(format!("http error: {:?}", e))
}
Err(e) => SubmissionAction::Error(format!("Transaction unrecoverable: {:?}", e)),
}
}
/// Increase gas price using integer arithmetic: (gp * factor_num) / factor_den
fn increase_gas_price_u256(gp: U256, factor_num: u64, factor_den: u64) -> U256 {
let num = U256::from(factor_num);
let den = U256::from(factor_den);
gp.checked_mul(num).unwrap_or(U256::MAX) / den
}
/// A higher-level function that wraps `submit_once` in a gas-price adjustment loop,
/// plus a global timeout, plus distinct behavior for mempool/timeout vs other errors.
pub async fn submit_with_retry<M, T>(
mut call: ContractCall<M, T>,
config: &SubmitConfig,
middleware: Arc<M>,
) -> Result<TransactionReceipt, String>
where
M: Middleware + 'static,
T: Detokenize,
{
if let Some(max_gas) = config.max_gas {
call = call.gas(max_gas);
}
let mut gas_price = if let Some(gp) = config.initial_gas_price {
gp
} else {
middleware
.get_gas_price()
.await
.map_err(|e| format!("Failed to fetch gas price: {:?}", e))?
};
// If no factor is set, default to 11 => 10% bump
let factor_num = config.gas_increase_factor.unwrap_or(11);
let factor_den = 10u64;
// Two counters: one for gas bumps, one for non-gas retries
let mut non_gas_retries = 0;
let max_retries = config.max_retries.unwrap_or(DEFAULT_MAX_RETRIES);
loop {
// Set gas price on the call
call = call.gas_price(gas_price);
match submit_once(call.clone()).await {
SubmissionAction::Success(receipt) => {
return Ok(receipt);
}
SubmissionAction::Retry(error_str) => {
// We need to figure out if it's "mempool/timeout" or some other reason.
// Right now, we don't have the error string from `submit_once` easily,
// so let's assume we store it or we do a separate function that returns it.
// For simplicity, let's do a hack: let's define a placeholder "error_str" and parse it.
// In reality, you'd likely return `SubmissionAction::Retry(reason_str)` from `submit_once`.
if is_mempool_or_timeout_error(error_str.clone()) {
// Mempool/timeout error
if let Some(max_gp) = config.max_gas_price {
if gas_price >= max_gp {
return Err(format!(
"Exceeded max gas price: {}, with error msg: {}",
max_gp, error_str
));
}
// Bump the gas
let new_price = increase_gas_price_u256(gas_price, factor_num, factor_den);
gas_price = std::cmp::min(new_price, max_gp);
debug!("Bumping gas price to {}", gas_price);
} else {
// No maxGasPrice => we do NOT bump => fail
return Err(
"Mempool/timeout error, no maxGasPrice set => aborting".to_string()
);
}
} else {
// Non-gas error => increment nonGasRetries
non_gas_retries += 1;
if non_gas_retries > max_retries {
return Err(format!("Exceeded non-gas retries: {}", max_retries));
}
debug!(
"Non-gas retry #{} (same gas price: {})",
non_gas_retries, gas_price
);
}
}
SubmissionAction::Error(e) => {
return Err(e);
}
}
// Sleep between attempts
sleep(Duration::from_secs(
config.interval_secs.unwrap_or(DEFAULT_INTERVAL_SECS),
))
.await;
}
}
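For context, here is a minimal usage sketch of the new wrapper, written against the `submit_with_retry` signature above. The helper, its name, and the surrounding setup (a `PoraMine` instance bound to a signing middleware and an already-known beneficiary address) are illustrative assumptions, not part of this changeset:

```rust
use std::sync::Arc;

use contract_interface::PoraMine;
use contract_wrapper::{submit_with_retry, SubmitConfig};
use ethers::providers::Middleware;
use ethers::types::{Address, TransactionReceipt};

// Hypothetical helper: build the call with plain ethers, then let the wrapper
// apply the retry policy described by `SubmitConfig`.
async fn request_id_with_retry<M: Middleware + 'static>(
    mine_contract: &PoraMine<M>,
    beneficiary: Address,
    middleware: Arc<M>,
) -> Result<TransactionReceipt, String> {
    let call = mine_contract.request_miner_id(beneficiary, 0).legacy();
    // Defaults: fetch the current network gas price and allow up to 5 retries for
    // transient (non-gas) errors; set `max_gas_price` to enable the ~10% gas bumps
    // on mempool/timeout errors.
    submit_with_retry(call, &SubmitConfig::default(), middleware).await
}
```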

View File

@ -7,7 +7,7 @@
//! block processing time).
//! - `IncCounter`: used to represent an ideally ever-growing, never-shrinking integer (e.g.,
//! number of block processing requests).
-//! - `IntGauge`: used to represent an varying integer (e.g., number of attestations per block).
+//! - `IntGauge`: used to represent a varying integer (e.g., number of attestations per block).
//!
//! ## Important
//!

View File

@ -11,7 +11,7 @@ pub fn unused_tcp_port() -> Result<u16, String> {
unused_port(Transport::Tcp)
}
-/// A convenience function for `unused_port(Transport::Tcp)`.
+/// A convenience function for `unused_port(Transport::Udp)`.
pub fn unused_udp_port() -> Result<u16, String> {
unused_port(Transport::Udp)
}

View File

@ -4,7 +4,7 @@
### Checks
-* [ ] I've made sure the lint is passing in this PR.
+* [ ] I've made sure the linter is passing in this PR.
* [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, in that case, please comment that they are not relevant.
* [ ] Testing Strategy
* [ ] Unit tests

View File

@ -4,15 +4,15 @@
ZeroGravity system consists of a data availability layer (0G DA) on top of a decentralized storage system (0G Storage). There is a separate consensus network that is part of both the 0G DA and the 0G Storage. For 0G Storage, the consensus is responsible for determining the ordering of the uploaded data blocks, realizing the storage mining verification and the corresponding incentive mechanism through smart contracts.
-Figure 1 illustrates the architecture of the 0G system. When a data block enters the 0G DA, it is first erasure coded and organized into multiple consecutive chunks through erasure coding. The merkle root as a commitment of the encoded data block is then submitted to the consensus layer to keep the order of the data entering the system. The chunks are then dispersed to different storage nodes in 0G Storage where the data may be further replicated to other nodes depending on the storage fee that the user pays. The storage nodes periodically participate the mining process by interacting with the consensus network to accrue rewards from the system.
+Figure 1 illustrates the architecture of the 0G system. When a data block enters the 0G DA, it is first erasure coded and organized into multiple consecutive chunks through erasure coding. The merkle root as a commitment of the encoded data block is then submitted to the consensus layer to keep the order of the data entering the system. The chunks are then dispersed to different storage nodes in 0G Storage where the data may be further replicated to other nodes depending on the storage fee that the user pays. The storage nodes periodically participate in the mining process by interacting with the consensus network to accrue rewards from the system.
-<figure><img src="../../.gitbook/assets/zg-storage-architecture.png" alt=""><figcaption><p>Figure 1. The Architecture of 0G System</p></figcaption></figure>
+<figure><img src="../.gitbook/assets/zg-storage-architecture.png" alt=""><figcaption><p>Figure 1. The Architecture of 0G System</p></figcaption></figure>
## 0G Storage
0G Storage employs layered design targeting to support different types of decentralized applications. Figure 2 shows the overview of the full stack layers of 0G Storage.
-<figure><img src="../../.gitbook/assets/zg-storage-layer.png" alt=""><figcaption><p>Figure 2. Full Stack Solution of 0G Storage</p></figcaption></figure>
+<figure><img src="../.gitbook/assets/zg-storage-layer.png" alt=""><figcaption><p>Figure 2. Full Stack Solution of 0G Storage</p></figcaption></figure>
The lowest is a log layer which is a decentralized system. It consists of multiple storage nodes to form a storage network. The network has built-in incentive mechanism to reward the data storage. The ordering of the uploaded data is guaranteed by a sequencing mechanism to provide a log-based semantics and abstraction. This layer is used to store unstructured raw data for permanent persistency.

View File

@ -1,6 +1,6 @@
# Mining Reward
-0G Storage creates pricing segments every 8 GB of data chunks over the data flow. Each pricing segment is associated with an Endowment Pool and a Reward Pool. The Endowment Pool collects the storage endowments of all the data chunks belongs to this pricing segment and releases a fixed ratio of balance to the Reward Pool every second. The rate of reward release is set to 4% per year.
+0G Storage creates pricing segments every 8 GB of data chunks over the data flow. Each pricing segment is associated with an Endowment Pool and a Reward Pool. The Endowment Pool collects the storage endowments of all the data chunks belong to this pricing segment and releases a fixed ratio of balance to the Reward Pool every second. The rate of reward release is set to 4% per year.
The mining reward is paid to miners for providing data service. Miners receive mining reward when submit the first legitimate PoRA for a mining epoch to 0G Storage contract.
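As a rough illustration of the release schedule described above (treating the stated 4% annual rate as linear and assuming an Endowment Pool balance of $$B$$), the per-second release is approximately $$B \times 0.04 / (365 \times 24 \times 3600) \approx 1.27 \times 10^{-9} \times B$$.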

View File

@ -2,7 +2,7 @@
The ZeroGravity network adopts a Proof of Random Access (PoRA) mechanism to incentivize miners to store data. By requiring miners to answer randomly produced queries to archived data chunks, the PoRA mechanism establishes the relation between mining proof generation power and data storage. Miners answer the queries repeatedly and computes an output digest for each loaded chunk until find a digest that satisfies the mining difficulty (i.e., has enough leading zeros). PoRA will stress the miners' disk I/O and reduce their capability to respond user queries. So 0G Storage adopts intermittent mining, in which a mining epoch starts with a block generation at a specific block height on the host chain and stops when a valid PoRA is submitted to the 0G Storage contract.
-In a strawman design, a PoRA iteration consists of a computing stage and a loading stage. In the computing stage, a miner computes a random recall position (the universal offset in the flow) based on an arbitrary picked random nonce and a mining status read from the host chain. In the loading stage, a miner loads the archived data chunks at the given recall position, and computes output digest by hashing the tuple of mining status and the data chunks. If the output digest satisfies the target difficulty, the miner can construct a legitimate PoRA consists of the chosen random nonce, the loaded data chunk and the proof for the correctness of data chunk to the mining contract.
+In a strawman design, a PoRA iteration consists of a computing stage and a loading stage. In the computing stage, a miner computes a random recall position (the universal offset in the flow) based on an arbitrary picked random nonce and a mining status read from the host chain. In the loading stage, a miner loads the archived data chunks at the given recall position, and computes output digest by hashing the tuple of mining status and the data chunks. If the output digest satisfies the target difficulty, the miner can construct a legitimate PoRA, which consists of the chosen random nonce, the loaded data chunk and the proof for the correctness of data chunk to the mining contract.
## Fairness
@ -30,4 +30,4 @@ Precisely, the mining process has the following steps:
6. For each piece $$\overrightarrow{v}$$, compute the Blake2b hash of the tuple ($$\mathsf{miner\_id}$$, $$\mathsf{nonce}$$, $$\mathsf{context\_digest}$$, $$\mathsf{start\_position}$$, $$\mathsf{mine\_length}$$, $$\overrightarrow{v}$$).
7. If one of Blake2b hash output is smaller than a target value, the miner finds a legitimate PoRA solution.
-<figure><img src="../../../.gitbook/assets/zg-storage-algorithm.png" alt=""><figcaption><p>Figure 1. Recall Position and Scratchpad Computation</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/zg-storage-algorithm.png" alt=""><figcaption><p>Figure 1. Recall Position and Scratchpad Computation</p></figcaption></figure>

View File

@ -10,7 +10,7 @@
## Integration
-We provide a [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage in their applications with the following features:
+We provide a [SDK](https://github.com/0glabs/0g-ts-sdk) for users to easily integrate 0G Storage in their applications with the following features:
* File Merkle Tree Class
* Flow Contract Types
@ -22,7 +22,7 @@ We provide a [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to eas
## Deployment
-Please refer to [Deployment](../0G%20Storage/doc/install.md) page for detailed steps to compile and start a 0G Storage node.
+Please refer to [Deployment](run.md) page for detailed steps to compile and start a 0G Storage node.
## Test

View File

@ -6,4 +6,4 @@ A user-defined function will be used to deserialize the raw content in the log e
When a new key-value node just joins the network, it connects to the log layer and plays the log entries from head to tail to construct the latest state of the key-value store. During the log entry playing, an application-specific key-value node can skip irrelevant log entries which do not contain stream IDs that it cares.
-<figure><img src="../../.gitbook/assets/zg-storage-log.png" alt=""><figcaption><p>Figure 1. Decentralized K-V Store</p></figcaption></figure>
+<figure><img src="../.gitbook/assets/zg-storage-log.png" alt=""><figcaption><p>Figure 1. Decentralized K-V Store</p></figcaption></figure>

View File

@ -5,7 +5,7 @@
## Prerequisites
- Requires python version: 3.8, 3.9 or 3.10, higher version is not guaranteed (e.g. failed to install `pysha3`).
-- Installs dependencies under root folder: `pip3 install -r requirements.txt`
+- Install dependencies under root folder: `pip3 install -r requirements.txt`
## Install Blockchain Nodes
@ -19,7 +19,7 @@ The blockchain node binaries will be compiled or downloaded from github to `test
## Run Tests
-Changes to the `tests` folder and run the following command to run all tests:
+Change to the `tests` folder and run the following command to run all tests:
```
python test_all.py

View File

@ -4,7 +4,7 @@
### Setup Environment
-Install dependencies Node.js, yarn, hardhat.
+Install the dependencies: Node.js, yarn, hardhat.
- Linux

View File

@ -8,8 +8,8 @@ When an application server linking with the 0G Storage key-value runtime starts
When an application server with the key-value runtime encounters the commit record during playing the log, it identifies a conflict window consisting of all the log entries between the start log position of the transaction and the position of the commit record. The log entries in the conflict window therefore contain the key-value operations concurrent with the transaction submitting the commit record. The runtime further detects whether these concurrent operations contain the updates on the keys belonging to the read set of the transaction. If yes, the transaction is aborted, otherwise committed successfully.
-<figure><img src="../../.gitbook/assets/zg-storage-transaction.png" alt=""><figcaption><p>Figure 1. Transaction Processing on 0G K-V Store</p></figcaption></figure>
+<figure><img src="../.gitbook/assets/zg-storage-transaction.png" alt=""><figcaption><p>Figure 1. Transaction Processing on 0G K-V Store</p></figcaption></figure>
## Concurrent Assumption
-This transaction model assumes that the transaction participants are collaborative and will honestly compose the commit record with the correct content. Although this assumption in a decentralized environment is too strong, it is still achievable for specific applications. For example, for an application like Google Docs, a user normally shares the access to others who can be trusted. In case this assumption cannot hold, the code of the transaction can be stored in the ZeroGravity log and some mechanism of verifiable computation like zero-knowledge proof or hardware with trust execution environment (TEE) can be employed by the transaction executors to detect the validity of the commit record.
+This transaction model assumes that the transaction participants are collaborative and will honestly compose the commit record with the correct content. Although this assumption in a decentralized environment is too strong, it is still achievable for specific applications. For example, for an application like Google Docs, a user normally shares the access to others who can be trusted. In case this assumption cannot hold, the code of the transaction can be stored in the ZeroGravity log and some mechanism of verifiable computation like zero-knowledge proof or hardware with trusted execution environment (TEE) can be employed by the transaction executors to detect the validity of the commit record.
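A minimal sketch of the conflict-window check described above (types and field names are illustrative, not taken from the 0G codebase): the transaction commits only if no entry in the window wrote a key that the transaction read.

```rust
use std::collections::HashSet;

/// One log entry inside the conflict window, reduced to the keys it updated.
struct WindowEntry {
    written_keys: HashSet<Vec<u8>>,
}

/// Returns true if the transaction may commit: none of the concurrent entries
/// updated a key in the transaction's read set.
fn can_commit(conflict_window: &[WindowEntry], read_set: &HashSet<Vec<u8>>) -> bool {
    conflict_window
        .iter()
        .all(|entry| entry.written_keys.is_disjoint(read_set))
}
```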

View File

@ -42,8 +42,13 @@ metrics = { workspace = true }
rust-log = { package = "log", version = "0.4.22" }
tracing-core = "0.1.32"
tracing-log = "0.2.0"
console-subscriber = { version = "0.4.1", optional = true }
contract-wrapper = { path = "../common/contract-wrapper" }
[dependencies.libp2p]
version = "0.45.1"
default-features = true
features = ["websocket", "identify", "mplex", "yamux", "noise", "gossipsub", "dns-tokio", "tcp-tokio", "plaintext", "secp256k1"]
[features]
tokio-console = ["console-subscriber"]

View File

@ -13,7 +13,7 @@ enum SlotStatus {
}
/// Sliding window is used to control the concurrent uploading process of a file.
-/// Bounded window allows segments to be uploaded concurrenly, while having a capacity
+/// Bounded window allows segments to be uploaded concurrently, while having a capacity
/// limit on writing threads per file. Meanwhile, the left_boundary field records
/// how many segments have been uploaded.
struct CtrlWindow {
@ -165,7 +165,7 @@ impl ChunkPoolWriteCtrl {
if file_ctrl.total_segments != total_segments {
bail!(
-"file size in segment doesn't match with file size declared in previous segment. Previous total segments:{}, current total segments:{}s",
+"file size in segment doesn't match with file size declared in previous segment. Previous total segments:{}, current total segments:{}",
file_ctrl.total_segments,
total_segments
);

View File

@ -358,7 +358,7 @@ mod tests {
}
#[test]
-fn test_annoucement_cache_peek_priority() {
+fn test_announcement_cache_peek_priority() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -382,7 +382,7 @@
}
#[test]
-fn test_annoucement_cache_pop_len() {
+fn test_announcement_cache_pop_len() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -404,7 +404,7 @@
}
#[test]
-fn test_annoucement_cache_garbage_collect() {
+fn test_announcement_cache_garbage_collect() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -422,7 +422,7 @@
}
#[test]
-fn test_annoucement_cache_insert_gc() {
+fn test_announcement_cache_insert_gc() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -438,7 +438,7 @@
}
#[test]
-fn test_annoucement_cache_insert_ignore_older() {
+fn test_announcement_cache_insert_ignore_older() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -461,7 +461,7 @@
}
#[test]
-fn test_annoucement_cache_insert_overwrite() {
+fn test_announcement_cache_insert_overwrite() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -479,7 +479,7 @@
}
#[test]
-fn test_annoucement_cache_insert_cap_exceeded() {
+fn test_announcement_cache_insert_cap_exceeded() {
let mut cache = AnnouncementCache::new(3, 3600);
let now = timestamp_now();
@ -499,7 +499,7 @@
}
#[test]
-fn test_annoucement_cache_random() {
+fn test_announcement_cache_random() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();
@ -515,7 +515,7 @@
}
#[test]
-fn test_annoucement_cache_all() {
+fn test_announcement_cache_all() {
let mut cache = AnnouncementCache::new(100, 3600);
let now = timestamp_now();

View File

@ -20,7 +20,7 @@ pub struct LogSyncConfig {
// blockchain provider retry params
// the number of retries after a connection times out
pub rate_limit_retries: u32,
-// the nubmer of retries for rate limited responses
+// the number of retries for rate limited responses
pub timeout_retries: u32,
// the duration to wait before retry, in ms
pub initial_backoff: u64,

View File

@ -10,6 +10,7 @@ zgs_spec = { path = "../../common/spec" }
zgs_seal = { path = "../../common/zgs_seal" }
task_executor = { path = "../../common/task_executor" }
contract-interface = { path = "../../common/contract-interface" }
contract-wrapper = { path = "../../common/contract-wrapper" }
lighthouse_metrics = { path = "../../common/lighthouse_metrics" }
ethereum-types = "0.14"
tokio = { version = "1.19.2", features = ["full"] }

View File

@ -2,7 +2,8 @@ use std::str::FromStr;
use std::sync::Arc;
use std::time::Duration;
-use ethereum_types::{Address, H256, U256};
+use contract_wrapper::SubmitConfig;
use ethereum_types::{Address, H256};
use ethers::core::k256::SecretKey;
use ethers::middleware::SignerMiddleware;
use ethers::providers::Http;
@ -21,7 +22,6 @@ pub struct MinerConfig {
pub(crate) rpc_endpoint_url: String,
pub(crate) mine_address: Address,
pub(crate) flow_address: Address,
pub(crate) submission_gas: Option<U256>,
pub(crate) cpu_percentage: u64,
pub(crate) iter_batch: usize,
pub(crate) shard_config: ShardConfig,
@ -29,6 +29,7 @@ pub struct MinerConfig {
pub(crate) rate_limit_retries: u32,
pub(crate) timeout_retries: u32,
pub(crate) initial_backoff: u64,
pub(crate) submission_config: SubmitConfig,
}
pub type MineServiceMiddleware = SignerMiddleware<Arc<Provider<RetryClient<Http>>>, LocalWallet>;
@ -41,7 +42,6 @@ impl MinerConfig {
rpc_endpoint_url: String,
mine_address: Address,
flow_address: Address,
submission_gas: Option<U256>,
cpu_percentage: u64,
iter_batch: usize,
context_query_seconds: u64,
@ -49,6 +49,7 @@ impl MinerConfig {
rate_limit_retries: u32,
timeout_retries: u32,
initial_backoff: u64,
submission_config: SubmitConfig,
) -> Option<MinerConfig> {
miner_key.map(|miner_key| MinerConfig {
miner_id,
@ -56,7 +57,6 @@ impl MinerConfig {
rpc_endpoint_url,
mine_address,
flow_address,
submission_gas,
cpu_percentage,
iter_batch,
shard_config,
@ -64,6 +64,7 @@
rate_limit_retries,
timeout_retries,
initial_backoff,
submission_config,
})
}

View File

@ -57,7 +57,7 @@ pub(crate) async fn check_and_request_miner_id(
}
(None, None) => {
let beneficiary = provider.address();
-let id = request_miner_id(&mine_contract, beneficiary).await?;
+let id = request_miner_id(config, &mine_contract, beneficiary).await?;
set_miner_id(store, &id)
.await
.map_err(|e| format!("set miner id on db corrupt: {:?}", e))?;
@ -86,6 +86,7 @@ async fn check_miner_id(
}
async fn request_miner_id(
config: &MinerConfig,
mine_contract: &PoraMine<MineServiceMiddleware>,
beneficiary: Address,
) -> Result<H256, String> {
@ -94,16 +95,13 @@ async fn request_miner_id(
let submission_call: ContractCall<_, _> = let submission_call: ContractCall<_, _> =
mine_contract.request_miner_id(beneficiary, 0).legacy(); mine_contract.request_miner_id(beneficiary, 0).legacy();
let pending_tx = submission_call let receipt = contract_wrapper::submit_with_retry(
.send() submission_call,
.await &config.submission_config,
.map_err(|e| format!("Fail to request miner id: {:?}", e))?; mine_contract.client().clone(),
)
let receipt = pending_tx .await
.retries(3) .map_err(|e| format!("Fail to submit miner id request: {:?}", e))?;
.await
.map_err(|e| format!("Fail to execute mine answer transaction: {:?}", e))?
.ok_or("Request miner id transaction dropped after 3 retries")?;
let first_log = receipt let first_log = receipt
.logs .logs


@@ -1,13 +1,11 @@
 use contract_interface::PoraAnswer;
 use contract_interface::{PoraMine, ZgsFlow};
-use ethereum_types::U256;
+use contract_wrapper::SubmitConfig;
 use ethers::contract::ContractCall;
 use ethers::prelude::{Http, Provider, RetryClient};
-use ethers::providers::PendingTransaction;
 use hex::ToHex;
 use shared_types::FlowRangeProof;
 use std::sync::Arc;
-use std::time::Duration;
 use storage::H256;
 use storage_async::Store;
 use task_executor::TaskExecutor;
@@ -19,15 +17,13 @@ use crate::watcher::MineContextMessage;
 use zgs_spec::{BYTES_PER_SEAL, SECTORS_PER_SEAL};
-const SUBMISSION_RETRIES: usize = 15;
 pub struct Submitter {
 mine_answer_receiver: mpsc::UnboundedReceiver<AnswerWithoutProof>,
 mine_context_receiver: broadcast::Receiver<MineContextMessage>,
 mine_contract: PoraMine<MineServiceMiddleware>,
 flow_contract: ZgsFlow<Provider<RetryClient<Http>>>,
-default_gas_limit: Option<U256>,
 store: Arc<Store>,
+config: SubmitConfig,
 }
 impl Submitter {
@@ -41,8 +37,7 @@ impl Submitter {
 config: &MinerConfig,
 ) {
 let mine_contract = PoraMine::new(config.mine_address, signing_provider);
-let flow_contract = ZgsFlow::new(config.flow_address, provider);
-let default_gas_limit = config.submission_gas;
+let flow_contract = ZgsFlow::new(config.flow_address, provider.clone());
 let submitter = Submitter {
 mine_answer_receiver,
@@ -50,7 +45,7 @@ impl Submitter {
 mine_contract,
 flow_contract,
 store,
-default_gas_limit,
+config: config.submission_config,
 };
 executor.spawn(
 async move { Box::pin(submitter.start()).await },
@@ -134,11 +129,7 @@ impl Submitter {
 };
 trace!("submit_answer: answer={:?}", answer);
-let mut submission_call: ContractCall<_, _> = self.mine_contract.submit(answer).legacy();
-if let Some(gas_limit) = self.default_gas_limit {
-submission_call = submission_call.gas(gas_limit);
-}
+let submission_call: ContractCall<_, _> = self.mine_contract.submit(answer).legacy();
 if let Some(calldata) = submission_call.calldata() {
 debug!(
@@ -153,27 +144,13 @@ impl Submitter {
 submission_call.estimate_gas().await
 );
-let pending_transaction: PendingTransaction<'_, _> = submission_call
-.send()
-.await
-.map_err(|e| format!("Fail to send PoRA submission transaction: {:?}", e))?;
-debug!(
-"Signed submission transaction hash: {:?}",
-pending_transaction.tx_hash()
-);
-let receipt = pending_transaction
-.retries(SUBMISSION_RETRIES)
-.interval(Duration::from_secs(2))
-.await
-.map_err(|e| format!("Fail to execute PoRA submission transaction: {:?}", e))?
-.ok_or(format!(
-"PoRA submission transaction dropped after {} retries",
-SUBMISSION_RETRIES
-))?;
-info!("Submit PoRA success, receipt: {:?}", receipt);
+contract_wrapper::submit_with_retry(
+submission_call,
+&self.config,
+self.mine_contract.client().clone(),
+)
+.await
+.map_err(|e| format!("Failed to submit mine answer: {:?}", e))?;
 Ok(())
 }
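
The two hunks above replace the hand-rolled send / retries / interval chain with a single call to contract_wrapper::submit_with_retry, driven by the per-node SubmitConfig. The diff does not show the fields of SubmitConfig or the body of submit_with_retry, so the snippet below is only a minimal, self-contained sketch of the retry-with-backoff pattern such a helper can implement; the names retry_with_backoff, max_retries and initial_backoff are illustrative, not the contract-wrapper API, and it assumes the tokio crate with the time feature.

    use std::future::Future;
    use std::time::Duration;

    // Illustrative only: run an async submission, retrying with exponential
    // backoff until it succeeds or the attempt budget is exhausted.
    async fn retry_with_backoff<T, E, Fut>(
        mut submit: impl FnMut() -> Fut,
        max_retries: usize,
        initial_backoff: Duration,
    ) -> Result<T, E>
    where
        Fut: Future<Output = Result<T, E>>,
    {
        let mut backoff = initial_backoff;
        let mut attempts = 0;
        loop {
            match submit().await {
                Ok(receipt) => return Ok(receipt),
                Err(e) if attempts + 1 >= max_retries => return Err(e),
                Err(_) => {
                    attempts += 1;
                    tokio::time::sleep(backoff).await;
                    backoff *= 2; // double the wait between attempts
                }
            }
        }
    }

Centralizing this logic in one crate is what lets both the miner-id request and the PoRA answer submission drop their local retry constants (SUBMISSION_RETRIES, the hard-coded retries(3)) in favour of configuration.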


@@ -740,7 +740,7 @@ where
 &error,
 ConnectionDirection::Outgoing,
 );
-// inform failures of requests comming outside the behaviour
+// inform failures of requests coming outside the behaviour
 if let RequestId::Application(id) = id {
 self.add_event(BehaviourEvent::RPCFailed { peer_id, id });
 }


@@ -103,7 +103,7 @@ pub struct Config {
 /// Subscribe to all subnets for the duration of the runtime.
 pub subscribe_all_subnets: bool,
-/// Import/aggregate all attestations recieved on subscribed subnets for the duration of the
+/// Import/aggregate all attestations received on subscribed subnets for the duration of the
 /// runtime.
 pub import_all_attestations: bool,


@@ -134,7 +134,7 @@ impl NetworkBehaviour for PeerManager {
 BanResult::NotBanned => {}
 }
-// Count dialing peers in the limit if the peer dialied us.
+// Count dialing peers in the limit if the peer dialed us.
 let count_dialing = endpoint.is_listener();
 // Check the connection limits
 if self.peer_limit_reached(count_dialing)


@@ -19,7 +19,7 @@ pub struct Client {
 #[derive(Clone, Copy, Debug, Serialize, PartialEq, Eq, AsRefStr, IntoStaticStr, EnumIter)]
 pub enum ClientKind {
-/// An Zgs node.
+/// A Zgs node.
 Zgs,
 /// An unknown client.
 Unknown,


@@ -54,7 +54,7 @@ impl Service {
 struct Ev(PeerManagerEvent);
 impl From<void::Void> for Ev {
 fn from(_: void::Void) -> Self {
-unreachable!("No events are emmited")
+unreachable!("No events are emitted")
 }
 }
 impl From<PeerManagerEvent> for Ev {


@@ -276,7 +276,7 @@ impl Pruner {
 }
 }
-async fn get_shard_config(store: &Store) -> Result<Option<ShardConfig>> {
+pub async fn get_shard_config(store: &Store) -> Result<Option<ShardConfig>> {
 store
 .get_config_decoded(&SHARD_CONFIG_KEY, DATA_DB_KEY)
 .await


@@ -63,7 +63,11 @@ pub trait Rpc {
 async fn check_file_finalized(&self, tx_seq_or_root: TxSeqOrRoot) -> RpcResult<Option<bool>>;
 #[method(name = "getFileInfo")]
-async fn get_file_info(&self, data_root: DataRoot) -> RpcResult<Option<FileInfo>>;
+async fn get_file_info(
+&self,
+data_root: DataRoot,
+need_available: bool,
+) -> RpcResult<Option<FileInfo>>;
 #[method(name = "getFileInfoByTxSeq")]
 async fn get_file_info_by_tx_seq(&self, tx_seq: u64) -> RpcResult<Option<FileInfo>>;


@@ -95,7 +95,7 @@ impl RpcServer for RpcServerImpl {
 let tx_seq = try_option!(
 self.ctx
 .log_store
-.get_tx_seq_by_data_root(&data_root)
+.get_tx_seq_by_data_root(&data_root, true)
 .await?
 );
@@ -121,7 +121,12 @@ impl RpcServer for RpcServerImpl {
 ) -> RpcResult<Option<SegmentWithProof>> {
 info!(%data_root, %index, "zgs_downloadSegmentWithProof");
-let tx = try_option!(self.ctx.log_store.get_tx_by_data_root(&data_root).await?);
+let tx = try_option!(
+self.ctx
+.log_store
+.get_tx_by_data_root(&data_root, true)
+.await?
+);
 self.get_segment_with_proof_by_tx(tx, index).await
 }
@@ -144,7 +149,12 @@ impl RpcServer for RpcServerImpl {
 let seq = match tx_seq_or_root {
 TxSeqOrRoot::TxSeq(v) => v,
 TxSeqOrRoot::Root(v) => {
-try_option!(self.ctx.log_store.get_tx_seq_by_data_root(&v).await?)
+try_option!(
+self.ctx
+.log_store
+.get_tx_seq_by_data_root(&v, false)
+.await?
+)
 }
 };
@@ -163,10 +173,19 @@ impl RpcServer for RpcServerImpl {
 }
 }
-async fn get_file_info(&self, data_root: DataRoot) -> RpcResult<Option<FileInfo>> {
+async fn get_file_info(
+&self,
+data_root: DataRoot,
+need_available: bool,
+) -> RpcResult<Option<FileInfo>> {
 debug!(%data_root, "zgs_getFileInfo");
-let tx = try_option!(self.ctx.log_store.get_tx_by_data_root(&data_root).await?);
+let tx = try_option!(
+self.ctx
+.log_store
+.get_tx_by_data_root(&data_root, need_available)
+.await?
+);
 Ok(Some(self.get_file_info_by_tx(tx).await?))
 }
@@ -288,7 +307,7 @@ impl RpcServerImpl {
 let maybe_tx = self
 .ctx
 .log_store
-.get_tx_by_data_root(&segment.root)
+.get_tx_by_data_root(&segment.root, false)
 .await?;
 self.put_segment_with_maybe_tx(segment, maybe_tx).await
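
For callers, the practical effect is that zgs_getFileInfo now takes a second boolean. A rough, hypothetical client-side sketch using the jsonrpsee HTTP client follows; the parameter order is assumed from the trait definition above, and the endpoint URL, the placeholder data root, and the use of serde_json::Value instead of the node's FileInfo type are all assumptions, not part of the diff.

    use jsonrpsee::core::client::ClientT;
    use jsonrpsee::http_client::HttpClientBuilder;
    use jsonrpsee::rpc_params;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = HttpClientBuilder::default().build("http://127.0.0.1:5678")?;
        // Second parameter is the new `need_available` flag: when true, pruned
        // entries are skipped if no finalized tx exists for this data root.
        let info: Option<serde_json::Value> = client
            .request("zgs_getFileInfo", rpc_params!["0x<data_root>", true])
            .await?;
        println!("{:?}", info);
        Ok(())
    }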


@@ -7,7 +7,7 @@ use network::{
 self, new_network_channel, Keypair, NetworkConfig, NetworkGlobals, NetworkReceiver,
 NetworkSender, RequestId, Service as LibP2PService,
 };
-use pruner::{Pruner, PrunerConfig, PrunerMessage};
+use pruner::{get_shard_config, Pruner, PrunerConfig, PrunerMessage};
 use router::RouterService;
 use rpc::RPCConfig;
 use std::sync::Arc;
@@ -203,7 +203,7 @@ impl ClientBuilder {
 if let Some(config) = config {
 let executor = require!("miner", self, runtime_context).clone().executor;
 let network_send = require!("miner", self, network).send.clone();
-let store = self.async_store.as_ref().unwrap().clone();
+let store = require!("miner", self, async_store).clone();
 let send = MineService::spawn(executor, network_send, config, store).await?;
 self.miner = Some(MinerComponents { send });
@@ -225,7 +225,11 @@ impl ClientBuilder {
 Ok(self)
 }
-pub async fn with_shard(self, config: ShardConfig) -> Result<Self, String> {
+pub async fn with_shard(self, mut config: ShardConfig) -> Result<Self, String> {
+let store = require!("shard", self, async_store).clone();
+if let Some(stored_config) = get_shard_config(store.as_ref()).await.unwrap_or(None) {
+config = stored_config;
+}
 self.async_store
 .as_ref()
 .unwrap()
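
with_shard now consults the pruner's persisted shard config before applying the one derived from the config file, so a node that has already pruned down to a given shard keeps serving that shard across restarts. A standalone sketch of the precedence rule; the struct here is a stand-in for illustration, not the node's actual ShardConfig type:

    #[derive(Clone, Copy, Debug, PartialEq)]
    struct ShardConfig {
        shard_id: usize,
        num_shard: usize,
    }

    // Prefer whatever the store already recorded; fall back to the file config.
    fn effective_shard_config(from_file: ShardConfig, stored: Option<ShardConfig>) -> ShardConfig {
        stored.unwrap_or(from_file)
    }

    fn main() {
        let from_file = ShardConfig { shard_id: 0, num_shard: 4 };
        let stored = Some(ShardConfig { shard_id: 2, num_shard: 4 });
        assert_eq!(effective_shard_config(from_file, stored), stored.unwrap());
        assert_eq!(effective_shard_config(from_file, None), from_file);
    }

This is presumably also why with_shard moves earlier in the builder chain in the start-up change further down: the effective shard config is settled before the components that depend on it are spawned.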


@@ -1,4 +1,4 @@
-//! This crate aims to provide a common set of tools that can be used to create a "environment" to
+//! This crate aims to provide a common set of tools that can be used to create an "environment" to
 //! run Zgs services. This allows for the unification of creating tokio runtimes, etc.
 //!
 //! The idea is that the main thread creates an `Environment`, which is then used to spawn a


@@ -1,7 +1,7 @@
 #![allow(clippy::field_reassign_with_default)]
 use crate::ZgsConfig;
-use ethereum_types::{H256, U256};
+use ethereum_types::H256;
 use ethers::prelude::{Http, Middleware, Provider};
 use log_entry_sync::{CacheConfig, ContractAddress, LogSyncConfig};
 use miner::MinerConfig;
@@ -179,7 +179,6 @@ impl ZgsConfig {
 } else {
 None
 };
-let submission_gas = self.miner_submission_gas.map(U256::from);
 let cpu_percentage = self.miner_cpu_percentage;
 let iter_batch = self.mine_iter_batch_size;
 let context_query_seconds = self.mine_context_query_seconds;
@@ -192,7 +191,6 @@ impl ZgsConfig {
 self.blockchain_rpc_endpoint.clone(),
 mine_address,
 flow_address,
-submission_gas,
 cpu_percentage,
 iter_batch,
 context_query_seconds,
@@ -200,6 +198,7 @@ impl ZgsConfig {
 self.rate_limit_retries,
 self.timeout_retries,
 self.initial_backoff,
+self.submission_config,
 ))
 }


@@ -74,7 +74,6 @@ build_config! {
 (mine_contract_address, (String), "".to_string())
 (miner_id, (Option<String>), None)
 (miner_key, (Option<String>), None)
-(miner_submission_gas, (Option<u64>), None)
 (miner_cpu_percentage, (u64), 100)
 (mine_iter_batch_size, (usize), 100)
 (reward_contract_address, (String), "".to_string())
@@ -106,6 +105,9 @@ pub struct ZgsConfig {
 // rpc config, configured by [rpc] section by `config` crate.
 pub rpc: rpc::RPCConfig,
+// submission config, configured by [submission_config] section by `config` crate.
+pub submission_config: contract_wrapper::SubmitConfig,
 // metrics config, configured by [metrics] section by `config` crate.
 pub metrics: metrics::MetricsConfiguration,
 }


@@ -1,7 +1,8 @@
 use task_executor::TaskExecutor;
-use tracing::Level;
 use tracing_log::AsLog;
-use tracing_subscriber::EnvFilter;
+use tracing_subscriber::layer::SubscriberExt;
+use tracing_subscriber::util::SubscriberInitExt;
+use tracing_subscriber::{EnvFilter, Layer};
 const LOG_RELOAD_PERIOD_SEC: u64 = 30;
@@ -15,19 +16,26 @@ pub fn configure(log_level_file: &str, log_directory: &str, executor: TaskExecut
 .unwrap_or_default()
 .trim_end()
 .to_string();
-let builder = tracing_subscriber::fmt()
-.with_max_level(Level::TRACE)
-.with_env_filter(EnvFilter::try_new(config.clone()).expect("invalid log level"))
+let filter = EnvFilter::try_new(config.clone()).expect("invalid log level");
+let (filter, reload_handle) = tracing_subscriber::reload::Layer::new(filter);
+let fmt_layer = tracing_subscriber::fmt::layer()
 .with_writer(non_blocking)
 .with_ansi(false)
-// .with_file(true)
-// .with_line_number(true)
-// .with_thread_names(true)
-.with_filter_reloading();
-let handle = builder.reload_handle();
-builder.init();
+.compact()
+.with_filter(filter);
+// .with_file(true)
+// .with_line_number(true)
+// .with_thread_names(true)
+let subscriber = tracing_subscriber::registry().with(fmt_layer);
+#[cfg(feature = "tokio-console")]
+{
+subscriber.with(console_subscriber::spawn()).init();
+}
+#[cfg(not(feature = "tokio-console"))]
+{
+subscriber.init();
+}
 // periodically check for config changes
 executor.spawn(
@@ -57,7 +65,7 @@ pub fn configure(log_level_file: &str, log_directory: &str, executor: TaskExecut
 println!("Updating log config to {:?}", new_config);
-match handle.reload(&new_config) {
+match reload_handle.reload(&new_config) {
 Ok(()) => {
 rust_log::set_max_level(tracing_core::LevelFilter::current().as_log());
 config = new_config
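
The logging setup moves from the global fmt() builder with .with_filter_reloading() to a layered registry() with an explicit reload::Layer wrapping the EnvFilter, which is also what makes the optional tokio-console layer possible. Below is a minimal, self-contained sketch of that reloadable-filter wiring, assuming the tracing and tracing-subscriber crates (with the env-filter feature); the filter strings are arbitrary example values.

    use tracing_subscriber::layer::SubscriberExt;
    use tracing_subscriber::util::SubscriberInitExt;
    use tracing_subscriber::{reload, EnvFilter, Layer};

    fn main() {
        // Wrap an EnvFilter in a reload::Layer so the level can be swapped at runtime.
        let filter = EnvFilter::new("info");
        let (filter, reload_handle) = reload::Layer::new(filter);

        let fmt_layer = tracing_subscriber::fmt::layer()
            .with_ansi(false)
            .compact()
            .with_filter(filter);

        tracing_subscriber::registry().with(fmt_layer).init();

        tracing::info!("visible at the initial `info` level");

        // Later (e.g. from a config-watcher task), replace the filter in place.
        reload_handle
            .reload(EnvFilter::new("debug"))
            .expect("reload filter");
        tracing::debug!("visible after reloading to `debug`");
    }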


@@ -23,6 +23,8 @@ async fn start_node(context: RuntimeContext, config: ZgsConfig) -> Result<Client
 ClientBuilder::default()
 .with_runtime_context(context)
 .with_rocksdb_store(&storage_config)?
+.with_shard(shard_config)
+.await?
 .with_log_sync(log_sync_config)
 .await?
 .with_file_location_cache(config.file_location_cache)
@@ -34,8 +36,6 @@
 .await?
 .with_miner(miner_config)
 .await?
-.with_shard(shard_config)
-.await?
 .with_pruner(pruner_config)
 .await?
 .with_rpc(config.rpc)


@@ -59,15 +59,23 @@ impl Store {
 delegate!(fn get_proof_at_root(root: Option<DataRoot>, index: u64, length: u64) -> Result<FlowRangeProof>);
 delegate!(fn get_context() -> Result<(DataRoot, u64)>);
-pub async fn get_tx_seq_by_data_root(&self, data_root: &DataRoot) -> Result<Option<u64>> {
+pub async fn get_tx_seq_by_data_root(
+&self,
+data_root: &DataRoot,
+need_available: bool,
+) -> Result<Option<u64>> {
 let root = *data_root;
-self.spawn(move |store| store.get_tx_seq_by_data_root(&root))
+self.spawn(move |store| store.get_tx_seq_by_data_root(&root, need_available))
 .await
 }
-pub async fn get_tx_by_data_root(&self, data_root: &DataRoot) -> Result<Option<Transaction>> {
+pub async fn get_tx_by_data_root(
+&self,
+data_root: &DataRoot,
+need_available: bool,
+) -> Result<Option<Transaction>> {
 let root = *data_root;
-self.spawn(move |store| store.get_tx_by_data_root(&root))
+self.spawn(move |store| store.get_tx_by_data_root(&root, need_available))
 .await
 }


@@ -511,7 +511,7 @@ impl LogStoreChunkRead for LogManager {
 index_start: usize,
 index_end: usize,
 ) -> crate::error::Result<Option<ChunkArray>> {
-let tx_seq = try_option!(self.get_tx_seq_by_data_root(data_root)?);
+let tx_seq = try_option!(self.get_tx_seq_by_data_root(data_root, true)?);
 self.get_chunks_by_tx_and_index_range(tx_seq, index_start, index_end)
 }
@@ -536,13 +536,27 @@ impl LogStoreRead for LogManager {
 self.tx_store.get_tx_by_seq_number(seq)
 }
-fn get_tx_seq_by_data_root(&self, data_root: &DataRoot) -> crate::error::Result<Option<u64>> {
+fn get_tx_seq_by_data_root(
+&self,
+data_root: &DataRoot,
+need_available: bool,
+) -> crate::error::Result<Option<u64>> {
 let seq_list = self.tx_store.get_tx_seq_list_by_data_root(data_root)?;
+let mut available_seq = None;
 for tx_seq in &seq_list {
 if self.tx_store.check_tx_completed(*tx_seq)? {
 // Return the first finalized tx if possible.
 return Ok(Some(*tx_seq));
 }
+if need_available
+&& available_seq.is_none()
+&& !self.tx_store.check_tx_pruned(*tx_seq)?
+{
+available_seq = Some(*tx_seq);
+}
+}
+if need_available {
+return Ok(available_seq);
 }
 // No tx is finalized, return the first one.
 Ok(seq_list.first().cloned())
@@ -1157,6 +1171,7 @@ impl LogManager {
 .get_tx_by_seq_number(from_tx_seq)?
 .ok_or_else(|| anyhow!("from tx missing"))?;
 let mut to_tx_offset_list = Vec::with_capacity(to_tx_seq_list.len());
 for seq in to_tx_seq_list {
 // No need to copy data for completed tx.
 if self.check_tx_completed(seq)? {
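
In prose, the new lookup rule is: among all transactions that share a data root, return the first finalized one; if none is finalized and need_available is set, return the first one that has not been pruned; otherwise fall back to the very first entry. A standalone restatement of that rule with plain stand-in types (illustrative only, not the store's actual API):

    struct TxStatus {
        seq: u64,
        finalized: bool,
        pruned: bool,
    }

    fn select_tx_seq(candidates: &[TxStatus], need_available: bool) -> Option<u64> {
        let mut available_seq = None;
        for tx in candidates {
            if tx.finalized {
                // Return the first finalized tx if possible.
                return Some(tx.seq);
            }
            if need_available && available_seq.is_none() && !tx.pruned {
                available_seq = Some(tx.seq);
            }
        }
        if need_available {
            return available_seq;
        }
        // No tx is finalized, return the first one.
        candidates.first().map(|tx| tx.seq)
    }

    fn main() {
        let txs = [
            TxStatus { seq: 5, finalized: false, pruned: true },
            TxStatus { seq: 9, finalized: false, pruned: false },
        ];
        // Without the flag the pruned tx 5 is returned; with it, tx 9 is.
        assert_eq!(select_tx_seq(&txs, false), Some(5));
        assert_eq!(select_tx_seq(&txs, true), Some(9));
    }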


@@ -31,14 +31,22 @@ pub trait LogStoreRead: LogStoreChunkRead {
 fn get_tx_by_seq_number(&self, seq: u64) -> Result<Option<Transaction>>;
 /// Get a transaction by the data root of its data.
-/// If all txs are not finalized, return the first one.
+/// If all txs are not finalized, return the first one if need available is false.
 /// Otherwise, return the first finalized tx.
-fn get_tx_seq_by_data_root(&self, data_root: &DataRoot) -> Result<Option<u64>>;
+fn get_tx_seq_by_data_root(
+&self,
+data_root: &DataRoot,
+need_available: bool,
+) -> Result<Option<u64>>;
-/// If all txs are not finalized, return the first one.
+/// If all txs are not finalized, return the first one if need available is false.
 /// Otherwise, return the first finalized tx.
-fn get_tx_by_data_root(&self, data_root: &DataRoot) -> Result<Option<Transaction>> {
-match self.get_tx_seq_by_data_root(data_root)? {
+fn get_tx_by_data_root(
+&self,
+data_root: &DataRoot,
+need_available: bool,
+) -> Result<Option<Transaction>> {
+match self.get_tx_seq_by_data_root(data_root, need_available)? {
 Some(seq) => self.get_tx_by_seq_number(seq),
 None => Ok(None),
 }


@@ -238,7 +238,7 @@ auto_sync_enabled = true
 # Enable to start a file sync via RPC (e.g. `admin_startSyncFile`).
 # sync_file_by_rpc_enabled = true
-# Maximum number of continous failures to terminate a file sync.
+# Maximum number of continuous failures to terminate a file sync.
 # max_request_failures = 3
 # Timeout to dial peers.


@@ -250,7 +250,7 @@ auto_sync_enabled = true
 # Enable to start a file sync via RPC (e.g. `admin_startSyncFile`).
 # sync_file_by_rpc_enabled = true
-# Maximum number of continous failures to terminate a file sync.
+# Maximum number of continuous failures to terminate a file sync.
 # max_request_failures = 3
 # Timeout to dial peers.


@@ -252,7 +252,7 @@
 # Enable to start a file sync via RPC (e.g. `admin_startSyncFile`).
 # sync_file_by_rpc_enabled = true
-# Maximum number of continous failures to terminate a file sync.
+# Maximum number of continuous failures to terminate a file sync.
 # max_request_failures = 3
 # Timeout to dial peers.


@@ -1,12 +1,11 @@
 set -e
 artifacts_path="$1"
 check_abis() {
 for contract_name in "$@"; do
-diff $(./scripts/search_abi.sh "$artifacts_path" "$contract_name.json") "storage-contracts-abis/$contract_name.json"
+diff "$(./scripts/search_abi.sh "$artifacts_path" "$contract_name.json")" "storage-contracts-abis/$contract_name.json"
 done
 }
-check_abis DummyMarket DummyReward Flow PoraMine PoraMineTest FixedPrice ChunkLinearReward FixedPriceFlow
+check_abis DummyMarket DummyReward Flow PoraMine PoraMineTest FixedPrice ChunkLinearReward FixedPriceFlow


@@ -50,7 +50,7 @@ copy_file() {
 copy_abis() {
 for contract_name in "$@"; do
-copy_file $(./scripts/search_abi.sh "$path/artifacts" "$contract_name.json") "storage-contracts-abis/$contract_name.json"
+copy_file "$(./scripts/search_abi.sh "$path/artifacts" "$contract_name.json")" "storage-contracts-abis/$contract_name.json"
 done
 }


@@ -10,22 +10,22 @@ PUBLIC_IP=$(curl -s https://ipinfo.io/ip)
 FILE=run/config.toml
 # enable sync
-sed -in-place='' 's/# \[sync\]/\[sync\]/g' $FILE
+sed -i 's/# \[sync\]/\[sync\]/g' $FILE
 # enable auto_sync
-sed -in-place='' 's/# auto_sync_enabled = false/auto_sync_enabled = true/g' $FILE
+sed -i 's/# auto_sync_enabled = false/auto_sync_enabled = true/g' $FILE
 # reduce timeout for finding peers
-sed -in-place='' 's/# find_peer_timeout = .*/find_peer_timeout = "10s"/g' $FILE
+sed -i 's/# find_peer_timeout = .*/find_peer_timeout = "10s"/g' $FILE
 # set public ip
-sed -in-place='' "s/# network_enr_address = .*/network_enr_address = \"$PUBLIC_IP\"/g" $FILE
+sed -i "s/# network_enr_address = .*/network_enr_address = \"$PUBLIC_IP\"/g" $FILE
 # set miner key
-sed -in-place='' "s/miner_key = \"\"/miner_key = \"$MINER_KEY\"/g" $FILE
+sed -i "s/miner_key = \"\"/miner_key = \"$MINER_KEY\"/g" $FILE
 # set miner contract address
-sed -in-place='' "s/mine_contract_address = .*/mine_contract_address = \"$MINE_CONTRACT\"/g" $FILE
+sed -i "s/mine_contract_address = .*/mine_contract_address = \"$MINE_CONTRACT\"/g" $FILE
 # set blockchain rpc endpoint
-sed -in-place='' "s|blockchain_rpc_endpoint = .*|blockchain_rpc_endpoint = \"$BLOCKCHAIN_RPC\"|g" $FILE
+sed -i "s|blockchain_rpc_endpoint = .*|blockchain_rpc_endpoint = \"$BLOCKCHAIN_RPC\"|g" $FILE
 # set flow contract address
-sed -in-place='' "s/log_contract_address = .*/log_contract_address = \"$FLOW_CONTRACT\"/g" $FILE
+sed -i "s/log_contract_address = .*/log_contract_address = \"$FLOW_CONTRACT\"/g" $FILE
 # set contract deployed block number
-sed -in-place='' "s/log_sync_start_block_number = .*/log_sync_start_block_number = $BLOCK_NUMBER/g" $FILE
+sed -i "s/log_sync_start_block_number = .*/log_sync_start_block_number = $BLOCK_NUMBER/g" $FILE
 # update the boot node ids
-sed -in-place='' 's|network_boot_nodes = .*|network_boot_nodes = ["/ip4/54.219.26.22/udp/1234/p2p/16Uiu2HAmTVDGNhkHD98zDnJxQWu3i1FL1aFYeh9wiQTNu4pDCgps","/ip4/52.52.127.117/udp/1234/p2p/16Uiu2HAkzRjxK2gorngB1Xq84qDrT4hSVznYDHj6BkbaE4SGx9oS","/ip4/18.167.69.68/udp/1234/p2p/16Uiu2HAm2k6ua2mGgvZ8rTMV8GhpW71aVzkQWy7D37TTDuLCpgmX"]|g' $FILE
+sed -i 's|network_boot_nodes = .*|network_boot_nodes = ["/ip4/54.219.26.22/udp/1234/p2p/16Uiu2HAmTVDGNhkHD98zDnJxQWu3i1FL1aFYeh9wiQTNu4pDCgps","/ip4/52.52.127.117/udp/1234/p2p/16Uiu2HAkzRjxK2gorngB1Xq84qDrT4hSVznYDHj6BkbaE4SGx9oS","/ip4/18.167.69.68/udp/1234/p2p/16Uiu2HAm2k6ua2mGgvZ8rTMV8GhpW71aVzkQWy7D37TTDuLCpgmX"]|g' $FILE


@@ -17,7 +17,10 @@ class ExampleTest(TestFramework):
 self.contract.submit(submissions)
 wait_until(lambda: self.contract.num_submissions() == 1)
 wait_until(lambda: client.zgs_get_file_info(data_root) is not None)
-wait_until(lambda: not client.zgs_get_file_info(data_root)["isCached"] and client.zgs_get_file_info(data_root)["uploadedSegNum"] == 1)
+wait_until(
+lambda: not client.zgs_get_file_info(data_root)["isCached"]
+and client.zgs_get_file_info(data_root)["uploadedSegNum"] == 1
+)
 client.zgs_upload_segment(segments[1])
 wait_until(lambda: client.zgs_get_file_info(data_root)["finalized"])


@@ -7,9 +7,7 @@ ZGS_CONFIG = {
 "log_config_file": "log_config",
 "confirmation_block_count": 1,
 "discv5_disable_ip_limit": True,
-"network_peer_manager": {
-"heartbeat_interval": "1s"
-},
+"network_peer_manager": {"heartbeat_interval": "1s"},
 "router": {
 "private_ip_enabled": True,
 },
@@ -22,7 +20,7 @@ ZGS_CONFIG = {
 "auto_sync_idle_interval": "1s",
 "sequential_find_peer_timeout": "10s",
 "random_find_peer_timeout": "10s",
-}
+},
 }
 CONFIG_DIR = os.path.dirname(__file__)
@@ -75,11 +73,12 @@ TX_PARAMS1 = {
 NO_SEAL_FLAG = 0x1
 NO_MERKLE_PROOF_FLAG = 0x2
 def update_config(default: dict, custom: dict):
 """
 Supports to update configurations with dict value.
 """
-for (key, value) in custom.items():
+for key, value in custom.items():
 if default.get(key) is None or type(value) != dict:
 default[key] = value
 else:


@@ -20,17 +20,15 @@ class CrashTest(TestFramework):
 segment = submit_data(self.nodes[0], chunk_data)
 self.log.info("segment: %s", segment)
-wait_until(lambda: self.nodes[0].zgs_get_file_info(data_root)["finalized"] is True)
+wait_until(
+lambda: self.nodes[0].zgs_get_file_info(data_root)["finalized"] is True
+)
 for i in range(1, self.num_nodes):
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root) is not None
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root) is not None)
 self.nodes[i].admin_start_sync_file(0)
 self.log.info("wait for node: %s", i)
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"]
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"])
 # 2: first node runnging, other nodes killed
 self.log.info("kill node")
@@ -56,22 +54,16 @@ class CrashTest(TestFramework):
 for i in range(2, self.num_nodes):
 self.start_storage_node(i)
 self.nodes[i].wait_for_rpc_connection()
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root) is not None
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root) is not None)
 self.nodes[i].admin_start_sync_file(1)
 self.nodes[i].stop(kill=True)
 self.start_storage_node(i)
 self.nodes[i].wait_for_rpc_connection()
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root) is not None
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root) is not None)
 self.nodes[i].admin_start_sync_file(1)
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"]
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"])
 # 4: node[1..] synced contract entries and killed
 self.log.info("kill node 0")
@@ -96,13 +88,9 @@ class CrashTest(TestFramework):
 self.log.info("wait for node: %s", i)
 self.start_storage_node(i)
 self.nodes[i].wait_for_rpc_connection()
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root) is not None
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root) is not None)
 self.nodes[i].admin_start_sync_file(2)
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"]
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"])
 # 5: node[1..] synced contract entries and killed, sync disorder
 self.nodes[0].stop(kill=True)
@@ -137,21 +125,13 @@ class CrashTest(TestFramework):
 self.log.info("wait for node: %s", i)
 self.start_storage_node(i)
 self.nodes[i].wait_for_rpc_connection()
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root1) is not None
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root1) is not None)
 self.nodes[i].admin_start_sync_file(4)
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root1)["finalized"]
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root1)["finalized"])
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root) is not None
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root) is not None)
 self.nodes[i].admin_start_sync_file(3)
-wait_until(
-lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"]
-)
+wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"])
 if __name__ == "__main__":


@@ -45,13 +45,9 @@ class FuzzTest(TestFramework):
 lock.release()
 log.info("submit data via client %s", idx)
-wait_until(
-lambda: nodes[idx].zgs_get_file_info(data_root) is not None
-)
+wait_until(lambda: nodes[idx].zgs_get_file_info(data_root) is not None)
 segment = submit_data(nodes[idx], chunk_data)
-wait_until(
-lambda: nodes[idx].zgs_get_file_info(data_root)["finalized"]
-)
+wait_until(lambda: nodes[idx].zgs_get_file_info(data_root)["finalized"])
 lock.acquire()
 nodes_index.append(idx)
@@ -65,17 +61,15 @@ class FuzzTest(TestFramework):
 lambda: nodes[idx].zgs_get_file_info(data_root) is not None
 )
 def wait_finalized():
 ret = nodes[idx].zgs_get_file_info(data_root)
 if ret["finalized"]:
 return True
 else:
-nodes[idx].admin_start_sync_file(ret['tx']['seq'])
+nodes[idx].admin_start_sync_file(ret["tx"]["seq"])
 return False
-wait_until(
-lambda: wait_finalized(), timeout = 180
-)
+wait_until(lambda: wait_finalized(), timeout=180)
 def run_small_chunk_size(nodes, contract, log):
 sizes = [i for i in range(1, SAMLL_SIZE + 1)]
@@ -84,7 +78,7 @@ class FuzzTest(TestFramework):
 run_chunk_size(sizes, nodes, contract, log)
 def run_large_chunk_size(nodes, contract, log):
-sizes = [i for i in range(256 * 1024 * 256 - LARGE_SIZE, 256 * 1024 * 256 )]
+sizes = [i for i in range(256 * 1024 * 256 - LARGE_SIZE, 256 * 1024 * 256)]
 random.shuffle(sizes)
 run_chunk_size(sizes, nodes, contract, log)


@@ -18,7 +18,6 @@ class LongTimeMineTest(TestFramework):
 self.mine_period = 15
 self.launch_wait_seconds = 15
 def submit_data(self, item, size):
 submissions_before = self.contract.num_submissions()
 client = self.nodes[0]
@@ -44,7 +43,10 @@ class LongTimeMineTest(TestFramework):
 self.submit_data(b"\x11", 2000)
 self.log.info("Start mine")
-wait_until(lambda: int(blockchain.eth_blockNumber(), 16) > self.mine_period, timeout=180)
+wait_until(
+lambda: int(blockchain.eth_blockNumber(), 16) > self.mine_period,
+timeout=180,
+)
 self.log.info("Wait for the first mine answer")
 wait_until(lambda: self.mine_contract.last_mined_epoch() == 1)


@@ -15,7 +15,11 @@ class MineTest(TestFramework):
 }
 self.mine_period = int(45 / self.block_time)
 self.launch_wait_seconds = 15
-self.log.info("Contract Info: Est. block time %.2f, Mine period %d", self.block_time, self.mine_period)
+self.log.info(
+"Contract Info: Est. block time %.2f, Mine period %d",
+self.block_time,
+self.mine_period,
+)
 def submit_data(self, item, size):
 submissions_before = self.contract.num_submissions()
@@ -37,7 +41,11 @@ class MineTest(TestFramework):
 first_block = self.contract.first_block()
 self.log.info("Current block number %d", int(blockchain.eth_blockNumber(), 16))
-self.log.info("Flow deployment block number %d, epoch 1 start %d", first_block, first_block + self.mine_period)
+self.log.info(
+"Flow deployment block number %d, epoch 1 start %d",
+first_block,
+first_block + self.mine_period,
+)
 wait_until(lambda: self.contract.epoch() >= 1, timeout=180)
 quality = int(2**256 / 100 / estimate_st_performance())
@@ -54,27 +62,39 @@ class MineTest(TestFramework):
 self.contract.update_context()
 self.log.info("Wait for the first mine answer")
-wait_until(lambda: self.mine_contract.last_mined_epoch() == start_epoch + 1 and not self.mine_contract.can_submit(), timeout=180)
+wait_until(
+lambda: self.mine_contract.last_mined_epoch() == start_epoch + 1
+and not self.mine_contract.can_submit(),
+timeout=180,
+)
 self.log.info("Wait for the second mine context release")
 wait_until(lambda: self.contract.epoch() >= start_epoch + 2, timeout=180)
 self.contract.update_context()
 self.log.info("Wait for the second mine answer")
-wait_until(lambda: self.mine_contract.last_mined_epoch() == start_epoch + 2 and not self.mine_contract.can_submit(), timeout=180)
+wait_until(
+lambda: self.mine_contract.last_mined_epoch() == start_epoch + 2
+and not self.mine_contract.can_submit(),
+timeout=180,
+)
 self.nodes[0].miner_stop()
 self.log.info("Wait for the third mine context release")
 wait_until(lambda: self.contract.epoch() >= start_epoch + 3, timeout=180)
 self.contract.update_context()
 self.log.info("Submit the second data chunk")
 self.submit_data(b"\x22", 2000)
 # Now the storage node should have the latest flow, but the mining context is using an old one.
 self.nodes[0].miner_start()
 self.log.info("Wait for the third mine answer")
-wait_until(lambda: self.mine_contract.last_mined_epoch() == start_epoch + 3 and not self.mine_contract.can_submit(), timeout=180)
+wait_until(
+lambda: self.mine_contract.last_mined_epoch() == start_epoch + 3
+and not self.mine_contract.can_submit(),
+timeout=180,
+)
 self.log.info("Current block number %d", int(blockchain.eth_blockNumber(), 16))


@@ -2,13 +2,18 @@
 from test_framework.test_framework import TestFramework
 from config.node_config import MINER_ID, GENESIS_PRIV_KEY
 from utility.submission import create_submission, submit_data
-from utility.utils import wait_until, assert_equal, assert_greater_than, estimate_st_performance
+from utility.utils import (
+wait_until,
+assert_equal,
+assert_greater_than,
+estimate_st_performance,
+)
 from test_framework.blockchain_node import BlockChainNodeType
 import time
 import math
-PRICE_PER_SECTOR = math.ceil(10 * (10 ** 18) / (2 ** 30) * 256 / 12)
+PRICE_PER_SECTOR = math.ceil(10 * (10**18) / (2**30) * 256 / 12)
 class MineTest(TestFramework):
@@ -23,17 +28,21 @@ class MineTest(TestFramework):
 self.enable_market = True
 self.mine_period = int(50 / self.block_time)
 self.launch_wait_seconds = 15
-self.log.info("Contract Info: Est. block time %.2f, Mine period %d", self.block_time, self.mine_period)
+self.log.info(
+"Contract Info: Est. block time %.2f, Mine period %d",
+self.block_time,
+self.mine_period,
+)
-def submit_data(self, item, size, no_submit = False):
+def submit_data(self, item, size, no_submit=False):
 submissions_before = self.contract.num_submissions()
 client = self.nodes[0]
 chunk_data = item * 256 * size
 submissions, data_root = create_submission(chunk_data)
 value = int(size * PRICE_PER_SECTOR * 1.1)
-self.contract.submit(submissions, tx_prarams = {"value": value})
+self.contract.submit(submissions, tx_prarams={"value": value})
 wait_until(lambda: self.contract.num_submissions() == submissions_before + 1)
 if not no_submit:
 wait_until(lambda: client.zgs_get_file_info(data_root) is not None)
 segment = submit_data(client, chunk_data)
@@ -48,11 +57,15 @@ class MineTest(TestFramework):
 difficulty = int(2**256 / 5 / estimate_st_performance())
 self.mine_contract.set_quality(difficulty)
-SECTORS_PER_PRICING = int(8 * ( 2 ** 30 ) / 256)
+SECTORS_PER_PRICING = int(8 * (2**30) / 256)
 first_block = self.contract.first_block()
 self.log.info("Current block number %d", int(blockchain.eth_blockNumber(), 16))
-self.log.info("Flow deployment block number %d, epoch 1 start %d, wait for epoch 1 start", first_block, first_block + self.mine_period)
+self.log.info(
+"Flow deployment block number %d, epoch 1 start %d, wait for epoch 1 start",
+first_block,
+first_block + self.mine_period,
+)
 wait_until(lambda: self.contract.epoch() >= 1, timeout=180)
 self.log.info("Submit the actual data chunk (256 MB)")
@@ -63,8 +76,12 @@ class MineTest(TestFramework):
 # wait_until(lambda: self.contract.epoch() >= 1, timeout=180)
 start_epoch = self.contract.epoch()
-self.log.info("Submission Done, epoch is %d, current block number %d", start_epoch, int(blockchain.eth_blockNumber(), 16))
+self.log.info(
+"Submission Done, epoch is %d, current block number %d",
+start_epoch,
+int(blockchain.eth_blockNumber(), 16),
+)
 self.log.info("Wait for mine context release")
 wait_until(lambda: self.contract.epoch() >= start_epoch + 1, timeout=180)
@@ -72,27 +89,38 @@ class MineTest(TestFramework):
 self.contract.update_context()
 self.log.info("Wait for mine answer")
-wait_until(lambda: self.mine_contract.last_mined_epoch() == start_epoch + 1 and not self.mine_contract.can_submit(), timeout=120)
+wait_until(
+lambda: self.mine_contract.last_mined_epoch() == start_epoch + 1
+and not self.mine_contract.can_submit(),
+timeout=120,
+)
 rewards = self.reward_contract.reward_distributes()
 assert_equal(len(rewards), 4)
 firstReward = rewards[0].args.amount
 self.log.info("Received reward %d Gwei", firstReward / (10**9))
-self.reward_contract.donate(10000 * 10 ** 18)
+self.reward_contract.donate(10000 * 10**18)
 self.log.info("Donation Done")
 self.log.info("Submit the data hash only (8 GB)")
 self.submit_data(b"\x11", int(SECTORS_PER_PRICING), no_submit=True)
 current_epoch = self.contract.epoch()
-assert_equal(current_epoch, start_epoch + 1);
+assert_equal(current_epoch, start_epoch + 1)
-self.log.info("Sumission Done, epoch is %d, current block number %d", self.contract.epoch(), int(blockchain.eth_blockNumber(), 16))
+self.log.info(
+"Sumission Done, epoch is %d, current block number %d",
+self.contract.epoch(),
+int(blockchain.eth_blockNumber(), 16),
+)
 self.log.info("Wait for mine context release")
 wait_until(lambda: self.contract.epoch() >= start_epoch + 2, timeout=180)
 self.contract.update_context()
 self.log.info("Wait for mine answer")
-wait_until(lambda: self.mine_contract.last_mined_epoch() == start_epoch + 2 and not self.mine_contract.can_submit())
+wait_until(
+lambda: self.mine_contract.last_mined_epoch() == start_epoch + 2
+and not self.mine_contract.can_submit()
+)
 assert_equal(self.contract.epoch(), start_epoch + 2)
 rewards = self.reward_contract.reward_distributes()


@@ -7,6 +7,7 @@ from config.node_config import ZGS_KEY_FILE, ZGS_NODEID
 from test_framework.test_framework import TestFramework
 from utility.utils import p2p_port
 class NetworkDiscoveryTest(TestFramework):
 """
 This is to test whether community nodes could connect to each other via UDP discovery.
@@ -24,7 +25,6 @@ class NetworkDiscoveryTest(TestFramework):
 "network_enr_address": "127.0.0.1",
 "network_enr_tcp_port": bootnode_port,
 "network_enr_udp_port": bootnode_port,
 # disable trusted nodes
 "network_libp2p_nodes": [],
 }
@@ -37,7 +37,6 @@ class NetworkDiscoveryTest(TestFramework):
 "network_enr_address": "127.0.0.1",
 "network_enr_tcp_port": p2p_port(i),
 "network_enr_udp_port": p2p_port(i),
 # disable trusted nodes and enable bootnodes
 "network_libp2p_nodes": [],
 "network_boot_nodes": bootnodes,
@@ -57,7 +56,13 @@ class NetworkDiscoveryTest(TestFramework):
 total_connected += info["connectedPeers"]
 self.log.info(
 "Node[%s] peers: total = %s, banned = %s, disconnected = %s, connected = %s (in = %s, out = %s)",
-i, info["totalPeers"], info["bannedPeers"], info["disconnectedPeers"], info["connectedPeers"], info["connectedIncomingPeers"], info["connectedOutgoingPeers"],
+i,
+info["totalPeers"],
+info["bannedPeers"],
+info["disconnectedPeers"],
+info["connectedPeers"],
+info["connectedIncomingPeers"],
+info["connectedOutgoingPeers"],
 )
 if total_connected >= self.num_nodes * (self.num_nodes - 1):
@@ -66,5 +71,6 @@ class NetworkDiscoveryTest(TestFramework):
 self.log.info("====================================")
 self.log.info("All nodes connected to each other successfully")
 if __name__ == "__main__":
 NetworkDiscoveryTest().main()


@@ -7,6 +7,7 @@ from config.node_config import ZGS_KEY_FILE, ZGS_NODEID
 from test_framework.test_framework import TestFramework
 from utility.utils import p2p_port
 class NetworkDiscoveryUpgradeTest(TestFramework):
 """
 This is to test that low version community nodes could not connect to bootnodes.
@@ -24,7 +25,6 @@ class NetworkDiscoveryUpgradeTest(TestFramework):
 "network_enr_address": "127.0.0.1",
 "network_enr_tcp_port": bootnode_port,
 "network_enr_udp_port": bootnode_port,
 # disable trusted nodes
 "network_libp2p_nodes": [],
 }
@@ -37,11 +37,9 @@ class NetworkDiscoveryUpgradeTest(TestFramework):
 "network_enr_address": "127.0.0.1",
 "network_enr_tcp_port": p2p_port(i),
 "network_enr_udp_port": p2p_port(i),
 # disable trusted nodes and enable bootnodes
 "network_libp2p_nodes": [],
 "network_boot_nodes": bootnodes,
 # disable network identity in ENR
 "discv5_disable_enr_network_id": True,
 }
@@ -57,7 +55,13 @@ class NetworkDiscoveryUpgradeTest(TestFramework):
 total_connected += info["connectedPeers"]
 self.log.info(
 "Node[%s] peers: total = %s, banned = %s, disconnected = %s, connected = %s (in = %s, out = %s)",
-i, info["totalPeers"], info["bannedPeers"], info["disconnectedPeers"], info["connectedPeers"], info["connectedIncomingPeers"], info["connectedOutgoingPeers"],
+i,
+info["totalPeers"],
+info["bannedPeers"],
+info["disconnectedPeers"],
+info["connectedPeers"],
+info["connectedIncomingPeers"],
+info["connectedOutgoingPeers"],
 )
 # ENR incompatible and should not discover each other for TCP connection
@@ -66,5 +70,6 @@ class NetworkDiscoveryUpgradeTest(TestFramework):
 self.log.info("====================================")
 self.log.info("ENR incompatible nodes do not connect to each other")
 if __name__ == "__main__":
 NetworkDiscoveryUpgradeTest().main()


@@ -7,6 +7,7 @@ from config.node_config import ZGS_KEY_FILE, ZGS_NODEID
 from test_framework.test_framework import TestFramework
 from utility.utils import p2p_port
 class NetworkTcpShardTest(TestFramework):
 """
 This is to test TCP connection for shard config mismatched peers of UDP discovery.
@@ -24,12 +25,10 @@ class NetworkTcpShardTest(TestFramework):
 "network_enr_address": "127.0.0.1",
 "network_enr_tcp_port": bootnode_port,
 "network_enr_udp_port": bootnode_port,
 # disable trusted nodes
 "network_libp2p_nodes": [],
 # custom shard config
-"shard_position": "0/4"
+"shard_position": "0/4",
 }
 # setup node 1 & 2 as community nodes
@@ -40,13 +39,11 @@ class NetworkTcpShardTest(TestFramework):
 "network_enr_address": "127.0.0.1",
 "network_enr_tcp_port": p2p_port(i),
 "network_enr_udp_port": p2p_port(i),
 # disable trusted nodes and enable bootnodes
 "network_libp2p_nodes": [],
 "network_boot_nodes": bootnodes,
 # custom shard config
-"shard_position": f"{i}/4"
+"shard_position": f"{i}/4",
 }
 def run_test(self):
@@ -60,7 +57,13 @@ class NetworkTcpShardTest(TestFramework):
 info = self.nodes[i].rpc.admin_getNetworkInfo()
 self.log.info(
 "Node[%s] peers: total = %s, banned = %s, disconnected = %s, connected = %s (in = %s, out = %s)",
-i, info["totalPeers"], info["bannedPeers"], info["disconnectedPeers"], info["connectedPeers"], info["connectedIncomingPeers"], info["connectedOutgoingPeers"],
+i,
+info["totalPeers"],
+info["bannedPeers"],
+info["disconnectedPeers"],
+info["connectedPeers"],
+info["connectedIncomingPeers"],
+info["connectedOutgoingPeers"],
 )
 if i == timeout_secs - 1:
@@ -72,5 +75,6 @@ class NetworkTcpShardTest(TestFramework):
 self.log.info("====================================")
 self.log.info("All nodes discovered but not connected for each other")
 if __name__ == "__main__":
 NetworkTcpShardTest().main()


@@ -23,7 +23,11 @@ class PrunerTest(TestFramework):
        self.mine_period = int(45 / self.block_time)
        self.lifetime_seconds = 240
        self.launch_wait_seconds = 15
        self.log.info(
            "Contract Info: Est. block time %.2f, Mine period %d",
            self.block_time,
            self.mine_period,
        )

    def run_test(self):
        client = self.nodes[0]

@@ -31,7 +35,10 @@ class PrunerTest(TestFramework):
        chunk_data = b"\x02" * 16 * 256 * 1024
        # chunk_data = b"\x02" * 5 * 1024 * 1024 * 1024
        submissions, data_root = create_submission(chunk_data)
        self.contract.submit(
            submissions,
            tx_prarams={"value": int(len(chunk_data) / 256 * PRICE_PER_SECTOR * 1.1)},
        )
        wait_until(lambda: self.contract.num_submissions() == 1)
        wait_until(lambda: client.zgs_get_file_info(data_root) is not None)
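
The value attached to the submission above is a per-sector fee: the payload length is divided by 256 and multiplied by PRICE_PER_SECTOR with a 10% margin. A minimal sketch of that arithmetic, assuming 256-byte sectors; the constant below is a placeholder, not the real configured price:

PRICE_PER_SECTOR = 100  # placeholder value, only for illustration
SECTOR_SIZE = 256  # bytes per sector (assumed)

def submit_value(chunk_data: bytes, margin: float = 1.1) -> int:
    # mirrors int(len(chunk_data) / 256 * PRICE_PER_SECTOR * 1.1) used above
    return int(len(chunk_data) / SECTOR_SIZE * PRICE_PER_SECTOR * margin)

payload = b"\x02" * 16 * 256 * 1024  # the payload submitted in this test
print(submit_value(payload))  # amount passed as tx_prarams["value"]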


@@ -7,6 +7,7 @@ from utility.submission import create_submission
from utility.submission import submit_data
from utility.utils import wait_until


class RandomTest(TestFramework):
    def setup_params(self):
        self.num_blockchain_nodes = 1

@@ -32,14 +33,18 @@ class RandomTest(TestFramework):
            else:
                size = random.randint(0, max_size)
                no_data = random.random() <= no_data_ratio
            self.log.info(
                f"choose {chosen_node}, seq={i}, size={size}, no_data={no_data}"
            )

            client = self.nodes[chosen_node]
            chunk_data = random.randbytes(size)
            submissions, data_root = create_submission(chunk_data)
            self.contract.submit(submissions)
            wait_until(lambda: self.contract.num_submissions() == i + 1)
            wait_until(
                lambda: client.zgs_get_file_info(data_root) is not None, timeout=120
            )

            if not no_data:
                submit_data(client, chunk_data)
                wait_until(lambda: client.zgs_get_file_info(data_root)["finalized"])

@@ -47,8 +52,17 @@ class RandomTest(TestFramework):
                for node_index in range(len(self.nodes)):
                    if node_index != chosen_node:
                        self.log.debug(f"check {node_index}")
                        wait_until(
                            lambda: self.nodes[node_index].zgs_get_file_info(data_root)
                            is not None,
                            timeout=300,
                        )
                        wait_until(
                            lambda: self.nodes[node_index].zgs_get_file_info(data_root)[
                                "finalized"
                            ],
                            timeout=300,
                        )

            # TODO(zz): This is a temp solution to trigger auto sync after all nodes started.
            if i >= tx_count - 2:
                continue

@@ -72,7 +86,10 @@ class RandomTest(TestFramework):
        if not no_data:
            for node in self.nodes:
                self.log.debug(f"check {data_root}, {node.index}")
                wait_until(
                    lambda: node.zgs_get_file_info(data_root)["finalized"],
                    timeout=300,
                )


if __name__ == "__main__":


@@ -32,8 +32,6 @@ class RootConsistencyTest(TestFramework):
        assert_equal(contract_length, expected_length)
        assert_equal(contract_root, node_root[2:])

    def run_test(self):
        self.assert_flow_status(1)

@@ -48,7 +46,6 @@ class RootConsistencyTest(TestFramework):
        self.submit_data(b"\x13", 512 + 256)
        self.assert_flow_status(1024 + 512 + 256)


if __name__ == "__main__":


@@ -29,6 +29,7 @@ class RpcTest(TestFramework):
        wait_until(lambda: client1.zgs_get_file_info(data_root) is not None)
        assert_equal(client1.zgs_get_file_info(data_root)["finalized"], False)
+       assert_equal(client1.zgs_get_file_info(data_root)["pruned"], False)

        wait_until(lambda: client2.zgs_get_file_info(data_root) is not None)
        assert_equal(client2.zgs_get_file_info(data_root)["finalized"], False)

@@ -37,17 +38,13 @@ class RpcTest(TestFramework):
        self.log.info("segment: %s", segment)

        wait_until(lambda: client1.zgs_get_file_info(data_root)["finalized"])
        assert_equal(client1.zgs_download_segment(data_root, 0, 1), segment[0]["data"])

        client2.admin_start_sync_file(0)
        wait_until(lambda: client2.sync_status_is_completed_or_unknown(0))

        wait_until(lambda: client2.zgs_get_file_info(data_root)["finalized"])
        assert_equal(client2.zgs_download_segment(data_root, 0, 1), segment[0]["data"])

        self.__test_upload_file_with_cli(client1)

@@ -89,9 +86,7 @@ class RpcTest(TestFramework):
                )
            )

            wait_until(lambda: self.nodes[i].zgs_get_file_info(root)["finalized"])

            assert_equal(
                client1.zgs_download_segment(root, 0, 2),


@@ -41,7 +41,11 @@ class SubmissionTest(TestFramework):
        for tx_offset in range(same_root_tx_count + 1):
            tx_seq = next_tx_seq - 1 - tx_offset
            # old txs are finalized after finalizing the new tx, so we may need to wait here.
            wait_until(
                lambda: self.nodes[0].zgs_get_file_info_by_tx_seq(tx_seq)[
                    "finalized"
                ]
            )

        # Send tx after uploading data
        for _ in range(same_root_tx_count):

@@ -57,7 +61,10 @@ class SubmissionTest(TestFramework):
        client = self.nodes[node_idx]
        wait_until(lambda: client.zgs_get_file_info_by_tx_seq(tx_seq) is not None)
        wait_until(
            lambda: client.zgs_get_file_info_by_tx_seq(tx_seq)["finalized"]
            == data_finalized
        )

    def submit_data(self, chunk_data, node_idx=0):
        _, data_root = create_submission(chunk_data)


@@ -18,30 +18,30 @@ class ShardSubmitTest(TestFramework):
        self.num_blockchain_nodes = 1
        self.num_nodes = 4
        self.zgs_node_configs[0] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "0/4",
        }
        self.zgs_node_configs[1] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "1/4",
        }
        self.zgs_node_configs[2] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "2/4",
        }
        self.zgs_node_configs[3] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "3/4",
        }

    def run_test(self):
        data_size = [
            256 * 960,
            256 * 1024,
            2,
            255,
            256 * 960,
            256 * 120,
            256,
            257,
            1023,

@@ -77,5 +77,6 @@ class ShardSubmitTest(TestFramework):
            submit_data(client, chunk_data)
            wait_until(lambda: client.zgs_get_file_info(data_root)["finalized"])


if __name__ == "__main__":
    ShardSubmitTest().main()


@@ -13,16 +13,16 @@ class PrunerTest(TestFramework):
        self.num_blockchain_nodes = 1
        self.num_nodes = 4
        self.zgs_node_configs[0] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "0/2",
        }
        self.zgs_node_configs[1] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "1/2",
        }
        self.zgs_node_configs[3] = {
            "db_max_num_sectors": 2**30,
            "shard_position": "1/4",
        }
        self.enable_market = True

@@ -31,7 +35,10 @@ class PrunerTest(TestFramework):
        chunk_data = b"\x02" * 8 * 256 * 1024
        submissions, data_root = create_submission(chunk_data)
        self.contract.submit(
            submissions,
            tx_prarams={"value": int(len(chunk_data) / 256 * PRICE_PER_SECTOR * 1.1)},
        )
        wait_until(lambda: self.contract.num_submissions() == 1)
        wait_until(lambda: client.zgs_get_file_info(data_root) is not None)

@@ -57,10 +60,18 @@ class PrunerTest(TestFramework):
        for i in range(len(segments)):
            index_store = i % 2
            index_empty = 1 - i % 2
            seg0 = self.nodes[index_store].zgs_download_segment(
                data_root, i * 1024, (i + 1) * 1024
            )
            seg1 = self.nodes[index_empty].zgs_download_segment(
                data_root, i * 1024, (i + 1) * 1024
            )
            seg2 = self.nodes[2].zgs_download_segment(
                data_root, i * 1024, (i + 1) * 1024
            )
            seg3 = self.nodes[3].zgs_download_segment(
                data_root, i * 1024, (i + 1) * 1024
            )
            # base64 encoding size
            assert_equal(len(seg0), 349528)
            assert_equal(seg1, None)
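
The expected length of 349528 follows from the segment geometry these tests appear to assume: a download range of [i * 1024, (i + 1) * 1024) covers 1024 chunks of 256 bytes each, i.e. 262144 raw bytes, which the RPC returns base64-encoded (4 output characters per 3 input bytes, padded). A quick check of that arithmetic, assuming the 256-byte chunk size:

import base64

CHUNK_SIZE = 256  # bytes per chunk (assumed)
CHUNKS_PER_SEGMENT = 1024

raw = b"\x02" * CHUNK_SIZE * CHUNKS_PER_SEGMENT  # one full segment, 262144 bytes
encoded = base64.b64encode(raw)
assert len(encoded) == 349528  # matches assert_equal(len(seg0), 349528)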


@@ -5,6 +5,7 @@ import shutil
from test_framework.test_framework import TestFramework
from utility.utils import wait_until


class SnapshotTask(TestFramework):
    def setup_params(self):
        self.num_nodes = 2

@@ -28,11 +29,11 @@ class SnapshotTask(TestFramework):
        # Start the last node to verify historical file sync
        self.nodes[1].shutdown()
        shutil.rmtree(os.path.join(self.nodes[1].data_dir, "db/data_db"))
        self.start_storage_node(1)
        self.nodes[1].wait_for_rpc_connection()

        wait_until(lambda: self.nodes[1].zgs_get_file_info(data_root_1) is not None)
        wait_until(lambda: self.nodes[1].zgs_get_file_info(data_root_1)["finalized"])


@@ -77,9 +77,7 @@ class SubmissionTest(TestFramework):
                continue

            # Wait for log entry before file sync, otherwise, admin_startSyncFile will be failed.
            wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root) is not None)

            self.nodes[i].admin_start_sync_file(submission_index - 1)

@@ -89,15 +87,11 @@ class SubmissionTest(TestFramework):
                )
            )

            wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root)["finalized"])

            assert_equal(
                base64.b64decode(
                    self.nodes[i].zgs_download_segment(data_root, 0, 1).encode("utf-8")
                ),
                first_entry,
            )


@@ -3,17 +3,14 @@
from test_framework.test_framework import TestFramework
from utility.utils import wait_until


class AutoSyncHistoricalTest(TestFramework):
    def setup_params(self):
        self.num_nodes = 4

        # Enable auto sync
        for i in range(self.num_nodes):
            self.zgs_node_configs[i] = {"sync": {"auto_sync_enabled": True}}

    def run_test(self):
        # Stop the last node to verify historical file sync

@@ -26,17 +23,36 @@ class AutoSyncHistoricalTest(TestFramework):
        # Files should be available on other nodes via auto sync
        for i in range(1, self.num_nodes - 1):
            wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root_1) is not None)
            wait_until(
                lambda: self.nodes[i].zgs_get_file_info(data_root_1)["finalized"]
            )
            wait_until(lambda: self.nodes[i].zgs_get_file_info(data_root_2) is not None)
            wait_until(
                lambda: self.nodes[i].zgs_get_file_info(data_root_2)["finalized"]
            )

        # Start the last node to verify historical file sync
        self.start_storage_node(self.num_nodes - 1)
        self.nodes[self.num_nodes - 1].wait_for_rpc_connection()
        wait_until(
            lambda: self.nodes[self.num_nodes - 1].zgs_get_file_info(data_root_1)
            is not None
        )
        wait_until(
            lambda: self.nodes[self.num_nodes - 1].zgs_get_file_info(data_root_1)[
                "finalized"
            ]
        )
        wait_until(
            lambda: self.nodes[self.num_nodes - 1].zgs_get_file_info(data_root_2)
            is not None
        )
        wait_until(
            lambda: self.nodes[self.num_nodes - 1].zgs_get_file_info(data_root_2)[
                "finalized"
            ]
        )


if __name__ == "__main__":
    AutoSyncHistoricalTest().main()


@@ -3,17 +3,14 @@
from test_framework.test_framework import TestFramework
from utility.utils import wait_until


class AutoSyncTest(TestFramework):
    def setup_params(self):
        self.num_nodes = 2

        # Enable auto sync
        for i in range(self.num_nodes):
            self.zgs_node_configs[i] = {"sync": {"auto_sync_enabled": True}}

    def run_test(self):
        # Submit and upload files on node 0

@@ -26,5 +23,6 @@ class AutoSyncTest(TestFramework):
        wait_until(lambda: self.nodes[1].zgs_get_file_info(data_root_2) is not None)
        wait_until(lambda: self.nodes[1].zgs_get_file_info(data_root_2)["finalized"])


if __name__ == "__main__":
    AutoSyncTest().main()


@@ -6,6 +6,7 @@ from test_framework.test_framework import TestFramework
from utility.submission import data_to_segments
from utility.utils import assert_equal, wait_until


class SyncChunksTest(TestFramework):
    """
    By default, auto_sync_enabled and sync_file_on_announcement_enabled are both false,

@@ -17,9 +18,7 @@ class SyncChunksTest(TestFramework):
        # enable find chunks topic
        for i in range(self.num_nodes):
            self.zgs_node_configs[i] = {"network_find_chunks_enabled": True}

    def run_test(self):
        client1 = self.nodes[0]

@@ -35,20 +34,25 @@ class SyncChunksTest(TestFramework):
        # Upload only 2nd segment to storage node
        segments = data_to_segments(chunk_data)
        self.log.info(
            "segments: %s", [(s["root"], s["index"], s["proof"]) for s in segments]
        )
        assert client1.zgs_upload_segment(segments[1]) is None

        # segment 0 is not able to download
        assert client1.zgs_download_segment_decoded(data_root, 0, 1024) is None
        # segment 1 is available to download
        assert_equal(
            client1.zgs_download_segment_decoded(data_root, 1024, 2048),
            chunk_data[1024 * 256 : 2048 * 256],
        )
        # segment 2 is not able to download
        assert client1.zgs_download_segment_decoded(data_root, 2048, 3072) is None

        # Segment 1 should not be able to download on node 2
        wait_until(lambda: client2.zgs_get_file_info(data_root) is not None)
        assert_equal(client2.zgs_get_file_info(data_root)["finalized"], False)
        assert client2.zgs_download_segment_decoded(data_root, 1024, 2048) is None

        # Restart node 1 to check if the proof nodes are persisted.
        self.stop_storage_node(0)

@@ -56,12 +60,19 @@ class SyncChunksTest(TestFramework):
        self.nodes[0].wait_for_rpc_connection()

        # Trigger chunks sync by rpc
        assert client2.admin_start_sync_chunks(0, 1024, 2048) is None
        wait_until(lambda: client2.sync_status_is_completed_or_unknown(0))
        wait_until(
            lambda: client2.zgs_download_segment_decoded(data_root, 1024, 2048)
            is not None
        )

        # Validate data
        assert_equal(
            client2.zgs_download_segment_decoded(data_root, 1024, 2048),
            chunk_data[1024 * 256 : 2048 * 256],
        )


if __name__ == "__main__":
    SyncChunksTest().main()
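
The slice bounds used in the assertions above rely on a fixed mapping from chunk indices to byte offsets; the 256-byte chunk size is inferred from chunk_data[1024 * 256 : 2048 * 256] and is an assumption in this sketch:

CHUNK_SIZE = 256  # bytes per chunk (assumed)

def expected_bytes(chunk_data: bytes, start_chunk: int, end_chunk: int) -> bytes:
    # bytes a decoded download of [start_chunk, end_chunk) should return
    return chunk_data[start_chunk * CHUNK_SIZE : end_chunk * CHUNK_SIZE]

data = bytes(range(256)) * 3072  # three segments of dummy data
assert expected_bytes(data, 1024, 2048) == data[1024 * 256 : 2048 * 256]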


@@ -5,6 +5,7 @@ import time
from test_framework.test_framework import TestFramework
from utility.utils import assert_equal, wait_until


class SyncFileTest(TestFramework):
    """
    By default, auto_sync_enabled and sync_file_on_announcement_enabled are both false,

@@ -26,7 +27,7 @@ class SyncFileTest(TestFramework):
        # restart client2
        client2.start()
        client2.wait_for_rpc_connection()

        # File should not be auto sync on node 2 and there is no cached file locations
        wait_until(lambda: client2.zgs_get_file_info(data_root) is not None)
        time.sleep(3)

@@ -35,7 +36,7 @@ class SyncFileTest(TestFramework):
        # assert(client2.admin_get_file_location(0) is None)

        # Trigger file sync by rpc
        assert client2.admin_start_sync_file(0) is None
        wait_until(lambda: client2.sync_status_is_completed_or_unknown(0))
        wait_until(lambda: client2.zgs_get_file_info(data_root)["finalized"])
        # file sync use ASK_FILE & ANSWER FILE protocol, and do not cache file announcement anymore.

@@ -47,5 +48,6 @@ class SyncFileTest(TestFramework):
            client1.zgs_download_segment(data_root, 0, 1024),
        )


if __name__ == "__main__":
    SyncFileTest().main()


@@ -6,8 +6,8 @@ from utility.run_all import run_all

if __name__ == "__main__":
    run_all(
        test_dir=os.path.dirname(__file__),
        slow_tests={"mine_test.py", "random_test.py", "same_root_test.py"},
        long_manual_tests={"fuzz_test.py"},
        single_run_tests={"mine_with_market_test.py"},
    )


@@ -12,11 +12,7 @@ from config.node_config import (
    TX_PARAMS,
)
from utility.simple_rpc_proxy import SimpleRpcProxy
from utility.utils import initialize_config, wait_until, estimate_st_performance
from test_framework.contracts import load_contract_metadata

@@ -36,6 +32,7 @@ class BlockChainNodeType(Enum):
        else:
            raise AssertionError("Unsupported blockchain type")


@unique
class NodeType(Enum):
    BlockChain = 0

@@ -50,7 +47,7 @@ class TestNode:
    def __init__(
        self, node_type, index, data_dir, rpc_url, binary, config, log, rpc_timeout=10
    ):
        assert os.path.exists(binary), "Binary not found: %s" % binary
        self.node_type = node_type
        self.index = index
        self.data_dir = data_dir

@@ -270,12 +267,14 @@ class BlockchainNode(TestNode):
        def deploy_contract(name, args=None):
            if args is None:
                args = []
            contract_interface = load_contract_metadata(
                path=self.contract_path, name=name
            )
            contract = w3.eth.contract(
                abi=contract_interface["abi"],
                bytecode=contract_interface["bytecode"],
            )

            tx_params = TX_PARAMS.copy()
            del tx_params["gas"]
            tx_hash = contract.constructor(*args).transact(tx_params)

@@ -285,7 +284,7 @@ class BlockchainNode(TestNode):
                abi=contract_interface["abi"],
            )
            return contract, tx_hash

        def deploy_no_market():
            self.log.debug("Start deploy contracts")

@@ -301,12 +300,16 @@ class BlockchainNode(TestNode):
            mine_contract, _ = deploy_contract("PoraMineTest", [0])
            self.log.debug("Mine deployed")

            mine_contract.functions.initialize(
                1, flow_contract.address, dummy_reward_contract.address
            ).transact(TX_PARAMS)
            mine_contract.functions.setDifficultyAdjustRatio(1).transact(TX_PARAMS)
            mine_contract.functions.setTargetSubmissions(2).transact(TX_PARAMS)
            self.log.debug("Mine Initialized")

            flow_initialize_hash = flow_contract.functions.initialize(
                dummy_market_contract.address, mine_period
            ).transact(TX_PARAMS)
            self.log.debug("Flow Initialized")

            self.wait_for_transaction_receipt(w3, flow_initialize_hash)

@@ -315,45 +318,60 @@ class BlockchainNode(TestNode):
            # tx_hash = mine_contract.functions.setMiner(decode_hex(MINER_ID)).transact(TX_PARAMS)
            # self.wait_for_transaction_receipt(w3, tx_hash)

            return (
                flow_contract,
                flow_initialize_hash,
                mine_contract,
                dummy_reward_contract,
            )

        def deploy_with_market(lifetime_seconds):
            self.log.debug("Start deploy contracts")

            mine_contract, _ = deploy_contract("PoraMineTest", [0])
            self.log.debug("Mine deployed")

            market_contract, _ = deploy_contract("FixedPrice", [])
            self.log.debug("Market deployed")

            reward_contract, _ = deploy_contract(
                "ChunkLinearReward", [lifetime_seconds]
            )
            self.log.debug("Reward deployed")

            flow_contract, _ = deploy_contract("FixedPriceFlow", [0])
            self.log.debug("Flow deployed")

            mine_contract.functions.initialize(
                1, flow_contract.address, reward_contract.address
            ).transact(TX_PARAMS)
            mine_contract.functions.setDifficultyAdjustRatio(1).transact(TX_PARAMS)
            mine_contract.functions.setTargetSubmissions(2).transact(TX_PARAMS)
            self.log.debug("Mine Initialized")

            market_contract.functions.initialize(
                int(lifetime_seconds * 256 * 10 * 10**18 / 2**30 / 12 / 31 / 86400),
                flow_contract.address,
                reward_contract.address,
            ).transact(TX_PARAMS)
            self.log.debug("Market Initialized")

            reward_contract.functions.initialize(
                market_contract.address, mine_contract.address
            ).transact(TX_PARAMS)
            reward_contract.functions.setBaseReward(10**18).transact(TX_PARAMS)
            self.log.debug("Reward Initialized")

            flow_initialize_hash = flow_contract.functions.initialize(
                market_contract.address, mine_period
            ).transact(TX_PARAMS)
            self.log.debug("Flow Initialized")

            self.wait_for_transaction_receipt(w3, flow_initialize_hash)
            self.log.info("All contracts deployed")

            return flow_contract, flow_initialize_hash, mine_contract, reward_contract

        if enable_market:
            return deploy_with_market(lifetime_seconds)
        else:

@@ -376,4 +394,7 @@ class BlockchainNode(TestNode):
        w3.eth.wait_for_transaction_receipt(tx_hash)

    def start(self):
        super().start(
            self.blockchain_node_type == BlockChainNodeType.BSC
            or self.blockchain_node_type == BlockChainNodeType.ZG
        )
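
The FixedPrice initialization argument built in deploy_with_market appears to prorate an annual price of 10 tokens (18 decimals) per GiB down to a single 256-byte sector over lifetime_seconds, using a 12 * 31 * 86400-second year; that reading is an assumption, but the arithmetic below reproduces the expression in the diff:

def price_per_sector(lifetime_seconds: int) -> int:
    tokens_per_gib_year = 10 * 10**18  # assumed annual price, 18 decimals
    sector_bytes, gib = 256, 2**30
    year_seconds = 12 * 31 * 86400
    return int(
        lifetime_seconds * sector_bytes * tokens_per_gib_year / gib / year_seconds
    )

print(price_per_sector(240))  # e.g. the lifetime_seconds used by the pruner test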


@@ -36,28 +36,36 @@ class ContractProxy:
        tx_params = copy(TX_PARAMS)
        tx_params["value"] = value
        return getattr(contract.functions, fn_name)(**args).transact(tx_params)

    def _logs(self, event_name, node_idx, **args):
        assert node_idx < len(self.blockchain_nodes)

        contract = self._get_contract(node_idx)

        return (
            getattr(contract.events, event_name)
            .create_filter(from_block=0, to_block="latest")
            .get_all_entries()
        )

    def transfer(self, value, node_idx=0):
        tx_params = copy(TX_PARAMS)
        tx_params["value"] = value

        contract = self._get_contract(node_idx)
        contract.receive.transact(tx_params)

    def address(self):
        return self.contract_address


class FlowContractProxy(ContractProxy):
    def submit(
        self,
        submission_nodes,
        node_idx=0,
        tx_prarams=None,
        parent_hash=None,
    ):
        assert node_idx < len(self.blockchain_nodes)

@@ -65,11 +73,12 @@ class FlowContractProxy(ContractProxy):
        if tx_prarams is not None:
            combined_tx_prarams.update(tx_prarams)

        contract = self._get_contract(node_idx)
        # print(contract.functions.submit(submission_nodes).estimate_gas(combined_tx_prarams))
        tx_hash = contract.functions.submit(submission_nodes).transact(
            combined_tx_prarams
        )
        receipt = self.blockchain_nodes[node_idx].wait_for_transaction_receipt(
            contract.w3, tx_hash, parent_hash=parent_hash
        )

@@ -85,46 +94,46 @@ class FlowContractProxy(ContractProxy):
    def epoch(self, node_idx=0):
        return self.get_mine_context(node_idx)[0]

    def update_context(self, node_idx=0):
        return self._send("makeContext", node_idx)

    def get_mine_context(self, node_idx=0):
        return self._call("makeContextWithResult", node_idx)

    def get_flow_root(self, node_idx=0):
        return self._call("computeFlowRoot", node_idx)

    def get_flow_length(self, node_idx=0):
        return self._call("tree", node_idx)[0]


class MineContractProxy(ContractProxy):
    def last_mined_epoch(self, node_idx=0):
        return self._call("lastMinedEpoch", node_idx)

    def can_submit(self, node_idx=0):
        return self._call("canSubmit", node_idx)

    def current_submissions(self, node_idx=0):
        return self._call("currentSubmissions", node_idx)

    def set_quality(self, quality, node_idx=0):
        return self._send("setQuality", node_idx, _targetQuality=quality)


class RewardContractProxy(ContractProxy):
    def reward_distributes(self, node_idx=0):
        return self._logs("DistributeReward", node_idx)

    def donate(self, value, node_idx=0):
        return self._send_payable("donate", node_idx, value)

    def base_reward(self, node_idx=0):
        return self._call("baseReward", node_idx)

    def first_rewardable_chunk(self, node_idx=0):
        return self._call("firstRewardableChunk", node_idx)

    def reward_deadline(self, node_idx=0):
        return self._call("rewardDeadline", node_idx, 0)


@@ -1,6 +1,7 @@
from pathlib import Path
import json


def load_contract_metadata(path: str, name: str):
    path = Path(path)
    try:

@@ -8,4 +9,3 @@ def load_contract_metadata(path: str, name: str):
        return json.loads(open(found_file, "r").read())
    except StopIteration:
        raise Exception(f"Cannot found contract {name}'s metadata")


@@ -15,7 +15,11 @@ from pathlib import Path
from eth_utils import encode_hex
from test_framework.bsc_node import BSCNode
from test_framework.contract_proxy import (
    FlowContractProxy,
    MineContractProxy,
    RewardContractProxy,
)
from test_framework.zgs_node import ZgsNode
from test_framework.blockchain_node import BlockChainNodeType
from test_framework.conflux_node import ConfluxNode, connect_sample_nodes

@@ -74,13 +78,17 @@ class TestFramework:
            root_dir, "target", "release", "zgs_node" + binary_ext
        )
        self.__default_zgs_cli_binary__ = os.path.join(
            tests_dir, "tmp", "0g-storage-client" + binary_ext
        )

    def __setup_blockchain_node(self):
        if self.blockchain_node_type == BlockChainNodeType.ZG:
            zg_node_init_genesis(
                self.blockchain_binary, self.root_dir, self.num_blockchain_nodes
            )
            self.log.info(
                "0gchain genesis initialized for %s nodes" % self.num_blockchain_nodes
            )

        for i in range(self.num_blockchain_nodes):
            if i in self.blockchain_node_configs:

@@ -171,15 +179,20 @@ class TestFramework:
            self.log.debug("Wait for 0gchain node to generate first block")
            time.sleep(0.5)
            for node in self.blockchain_nodes:
                wait_until(
                    lambda: node.net_peerCount() == self.num_blockchain_nodes - 1
                )
                wait_until(lambda: node.eth_blockNumber() is not None)
                wait_until(lambda: int(node.eth_blockNumber(), 16) > 0)

        contract, tx_hash, mine_contract, reward_contract = self.blockchain_nodes[
            0
        ].setup_contract(self.enable_market, self.mine_period, self.lifetime_seconds)
        self.contract = FlowContractProxy(contract, self.blockchain_nodes)
        self.mine_contract = MineContractProxy(mine_contract, self.blockchain_nodes)
        self.reward_contract = RewardContractProxy(
            reward_contract, self.blockchain_nodes
        )
        for node in self.blockchain_nodes[1:]:
            node.wait_for_transaction(tx_hash)

@@ -216,12 +229,14 @@ class TestFramework:
            time.sleep(1)
            node.start()

        self.log.info(
            "Wait the zgs_node launch for %d seconds", self.launch_wait_seconds
        )
        time.sleep(self.launch_wait_seconds)

        for node in self.nodes:
            node.wait_for_rpc_connection()

    def add_arguments(self, parser: argparse.ArgumentParser):
        parser.add_argument(
            "--conflux-binary",

@@ -336,10 +351,12 @@ class TestFramework:
        self.log.addHandler(ch)

    def _check_cli_binary(self):
        if Path(self.cli_binary).absolute() == Path(
            self.__default_zgs_cli_binary__
        ).absolute() and not os.path.exists(self.cli_binary):
            dir = Path(self.cli_binary).parent.absolute()
            build_cli(dir)

        assert os.path.exists(self.cli_binary), (
            "zgs CLI binary not found: %s" % self.cli_binary
        )

@@ -353,14 +370,12 @@ class TestFramework:
        file_to_upload,
    ):
        self._check_cli_binary()

        upload_args = [
            self.cli_binary,
            "upload",
            "--url",
            blockchain_node_rpc_url,
-           "--contract",
-           contract_address,
            "--key",
            encode_hex(key),
            "--node",

@@ -372,7 +387,9 @@ class TestFramework:
            "--file",
        ]

        output = tempfile.NamedTemporaryFile(
            dir=self.root_dir, delete=False, prefix="zgs_client_output_"
        )
        output_name = output.name
        output_fileno = output.fileno()

@@ -383,7 +400,7 @@ class TestFramework:
            stdout=output_fileno,
            stderr=output_fileno,
        )

        return_code = proc.wait(timeout=60)

        output.seek(0)

@@ -392,17 +409,27 @@ class TestFramework:
                line = line.decode("utf-8")
                self.log.debug("line: %s", line)
                if "root" in line:
                    filtered_line = re.sub(
                        r"\x1b\[([0-9,A-Z]{1,2}(;[0-9]{1,2})?(;[0-9]{3})?)?[m|K]?",
                        "",
                        line,
                    )
                    index = filtered_line.find("root=")
                    if index > 0:
                        root = filtered_line[index + 5 : index + 5 + 66]
        except Exception as ex:
            self.log.error(
                "Failed to upload file via CLI tool, output: %s", output_name
            )
            raise ex
        finally:
            output.close()

        assert return_code == 0, "%s upload file failed, output: %s, log: %s" % (
            self.cli_binary,
            output_name,
            lines,
        )

        return root

@@ -410,8 +437,15 @@ class TestFramework:
        submissions, data_root = create_submission(chunk_data)
        self.contract.submit(submissions)
        self.num_deployed_contracts += 1
        wait_until(
            lambda: self.contract.num_submissions() == self.num_deployed_contracts
        )
        self.log.info(
            "Submission completed, data root: %s, submissions(%s) = %s",
            data_root,
            len(submissions),
            submissions,
        )
        return data_root

    def __upload_file__(self, node_index: int, random_data_size: int) -> str:

@@ -426,7 +460,9 @@ class TestFramework:
        # Upload file to storage node
        segments = submit_data(client, chunk_data)
        self.log.info(
            "segments: %s", [(s["root"], s["index"], s["proof"]) for s in segments]
        )
        wait_until(lambda: client.zgs_get_file_info(data_root)["finalized"])
        return data_root

@@ -452,7 +488,6 @@ class TestFramework:
        if clean:
            self.nodes[index].clean_data()

    def start_storage_node(self, index):
        self.nodes[index].start()

@@ -484,7 +519,7 @@ class TestFramework:
            if os.path.islink(dst):
                os.remove(dst)
            elif os.path.isdir(dst):
                shutil.rmtree(dst)
            elif os.path.exists(dst):
                os.remove(dst)


@@ -3,24 +3,35 @@ import subprocess
import tempfile

from test_framework.blockchain_node import BlockChainNodeType, BlockchainNode
from utility.utils import (
    blockchain_p2p_port,
    blockchain_rpc_port,
    blockchain_ws_port,
    blockchain_rpc_port_tendermint,
    pprof_port,
)
from utility.build_binary import build_zg


def zg_node_init_genesis(binary: str, root_dir: str, num_nodes: int):
    assert num_nodes > 0, "Invalid number of blockchain nodes: %s" % num_nodes

    if not os.path.exists(binary):
        build_zg(os.path.dirname(binary))

    shell_script = os.path.join(
        os.path.dirname(os.path.realpath(__file__)),  # test_framework folder
        "..",
        "config",
        "0gchain-init-genesis.sh",
    )

    zgchaind_dir = os.path.join(root_dir, "0gchaind")
    os.mkdir(zgchaind_dir)

    log_file = tempfile.NamedTemporaryFile(
        dir=zgchaind_dir, delete=False, prefix="init_genesis_", suffix=".log"
    )
    p2p_port_start = blockchain_p2p_port(0)

    ret = subprocess.run(

@@ -31,7 +42,11 @@ def zg_node_init_genesis(binary: str, root_dir: str, num_nodes: int):
    log_file.close()

    assert ret.returncode == 0, (
        "Failed to init 0gchain genesis, see more details in log file: %s"
        % log_file.name
    )


class ZGNode(BlockchainNode):
    def __init__(

@@ -61,19 +76,27 @@ class ZGNode(BlockchainNode):
        self.config_file = None

        self.args = [
            binary,
            "start",
            "--home",
            data_dir,
            # overwrite json rpc http port: 8545
            "--json-rpc.address",
            "127.0.0.1:%s" % blockchain_rpc_port(index),
            # overwrite json rpc ws port: 8546
            "--json-rpc.ws-address",
            "127.0.0.1:%s" % blockchain_ws_port(index),
            # overwrite p2p port: 26656
            "--p2p.laddr",
            "tcp://127.0.0.1:%s" % blockchain_p2p_port(index),
            # overwrite rpc port: 26657
            "--rpc.laddr",
            "tcp://127.0.0.1:%s" % blockchain_rpc_port_tendermint(index),
            # overwrite pprof port: 6060
            "--rpc.pprof_laddr",
            "127.0.0.1:%s" % pprof_port(index),
            "--log_level",
            "debug",
        ]

        for k, v in updated_config.items():

@@ -82,4 +105,4 @@ class ZGNode(BlockchainNode):
                self.args.append(str(v))

    def setup_config(self):
        """Already batch initialized by shell script in framework"""


@@ -11,6 +11,7 @@ from utility.utils import (
    blockchain_rpc_port,
)


class ZgsNode(TestNode):
    def __init__(
        self,

@@ -98,17 +99,21 @@ class ZgsNode(TestNode):
    def zgs_download_segment(self, data_root, start_index, end_index):
        return self.rpc.zgs_downloadSegment([data_root, start_index, end_index])

    def zgs_download_segment_decoded(
        self, data_root: str, start_chunk_index: int, end_chunk_index: int
    ) -> bytes:
        encodedSegment = self.rpc.zgs_downloadSegment(
            [data_root, start_chunk_index, end_chunk_index]
        )
        return None if encodedSegment is None else base64.b64decode(encodedSegment)

    def zgs_get_file_info(self, data_root):
-       return self.rpc.zgs_getFileInfo([data_root])
+       return self.rpc.zgs_getFileInfo([data_root, True])

    def zgs_get_file_info_by_tx_seq(self, tx_seq):
        return self.rpc.zgs_getFileInfoByTxSeq([tx_seq])

    def zgs_get_flow_context(self, tx_seq):
        return self.rpc.zgs_getFlowContext([tx_seq])

@@ -118,9 +123,13 @@ class ZgsNode(TestNode):
    def admin_start_sync_file(self, tx_seq):
        return self.rpc.admin_startSyncFile([tx_seq])

    def admin_start_sync_chunks(
        self, tx_seq: int, start_chunk_index: int, end_chunk_index: int
    ):
        return self.rpc.admin_startSyncChunks(
            [tx_seq, start_chunk_index, end_chunk_index]
        )

    def admin_get_sync_status(self, tx_seq):
        return self.rpc.admin_getSyncStatus([tx_seq])

@@ -128,8 +137,8 @@ class ZgsNode(TestNode):
    def sync_status_is_completed_or_unknown(self, tx_seq):
        status = self.rpc.admin_getSyncStatus([tx_seq])
        return status == "Completed" or status == "unknown"

    def admin_get_file_location(self, tx_seq, all_shards=True):
        return self.rpc.admin_getFileLocation([tx_seq, all_shards])

    def clean_data(self):
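
After this change zgs_get_file_info passes a second parameter to zgs_getFileInfo; judging from the new "pruned" assertion in rpc_test.py it looks like a flag requesting extended file info, though that interpretation is an assumption. A hedged usage sketch built on the helper above:

def assert_available(node, data_root):
    # zgs_get_file_info wraps zgs_getFileInfo([data_root, True]) after this change
    info = node.zgs_get_file_info(data_root)
    assert info is not None, "file not announced to this node yet"
    assert info["pruned"] is False  # field checked by rpc_test.py
    return info["finalized"]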


@@ -9,31 +9,40 @@ from enum import Enum, unique
from utility.utils import is_windows_platform, wait_until

# v1.0.0-ci release
GITHUB_DOWNLOAD_URL = (
    "https://api.github.com/repos/0glabs/0g-storage-node/releases/152560136"
)
CONFLUX_BINARY = "conflux.exe" if is_windows_platform() else "conflux"
BSC_BINARY = "geth.exe" if is_windows_platform() else "geth"
ZG_BINARY = "0gchaind.exe" if is_windows_platform() else "0gchaind"
CLIENT_BINARY = (
    "0g-storage-client.exe" if is_windows_platform() else "0g-storage-client"
)
CLI_GIT_REV = "98d74b7e7e6084fc986cb43ce2c66692dac094a6"

@unique
class BuildBinaryResult(Enum):
    AlreadyExists = 0
    Installed = 1
    NotInstalled = 2

def build_conflux(dir: str) -> BuildBinaryResult:
    # Download or build conflux binary if absent
    result = __download_from_github(
        dir=dir,
        binary_name=CONFLUX_BINARY,
        github_url=GITHUB_DOWNLOAD_URL,
        asset_name=__asset_name(CONFLUX_BINARY, zip=True),
    )

    if (
        result == BuildBinaryResult.AlreadyExists
        or result == BuildBinaryResult.Installed
    ):
        return result

    return __build_from_github(
@@ -44,6 +53,7 @@ def build_conflux(dir: str) -> BuildBinaryResult:
        compiled_relative_path=["target", "release"],
    )

def build_bsc(dir: str) -> BuildBinaryResult:
    # Download bsc binary if absent
    result = __download_from_github(
@@ -55,10 +65,13 @@ def build_bsc(dir: str) -> BuildBinaryResult:
    # Requires to download binary successfully, since it is not ready to build
    # binary from source code.
    assert result != BuildBinaryResult.NotInstalled, (
        "Cannot download binary from github [%s]" % BSC_BINARY
    )

    return result

def build_zg(dir: str) -> BuildBinaryResult:
    # Download or build 0gchain binary if absent
    result = __download_from_github(
@@ -68,7 +81,10 @@ def build_zg(dir: str) -> BuildBinaryResult:
        asset_name=__asset_name(ZG_BINARY, zip=True),
    )

    if (
        result == BuildBinaryResult.AlreadyExists
        or result == BuildBinaryResult.Installed
    ):
        return result

    return __build_from_github(
@@ -79,17 +95,18 @@ def build_zg(dir: str) -> BuildBinaryResult:
        compiled_relative_path=[],
    )

def build_cli(dir: str) -> BuildBinaryResult:
    # Build 0g-storage-client binary if absent
    return __build_from_github(
        dir=dir,
        binary_name=CLIENT_BINARY,
        github_url="https://github.com/0glabs/0g-storage-client.git",
        git_rev=CLI_GIT_REV,
        build_cmd="go build",
        compiled_relative_path=[],
    )

def __asset_name(binary_name: str, zip: bool = False) -> str:
    sys = platform.system().lower()
    if sys == "linux":
@@ -102,24 +119,34 @@ def __asset_name(binary_name: str, zip: bool = False) -> str:
    else:
        raise RuntimeError("Unable to recognize platform")

def __build_from_github(
    dir: str,
    binary_name: str,
    github_url: str,
    build_cmd: str,
    compiled_relative_path: list[str],
    git_rev=None,
) -> BuildBinaryResult:
    if git_rev is None:
        versioned_binary_name = binary_name
    elif binary_name.endswith(".exe"):
        versioned_binary_name = binary_name.removesuffix(".exe") + f"_{git_rev}.exe"
    else:
        versioned_binary_name = f"{binary_name}_{git_rev}"

    binary_path = os.path.join(dir, binary_name)
    versioned_binary_path = os.path.join(dir, versioned_binary_name)
    if os.path.exists(versioned_binary_path):
        __create_sym_link(versioned_binary_name, binary_name, dir)
        return BuildBinaryResult.AlreadyExists

    start_time = time.time()

    # clone code from github to a temp folder
    code_tmp_dir_name = (
        binary_name[:-4] if is_windows_platform() else binary_name
    ) + "_tmp"
    code_tmp_dir = os.path.join(dir, code_tmp_dir_name)
    if os.path.exists(code_tmp_dir):
        shutil.rmtree(code_tmp_dir)
@@ -145,14 +172,22 @@ def __build_from_github(dir: str, binary_name: str, github_url: str, build_cmd:
    shutil.rmtree(code_tmp_dir, ignore_errors=True)

    print(
        "Completed to build binary "
        + binary_name
        + ", Elapsed: "
        + str(int(time.time() - start_time))
        + " seconds",
        flush=True,
    )

    return BuildBinaryResult.Installed

def __create_sym_link(src, dst, path=None):
    if src == dst:
        return

    origin_path = os.getcwd()
    if path is not None:
        os.chdir(path)
@@ -171,16 +206,19 @@ def __create_sym_link(src, dst, path = None):
    os.chdir(origin_path)

def __download_from_github(
    dir: str, binary_name: str, github_url: str, asset_name: str
) -> BuildBinaryResult:
    if not os.path.exists(dir):
        os.makedirs(dir, exist_ok=True)

    binary_path = os.path.join(dir, binary_name)
    if os.path.exists(binary_path):
        return BuildBinaryResult.AlreadyExists

    print("Begin to download binary from github: %s" % binary_name, flush=True)
    start_time = time.time()

    req = requests.get(github_url)
@@ -194,7 +232,7 @@ def __download_from_github(dir: str, binary_name: str, github_url: str, asset_na
    if download_url is None:
        print(f"Cannot find asset by name {asset_name}", flush=True)
        return BuildBinaryResult.NotInstalled

    content = requests.get(download_url).content

    # Supports to read from zipped binary
@@ -203,17 +241,24 @@ def __download_from_github(dir: str, binary_name: str, github_url: str, asset_na
        with open(asset_path, "xb") as f:
            f.write(content)
        shutil.unpack_archive(asset_path, dir)
        assert os.path.exists(
            binary_path
        ), f"Cannot find binary after unzip, binary = {binary_name}, asset = {asset_name}"
    else:
        with open(binary_path, "xb") as f:
            f.write(content)

    if not is_windows_platform():
        st = os.stat(binary_path)
        os.chmod(binary_path, st.st_mode | stat.S_IEXEC)
        wait_until(lambda: os.access(binary_path, os.X_OK), timeout=120)

    print(
        "Completed to download binary, Elapsed: "
        + str(int(time.time() - start_time))
        + " seconds",
        flush=True,
    )

    return BuildBinaryResult.Installed

View File

@@ -162,15 +162,20 @@ class MerkleTree:
        n = len(data)
        if n < ENTRY_SIZE or (n & (n - 1)) != 0:
            raise Exception("Input length is not power of 2")

        leaves = [
            Leaf.from_data(data[i : i + ENTRY_SIZE], tree.hasher)
            for i in range(0, n, ENTRY_SIZE)
        ]
        tree.__leaves = leaves

        nodes = leaves
        while len(nodes) > 1:
            next_nodes = []
            for i in range(0, len(nodes), 2):
                next_nodes.append(
                    Node.from_children(nodes[i], nodes[i + 1], tree.hasher)
                )

            nodes = next_nodes

View File

@@ -12,8 +12,20 @@ DEFAULT_PORT_MIN = 11000
DEFAULT_PORT_MAX = 65535
DEFAULT_PORT_RANGE = 500

def print_testcase_result(color, glyph, script, start_time):
    print(
        color[1]
        + glyph
        + " Testcase "
        + script
        + "\telapsed: "
        + str(int(time.time() - start_time))
        + " seconds"
        + color[0],
        flush=True,
    )

def run_single_test(py, script, test_dir, index, port_min, port_max):
    try:
@@ -58,7 +70,14 @@ def run_single_test(py, script, test_dir, index, port_min, port_max):
            raise err
    print_testcase_result(BLUE, TICK, script, start_time)

def run_all(
    test_dir: str,
    test_subdirs: list[str] = [],
    slow_tests: set[str] = {},
    long_manual_tests: set[str] = {},
    single_run_tests: set[str] = {},
):
    tmp_dir = os.path.join(test_dir, "tmp")
    if not os.path.exists(tmp_dir):
        os.makedirs(tmp_dir, exist_ok=True)
@@ -102,7 +121,11 @@ def run_all(test_dir: str, test_subdirs: list[str]=[], slow_tests: set[str]={},
        for file in os.listdir(subdir_path):
            if file.endswith("_test.py"):
                rel_path = os.path.join(subdir, file)
                if (
                    rel_path not in slow_tests
                    and rel_path not in long_manual_tests
                    and rel_path not in single_run_tests
                ):
                    TEST_SCRIPTS.append(rel_path)

    executor = ProcessPoolExecutor(max_workers=options.max_workers)
@@ -154,4 +177,3 @@ def run_all(test_dir: str, test_subdirs: list[str]=[], slow_tests: set[str]={},
    for c in failed:
        print(c)
    sys.exit(1)

View File

@@ -25,7 +25,13 @@ class RpcCaller:
            if isinstance(parsed, Ok):
                return parsed.result
            else:
                print(
                    "Failed to call RPC, method = %s(%s), error = %s"
                    % (self.method, str(*args)[-1500:], parsed)
                )
        except Exception as ex:
            print(
                "Failed to call RPC, method = %s(%s), exception = %s"
                % (self.method, str(*args)[-1500:], ex)
            )

        return None

View File

@@ -1,2 +1,2 @@
ENTRY_SIZE = 256
PORA_CHUNK_SIZE = 1024

View File

@@ -5,6 +5,7 @@ from math import log2
from utility.merkle_tree import add_0x_prefix, Leaf, MerkleTree
from utility.spec import ENTRY_SIZE, PORA_CHUNK_SIZE

def log2_pow2(n):
    return int(log2(((n ^ (n - 1)) >> 1) + 1))
@@ -106,29 +107,27 @@ def create_segment_node(data, offset, batch, size):
    else:
        tree.add_leaf(Leaf(segment_root(data[start:end])))

    return tree.get_root_hash()

segment_root_cached_chunks = None
segment_root_cached_output = None

def segment_root(chunks):
    global segment_root_cached_chunks, segment_root_cached_output

    if segment_root_cached_chunks == chunks:
        return segment_root_cached_output

    data_len = len(chunks)
    if data_len == 0:
        return b"\x00" * 32

    tree = MerkleTree()
    for i in range(0, data_len, ENTRY_SIZE):
        tree.encrypt(chunks[i : i + ENTRY_SIZE])
    digest = tree.get_root_hash()

    segment_root_cached_chunks = chunks
@@ -241,4 +240,4 @@ def data_to_segments(data):
        segments.append(segment)
        idx += 1

    return segments

View File

@@ -5,6 +5,7 @@ import rtoml
import time
import sha3

class PortMin:
    # Must be initialized with a unique integer for each process
    n = 11000
@@ -35,15 +36,19 @@ def blockchain_rpc_port(n):
def blockchain_rpc_port_core(n):
    return PortMin.n + MAX_NODES + 3 * MAX_BLOCKCHAIN_NODES + n

def blockchain_ws_port(n):
    return PortMin.n + MAX_NODES + 4 * MAX_BLOCKCHAIN_NODES + n

def blockchain_rpc_port_tendermint(n):
    return PortMin.n + MAX_NODES + 5 * MAX_BLOCKCHAIN_NODES + n

def pprof_port(n):
    return PortMin.n + MAX_NODES + 6 * MAX_BLOCKCHAIN_NODES + n

def wait_until(predicate, *, attempts=float("inf"), timeout=float("inf"), lock=None):
    if attempts == float("inf") and timeout == float("inf"):
        timeout = 60
@@ -135,11 +140,12 @@ def assert_greater_than_or_equal(thing1, thing2):
    if thing1 < thing2:
        raise AssertionError("%s < %s" % (str(thing1), str(thing2)))

# 14900K has the performance point 100
def estimate_st_performance():
    hasher = sha3.keccak_256()
    input = b"\xcc" * (1 << 26)
    start_time = time.perf_counter()
    hasher.update(input)
    digest = hasher.hexdigest()
    return 10 / (time.perf_counter() - start_time)

View File

@@ -20,12 +20,12 @@ This is a rust implementation of the [Discovery v5](https://github.com/ethereum/
peer discovery protocol.

Discovery v5 is a protocol designed for encrypted peer discovery and topic advertisement. Each peer/node
on the network is identified via its `ENR` ([Ethereum Node
Record](https://eips.ethereum.org/EIPS/eip-778)), which is essentially a signed key-value store
containing the node's public key and optionally IP address and port.

Discv5 employs a kademlia-like routing table to store and manage discovered peers and topics. The
protocol allows for external IP discovery in NAT environments through regular PING/PONGs with
discovered nodes. Nodes return the external IP address that they have received and a simple
majority is chosen as our external IP address. If an external IP address is updated, this is
produced as an event to notify the swarm (if one is used for this behaviour).
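The majority-vote idea described above is simple to sketch. The snippet below is an editor's illustration in plain Python, not the crate's actual code; it assumes each PONG carries the `(ip, port)` pair the remote peer observed for us:

```python
from collections import Counter

def majority_external_address(observed_addrs):
    """Pick the externally observed (ip, port) reported most often in PONGs.

    `observed_addrs` is a list of (ip, port) tuples collected from peers;
    returns None until at least one report has been received.
    """
    if not observed_addrs:
        return None
    address, _count = Counter(observed_addrs).most_common(1)[0]
    return address

# Example: two peers agree on 203.0.113.7:30303, one NATed peer disagrees.
reports = [("203.0.113.7", 30303), ("203.0.113.7", 30303), ("10.0.0.5", 30303)]
print(majority_external_address(reports))  # ("203.0.113.7", 30303)
```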

View File

@@ -736,7 +736,7 @@ enum ClosestBucketsIterState {
    /// The starting state of the iterator yields the first bucket index and
    /// then transitions to `ZoomIn`.
    Start(BucketIndex),
    /// The iterator "zooms in" to yield the next bucket containing nodes that
    /// are incrementally closer to the local node but further from the `target`.
    /// These buckets are identified by a `1` in the corresponding bit position
    /// of the distance bit string. When bucket `0` is reached, the iterator
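For intuition only (this is not the crate's implementation), the "zoom in" order can be sketched as iterating the 1-bits of the XOR distance from high to low, assuming the usual Kademlia convention that bucket `i` corresponds to bit `i` of the distance:

```python
def zoom_in_buckets(local_key: int, target_key: int, key_bits: int = 256):
    """Yield bucket indices holding nodes closer to the local node.

    Bucket i corresponds to bit i of the XOR distance; a 1-bit marks a
    bucket that lies between the local node and the target, so walking the
    1-bits from high to low "zooms in" towards the local node.
    """
    distance = local_key ^ target_key
    for i in range(key_bits - 1, -1, -1):
        if (distance >> i) & 1:
            yield i

# Example with 8-bit keys: distance 0b0110_1000 -> buckets 6, 5, 3.
print(list(zoom_in_buckets(0b0110_1000, 0b0000_0000, key_bits=8)))
```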

View File

@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2025 Age Manning

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -42,9 +42,9 @@ pub trait EnrKey: Send + Sync + Unpin + 'static {
    /// Returns the public key associated with current key pair.
    fn public(&self) -> Self::PublicKey;

    /// Provides a method to decode a raw public key from an ENR `BTreeMap` to a usable public key.
    ///
    /// This method allows a key type to decode the raw bytes in an ENR to a usable
    /// `EnrPublicKey`. It takes the ENR's `BTreeMap` and returns a public key.
    ///
    /// Note: This specifies the supported key schemes for an ENR.

View File

@@ -1195,7 +1195,7 @@ mod tests {
        assert_eq!(enr.tcp4(), Some(tcp));
        assert!(enr.verify());

        // Compare the encoding as the key itself can be different
        assert_eq!(enr.public_key().encode(), key.public().encode(),);
    }

View File

@@ -406,7 +406,7 @@ impl_for_vec!(SmallVec<[T; 8]>, Some(8));
/// Decodes `bytes` as if it were a list of variable-length items.
///
/// The `ssz::SszDecoder` can also perform this functionality, however it is significantly faster
/// as it is optimized to read same-typed items whilst `ssz::SszDecoder` supports reading items of
/// differing types.
pub fn decode_list_of_variable_length_items<T: Decode>(
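As background for the doc comment above, SSZ serializes a list of variable-length items as a table of 4-byte little-endian offsets followed by the concatenated item payloads. The sketch below is an editor's simplified illustration of that layout in Python, not this crate's code, and ignores the maximum-length checks a real decoder performs:

```python
import struct

def decode_variable_length_list(data: bytes) -> list[bytes]:
    """Split an SSZ-encoded list of variable-length items into raw item bytes.

    Layout: N little-endian u32 offsets (each measured from the start of
    `data`), followed by the concatenated item payloads.
    """
    if not data:
        return []
    first_offset = struct.unpack_from("<I", data, 0)[0]
    assert first_offset % 4 == 0, "offset table must be a whole number of u32s"
    count = first_offset // 4
    offsets = [struct.unpack_from("<I", data, 4 * i)[0] for i in range(count)]
    offsets.append(len(data))
    return [data[offsets[i] : offsets[i + 1]] for i in range(count)]

# Example: two items, b"ab" and b"cde"; offsets 8 and 10 point past the table.
encoded = struct.pack("<II", 8, 10) + b"ab" + b"cde"
print(decode_variable_length_list(encoded))  # [b'ab', b'cde']
```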