@peter/doc (#22)

* add documentation
0g-peterzhb 2024-03-24 22:37:46 +08:00 committed by GitHub
parent 16dfc56437
commit 06a335add2
23 changed files with 365 additions and 121 deletions

3
.bookignore Normal file

@@ -0,0 +1,3 @@
version-meld/*
common/*

Binary files not shown: five images added (141 KiB, 251 KiB, 98 KiB, 610 KiB, 687 KiB).

4
NOTICE

@@ -1,5 +1,5 @@
ZeroGStorage
Copyright 2023 ZeroGStorage
0G Storage
Copyright 2023 0G Storage
The Initial Developer of some parts of the framework, which are copied from, derived from, or
inspired by Lighthouse, is Sigma Prime Pty Ltd (https://sigmaprime.io).

76
README.md Normal file

@@ -0,0 +1,76 @@
# 0G Storage
## Overview
0G Storage is the storage layer for the ZeroGravity data availability (DA) system. The 0G Storage layer holds three important features:
* Built-in - It is natively built into the ZeroGravity DA system for data storage and retrieval.
* General purpose - It is designed to support atomic transactions, mutable KV stores, and archive log systems, enabling a wide range of applications with various data types.
* Incentive - Instead of being just a decentralized database, 0G Storage introduces the PoRA mining algorithm to incentivize storage network participants.
To dive deep into the technical details, continue reading the [0G Storage Spec](docs/).
## Integration
We provide an [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage into their applications, with the following features:
* File Merkle Tree Class
* Flow Contract Types
* RPC methods support
* File upload
* Browser environment support
* Tests for different environments (In Progress)
* File download (In Progress)
## Deployment
Please refer to the [Deployment](docs/run.md) page for detailed steps to compile and start a 0G Storage node.
## Test
### Prerequisites
* Required Python version: 3.8, 3.9, or 3.10. Higher versions are not guaranteed to work (e.g., `pysha3` may fail to install).
* Install dependencies under the root folder: `pip3 install -r requirements.txt`
### Dependencies
The Python test framework launches local blockchain full nodes for the storage node to interact with. Two kinds of full nodes are supported:
* Conflux eSpace node (by default).
* BSC node (geth).
For the Conflux eSpace node, the test framework automatically compiles the binary at runtime and copies it to the `tests/tmp` folder. For the BSC node, the test framework automatically downloads the latest release binary from [GitHub](https://github.com/bnb-chain/bsc/releases) to the `tests/tmp` folder.
Alternatively, you can manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note: do **NOT** copy a released conflux binary from GitHub, since the activation block heights of some CIPs are hardcoded.
The tests also depend on the following repos:
* [0G Storage Contract](https://github.com/0glabs/0g-storage-contracts): It essentially provides two ABI interfaces for the 0G Storage node to interact with the on-chain contracts.
* ZgsFlow: It contains APIs to submit chunk data.
* PoraMine: It contains APIs to submit PoRA answers.
* [0G Storage Client](https://github.com/0glabs/0g-storage-client): It is used to interact with certain 0G Storage Nodes to upload/download files.
### Run Tests
Go to the `tests` folder and run the following command to run all tests:
```shell
python test_all.py
```
or, run any single test, e.g.
```shell
python sync_test.py
```
### Troubleshooting
1. Test fails because the blockchain full node RPC is inaccessible.
* Traceback: `node.wait_for_rpc_connection()`
* Solution: unset the `http_proxy` and `https_proxy` environment variables if configured.
## Contributing
To make contributions to the project, please follow the guidelines [here](contributing.md).

15
SUMMARY.md Normal file

@@ -0,0 +1,15 @@
# Table of contents
* [0G Storage](README.md)
* [0G Storage Spec](docs/README.md)
* [Introduction](<docs/introduction.md>)
* [Architecture](<docs/architecture.md>)
* [Log System](<docs/log-system.md>)
* [K-V Store](<docs/k-v-store.md>)
* [Transaction Processing](<docs/transaction-processing.md>)
* [Incentive Mechanism](<docs/incentive-mechanism/README.md>)
* [Proof of Random Access](<docs/incentive-mechanism/proof-of-random-access.md>)
* [Storage Pricing](<docs/incentive-mechanism/storage-pricing.md>)
* [Mining Reward](<docs/incentive-mechanism/mining-reward.md>)
* [Deployment](docs/run.md)
* [Contributing](contributing.md)

12
contributing.md Normal file

@@ -0,0 +1,12 @@
# Contributing
### Why are these changes needed?
### Checks
* [ ] I've made sure the lint is passing in this PR.
* [ ] I've made sure the tests are passing. Note that there might be a few flaky tests; in that case, please comment that they are not relevant.
* [ ] Testing Strategy
* [ ] Unit tests
* [ ] Integration tests
* [ ] This PR is not tested :(


@@ -1,70 +0,0 @@
# Install
ZeroGStorage requires Rust 1.71.0 and Go to build.
## Install Rust
We recommend installing Rust through [rustup](https://www.rustup.rs/).
* Linux
Install Rust
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup install 1.65.0
```
Other dependencies
* Ubuntu
```shell
sudo apt-get install clang cmake build-essential
```
* Mac
Install Rust
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup install 1.65.0
```
```shell
brew install llvm cmake
```
* Windows
Download and run the rustup installer from [this link](https://static.rust-lang.org/rustup/dist/i686-pc-windows-gnu/rustup-init.exe).
Install LLVM, pre-built binaries can be downloaded from [this link](https://releases.llvm.org/download.html).
## Install Go
* Linux
```shell
# Download the Go installer
wget https://go.dev/dl/go1.19.3.linux-amd64.tar.gz
# Extract the archive
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.19.3.linux-amd64.tar.gz
# Add /usr/local/go/bin to the PATH environment variable by adding the following line to your ~/.profile.
export PATH=$PATH:/usr/local/go/bin
```
* Mac
Download the Go installer from https://go.dev/dl/go1.19.3.darwin-amd64.pkg.
Open the package file you downloaded and follow the prompts to install Go.
* Windows
Download the Go installer from https://go.dev/dl/go1.19.3.windows-amd64.msi.
Open the MSI file you downloaded and follow the prompts to install Go.
## Build from source
```shell
# Download code
$ git clone https://github.com/0glabs/0g-storage-node.git
$ cd 0g-storage-node
$ git submodule update --init
# Build in release mode
$ cargo build --release
```

33
docs/README.md Normal file

@@ -0,0 +1,33 @@
# 0G Storage
## Organization
The [0G Storage repo](https://github.com/0glabs/0g-storage-node) is organized into two main modules, `common` and `node`, each with several submodules. `common` contains basic components needed for `node` to run, while `node` contains the key roles that compose the network.
## Directory structure
```
├── common
|   ├── channel
|   ├── directory
|   ├── hashset_delay
|   ├── lighthouse_metrics
|   ├── merkle_tree
|   ├── task_executor
|   ├── zgs_version
|   ├── append_merkle
|   └── unused_port
├── node
|   ├── chunk_pool
|   ├── file_location_cache
|   ├── log_entry_sync
|   ├── miner
|   ├── network
|   ├── router
|   ├── rpc
|   ├── shared_types
|   ├── storage
|   ├── storage-async
|   └── sync
└── tests
```

25
docs/architecture.md Normal file

@@ -0,0 +1,25 @@
# Architecture
## 0G System
The ZeroGravity system consists of a data availability layer (0G DA) on top of a decentralized storage system (0G Storage). There is a separate consensus network that is part of both 0G DA and 0G Storage. For 0G Storage, the consensus is responsible for determining the ordering of the uploaded data blocks, realizing storage mining verification, and running the corresponding incentive mechanism through smart contracts.
Figure 1 illustrates the architecture of the 0G system. When a data block enters 0G DA, it is first erasure coded and organized into multiple consecutive chunks. The Merkle root of the encoded data block, as a commitment, is then submitted to the consensus layer to keep the order of the data entering the system. The chunks are then dispersed to different storage nodes in 0G Storage, where the data may be further replicated to other nodes depending on the storage fee the user pays. The storage nodes periodically participate in the mining process by interacting with the consensus network to accrue rewards from the system.
<figure><img src="../../.gitbook/assets/zg-storage-architecture.png" alt=""><figcaption><p>Figure 1. The Architecture of 0G System</p></figcaption></figure>
## 0G Storage
0G Storage employs a layered design targeting support for different types of decentralized applications. Figure 2 shows an overview of the full-stack layers of 0G Storage.
<figure><img src="../../.gitbook/assets/zg-storage-layer.png" alt=""><figcaption><p>Figure 2. Full Stack Solution of 0G Storage</p></figcaption></figure>
The lowest layer is the log layer, a decentralized system consisting of multiple storage nodes that form a storage network. The network has a built-in incentive mechanism to reward data storage. The ordering of the uploaded data is guaranteed by a sequencing mechanism that provides log-based semantics and abstraction. This layer is used to store unstructured raw data for permanent persistence.
On top of the log layer, 0G Storage provides a key-value store runtime to manage structured data with mutability. Multiple key-value store nodes share the underlying log system. They put structured key-value data into log entries and append them to the log system, then play the log entries in the shared log to construct a consistent state snapshot of the key-value store. The throughput and latency of the key-value store are bounded by the log system, so the efficiency of the log layer is critical to the performance of the entire system. The key-value store can associate access control information with keys to manage update permissions for the data. This enables applications like social networks, e.g., a decentralized Twitter, which require maintaining ownership of the messages created by users.
0G Storage further provides transactional semantics on top of the key-value store runtime to enable concurrent updates to keys from multiple users who have write access. The total order of log entries guaranteed by the underlying log system provides the foundation for concurrency control of transactional executions on top of the key-value store. With this capability, 0G Storage can support decentralized applications like collaborative editing and even database workloads.
## Dependencies
The 0G Storage node is a dependency of [0G Storage KV](https://github.com/0glabs/0g-storage-kv). 0G Storage KV is essentially a wrapper layer on top of the 0G Storage node that provides a mutable KV store and transaction processing to applications. 0G DA uses the KV store to store metadata of the data blobs.


@@ -0,0 +1,5 @@
# Incentive Mechanism
This section describes the incentive mechanism design of 0G Storage, which involves two types of actors: users and miners (a.k.a. storage nodes). Users pay tokens (ZG) to create data entries in the log and add data to the network. Miners provide data service and receive tokens (ZG) as rewards from the network. The payment from users to miners is mediated by the ZeroGravity network, since the service is sustained by the whole network rather than any specific miner. 0G Storage implements the storage service in a "pay once, storage forever" manner: users pay a one-time storage endowment for each created data entry, and thereafter the endowment is used to incentivize the miners who maintain that data entry.
The storage endowment is maintained per data entry, and a miner is only eligible for the storage reward of data entries it has access to. The total storage reward paid for a data entry is independent of that entry's popularity. For instance, a popular data entry stored by many miners will be mined frequently, but the reward is amortized among those miners; on the other hand, a less popular data entry is rarely mined, so its storage reward accumulates and hence yields a higher payoff to the miners who store this rare entry.
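A toy calculation makes the amortization effect concrete. This is purely illustrative: the numbers and the uniform-share assumption below are made up, not part of the protocol.

```python
# Toy illustration: the total storage reward of a data entry is fixed,
# so the expected per-miner payoff scales inversely with how many
# miners store it.
def per_miner_payoff(total_entry_reward: float, num_storing_miners: int) -> float:
    # Assume each PoRA hit on this entry is equally likely to come from
    # any one of the miners storing it, so the expected share is uniform.
    return total_entry_reward / num_storing_miners

print(per_miner_payoff(100.0, 50))  # popular entry: 2.0 per miner
print(per_miner_payoff(100.0, 2))   # rare entry: 50.0 per miner
```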


@@ -0,0 +1,12 @@
# Mining Reward
0G Storage creates a pricing segment for every 8 GB of data chunks over the data flow. Each pricing segment is associated with an Endowment Pool and a Reward Pool. The Endowment Pool collects the storage endowments of all the data chunks belonging to this pricing segment and releases a fixed ratio of its balance to the Reward Pool every second. The release rate is set to 4% per year.
The mining reward is paid to miners for providing data service. A miner receives the mining reward when it submits the first legitimate PoRA for a mining epoch to the 0G Storage contract. The mining reward consists of two parts:
* Base reward: the base reward, denoted by $$R_{base}$$, is paid for every accepted mining proof. The base reward per proof decreases over time.
* Storage reward: the storage reward, denoted by $$R_{storage}$$, is the perpetual reward from storing data. When a PoRA falls in a pricing segment, half of the balance in its Reward Pool is claimed as the storage reward.
The total reward for a new mining proof is thus: $$R_{total} = R_{base} + R_{storage}$$
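The release schedule and reward split can be sketched in a few lines. This is a toy model under stated assumptions (per-second compounding at the 4% annual rate, made-up names and balances), not the contract's actual accounting:

```python
# Toy model of one pricing segment's Endowment Pool and Reward Pool.
SECONDS_PER_YEAR = 365 * 24 * 3600
ANNUAL_RELEASE_RATE = 0.04  # 4% of the Endowment Pool released per year

class PricingSegment:
    def __init__(self, endowment: float):
        self.endowment_pool = endowment  # collected storage endowments
        self.reward_pool = 0.0           # balance claimable via PoRA

    def tick(self, seconds: int) -> None:
        """Release a fixed per-second ratio of the Endowment Pool."""
        per_second = ANNUAL_RELEASE_RATE / SECONDS_PER_YEAR
        released = self.endowment_pool * (1.0 - (1.0 - per_second) ** seconds)
        self.endowment_pool -= released
        self.reward_pool += released

    def claim_storage_reward(self) -> float:
        """A valid PoRA landing in this segment claims half the Reward Pool."""
        r_storage = self.reward_pool / 2.0
        self.reward_pool -= r_storage
        return r_storage

segment = PricingSegment(endowment=1000.0)
segment.tick(SECONDS_PER_YEAR)   # roughly 39 of 1000 released in year one
r_base = 1.0                     # hypothetical base reward per proof
r_total = r_base + segment.claim_storage_reward()  # R_total = R_base + R_storage
```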


@@ -0,0 +1,33 @@
# Proof of Random Access
The ZeroGravity network adopts a Proof of Random Access (PoRA) mechanism to incentivize miners to store data. By requiring miners to answer randomly produced queries to archived data chunks, the PoRA mechanism establishes a relation between mining proof generation power and data storage. Miners answer the queries repeatedly and compute an output digest for each loaded chunk until they find a digest that satisfies the mining difficulty (i.e., one with enough leading zeros). PoRA stresses the miners' disk I/O and reduces their capability to respond to user queries. 0G Storage therefore adopts intermittent mining, in which a mining epoch starts with a block generation at a specific block height on the host chain and stops when a valid PoRA is submitted to the 0G Storage contract.
In a strawman design, a PoRA iteration consists of a computing stage and a loading stage. In the computing stage, a miner computes a random recall position (the universal offset in the flow) based on an arbitrarily picked random nonce and a mining status read from the host chain. In the loading stage, the miner loads the archived data chunks at the given recall position and computes an output digest by hashing the tuple of the mining status and the data chunks. If the output digest satisfies the target difficulty, the miner can construct a legitimate PoRA, consisting of the chosen random nonce, the loaded data chunk, and the proof of the data chunk's correctness, and submit it to the mining contract.
## Fairness
The PoRA is designed with the following properties to improve the overall fairness in PoRA mining.
- Fairness for Small Miners
- Disincentivize Storage Outsourcing
- Disincentivize Distributed Mining
## Algorithm
Precisely, the mining process has the following steps:
1. Register the miner id on the mining contract
2. For each mining epoch, repeat the following steps:
1. Wait for the layer-1 blockchain to release a block at a given epoch height.
2. Get the block hash $$\mathsf{block\_hash}$$ of this block and the relevant context (including $$\mathsf{merkle\_root}$$, $$\mathsf{data\_length}$$, $$\mathsf{context\_digest}$$) at this time.
3. Compute the number of minable entries $$n = [\mathsf{data\_length}/256\mathrm{KB}]$$.
4. For each iteration, repeat the following steps:
1. Pick a random 32-byte $$\mathsf{nonce}$$.
2. Decide the mining range parameters $$\mathsf{start\_position}$$ and $$\mathsf{mine\_length}$$; $$\mathsf{mine\_length}$$ should be equal to $$\text{min}(8\mathrm{TB}, n \times 256 \mathrm{KB})$$.
3. Compute the recall position $$\tau$$ and the scratchpad $$\overrightarrow{s}$$ by the algorithm in Figure 1.
4. Load the 256-kilobyte sealed data chunk $$\overrightarrow{d}$$ starting at position $$h \cdot 256\mathrm{KB}$$.
5. Compute $$\overrightarrow{w} = \overrightarrow{d}\ \mathtt{XOR}\ \overrightarrow{s}$$ and divide $$\overrightarrow{w}$$ into 64 4-kilobyte pieces.
6. For each piece $$\overrightarrow{v}$$, compute the Blake2b hash of the tuple ($$\mathsf{miner\_id}$$, $$\mathsf{nonce}$$, $$\mathsf{context\_digest}$$, $$\mathsf{start\_position}$$, $$\mathsf{mine\_length}$$, $$\overrightarrow{v}$$).
7. If one of the Blake2b hash outputs is smaller than a target value, the miner has found a legitimate PoRA solution.
<figure><img src="../../../.gitbook/assets/zg-storage-algorithm.png" alt=""><figcaption><p>Figure 1. Recall Position and Scratchpad Computation</p></figcaption></figure>
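The per-iteration steps above can be condensed into a simplified sketch. Python's built-in `hashlib.blake2b` stands in for the Blake2b hash; the recall-position and scratchpad computation of Figure 1 is not reproduced here, so `recall_and_scratchpad` is a hypothetical stand-in, and the byte-level encoding of the hashed tuple is an illustrative assumption:

```python
import os
import hashlib

SECTOR = 256           # bytes per sector
CHUNK = 256 * 1024     # one sealed data chunk: 256 KB = 1024 sectors
PIECE = 4 * 1024       # each chunk is hashed as 64 pieces of 4 KB

def recall_and_scratchpad(miner_id, nonce, context_digest,
                          start_position, mine_length):
    """Stand-in for the Figure 1 computation: derive a pseudorandom recall
    position and a CHUNK-sized scratchpad from the mining status."""
    seed = hashlib.blake2b(miner_id + nonce + context_digest).digest()
    n_chunks = max(mine_length // CHUNK, 1)
    tau = start_position + int.from_bytes(seed[:8], "big") % n_chunks
    scratchpad = (seed * (CHUNK // len(seed) + 1))[:CHUNK]
    return tau, scratchpad

def pora_iteration(miner_id, context_digest, start_position, mine_length,
                   load_chunk, target):
    """One iteration of steps 4.1-4.7; `load_chunk(tau)` must return the
    256 KB sealed chunk at recall position tau."""
    nonce = os.urandom(32)                                    # step 4.1
    tau, pad = recall_and_scratchpad(miner_id, nonce, context_digest,
                                     start_position, mine_length)  # step 4.3
    d = load_chunk(tau)                                       # step 4.4
    w = bytes(a ^ b for a, b in zip(d, pad))                  # step 4.5: d XOR s
    for i in range(0, CHUNK, PIECE):                          # step 4.6
        v = w[i:i + PIECE]
        h = hashlib.blake2b(miner_id + nonce + context_digest
                            + start_position.to_bytes(8, "big")
                            + mine_length.to_bytes(8, "big") + v).digest()
        if int.from_bytes(h, "big") < target:                 # step 4.7
            return nonce, tau, i // PIECE                     # legitimate PoRA
    return None
```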


@@ -0,0 +1,9 @@
# Storage Pricing
The cost of each 0G Storage request is composed of two parts: fee and storage endowment. The fee part is paid to host chain miners/validators for invoking the ZeroGravity contract to process the storage request and add a new data entry into the log; it is priced like other smart contract invocation transactions. In what follows we focus on the storage endowment part, which supports the perpetual reward to the 0G Storage miners who serve the corresponding data.
Given a data storage request $$\mathsf{SR}$$ with a specific amount of endowment $$\mathsf{SR}_{endowment}$$ and a committed data size $$\mathsf{SR}_{data\_size}$$ (measured in number of 256 B sectors), the unit price of $$\mathsf{SR}$$ is calculated as follows:
$$\mathsf{SR}_{unit\_price} = {\mathsf{SR}_{endowment} \over \mathsf{SR}_{data\_size}}$$
This unit price $$\mathsf{SR}_{unit\_price}$$ must exceed a globally specified lower bound for the request to be added to the log; otherwise the request stays pending until the lower bound decreases below $$\mathsf{SR}_{unit\_price}$$ (in the meantime miners will most likely not store this unpaid data). Users are free to set a higher unit price $$\mathsf{SR}_{unit\_price}$$, which motivates more storage nodes to mine on that data entry and hence leads to better data availability.
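As a toy illustration of this check (the floor price and request values below are made up):

```python
SECTOR_BYTES = 256

def unit_price(endowment: float, data_bytes: int) -> float:
    # SR_unit_price = SR_endowment / SR_data_size, with data size measured
    # in 256 B sectors (entries are padded to whole sectors).
    sectors = (data_bytes + SECTOR_BYTES - 1) // SECTOR_BYTES
    return endowment / sectors

FLOOR_PRICE = 0.001  # hypothetical globally specified lower bound
price = unit_price(endowment=50.0, data_bytes=10 * 1024 * 1024)
print(price >= FLOOR_PRICE)  # True: the request can be added to the log
```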

71
docs/introduction.md Normal file

@@ -0,0 +1,71 @@
# Introduction
## Overview
0G Storage is the storage layer for the ZeroGravity data availability (DA) system. The 0G Storage layer holds three important features:
* Built-in - It is natively built into the ZeroGravity DA system for data storage and retrieval.
* General purpose - It is designed to support atomic transactions, mutable KV stores, and archive log systems, enabling a wide range of applications with various data types.
* Incentive - Instead of being just a decentralized database, 0G Storage introduces the PoRA mining algorithm to incentivize storage network participants.
## Integration
We provide an [SDK](https://github.com/0glabs/0g-js-storage-sdk) for users to easily integrate 0G Storage into their applications, with the following features:
* File Merkle Tree Class
* Flow Contract Types
* RPC methods support
* File upload
* Browser environment support
* Tests for different environments (In Progress)
* File download (In Progress)
## Deployment
Please refer to the [Deployment](../0G%20Storage/doc/install.md) page for detailed steps to compile and start a 0G Storage node.
## Test
### Prerequisites
* Required Python version: 3.8, 3.9, or 3.10. Higher versions are not guaranteed to work (e.g., `pysha3` may fail to install).
* Install dependencies under the root folder: `pip3 install -r requirements.txt`
### Dependencies
The Python test framework launches local blockchain full nodes for the storage node to interact with. Two kinds of full nodes are supported:
* Conflux eSpace node (by default).
* BSC node (geth).
For the Conflux eSpace node, the test framework automatically compiles the binary at runtime and copies it to the `tests/tmp` folder. For the BSC node, the test framework automatically downloads the latest release binary from [GitHub](https://github.com/bnb-chain/bsc/releases) to the `tests/tmp` folder.
Alternatively, you can manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note: do **NOT** copy a released conflux binary from GitHub, since the activation block heights of some CIPs are hardcoded.
The tests also depend on the following repos:
* [0G Storage Contract](https://github.com/0glabs/0g-storage-contracts): It essentially provides two ABI interfaces for the 0G Storage node to interact with the on-chain contracts.
* ZgsFlow: It contains APIs to submit chunk data.
* PoraMine: It contains APIs to submit PoRA answers.
* [0G Storage Client](https://github.com/0glabs/0g-storage-client): It is used to interact with certain 0G Storage Nodes to upload/download files.
### Run Tests
Go to the `tests` folder and run the following command to run all tests:
```shell
python test_all.py
```
or, run any single test, e.g.
```shell
python sync_test.py
```
### Troubleshooting
1. Test fails because the blockchain full node RPC is inaccessible.
* Traceback: `node.wait_for_rpc_connection()`
* Solution: unset the `http_proxy` and `https_proxy` environment variables if configured.

9
docs/k-v-store.md Normal file

@@ -0,0 +1,9 @@
# K-V Store
0G Storage provides a key-value runtime upon the log layer. Each key-value node can access the key-value store state through the runtime interface. The key-value runtime provides a standard interface like `Put()` and `Get()`, and accepts serialized key-value pairs from any application-specific structure. During normal execution, a key-value store node maintains the latest key-value state locally. It updates the value of a key through the `Put()` API, which composes a log entry containing the updated key-value pair and appends it to the log. The runtime constantly monitors new entries in the log, fetches them back to the key-value node, and updates the local key-value state according to the log entry contents. In this sense, multiple key-value store nodes essentially synchronize with each other through the shared decentralized log.
A user-defined function is used to deserialize the raw content of a log entry into the application-specific key-value structure. An application can use the `Get()` API to access the latest value of a given key. To improve the efficiency of updates for small key-value pairs, `Put()` allows batched updates of multiple key-value pairs at once. Figure 1 illustrates the architecture of the decentralized key-value store. To manage access control, the ownership information of each key can also be maintained in the log entries. All honest key-value nodes follow the same ownership-based update rule for the keys to achieve state consistency.
When a new key-value node joins the network, it connects to the log layer and plays the log entries from head to tail to construct the latest state of the key-value store. While playing the log entries, an application-specific key-value node can skip irrelevant entries that do not contain stream IDs it cares about.
<figure><img src="../../.gitbook/assets/zg-storage-log.png" alt=""><figcaption><p>Figure 1. Decentralized K-V Store</p></figcaption></figure>
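A minimal sketch of this runtime behavior is shown below, with the shared decentralized log modeled as an in-memory list and JSON as a stand-in serialization (both simplifying assumptions; the actual runtime works against the decentralized log):

```python
import json

class KvRuntime:
    """Toy key-value runtime over a shared append-only log."""
    def __init__(self, log, stream_id):
        self.log = log              # shared, append-only log (a list here)
        self.stream_id = stream_id  # lets a node skip irrelevant entries
        self.state = {}             # locally maintained latest state
        self.next_index = 0         # how far this node has played the log

    def put(self, pairs):
        """Batched update: one log entry may carry multiple key-value pairs."""
        self.log.append(json.dumps({"stream": self.stream_id, "kv": pairs}))

    def get(self, key):
        """Return the latest value of `key` after replaying new log entries."""
        self.play()
        return self.state.get(key)

    def play(self):
        """Play log entries from head to tail, skipping other streams."""
        while self.next_index < len(self.log):
            entry = json.loads(self.log[self.next_index])
            self.next_index += 1
            if entry["stream"] == self.stream_id:
                self.state.update(entry["kv"])

# Two nodes sharing one log synchronize through it:
shared_log = []
a, b = KvRuntime(shared_log, "app"), KvRuntime(shared_log, "app")
a.put({"post:1": "hello"})
assert b.get("post:1") == "hello"
```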

30
docs/log-system.md Normal file

@@ -0,0 +1,30 @@
# Log System
The log layer of 0G Storage provides decentralized storage service via a permissionless network of storage nodes. These storage nodes collaboratively serve archived data, where each node optionally specifies which portion of data it keeps in local storage.
## Protocol
The storage state of the 0G Storage network is maintained in a smart contract deployed on an existing blockchain. The design of the 0G Storage network fully decouples data creation, reward distribution, and token circulation.
The 0G Storage contract is responsible for processing data storage requests, creating data entries, and distributing rewards.
- Data storage requests are submitted by users who wish to store data in the 0G Storage network. Each request includes necessary metadata, such as the data size and commitments, and comes along with the payment for the storage service.
- Data entries are created for accepted data requests to keep a record of the stored data.
- Reward distribution is handled independently through a mining process. Storage nodes submit mining proofs to the 0G Storage contract to claim rewards for maintaining the 0G Storage network. The token circulation of 0G is fully embedded into the host chain ecosystem, as an ERC20 token maintained by another contract called the ZG ledger.
This embedding design brings significant advantages:
- Simplicity: there is no need to maintain a full-fledged consensus protocol, which reduces complexity and lets 0G Storage focus on decentralized storage service.
- Safety: consensus is outsourced to the host blockchain, and hence 0G Storage inherits its security. Typically, a more developed host blockchain provides a stronger safety guarantee than a newly built blockchain.
- Accessibility: every smart contract on the host blockchain can access the original state of ZeroGravity directly, without relying on a trusted off-chain notary. This difference is essential compared to the projection of an external ledger managed by a third party.
- Composability: 0G tokens can always be transferred directly on the host blockchain, like any other ERC20 token. This is much more convenient than typical layer-2 ledgers, where transactions are first processed by layer-2 validators and only committed to the host chain after a significant latency. This gives 0G Storage stronger composability as a new lego in the ecosystem.
## Storage Granularity
The log layer of 0G Storage is updated (append-only) at the granularity of log entries, where every entry is created by a storage-request transaction sent to the 0G Storage contract. When the log layer is realized as a filesystem, every log entry corresponds to a file. The log system is addressed at the level of fixed-size sectors, where each sector stores 256 B of data. To avoid one sector being shared by distinct log entries, every log entry must be padded to a multiple of sectors.
The mining process of 0G Storage requires proving data accessibility against random challenge queries. To maximize the competitive edge of SSD storage, challenge queries are set at the level of 256 KB chunks, i.e., 1024 sectors. That is, every challenge query requires the miner to prove accessibility to a whole chunk of data. Storage nodes therefore maintain data at the granularity of chunks.
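The sector and chunk arithmetic above fits in a few lines (the sizes are from the text; the helper names are illustrative):

```python
SECTOR = 256          # bytes per sector
CHUNK_SECTORS = 1024  # one 256 KB challenge chunk = 1024 sectors

def padded_sectors(entry_bytes: int) -> int:
    """Every log entry is padded up to a whole number of 256 B sectors."""
    return (entry_bytes + SECTOR - 1) // SECTOR

def chunks_covered(entry_bytes: int) -> int:
    """Number of 256 KB chunks a challenge query could target in this entry."""
    return (padded_sectors(entry_bytes) + CHUNK_SECTORS - 1) // CHUNK_SECTORS

print(padded_sectors(1000))   # 4: a 1000 B entry is padded to 4 sectors
print(chunks_covered(10**6))  # 4: a ~1 MB entry spans 4 chunks
```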
## Data Flow
In 0G Storage, committed data are organized sequentially. Such a sequence of data is called a data flow, which can be interpreted as a list of data entries or, equivalently, a sequence of fixed-size data sectors. Thus, every piece of data in ZeroGravity can be indexed conveniently with a universal offset. This offset is used to sample challenges in the PoRA mining process. The default data flow is called the "main flow" of ZeroGravity. It incorporates all new log entries (unless otherwise specified) in an append-only manner. There are also specialized flows that accept only certain categories of log entries, e.g., data related to a specific application. The most significant advantage of a specialized flow is its consecutive addressing space, which may be crucial in some use cases. Furthermore, a specialized flow can apply a customized storage price, typically significantly higher than the floor price of the default flow, and hence achieve better data availability and reliability.


@@ -6,8 +6,10 @@
Install dependencies Node.js, yarn, hardhat.
* Linux
* Ubuntu
- Linux
- Ubuntu
```shell
# node >=12.18
sudo apt install npm
@@ -15,27 +17,30 @@ Install dependencies Node.js, yarn, hardhat.
sudo npm install --global hardhat
```
* Mac
- Mac
```shell
brew install node
sudo npm install --global yarn
sudo npm install --global hardhat
```
* Windows
Download and install node from [here](https://nodejs.org/en/download/)
- Windows
Download and install node from [here](https://nodejs.org/en/download/)
```shell
npm install --global yarn
npm install --global hardhat
```
### Download contract source code
```shell
git clone https://github.com/0glabs/0g-storage-contracts.git
cd 0g-storage-contracts
```
Add the target network to your hardhat.config.js, e.g.
```shell
# example
networks: {
@@ -49,19 +54,22 @@ networks: {
```
### Compile
```shell
yarn
yarn compile
```
### Deploy contract
```shell
npx hardhat run scripts/deploy.ts --network targetnetwork
```
Keep the contract addresses
## Run ZeroGStorage
## Run 0G Storage
Update the config file run/config.toml as required:
```shell
@@ -85,6 +93,7 @@ blockchain_rpc_endpoint
```
Run node
```shell
cd run
../target/release/zgs_node --config config.toml


@@ -0,0 +1,15 @@
# Transaction Processing
0G Storage employs concurrency control in the key-value runtime to support transactional processing for concurrent operations on multiple keys. This concurrency control mechanism is optimistic and hinges on the total ordering of log entries enforced by the underlying log layer. Figure 1 illustrates the mechanism.
## Atomicity
When an application server linked with the 0G Storage key-value runtime starts a transaction through the `BeginTx()` interface, it notifies the runtime that the transaction will work on the current state snapshot constructed by playing the log to the current tail. The key-value operations performed before the invocation of `EndTx()` update the key-values locally in the server without exposing the updates to the log. When `EndTx()` is invoked, the runtime composes a commit record containing the log position the transaction starts from and the read-write set of the transaction. This commit record is then appended to the log.
When an application server with the key-value runtime encounters the commit record while playing the log, it identifies a conflict window consisting of all the log entries between the start log position of the transaction and the position of the commit record. The log entries in the conflict window therefore contain the key-value operations concurrent with the transaction that submitted the commit record. The runtime then checks whether these concurrent operations contain updates to keys belonging to the read set of the transaction. If so, the transaction is aborted; otherwise, it is committed successfully. A minimal sketch of this check follows Figure 1 below.
<figure><img src="../../.gitbook/assets/zg-storage-transaction.png" alt=""><figcaption><p>Figure 1. Transaction Processing on 0G K-V Store</p></figcaption></figure>
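The sketch below simplifies each log entry to the set of keys it updates and the commit record to its start position plus read set (the real record also carries the write set contents):

```python
def is_committed(log, commit_pos):
    """Decide the fate of the commit record at log position `commit_pos`."""
    record = log[commit_pos]
    # Conflict window: entries after the transaction's start snapshot and
    # before its commit record, i.e. operations concurrent with it.
    for entry in log[record["start"] + 1:commit_pos]:
        if entry["updated_keys"] & record["read_set"]:
            return False  # a concurrent update touched the read set: abort
    return True           # no conflict: the transaction commits

# The transaction starts at position 0 and reads "x"; a concurrent entry at
# position 1 updates "x", so the commit record at position 2 is aborted.
log = [
    {"updated_keys": set()},                                 # snapshot tail
    {"updated_keys": {"x"}},                                 # concurrent put
    {"updated_keys": {"y"}, "start": 0, "read_set": {"x"}},  # commit record
]
assert is_committed(log, 2) is False
```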
## Concurrent Assumption
This transaction model assumes that the transaction participants are collaborative and will honestly compose the commit record with correct content. Although this assumption is too strong in a decentralized environment, it is still achievable for specific applications. For example, in an application like Google Docs, a user normally shares access with others who can be trusted. In case this assumption cannot hold, the code of the transaction can be stored in the ZeroGravity log, and a mechanism of verifiable computation, such as zero-knowledge proofs or hardware with a trusted execution environment (TEE), can be employed by the transaction executors to verify the validity of the commit record.


@@ -1,43 +0,0 @@
# Python Tests for Storage node
## Prerequisites
1. Required python version: 3.8, 3.9, 3.10, higher version is not guaranteed (e.g. failed to install `pysha3`).
2. Install dependencies under root folder:
```
pip3 install -r requirements.txt
```
## Dependent Binaries
Python test framework will launch blockchain fullnodes at local for storage node to interact with. There are 2 kinds of fullnodes supported:
- Conflux eSpace node (by default).
- BSC node (geth).
For Conflux eSpace node, the test framework will automatically compile the binary at runtime, and copy the binary to `tests/tmp` folder. For BSC node, the test framework will automatically download the latest version binary from [github](https://github.com/bnb-chain/bsc/releases) to `tests/tmp` folder.
Alternatively, you could also manually copy specific version binaries (conflux or geth) to the `tests/tmp` folder. Note: do **NOT** copy a released conflux binary from GitHub, since the activation block heights of some CIPs are hardcoded.
## Run Tests
Go to the `tests` folder and run following command to run all tests:
```
python test_all.py
```
or, run any single test, e.g.
```
python sync_test.py
```
## Troubleshooting
1. Test failed due to blockchain fullnode rpc inaccessible.
* Traceback: `node.wait_for_rpc_connection()`
* Solution: unset the `http_proxy` and `https_proxy` environment variables if configured.