For most existing decentralized systems, decentralization can only be estimated, not computed exactly, at any point in time. For example, although the number of nodes in the Bitcoin network at a given moment can be estimated, this estimate does not reveal how many distinct actors control those nodes.
Understood in this way, decentralization is an important contributor to reliability: the more actors that must be taken down or corrupted to disable or corrupt a system, the more reliable that system is.
Orbis Labs is committed to the full decentralization of Orbis. This commitment does not mean that Orbis will be fully decentralized out of the box, but that it will become fully decentralized over time. Starting from an initial centralized version, we plan to deliver Orbis incrementally, with each step introducing a new system version that builds on the previous one.
Evaluating transactions and generating the corresponding zk-SNARK proofs is computationally expensive. Orbis will therefore leverage the computational power of all nodes in the system: the work will be distributed across the cluster so that each node computes a portion of the submitted transactions, while one designated node gathers the partial results, merges them to create a new block on the rollup, and eventually posts the proof to L1 (Cardano).
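To make this fan-out/fan-in flow concrete, here is a minimal Python sketch under stated assumptions: prove_batch, merge_proofs, and post_proof_to_l1 are hypothetical placeholders rather than the actual Orbis API, and a thread pool stands in for the cluster of nodes.

```python
# Sketch of the proving pipeline: split submitted transactions into
# batches, prove each batch on a separate node (simulated here by a
# thread pool), then merge the partial results and post to L1.
# prove_batch, merge_proofs, and post_proof_to_l1 are hypothetical
# placeholders, not the actual Orbis API.
from concurrent.futures import ThreadPoolExecutor


def chunk(txs, n):
    """Split the submitted transactions into batches of roughly equal size."""
    size = max(1, len(txs) // n)
    return [txs[i:i + size] for i in range(0, len(txs), size)]


def prove_batch(batch):
    # Placeholder: a node evaluates its batch and returns a partial
    # zk-SNARK proof covering those transactions.
    return {"txs": batch, "proof": f"proof({','.join(batch)})"}


def merge_proofs(partials):
    # Placeholder: the designated node merges the partial proofs into
    # a single proof for the new rollup block.
    return {"block_proof": [p["proof"] for p in partials]}


def post_proof_to_l1(block_proof):
    # Placeholder: post the merged proof to Cardano (L1).
    print("posting to L1:", block_proof)


def produce_block(txs, num_nodes=4):
    batches = chunk(txs, num_nodes)
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        partials = list(pool.map(prove_batch, batches))
    post_proof_to_l1(merge_proofs(partials))


produce_block([f"tx{i}" for i in range(10)])
```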
In this setup, the system runs on a single machine controlled by Orbis Labs.
This system is fully centralized. Because Orbis Labs will control this single node, it becomes a single point of failure, resilient to neither crash failures nor Byzantine failures.

This system should be considered a milestone that enables testing of all user-facing features, such as sending funds, submitting transactions, inspecting the rollup, retrieving funds, and syncing the rollup with layer 1.
This distributed system has a single master node and multiple worker nodes. The configuration is static: it is fixed upon initialization and cannot change at runtime. The master node divides the work among the worker nodes; after receiving their partial results, it combines them and updates layer 1.

Although the system becomes distributed, it is still fully centralized, since Orbis Labs controls all nodes. The failure of either the master node or of all worker nodes shuts down the entire system. Nevertheless, the system can fully recover from failures of individual worker nodes: it remains live and operational as long as the master node and at least one worker node are running.

The system will run a fully distributed database that stores the transaction pool and the rollup. The master node will be the only machine with read-write access to the database, whereas worker nodes will have read-only access.
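As a sketch of this access-control split, the toy classes below give the master node a read-write handle to the store while worker nodes get a read-only view that rejects writes; RollupStore and ReadOnlyView are illustrative stand-ins, not the actual distributed database interface.

```python
# Toy model of the database access split in the static setup: the
# master holds a read-write handle, workers a read-only view.
class RollupStore:
    """In-memory stand-in for the distributed database."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value


class ReadOnlyView:
    """Handle given to worker nodes: reads pass through, writes are rejected."""

    def __init__(self, store):
        self._store = store

    def get(self, key):
        return self._store.get(key)

    def put(self, key, value):
        raise PermissionError("worker nodes have read-only access")


db = RollupStore()
master, worker = db, ReadOnlyView(db)
master.put("rollup/head", "block-0")            # only the master writes
assert worker.get("rollup/head") == "block-0"   # workers read the same state
```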
This distributed system likewise has a single master node and multiple worker nodes, but the master is no longer fixed: to create blocks on the rollup, the cluster must perform a leader election to establish the master node.

In this setting, the system becomes resilient to crash failures as long as the cluster can successfully perform a leader election.

Alongside the nodes, the system will run a fully distributed database that stores both the transaction pool and the rollup. All nodes will have read-write access to the database.

Creating a block and updating the rollup state will work exactly as in the Static master/workers setup.
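One common way to realize such an election is a lease: whichever node acquires the lease acts as master, and if it crashes and stops renewing, the lease expires and another node can take over. The sketch below illustrates this with an in-memory LeaseStore as a hypothetical stand-in for coordination over the shared database; it is not the actual Orbis election protocol.

```python
# Lease-based leader election sketch: a node becomes master by
# acquiring a lease; if it crashes and stops renewing, the lease
# expires and another node can win the next election. LeaseStore is
# an in-memory stand-in for coordination over the shared database.
import threading
import time


class LeaseStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._leader = None
        self._expires = 0.0

    def try_acquire(self, node_id, ttl, now=None):
        """Atomically take (or renew) the lease if it is free, expired,
        or already held by this node. Returns True on success."""
        now = time.monotonic() if now is None else now
        with self._lock:
            if self._leader in (None, node_id) or now >= self._expires:
                self._leader, self._expires = node_id, now + ttl
                return True
            return False


store = LeaseStore()
assert store.try_acquire("node-a", ttl=5.0)       # node-a wins the election
assert not store.try_acquire("node-b", ttl=5.0)   # node-b loses while the lease is live
# node-a crashes and stops renewing; once its lease expires, node-b takes over:
assert store.try_acquire("node-b", ttl=5.0, now=time.monotonic() + 10.0)
```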