In the BitShares blockchain network, you might find slightly different names for a **node**, such as full node, witness node, or block producer. Basically, nodes keep all transactions (histories) that have occurred on the blockchain and maintain its integrity. A full node has every single block that has ever been created on the blockchain.
In the BitShares blockchain network, a witness node is a full node; each witness validates transactions and broadcasts them. Witness nodes provide the complete history and preserve it as a failsafe.
API nodes do not need the full blockchain history; they may contain only part of it. (Public) API nodes provide essential services without holding all blockchain history data (i.e., they accept API calls and query the data from witness nodes).
Seed nodes are the first nodes of the network that accept incoming P2P connections. A seed node can also be an API node. In BitShares, the seed node addresses are hard-coded in the program, so when a new node starts, it connects to the seed nodes by default. Every node (including seed nodes) tells its connected peers where other nodes are, so all nodes can find and connect to each other.
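Beyond the hard-coded defaults, additional seed nodes can be listed in config.ini. The hostnames and port below are placeholders for illustration, not real seed addresses:

```ini
# JSON array of P2P nodes to connect to on startup
# (placeholder hostnames; real seed addresses are hard-coded in the
# released binaries and listed in community documentation)
seed-nodes = ["seed1.example.com:1776", "seed2.example.com:1776"]
```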
So, when and by whom are blocks produced? We call them Block Producers; they serve the important role of validating signatures and timestamping transactions by including them in a block and broadcasting it. Block Producers are witnesses who received enough votes to become active witnesses, and each can produce a block when its turn ("time slot") comes.
Each time block producers produce a block, they get paid for their services. Their pay rate is set by the BTS holders via their elected committee members. If a block producer fails to produce a block in its time slot, it is not paid. Currently, the BitShares blockchain has 27 Block Producers (Active Witnesses). You can find the BitShares Blockchain witnesses in the BitShares UI wallet under Explorer > Witnesses, or in other explorers.
You could think of BitShares blockchain nodes as organized like this:
- Node Types
  - API Nodes (i.e., nodes with an open RPC port)
    - Nodes with Full History (a.k.a. full nodes)
    - Nodes with Partial History
  - Seed Nodes (i.e., nodes with an open P2P port)
  - Block Producing Nodes
    - Block Producer or Active Witness Node (belongs to a voted-in witness and is configured with the correct key)
    - Standby Witness Node (either a not-voted-in witness, or a voted-in witness with a different key)
Once the proper BitShares-Core release is installed, every node type is implemented by the same executable; however, each type requires a different configuration to start.
For example, although a block producing node can keep the full history, that would be a waste of resources. Block producing nodes should run with minimal plugins (i.e., only the witness plugin), and they should not have open network ports.
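As an illustrative sketch, a minimal block-producer configuration could look like the fragment below. The witness ID and key pair are placeholders, following the example values used in the config.ini reproduced later in this document:

```ini
# Load only the witness plugin; leave rpc-endpoint unset so no RPC port is opened
plugins = witness

# ID of the witness controlled by this node (placeholder; use your own witness ID)
witness-id = "1.6.1"

# [PublicKey, WIF private key] pair used for block signing (placeholder values)
private-key = ["BTS6MRyAjQq8...........GDW5CV","5KQwrPbwdL..............yXtP79zkvFD3"]
```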
We strongly recommend that all node service providers run and maintain their own nodes, for reliability and security reasons.
IMPORTANT: By default, the witness node starts in reduced memory mode, using some of the options detailed in Memory Reduction for Nodes.
In order to run a full node with all the account history, you need to remove max-ops-per-account from your config file. Please note that, as of 2018-10-17, a full node needs more than 160GB of RAM to operate, and the required memory is growing fast. Consider the following table as the minimal requirements before running a node:
| Default | Full | Minimal | ElasticSearch |
| --- | --- | --- | --- |
| 100G SSD, 16G RAM | 200G SSD, 120G RAM | 80G SSD, 8G RAM | 500G SSD, 32G RAM |
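For a full-history node, the sketch below shows the relevant lines of config.ini left commented out (the closely related partial-operations option is included too; both option names match the config.ini reproduced later in this document):

```ini
# Keep the complete account history in memory: leave these limits
# commented out, since setting them truncates per-account history
# max-ops-per-account = 1000
# partial-operations = 1
```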
How to launch the full node:
The executable takes an optional --data-dir parameter defining a working and data directory that stores the configuration, blockchain, and local databases (defaults to witness_node_data_dir). These are created automatically with default settings if they do not already exist.
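As a sketch, the launch could look like the script below. The binary path assumes a default cmake build of bitshares-core; adjust it to wherever your executable lives. The script only prints the command so it stays side-effect free:

```shell
#!/bin/sh
# Launch the witness node with an explicit data directory.
# Path assumption: a default bitshares-core build tree.
DATA_DIR="${1:-witness_node_data_dir}"
LAUNCH_CMD="./programs/witness_node/witness_node --data-dir=${DATA_DIR}"
# Print the command; replace the echo with the bare command to actually launch.
echo "$LAUNCH_CMD"
```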
The configuration file config.ini in your data directory is commented and contains the following essential settings:
Example of a config.ini file in the data directory. By default, most parameters are commented out; you have to uncomment a line to set its value.
```ini
# Endpoint for P2P node to listen on
#p2p-endpoint = 0.0.0.0:11010

# P2P nodes to connect to on startup (may specify multiple times)
# seed-node =

# JSON array of P2P nodes to connect to on startup
# seed-nodes =

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =

# Endpoint for websocket RPC to listen on
#rpc-endpoint = 127.0.0.1:8090

# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =

# The TLS certificate file for this server
# server-pem =

# Password for this certificate
# server-pem-password =

# File to read Genesis State from
# genesis-json =

# Block signing key to use for init witnesses, overrides genesis file
# dbg-init-key =

# JSON file specifying API permissions
# api-access =

# Space-separated list of plugins to activate
# plugins =

# Enable block production, even if the chain is stale.
enable-stale-production = false

# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false

# ID of witness controlled by this node (e.g. "1.6.5", quotes are required, may specify multiple times)
witness-id = "1.6.1"

# Tuple of [PublicKey, WIF private key] (may specify multiple times)
#private-key = ["BTS6MRyAjQq8...........GDW5CV","5KQwrPbwdL..............yXtP79zkvFD3"]

# Tuple of [PublicKey, WIF private key] (may specify multiple times)
debug-private-key = ["BTS6MRyAjQq8...........GDW5CV","5KQwrPbwdL..............yXtP79zkvFD3"]

# Account ID to track history for (may specify multiple times)
# track-account =

# Keep only those operations in memory that are related to account history tracking
partial-operations = 1

# Maximum number of operations per account will be kept in memory
max-ops-per-account = 1000

# Elastic Search database node url
# elasticsearch-node-url =

# Number of bulk documents to index on replay (5000)
# elasticsearch-bulk-replay =

# Number of bulk documents to index on a synchronized chain (10)
# elasticsearch-bulk-sync =

# Log bulk events to database
# elasticsearch-logs =

# Use visitor to index additional data (slows down the replay)
# elasticsearch-visitor =

# Track market history by grouping orders into buckets of equal size measured in seconds, specified as a JSON array of numbers
bucket-size = [60,300,900,1800,3600,14400,86400]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000

# Will only store this amount of matched orders for each market in order history for querying, or those that meet the other option, whichever has more data (default: 1000)
max-order-his-records-per-market = 1000

# Will only store matched orders in last X seconds for each market in order history for querying, or those that meet the other option, whichever has more data (default: 259200 (3 days))
max-order-his-seconds-per-market = 259200

# RPC endpoint of a trusted validating node (required)
# trusted-node =

# Block number after which to do a snapshot
# snapshot-at-block =

# Block time (ISO format) after which to do a snapshot
# snapshot-at-time =

# Pathname of JSON file where to store the snapshot
# snapshot-to =

# Elasticsearch node url
# es-objects-elasticsearch-url =

# Log bulk events to database
# es-objects-logs =

# Number of bulk documents to index on replay (5000)
# es-objects-bulk-replay =

# Number of bulk documents to index on a synchronized chain (10)
# es-objects-bulk-sync =

# Store proposal objects
# es-objects-proposals =

# Store account objects
# es-objects-accounts =

# Store asset objects
# es-objects-assets =

# Store balances objects
# es-objects-balances =

# Store limit order objects
# es-objects-limit-orders =

# Store feed data
# es-objects-asset-bitasset =

# Group orders by percentage increase on price. Specify a JSON array of numbers here, each number is a group, number 1 means 0.01%.
tracked-groups = [10,100]

[log.console_appender.stderr]
stream=std_error

# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log # filename can be absolute or relative to this config file

# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr

# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=info
appenders=p2p
```
Folders and files are considered relative to the working directory (i.e., the directory from which the executable is launched).
In order to allow RPC calls for blockchain operations you need to modify the following entry in the configuration file:
rpc-endpoint = 0.0.0.0:8090
This will open port 8090 for queries from anywhere. Since the witness node only maintains the blockchain and (unless you are an actively block-producing witness) no private keys are involved, it is safe to expose your witness node to the internet.
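Once the port is open, clients talk to the node via JSON-RPC over the websocket. As a sketch, the request below calls get_dynamic_global_properties on the database API (API id 0), which returns the current head block number and time; sending it requires a websocket client such as wscat, which is assumed here and is not part of BitShares-Core:

```shell
#!/bin/sh
# Build a JSON-RPC request for the database API (API id 0).
REQUEST='{"id":1,"method":"call","params":[0,"get_dynamic_global_properties",[]]}'
echo "$REQUEST"
# To actually send it over the open websocket endpoint:
#   echo "$REQUEST" | wscat -c ws://127.0.0.1:8090
```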
The default parameters in your config file already provide some memory reduction, but you might want to learn more about the relevant plugin options.
- Memory Reduction for Nodes:
- Helps reduce RAM usage significantly via witness_node executable options
More information about how to manage a witness (full) node can be found in our tutorials.
When restarting the witness node, it may be necessary to append the --replay-blockchain parameter to regenerate the local (in-memory) blockchain state.
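A restart with replay can be sketched as below; as before, the binary path assumes a default bitshares-core build tree, and the script only prints the command:

```shell
#!/bin/sh
# Restart with --replay-blockchain to rebuild the in-memory state
# from the locally stored blocks after an unclean shutdown.
DATA_DIR="witness_node_data_dir"
REPLAY_CMD="./programs/witness_node/witness_node --data-dir=${DATA_DIR} --replay-blockchain"
# Replace the echo with the bare command to actually run it.
echo "$REPLAY_CMD"
```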
If you want to close the witness node in a clean way, use Ctrl-C (this also works on Windows).