In the BitShares blockchain network, you might encounter slightly different names for nodes, such as full node, witness node, or block producer. All nodes keep an internal database up to date by applying the transactions that arrive in incoming blocks. The difference between the node types lies in how much history they keep and in the functionality they provide.
A witness node is a node run by a witness. Each witness node validates all blocks and transactions it receives. The nodes of elected witnesses take turns in bundling new transactions into blocks and broadcasting them to the network.
API nodes provide network services to client applications. They usually make account transaction histories accessible through API calls, but can vary in the amount of history available. Full nodes are API nodes with a complete transaction history of all accounts.
Seed nodes are nodes that accept incoming P2P connections. They are the first nodes contacted by a freshly started node. In that sense they serve as an entry point into the network. Once a node has entered the network it will receive additional node addresses from its peers, so all nodes can connect to each other. A seed node can also be an API node. The BitShares core software, also called witness_node, comes with a preconfigured list of seed nodes for easy bootstrapping.
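For instance, a node can both accept incoming P2P connections and point at known seed nodes explicitly in its config.ini. The sketch below uses hypothetical placeholder addresses; real seed node addresses ship with the preconfigured list mentioned above:

```ini
# Listen for incoming P2P connections (makes this node usable as a seed node)
p2p-endpoint = 0.0.0.0:1776

# Known peers to contact on startup (hypothetical addresses; may specify multiple times)
seed-node = seed01.example.com:1776
seed-node = seed02.example.com:1776
```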
So, when are blocks produced, and by whom? Block producers serve the important role of validating signatures and timestamping transactions by including them in a block and broadcasting it. Block producers are witnesses who have received enough votes to become active witnesses; each one produces a block when its turn (its "time slot") comes around.
Each time a block producer produces a block, it is paid for the service. The pay rate is set by BTS holders via their elected committee members. A block producer that fails to produce a block in its time slot is not paid. Currently, the BitShares blockchain has 27 block producers (active witnesses). You can find the BitShares blockchain witnesses in the BitShares UI wallet under Explorer > Witnesses, or in other explorers.
You can categorize BitShares blockchain nodes as follows:
- Node Types
- API Nodes (i.e., nodes with an open RPC port)
- Nodes with Full History (a.k.a. full nodes)
- Nodes with Partial History
- Seed Nodes (i.e., nodes with an open P2P port)
- Block Producing Nodes
- Block Producer or Active Witness Node (belongs to a voted-in witness and is configured with correct key)
- Standby Witness Node (either a not-voted-in witness, or a voted-in witness with different key)
After installing a proper BitShares-Core release, every node type runs from the same executable; the difference lies in the configuration each node is started with.
For example, although a block producing node could keep full history, doing so would waste resources. Block producing nodes should run with minimal plugins (i.e., only the witness plugin), and they should not have open network ports.
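As a sketch, a block producer's config.ini might enable only the witness plugin, leave the service endpoints closed, and supply the signing key. The witness ID and key pair below are placeholders, not real credentials:

```ini
# Run only the witness plugin; no API or history plugins
plugins = witness

# Leave service ports closed (no incoming RPC or P2P connections)
# rpc-endpoint =
# p2p-endpoint =

# Placeholder witness ID and [PublicKey, WIF private key] pair -- use your own
witness-id = "1.6.5"
private-key = ["BTS_PUBLIC_KEY_HERE","WIF_PRIVATE_KEY_HERE"]
```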
We strongly recommend that all node service providers run and maintain their own nodes, for reliability and security reasons.
IMPORTANT: By default, the witness node starts in reduced memory mode, using some of the options detailed in Memory reduction for nodes.
To run a full node with all account history, you need to remove max-ops-per-account from your config file. Please note that currently (2018-10-17) a full node needs more than 160GB of RAM to operate, and the required memory is growing fast. Consider the following table as minimal requirements before running a node:
| Default | Full | Minimal | ElasticSearch |
| --- | --- | --- | --- |
| 100G SSD, 16G RAM | 200G SSD, 120G RAM | 80G SSD, 8G RAM | 500G SSD, 32G RAM |
How to launch the full node:
It takes an optional --data-dir parameter that defines a working and data directory to store the configuration, blockchain, and local databases (defaults to witness_node_data_dir). These are created automatically with default settings if they don't exist yet.
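For example, a launch from the build directory might look like the following sketch (the relative path assumes a standard source build):

```shell
# Launch the node, storing all state under the given data directory
./programs/witness_node/witness_node --data-dir=witness_node_data_dir
```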
This section describes the parameters of the BitShares blockchain configuration file,
config.ini. The file is located in your data directory; most parameters are commented out and contain the essential settings to adjust.
The config.ini output below is an example (default values). You should modify
private-key and debug-private-key and adjust other parameter values accordingly.
Example of a config.ini file in the data directory. By default, most parameters are commented out; uncomment a parameter to set its value.
```ini
# Endpoint for P2P node to listen on
# p2p-endpoint =

# P2P nodes to connect to on startup (may specify multiple times)
# seed-node =

# JSON array of P2P nodes to connect to on startup
# seed-nodes =

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =

# Endpoint for websocket RPC to listen on
# rpc-endpoint =

# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =

# The TLS certificate file for this server
# server-pem =

# Password for this certificate
# server-pem-password =

# File to read Genesis State from
# genesis-json =

# Block signing key to use for init witnesses, overrides genesis file
# dbg-init-key =

# JSON file specifying API permissions
# api-access =

# Space-separated list of plugins to activate
# plugins =

# Number of IO threads, default to 0 for auto-configuration
# io-threads =

# Whether to allow API clients to subscribe to universal object creation and removal events
# enable-subscribe-to-all =

# Whether to enable tracking of votes of standby witnesses and committee members.
# Set to true to provide accurate data to API clients, false for slightly better performance.
# enable-standby-votes-tracking =

# Enable block production, even if the chain is stale.
enable-stale-production = false

# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false

# ID of witness controlled by this node (e.g. "1.6.5", quotes are required, may specify multiple times)
# witness-id =

# Tuple of [PublicKey, WIF private key] (may specify multiple times)
private-key = ["BTS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]

# Tuple of [PublicKey, WIF private key] (may specify multiple times)
debug-private-key = ["BTS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]

# Account ID to track history for (may specify multiple times)
# track-account =

# Keep only those operations in memory that are related to account history tracking
partial-operations = 1

# Maximum number of operations per account that will be kept in memory
max-ops-per-account = 100

# Elasticsearch database node url (http://localhost:9200/)
# elasticsearch-node-url =

# Number of bulk documents to index on replay (10000)
# elasticsearch-bulk-replay =

# Number of bulk documents to index on a synchronized chain (100)
# elasticsearch-bulk-sync =

# Use visitor to index additional data (slows down the replay) (false)
# elasticsearch-visitor =

# Pass basic auth to elasticsearch database ('')
# elasticsearch-basic-auth =

# Add a prefix to the index (bitshares-)
# elasticsearch-index-prefix =

# Save operation as object (false)
# elasticsearch-operation-object =

# Track market history by grouping orders into buckets of equal size measured in seconds,
# specified as a JSON array of numbers
bucket-size = [60,300,900,1800,3600,14400,86400]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000

# Will only store this amount of matched orders for each market in order history for querying,
# or those that meet the other option, whichever has more data (default: 1000)
max-order-his-records-per-market = 1000

# Will only store matched orders in the last X seconds for each market in order history for querying,
# or those that meet the other option, whichever has more data (default: 259200 (3 days))
max-order-his-seconds-per-market = 259200

# RPC endpoint of a trusted validating node (required)
# trusted-node =

# Block number after which to do a snapshot
# snapshot-at-block =

# Block time (ISO format) after which to do a snapshot
# snapshot-at-time =

# Pathname of JSON file where to store the snapshot
# snapshot-to =

# Elasticsearch node url (http://localhost:9200/)
# es-objects-elasticsearch-url =

# Basic auth username:password ('')
# es-objects-auth =

# Number of bulk documents to index on replay (10000)
# es-objects-bulk-replay =

# Number of bulk documents to index on a synchronized chain (100)
# es-objects-bulk-sync =

# Store proposal objects (true)
# es-objects-proposals =

# Store account objects (true)
# es-objects-accounts =

# Store asset objects (true)
# es-objects-assets =

# Store balances objects (true)
# es-objects-balances =

# Store limit order objects (true)
# es-objects-limit-orders =

# Store feed data (true)
# es-objects-asset-bitasset =

# Add a prefix to the index (objects-)
# es-objects-index-prefix =

# Keep only current state of the objects (true)
# es-objects-keep-only-current =

# Group orders by percentage increase on price. Specify a JSON array of numbers here,
# each number is a group; the number 1 means 0.01%.
tracked-groups = [10,100]

# Logging configuration is loaded from logging.ini by default.
# If logging.ini exists, logging configuration added in this file will be ignored.

# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error

# declare an appender named "default" that writes messages to default.log
[log.file_appender.default]
# filename can be absolute or relative to this config file
filename=logs/default/default.log
# Rotate the log every N minutes (default: 60 if omitted)
rotation_interval=60
# How long logs are kept, in days (default: 1 if omitted)
rotation_limit=7

# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
# filename can be absolute or relative to this config file
filename=logs/p2p/p2p.log
# Rotate the log every N minutes (default: 60 if omitted)
rotation_interval=60
# How long logs are kept, in days (default: 1 if omitted)
rotation_limit=7

# declare an appender named "rpc" that writes messages to rpc.log
[log.file_appender.rpc]
# filename can be absolute or relative to this config file
filename=logs/rpc/rpc.log
# Rotate the log every N minutes (default: 60 if omitted)
rotation_interval=60
# How long logs are kept, in days (default: 1 if omitted)
rotation_limit=7

# route any messages logged to the default logger to the "stderr" appender and
# "default" appender we declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr,default

# route messages sent to the "p2p" logger to the "p2p" appender declared above
[logger.p2p]
level=warn
appenders=p2p

# route messages sent to the "rpc" logger to the "rpc" appender declared above
[logger.rpc]
level=error
appenders=rpc
```
Folders and files are considered to be relative to the working directory (i.e., the directory from which the executable is launched).
In order to allow RPC calls for blockchain operations you need to modify the following entry in the configuration file:
rpc-endpoint = 0.0.0.0:8090
This will open port 8090 to queries from anywhere. Since the witness node only maintains the blockchain and (unless you are an actively block producing witness) no private keys are involved, it is safe to expose your witness node to the internet.
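Once the port is open, a client can talk to the node over its websocket RPC interface. The sketch below assumes the `wscat` tool is installed and the endpoint configured above; it asks the database API (API id 0) for the chain ID:

```shell
# Hypothetical session: query the node's chain ID over the open websocket port
wscat -c ws://127.0.0.1:8090 \
  -x '{"jsonrpc":"2.0","method":"call","params":[0,"get_chain_id",[]],"id":1}'
```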
Although the default parameters in your config file already provide good memory reduction, you might want to learn more about the plugin.
- Memory Reduction for Nodes:
- Helps reduce RAM usage significantly via witness_node executable options
For more information about nodes, check the Node Tutorials.
When restarting the witness node, it may be necessary to append the
--replay-blockchain parameter to regenerate the local (in-memory) blockchain state.
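A restart with a replay might then look like this sketch (the relative path assumes a standard source build):

```shell
# Rebuild the in-memory state from the locally stored blocks on startup
./programs/witness_node/witness_node --data-dir=witness_node_data_dir --replay-blockchain
```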
If you want to close the witness node in a clean way, use Ctrl-C (e.g., in a Windows console).
- BitShares Public Full Nodes (apiConfig)
- Global BitShares Node Distribution Map
- Seed Nodes information (application.cpp)
How can a Let's Encrypt certificate be used with an API node to provide a secure websocket (wss) connection?
This can be done in two steps:
- Concatenate fullchain.pem with privkey.pem: `cat fullchain.pem privkey.pem > combined.pem`
- In the node's configuration file, point the TLS certificate to combined.pem, and leave the TLS certificate password blank.
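In config.ini, the second step might look like the following sketch (the endpoint and file path are placeholders):

```ini
# Serve secure websocket (wss) connections (placeholder endpoint)
rpc-tls-endpoint = 0.0.0.0:8089

# Point the TLS certificate at the concatenated file from step 1
server-pem = /path/to/combined.pem

# Leave the certificate password blank
server-pem-password =
```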