diff --git a/docs/server/source/index.rst b/docs/server/source/index.rst
index 750316df..65bd8774 100644
--- a/docs/server/source/index.rst
+++ b/docs/server/source/index.rst
@@ -10,7 +10,6 @@ BigchainDB Server Documentation
production-nodes/index
clusters
production-deployment-template/index
- production-deployment-template-tendermint/index
dev-and-test/index
server-reference/index
http-client-server-api
diff --git a/docs/server/source/production-deployment-template-tendermint/architecture.rst b/docs/server/source/production-deployment-template-tendermint/architecture.rst
deleted file mode 100644
index 7778e45f..00000000
--- a/docs/server/source/production-deployment-template-tendermint/architecture.rst
+++ /dev/null
@@ -1,210 +0,0 @@
-Architecture of a BigchainDB Node
-==================================
-
-A BigchainDB production deployment is hosted on a Kubernetes cluster and includes:
-
-* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
- `Kubernetes Services `_.
-* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent
- `Kubernetes Deployments `_.
-* MongoDB and Tendermint `Kubernetes StatefulSet `_.
-* Third party services like `3scale `_,
- `MongoDB Cloud Manager `_ and the
- `Azure Operations Management Suite
- `_.
-
-
-.. _bigchaindb-node:
-
-BigchainDB Node
----------------
-
-.. aafig::
- :aspect: 60
- :scale: 100
- :background: #rgb
- :proportional:
-
- + +
- +--------------------------------------------------------------------------------------------------------------------------------------+
- | | | |
- | | | |
- | | | |
- | | | |
- | | | |
- | | | |
- | "BigchainDB API" | | "Tendermint P2P" |
- | | | "Communication/" |
- | | | "Public Key Exchange" |
- | | | |
- | | | |
- | v v |
- | |
- | +------------------+ |
- | |"NGINX Service" | |
- | +-------+----------+ |
- | | |
- | v |
- | |
- | +------------------+ |
- | | "NGINX" | |
- | | "Deployment" | |
- | | | |
- | +-------+----------+ |
- | | |
- | | |
- | | |
- | v |
- | |
- | "443" +----------+ "46656/9986" |
- | | "Rate" | |
- | +---------------------------+"Limiting"+-----------------------+ |
- | | | "Logic" | | |
- | | +----+-----+ | |
- | | | | |
- | | | | |
- | | | | |
- | | | | |
- | | | | |
- | | "27017" | | |
- | v | v |
- | +-------------+ | +------------+ |
- | |"HTTPS" | | +------------------> |"Tendermint"| |
- | |"Termination"| | | "9986" |"Service" | "46656" |
- | | | | | +-------+ | <----+ |
- | +-----+-------+ | | | +------------+ | |
- | | | | | | |
- | | | | v v |
- | | | | +------------+ +------------+ |
- | | | | |"NGINX" | |"Tendermint"| |
- | | | | |"Deployment"| |"Stateful" | |
- | | | | |"Pub-Key-Ex"| |"Set" | |
- | ^ | | +------------+ +------------+ |
- | +-----+-------+ | | |
- | "POST" |"Analyze" | "GET" | | |
- | |"Request" | | | |
- | +-----------+ +--------+ | | |
- | | +-------------+ | | | |
- | | | | | "Bi+directional, communication between" |
- | | | | | "BigchainDB(APP) and Tendermint" |
- | | | | | "BFT consensus Engine" |
- | | | | | |
- | v v | | |
- | | | |
- | +-------------+ +--------------+ +----+-------------------> +--------------+ |
- | | "OpenResty" | | "BigchainDB" | | | "MongoDB" | |
- | | "Service" | | "Service" | | | "Service" | |
- | | | +----->| | | +-------> | | |
- | +------+------+ | +------+-------+ | | +------+-------+ |
- | | | | | | | |
- | | | | | | | |
- | v | v | | v |
- | +-------------+ | +-------------+ | | +----------+ |
- | | | | | | <------------+ | |"MongoDB" | |
- | |"OpenResty" | | | "BigchainDB"| | |"Stateful"| |
- | |"Deployment" | | | "Deployment"| | |"Set" | |
- | | | | | | | +-----+----+ |
- | | | | | +---------------------------+ | |
- | | | | | | | |
- | +-----+-------+ | +-------------+ | |
- | | | | |
- | | | | |
- | v | | |
- | +-----------+ | v |
- | | "Auth" | | +------------+ |
- | | "Logic" |----------+ |"MongoDB" | |
- | | | |"Monitoring"| |
- | | | |"Agent" | |
- | +---+-------+ +-----+------+ |
- | | | |
- | | | |
- | | | |
- | | | |
- | | | |
- | | | |
- +---------------+---------------------------------------------------------------------------------------+------------------------------+
- | |
- | |
- | |
- v v
- +------------------------------------+ +------------------------------------+
- | | | |
- | | | |
- | | | |
- | "3Scale" | | "MongoDB Cloud" |
- | | | |
- | | | |
- | | | |
- +------------------------------------+ +------------------------------------+
-
-
-
-
-.. note::
- The arrows in the diagram represent client-server communication. For
- example, A-->B implies that A initiates the connection to B.
- An arrow does not represent the flow of data; the communication channel is
- always full-duplex.
-
-
-NGINX: Entrypoint and Gateway
------------------------------
-
-We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
-entrypoint for:
-
-#. Rate Limiting: We configure NGINX to allow only a certain number of requests
- (configurable), which prevents DoS attacks.
-
-#. HTTPS Termination: The HTTPS connection does not carry through all the way
- to BigchainDB and terminates at NGINX for now.
-
-#. Request Routing: For HTTPS connections on port 443 (or the configured BigchainDB public API port),
- the connection is proxied to:
-
- #. OpenResty Service if it is a POST request.
- #. BigchainDB Service if it is a GET request.
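-
-For example (with a hypothetical cluster FQDN and a pre-signed transaction in
-``signed_tx.json``), a read is routed to the BigchainDB Service, while a
-transaction submission is routed through OpenResty:
-
-.. code:: bash
-
- $ curl https://bdb-test-cluster-0.example.com/api/v1/
-
- $ curl -X POST https://bdb-test-cluster-0.example.com/api/v1/transactions \
-     -H "Content-Type: application/json" -d @signed_tx.json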
-
-
-We use an NGINX TCP proxy on port 27017 (configurable) at the cloud
-entrypoint for:
-
-#. Rate Limiting: We configure NGINX to allow only a certain number of requests
- (configurable), which prevents DoS attacks.
-
-#. Request Routing: For connections on port 27017 (or the configured MongoDB
- public API port), the connection is proxied to the MongoDB Service.
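-
-Similarly, an external MongoDB client could connect through the same
-entrypoint (illustrative FQDN; the TLS files are the ones you generated for
-MongoDB auth):
-
-.. code:: bash
-
- $ mongo --host bdb-test-cluster-0.example.com --port 27017 --ssl \
-     --sslCAFile ca.pem --sslPEMKeyFile client.pem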
-
-
-OpenResty: API Management, Authentication and Authorization
------------------------------------------------------------
-
-We use `OpenResty `_ to perform authorization checks
-with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.
-
-OpenResty is NGINX plus a bunch of other
-`components `_. We primarily depend
-on the LuaJIT compiler to execute the functions to authenticate the ``app_id``
-and ``app_key`` with the 3scale backend.
-
-
-MongoDB: Standalone
--------------------
-
-We use MongoDB as the backend database for BigchainDB.
-
-We achieve security by mitigating DoS attacks at the NGINX proxy layer and by
-ensuring that MongoDB has TLS enabled for all its connections.
-
-
-Tendermint: BFT consensus engine
---------------------------------
-
-We use Tendermint as the backend consensus engine for BFT replication of BigchainDB.
-In a multi-node deployment, Tendermint nodes/peers communicate with each other via
-the public ports exposed by the NGINX gateway.
-
-We use port **9986** (configurable) to allow Tendermint nodes to access the public keys
-of their peers, and port **46656** (configurable) for the rest of the communication between
-the peers.
-
diff --git a/docs/server/source/production-deployment-template-tendermint/index.rst b/docs/server/source/production-deployment-template-tendermint/index.rst
deleted file mode 100644
index 8692d180..00000000
--- a/docs/server/source/production-deployment-template-tendermint/index.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-Production Deployment Template: Tendermint BFT
-==============================================
-
-This section outlines how *we* deploy production BigchainDB clusters,
-integrated with Tendermint (the backend for BFT consensus),
-on Microsoft Azure using Kubernetes. We improve it constantly.
-You may choose to use it as a template or reference for your own deployment,
-but *we make no claim that it is suitable for your purposes*.
-Feel free to change things to suit your needs or preferences.
-
-
-.. toctree::
- :maxdepth: 1
-
- workflow
- architecture
- node-on-kubernetes
- node-config-map-and-secrets
- bigchaindb-network-on-kubernetes
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst
deleted file mode 100644
index 2e488a38..00000000
--- a/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst
+++ /dev/null
@@ -1,356 +0,0 @@
-.. _how-to-configure-a-bigchaindb-tendermint-node:
-
-How to Configure a BigchainDB + Tendermint Node
-===============================================
-
-This page outlines the steps to set a bunch of configuration settings
-in your BigchainDB node.
-They are pushed to the Kubernetes cluster in two files,
-named ``config-map.yaml`` (a set of ConfigMaps)
-and ``secret.yaml`` (a set of Secrets).
-They are stored in the Kubernetes cluster's key-value store (etcd).
-
-Make sure you did all the things listed in the section titled
-:ref:`things-each-node-operator-must-do-tmt`
-(including generation of all the SSL certificates needed
-for MongoDB auth).
-
-
-Edit config-map.yaml
---------------------
-
-Make a copy of the file ``k8s/configuration/config-map.yaml``
-and edit the data values in the various ConfigMaps.
-That file already contains many comments to help you
-understand each data value, but we make some additional
-remarks on some of the values below.
-
-Note: None of the data values in ``config-map.yaml`` need
-to be base64-encoded. (This is unlike ``secret.yaml``,
-where all data values must be base64-encoded.
-This is true of all Kubernetes ConfigMaps and Secrets.)
-
-
-vars.cluster-fqdn
-~~~~~~~~~~~~~~~~~
-
-The ``cluster-fqdn`` field specifies the domain you would have
-:ref:`registered before `.
-
-
-vars.cluster-frontend-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``cluster-frontend-port`` field specifies the port on which your cluster
-will be available to all external clients.
-It is set to the HTTPS port ``443`` by default.
-
-
-vars.cluster-health-check-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``cluster-health-check-port`` is the port number on which health check
-probes are sent to the main NGINX instance.
-It is set to ``8888`` by default.
-
-
-vars.cluster-dns-server-ip
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``cluster-dns-server-ip`` is the IP of the DNS server for a node.
-We use DNS for service discovery. A Kubernetes deployment always has a DNS
-server (``kube-dns``) running at ``10.0.0.10``, so this field is set to
-``10.0.0.10`` by default.
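-
-You can confirm the DNS Service IP in your own cluster with a standard
-kubectl query (``kube-dns`` in the ``kube-system`` namespace is the
-Kubernetes default):
-
-.. code:: bash
-
- $ kubectl get svc kube-dns --namespace kube-system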
-
-
-vars.mdb-instance-name and Similar
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Your BigchainDB cluster organization should have a standard way
-of naming instances, so the instances in your BigchainDB node
-should conform to that standard (i.e. you can't just make up some names).
-There are some things worth noting about the ``mdb-instance-name``:
-
-* This field will be the DNS name of your MongoDB instance, and Kubernetes
- maps this name to its internal DNS.
-* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
- documentation. Your BigchainDB cluster may use a different naming convention.
-
-
-vars.ngx-mdb-instance-name and Similar
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-NGINX needs the FQDN of the servers inside the cluster to be able to forward
-traffic.
-The ``ngx-openresty-instance-name``, ``ngx-mdb-instance-name`` and
-``ngx-bdb-instance-name`` are the FQDNs of the OpenResty instance, the MongoDB
-instance, and the BigchainDB instance in this Kubernetes cluster respectively.
-In Kubernetes, this is usually the name of the Service specified in the
-corresponding ``vars.*-instance-name``, followed by the namespace and
-``.svc.cluster.local``. For example, if you run OpenResty in
-the default Kubernetes namespace, this will be
-``<vars.openresty-instance-name>.default.svc.cluster.local``.
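-
-For example, with the instance names used in this documentation and the
-``default`` namespace, you could check such an FQDN from inside the cluster
-(e.g. from the toolbox container described later) using:
-
-.. code:: bash
-
- $ nslookup mdb-instance-0.default.svc.cluster.local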
-
-
-vars.mongodb-frontend-port and vars.mongodb-backend-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``mongodb-frontend-port`` is the port number on which external clients can
-access MongoDB. This needs to be restricted to only other MongoDB instances
-by enabling an authentication mechanism on the MongoDB cluster.
-It is set to ``27017`` by default.
-
-The ``mongodb-backend-port`` is the port number on which MongoDB is actually
-available/listening for requests in your cluster.
-It is also set to ``27017`` by default.
-
-
-vars.openresty-backend-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``openresty-backend-port`` is the port number on which OpenResty is
-listening for requests.
-This is used by the NGINX instance to forward requests
-destined for the OpenResty instance to the right port.
-This is also used by the OpenResty instance to bind to the correct port to
-receive requests from the NGINX instance.
-It is set to ``80`` by default.
-
-
-vars.bigchaindb-wsserver-advertised-scheme
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``bigchaindb-wsserver-advertised-scheme`` is the protocol used to access
-the WebSocket API in BigchainDB. This can be set to ``wss`` or ``ws``.
-It is set to ``wss`` by default.
-
-
-vars.bigchaindb-api-port, vars.bigchaindb-ws-port and Similar
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The ``bigchaindb-api-port`` is the port number on which BigchainDB is
-listening for HTTP requests. Currently set to ``9984`` by default.
-
-The ``bigchaindb-ws-port`` is the port number on which BigchainDB is
-listening for Websocket requests. Currently set to ``9985`` by default.
-
-There's another :doc:`page with a complete listing of all the BigchainDB Server
-configuration settings <../server-reference/configuration>`.
-
-
-bdb-config.bdb-user
-~~~~~~~~~~~~~~~~~~~
-
-This is the user name that BigchainDB uses to authenticate itself to the
-backend MongoDB database.
-
-We need to specify the user name *as seen in the certificate* issued to
-the BigchainDB instance in order to authenticate correctly. Use
-the following ``openssl`` command to extract the user name from the
-certificate:
-
-.. code:: bash
-
- $ openssl x509 -in <certificate-filename> \
- -inform PEM -subject -nameopt RFC2253
-
-You should see an output line that resembles:
-
-.. code:: bash
-
- subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
-
-The ``subject`` line states the complete user name we need to use for this
-field (``bdb-config.bdb-user``), i.e.
-
-.. code:: bash
-
- emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
-
-
-tendermint-config.tm-instance-name
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Your BigchainDB cluster organization should have a standard way
-of naming instances, so the instances in your BigchainDB node
-should conform to that standard. There are some things worth noting
-about the ``tm-instance-name``:
-
-* This field will be the DNS name of your Tendermint instance, and Kubernetes
- maps this name to its internal DNS; in a network/multi-node deployment, all
- peer-to-peer communication depends on this.
-* This parameter is also used to access the public key of a particular node.
-* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our
- documentation. Your BigchainDB cluster may use a different naming convention.
-
-
-tendermint-config.ngx-tm-instance-name
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-NGINX needs the FQDN of the servers inside the cluster to be able to forward
-traffic.
-``ngx-tm-instance-name`` is the FQDN of the Tendermint
-instance in this Kubernetes cluster.
-In Kubernetes, this is usually the name of the Service specified in the
-corresponding ``tendermint-config.*-instance-name``, followed by the namespace
-and ``.svc.cluster.local``. For example, if you run Tendermint in
-the default Kubernetes namespace, this will be
-``<tendermint-config.tm-instance-name>.default.svc.cluster.local``.
-
-
-tendermint-config.tm-seeds
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-seeds`` is the initial set of peers to connect to. It is a comma-separated
-list of all the peers in the cluster.
-
-If you are deploying a stand-alone BigchainDB node, the value should be the same as
-``<tm-instance-name>``. If you are deploying a network, this parameter will look
-like this:
-
-.. code::
-
- <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
-
-
-tendermint-config.tm-validators
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-validators`` is the initial set of validators in the network. It is a comma-separated list
-of all the participant validator nodes.
-
-If you are deploying a stand-alone BigchainDB node, the value should be the same as
-``<tm-instance-name>``. If you are deploying a network, this parameter will look like
-this:
-
-.. code::
-
- <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
-
-
-tendermint-config.tm-validator-power
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-validator-power`` represents the voting power of each validator. It is a comma-separated
-list with one entry per participant in the network.
-
-**Note**: The order of the validator power list should be the same as the ``tm-validators`` list.
-
-.. code::
-
- tm-validators: <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
-
-For the above list of validators the ``tm-validator-power`` list should look like this:
-
-.. code::
-
- tm-validator-power: <power-of-instance-1>,<power-of-instance-2>,<power-of-instance-3>,<power-of-instance-4>
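-
-As a concrete (hypothetical) example, a two-node network in which both
-validators carry equal voting power might use:
-
-.. code::
-
- tm-validators: tm-instance-0,tm-instance-1
- tm-validator-power: 10,10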
-
-
-tendermint-config.tm-genesis-time
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-genesis-time`` represents the official time of blockchain start. Details regarding how to generate
-this parameter are covered :ref:`here `.
-
-
-tendermint-config.tm-chain-id
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-chain-id`` represents the ID of the blockchain. This must be unique for every blockchain.
-Details regarding how to generate this parameter are covered
-:ref:`here `.
-
-
-tendermint-config.tm-abci-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-abci-port`` has a default value ``46658``, which is used by Tendermint Core for
-ABCI (Application BlockChain Interface) traffic. BigchainDB nodes use this port
-internally to communicate with Tendermint Core.
-
-
-tendermint-config.tm-p2p-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-p2p-port`` has a default value ``46656`` which is used by Tendermint Core for
-peer to peer communication.
-
-For a multi-node/zone deployment, this port needs to be available publicly for P2P
-communication between Tendermint nodes.
-
-
-tendermint-config.tm-rpc-port
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-rpc-port`` has a default value ``46657``, which is used by Tendermint Core for RPC
-traffic. BigchainDB nodes use this port as the RPC listen address.
-
-
-tendermint-config.tm-pub-key-access
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``tm-pub-key-access`` has a default value ``9986``, which is used to discover the public
-key of a Tendermint node. Each Tendermint StatefulSet (a pod running Tendermint + NGINX) hosts its
-own public key at a URL like:
-
-.. code::
-
- http://tendermint-instance-1:9986/pub_key.json
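-
-For example, another node (or the toolbox container described later) could
-fetch that public key with:
-
-.. code:: bash
-
- $ curl http://tendermint-instance-1:9986/pub_key.json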
-
-
-Edit secret.yaml
-----------------
-
-Make a copy of the file ``k8s/configuration/secret.yaml``
-and edit the data values in the various Secrets.
-That file includes many comments to explain the required values.
-**In particular, note that all values must be base64-encoded.**
-There are tips at the top of the file
-explaining how to convert values into base64-encoded values.
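-
-For example, on Linux you could base64-encode a value (and check it by
-decoding it) like this:
-
-.. code:: bash
-
- $ echo -n 'adminUser' | base64
- YWRtaW5Vc2Vy
-
- $ echo 'YWRtaW5Vc2Vy' | base64 -d
- adminUser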
-
-Your BigchainDB node might not need all the Secrets.
-For example, if you plan to access the BigchainDB API over HTTP, you
-don't need the ``https-certs`` Secret.
-You can delete the Secrets you don't need,
-or set their data values to ``""``.
-
-Note that ``ca.pem`` is just another name for ``ca.crt``
-(the certificate of your BigchainDB cluster's self-signed CA).
-
-
-threescale-credentials.*
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you're not using 3scale,
-you can delete the ``threescale-credentials`` Secret
-or leave all the values blank (``""``).
-
-If you *are* using 3scale, get the values for ``secret-token``,
-``service-id``, ``version-header`` and ``service-token`` by logging in to the
-3scale portal using your admin account, clicking **APIs**, and then clicking
-**Integration** for the relevant API.
-Scroll to the bottom of the page and click the small link
-in the lower right corner, labelled **Download the NGINX Config files**.
-Unzip it (if it is a ``zip`` file), then open the ``.conf`` and the ``.lua`` file.
-You should be able to find all the values in those files.
-Be careful: the files contain values for **all** your APIs,
-and some values vary from API to API.
-The ``version-header`` is the timestamp in a line that looks like:
-
-.. code::
-
- proxy_set_header X-3scale-Version "2017-06-28T14:57:34Z";
-
-
-Deploy Your config-map.yaml and secret.yaml
--------------------------------------------
-
-You can deploy your edited ``config-map.yaml`` and ``secret.yaml``
-files to your Kubernetes cluster using the commands:
-
-.. code:: bash
-
- $ kubectl apply -f config-map.yaml
-
- $ kubectl apply -f secret.yaml
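-
-You can verify that they were created using the standard kubectl listing commands:
-
-.. code:: bash
-
- $ kubectl get configmaps
-
- $ kubectl get secrets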
diff --git a/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst b/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst
deleted file mode 100644
index 45695b9c..00000000
--- a/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst
+++ /dev/null
@@ -1,1178 +0,0 @@
-.. _kubernetes-template-deploy-a-single-bigchaindb-node-with-tendermint:
-
-Kubernetes Template: Deploy a Single BigchainDB Node with Tendermint
-====================================================================
-
-This page describes how to deploy a stand-alone BigchainDB + Tendermint node,
-or a static network of BigchainDB + Tendermint nodes,
-using `Kubernetes `_.
-It assumes you already have a running Kubernetes cluster.
-
-Below, we refer to many files by their directory and filename,
-such as ``configuration/config-map-tm.yaml``. Those files are files in the
-`bigchaindb/bigchaindb repository on GitHub `_
-in the ``k8s/`` directory.
-Make sure you're getting those files from the appropriate Git branch on
-GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
-cluster is using.
-
-
-Step 1: Install and Configure kubectl
--------------------------------------
-
-kubectl is the Kubernetes CLI.
-If you don't already have it installed,
-then see the `Kubernetes docs to install it
-`_.
-
-The default location of the kubectl configuration file is ``~/.kube/config``.
-If you don't have that file, then you need to get it.
-
-**Azure.** If you deployed your Kubernetes cluster on Azure
-using the Azure CLI 2.0 (as per :doc:`our template
-<../production-deployment-template/template-kubernetes-azure>`),
-then you can get the ``~/.kube/config`` file using:
-
-.. code:: bash
-
- $ az acs kubernetes get-credentials \
-     --resource-group <name-of-resource-group> \
-     --name <name-of-cluster>
-
-If it asks for a password (to unlock the SSH key)
-and you enter the correct password,
-but you get an error message,
-then try adding ``--ssh-key-file ~/.ssh/<name-of-private-key-file>``
-to the above command (i.e. the path to the private key).
-
-.. note::
-
- **About kubectl contexts.** You might manage several
- Kubernetes clusters. To make it easy to switch from one to another,
- kubectl has a notion of "contexts," e.g. the context for cluster 1 or
- the context for cluster 2. To find out the current context, do:
-
- .. code:: bash
-
- $ kubectl config view
-
- and then look for the ``current-context`` in the output.
- The output also lists all clusters, contexts and users.
- (You might have only one of each.)
- You can switch to a different context using:
-
- .. code:: bash
-
- $ kubectl config use-context <new-context-name>
-
- You can also switch to a different context for just one command
- by inserting ``--context <context-name>`` into any kubectl command.
- For example:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get pods
-
- will get a list of the pods in the Kubernetes cluster associated
- with the context named ``k8s-bdb-test-cluster-0``.
-
-Step 2: Connect to Your Cluster's Web UI (Optional)
----------------------------------------------------
-
-You can connect to your cluster's
-`Kubernetes Dashboard `_
-(also called the Web UI) using:
-
-.. code:: bash
-
- $ kubectl proxy -p 8001
-
-or
-
-.. code:: bash
-
- $ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey
-
-or, if you prefer to be explicit about the context (explained above):
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001
-
-The output should be something like ``Starting to serve on 127.0.0.1:8001``.
-That means you can visit the dashboard in your web browser at
-`http://127.0.0.1:8001/ui `_.
-
-
-Step 3: Configure Your BigchainDB Node
---------------------------------------
-
-See the page titled :ref:`how-to-configure-a-bigchaindb-tendermint-node`.
-
-
-.. _start-the-nginx-service-tmt:
-
-Step 4: Start the NGINX Service
--------------------------------
-
- * This will give us a public IP for the cluster.
-
- * Once you complete this step, you might need to wait up to 10 mins for the
- public IP to be assigned.
-
- * You have the option to use vanilla NGINX without HTTPS support or an
- NGINX with HTTPS support.
-
-
-Step 4.1: Vanilla NGINX
-^^^^^^^^^^^^^^^^^^^^^^^
-
- * This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``.
-
- * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
- set in ``ngx-instance-name`` in the ConfigMap above.
-
- * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
- the ConfigMap followed by ``-dep``. For example, if the value set in the
- ``ngx-instance-name`` is ``ngx-http-instance-0``, set the
- ``spec.selector.app`` to ``ngx-http-instance-0-dep``.
-
- * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
- ``cluster-frontend-port`` in the ConfigMap above. This is the
- ``public-cluster-port`` in the file, which is the ingress into the cluster.
-
- * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
- ``tm-pub-key-access`` in the ConfigMap above. This is the
- ``tm-pub-key-access`` in the file, which specifies where the public key for
- the Tendermint instance is available.
-
- * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
- ``tm-p2p-port`` in the ConfigMap above. This is the
- ``tm-p2p-port`` in the file, which is used for P2P communication between Tendermint
- nodes.
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml
-
-
-Step 4.2: NGINX with HTTPS
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
- * You have to enable HTTPS for this one and will need an HTTPS certificate
- for your domain.
-
- * You should have already created the necessary Kubernetes Secrets in the previous
- step (i.e. ``https-certs``).
-
- * This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``.
-
- * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
- set in ``ngx-instance-name`` in the ConfigMap above.
-
- * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
- the ConfigMap followed by ``-dep``. For example, if the value set in the
- ``ngx-instance-name`` is ``ngx-https-instance-0``, set the
- ``spec.selector.app`` to ``ngx-https-instance-0-dep``.
-
- * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
- ``cluster-frontend-port`` in the ConfigMap above. This is the
- ``public-secure-cluster-port`` in the file, which is the ingress into the cluster.
-
- * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
- ``mongodb-frontend-port`` in the ConfigMap above. This is the
- ``public-mdb-port`` in the file which specifies where MongoDB is
- available.
-
- * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
- ``tm-pub-key-access`` in the ConfigMap above. This is the
- ``tm-pub-key-access`` in the file, which specifies where the public key for
- the Tendermint instance is available.
-
- * Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the
- ``tm-p2p-port`` in the ConfigMap above. This is the
- ``tm-p2p-port`` in the file which is used for P2P communication between Tendermint
- nodes.
-
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml
-
-
-.. _assign-dns-name-to-nginx-public-ip-tmt:
-
-Step 5: Assign DNS Name to the NGINX Public IP
-----------------------------------------------
-
- * This step is required only if you are planning to set up multiple
- `BigchainDB nodes
- `_ or are using
- HTTPS certificates tied to a domain.
-
- * The following command can help you find out if the NGINX service started
- above has been assigned a public IP or external IP address:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get svc -w
-
- * Once a public IP is assigned, you can map it to
- a DNS name.
- We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
- so on in our documentation.
- Let's assume that we assign the unique name of ``bdb-test-cluster-0`` here.
-
-
-**Set up DNS mapping in Azure.**
-Select the current Azure resource group and look for the ``Public IP``
-resource. You should see at least 2 entries there - one for the Kubernetes
-master and the other for the NGINX instance. You may have to ``Refresh`` the
-Azure web page listing the resources in a resource group for the latest
-changes to be reflected.
-Select the ``Public IP`` resource that is attached to your service (it should
-have the Azure DNS prefix name along with a long random string, without the
-``master-ip`` string), select ``Configuration``, add the DNS name assigned above
-(for example, ``bdb-test-cluster-0``), click ``Save``, and wait for the
-changes to be applied.
-
-To verify the DNS setting is operational, you can run ``nslookup <dns-name>`` from your local Linux shell.
-
-This will ensure that when you scale to different geographical zones, other Tendermint
-nodes in the network can reach this instance.
-
-
-.. _start-the-mongodb-kubernetes-service-tmt:
-
-Step 6: Start the MongoDB Kubernetes Service
---------------------------------------------
-
- * This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``.
-
- * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
- set in ``mdb-instance-name`` in the ConfigMap above.
-
- * Set the ``spec.selector.app`` to the value set in ``mdb-instance-name`` in
- the ConfigMap followed by ``-ss``. For example, if the value set in the
- ``mdb-instance-name`` is ``mdb-instance-0``, set the
- ``spec.selector.app`` to ``mdb-instance-0-ss``.
-
- * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
- ``mongodb-backend-port`` in the ConfigMap above.
- This is the ``mdb-port`` in the file which specifies where MongoDB listens
- for API requests.
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml
-
-
-.. _start-the-bigchaindb-kubernetes-service-tmt:
-
-Step 7: Start the BigchainDB Kubernetes Service
------------------------------------------------
-
- * This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``.
-
- * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
- set in ``bdb-instance-name`` in the ConfigMap above.
-
- * Set the ``spec.selector.app`` to the value set in ``bdb-instance-name`` in
- the ConfigMap followed by ``-dep``. For example, if the value set in the
- ``bdb-instance-name`` is ``bdb-instance-0``, set the
- ``spec.selector.app`` to ``bdb-instance-0-dep``.
-
- * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
- ``bigchaindb-api-port`` in the ConfigMap above.
- This is the ``bdb-api-port`` in the file which specifies where BigchainDB
- listens for HTTP API requests.
-
- * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
- ``bigchaindb-ws-port`` in the ConfigMap above.
- This is the ``bdb-ws-port`` in the file which specifies where BigchainDB
- listens for Websocket connections.
-
- * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
- ``tm-abci-port`` in the ConfigMap above.
- This is the ``tm-abci-port`` in the file which specifies the port used
- for ABCI communication.
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml
-
-
-.. _start-the-openresty-kubernetes-service-tmt:
-
-Step 8: Start the OpenResty Kubernetes Service
-----------------------------------------------
-
- * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``.
-
- * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
- set in ``openresty-instance-name`` in the ConfigMap above.
-
- * Set the ``spec.selector.app`` to the value set in ``openresty-instance-name`` in
- the ConfigMap followed by ``-dep``. For example, if the value set in the
- ``openresty-instance-name`` is ``openresty-instance-0``, set the
- ``spec.selector.app`` to ``openresty-instance-0-dep``.
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml
-
-
-.. _start-the-tendermint-kubernetes-service-tmt:
-
-Step 9: Start the Tendermint Kubernetes Service
------------------------------------------------
-
- * This configuration is located in the file ``tendermint/tendermint-svc.yaml``.
-
- * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
- set in ``tm-instance-name`` in the ConfigMap above.
-
- * Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in
- the ConfigMap followed by ``-ss``. For example, if the value set in the
- ``tm-instance-name`` is ``tm-instance-0``, set the
- ``spec.selector.app`` to ``tm-instance-0-ss``.
-
- * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
- ``tm-p2p-port`` in the ConfigMap above.
- This is the ``p2p`` in the file which specifies where Tendermint peers
- communicate.
-
- * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
- ``tm-rpc-port`` in the ConfigMap above.
- This is the ``rpc`` in the file which specifies the port used by Tendermint core
- for RPC traffic.
-
- * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
- ``tm-pub-key-access`` in the ConfigMap above.
- This is the ``pub-key-access`` in the file which specifies the port to host/distribute
- the public key for the Tendermint node.
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml
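-
-At this point, all the Kubernetes Services for the node have been created.
-You can list them, along with their cluster IPs, using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get svc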
-
-
-.. _start-the-nginx-deployment-tmt:
-
-Step 10: Start the NGINX Kubernetes Deployment
-----------------------------------------------
-
- * NGINX is used as a proxy to the OpenResty, BigchainDB, Tendermint and MongoDB instances in
- the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port``
- to the corresponding OpenResty or BigchainDB backend, and TCP connections
- on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
- to MongoDB and Tendermint, respectively.
-
- * As in step 4, you have the option to use vanilla NGINX without HTTPS or
- NGINX with HTTPS support.
-
-Step 10.1: Vanilla NGINX
-^^^^^^^^^^^^^^^^^^^^^^^^
-
- * This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``.
-
- * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
- to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
- ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
- ``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 5 ports -
- ``mongodb-frontend-port``, ``cluster-frontend-port``,
- ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
- Set them to the values specified in the
- ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``cluster-frontend-port``
- - ``cluster-health-check-port``
- - ``cluster-dns-server-ip``
- - ``mongodb-frontend-port``
- - ``ngx-mdb-instance-name``
- - ``mongodb-backend-port``
- - ``ngx-bdb-instance-name``
- - ``bigchaindb-api-port``
- - ``bigchaindb-ws-port``
- - ``ngx-tm-instance-name``
- - ``tm-pub-key-access``
- - ``tm-p2p-port``
-
- * Start the Kubernetes Deployment:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml
-
-
-Step 10.2: NGINX with HTTPS
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
- * This configuration is located in the file
- ``nginx-https/nginx-https-dep-tm.yaml``.
-
- * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
- to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
- ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
- ``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose these ports -
- ``mongodb-frontend-port``, ``cluster-frontend-port``,
- ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
- Set them to the values specified in the ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``cluster-frontend-port``
- - ``cluster-health-check-port``
- - ``cluster-fqdn``
- - ``cluster-dns-server-ip``
- - ``mongodb-frontend-port``
- - ``ngx-mdb-instance-name``
- - ``mongodb-backend-port``
- - ``openresty-backend-port``
- - ``ngx-openresty-instance-name``
- - ``ngx-bdb-instance-name``
- - ``bigchaindb-api-port``
- - ``bigchaindb-ws-port``
- - ``ngx-tm-instance-name``
- - ``tm-pub-key-access``
- - ``tm-p2p-port``
-
- * The configuration uses the following values set in the Secret:
-
- - ``https-certs``
-
- * Start the Kubernetes Deployment:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml
-
-
-.. _create-kubernetes-storage-class-mdb-tmt:
-
-Step 11: Create Kubernetes Storage Classes for MongoDB
-------------------------------------------------------
-
-MongoDB needs somewhere to store its data persistently,
-outside the container where MongoDB is running.
-Our MongoDB Docker container
-(based on the official MongoDB Docker container)
-exports two volume mounts with correct
-permissions from inside the container:
-
-* The directory where the mongod instance stores its data: ``/data/db``.
- There's more explanation in the MongoDB docs about `storage.dbpath `_.
-
-* The directory where the mongodb instance stores the metadata for a sharded
- cluster: ``/data/configdb/``.
- There's more explanation in the MongoDB docs about `sharding.configDB `_.
-
-Explaining how Kubernetes handles persistent volumes,
-and the associated terminology,
-is beyond the scope of this documentation;
-see `the Kubernetes docs about persistent volumes
-`_.
-
-The first thing to do is create the Kubernetes storage classes.
-
-**Set up Storage Classes in Azure.**
-First, you need an Azure storage account.
-If you deployed your Kubernetes cluster on Azure
-using the Azure CLI 2.0
-(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
-then the ``az acs create`` command already created a
-storage account in the same location and resource group
-as your Kubernetes cluster.
-Both should have the same "storage account SKU": ``Standard_LRS``.
-Standard storage is lower-cost and lower-performance.
-It uses hard disk drives (HDD).
-LRS means locally-redundant storage: three replicas
-in the same data center.
-Premium storage is higher-cost and higher-performance.
-It uses solid state drives (SSD).
-You can create a `storage account `_
-for Premium storage and associate it with your Azure resource group.
-For future reference, the command to create a storage account is
-`az storage account create `_.
-
-.. Note::
- Please refer to `Azure documentation `_
- for the list of VMs that are supported by Premium Storage.
-
-The Kubernetes template for configuration of Storage Class is located in the
-file ``mongodb/mongo-sc.yaml``.
-
-You may have to update the ``parameters.location`` field in the file to
-specify the location you are using in Azure.
-
-If you want to use a custom storage account with the Storage Class, you
-can also update ``parameters.storageAccount`` and provide the Azure storage
-account name.
-
-Create the required storage classes using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml
-
-
-You can check if it worked using ``kubectl get storageclasses``.
-
-
-.. _create-kubernetes-persistent-volume-claim-mdb-tmt:
-
-Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
----------------------------------------------------------------
-
-Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
-``mongo-configdb-claim``.
-
-This configuration is located in the file ``mongodb/mongo-pvc.yaml``.
-
-Note how there's no explicit mention of Azure, AWS or whatever.
-``ReadWriteOnce`` (RWO) means the volume can be mounted as
-read-write by a single Kubernetes node.
-(``ReadWriteOnce`` is the *only* access mode supported
-by AzureDisk.)
-``storage: 20Gi`` means the volume has a size of 20
-`gibibytes `_.
-
-You may want to update the ``spec.resources.requests.storage`` field in both
-the files to specify a different disk size.
-
-Create the required Persistent Volume Claims using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml
-
-
-You can check its status using: ``kubectl get pvc -w``
-
-Initially, the status of persistent volume claims might be "Pending"
-but it should become "Bound" fairly quickly.
-
-.. Note::
- The default Reclaim Policy for dynamically created persistent volumes is ``Delete``,
- which means the PV and its associated Azure storage resource will be automatically
- deleted on deletion of the PVC or PV. To prevent this from happening, do
- the following steps to change the default reclaim policy of dynamically created PVs
- from ``Delete`` to ``Retain``:
-
- * Run the following command to list existing PVs:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get pv
-
- * Run the following command to update a PV's reclaim policy to ``Retain``:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
-
- For notes on recreating a persistent volume from a released Azure disk resource, consult
- :doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
-
-.. _start-kubernetes-stateful-set-mongodb-tmt:
-
-Step 13: Start a Kubernetes StatefulSet for MongoDB
----------------------------------------------------
-
- * This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``.
-
- * Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
- the ConfigMap.
- For example, if the value set in the ``mdb-instance-name``
- is ``mdb-instance-0``, set the field to ``mdb-instance-0``.
-
- * Set ``metadata.name``, ``spec.template.metadata.name`` and
- ``spec.template.metadata.labels.app`` to the value set in
- ``mdb-instance-name`` in the ConfigMap, followed by
- ``-ss``.
- For example, if the value set in the
- ``mdb-instance-name`` is ``mdb-instance-0``, set the fields to the value
- ``mdb-instance-0-ss``.
-
- * Note how the MongoDB container uses the ``mongo-db-claim`` and the
- ``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
- ``/data/configdb`` directories (mount paths).
-
- * Note also that we use the pod's ``securityContext.capabilities.add``
- specification to add the ``FOWNER`` capability to the container. That is
- because the MongoDB container has the user ``mongodb``, with uid ``999`` and
- group ``mongodb``, with gid ``999``.
- When this container runs on a host with a mounted disk, the writes fail
- when there is no user with uid ``999``. To avoid this, we use the Docker
- feature of ``--cap-add=FOWNER``. This bypasses the uid and gid permission
- checks during writes and allows data to be persisted to disk.
- Refer to the `Docker docs
- `_
- for details.
-
- * As we gain more experience running MongoDB in testing and production, we
- will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently only expose the MongoDB
- backend port. Set it to the value specified for ``mongodb-backend-port``
- in the ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``mdb-instance-name``
- - ``mongodb-backend-port``
-
- * The configuration uses the following values set in the Secret:
-
- - ``mdb-certs``
- - ``ca-auth``
-
- * **Optional**: You can change the value for ``STORAGE_ENGINE_CACHE_SIZE`` in the ConfigMap ``storage-engine-cache-size``. For more information
- regarding this configuration, please consult the `MongoDB Official
- Documentation `_.
-
- * **Optional**: If you are not using the **Standard_D2_v2** virtual machines for Kubernetes agents as per the guide,
- please update the ``resources`` for ``mongo-ss``. We suggest allocating ``memory`` using the following scheme
- for a MongoDB StatefulSet:
-
- .. code:: bash
-
- memory = (Total_Memory_Agent_VM_GB - 2GB)
- STORAGE_ENGINE_CACHE_SIZE = memory / 2
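-
- For example, on an agent VM with 8 GB of memory (an illustrative figure,
- not a recommendation), that scheme works out to:
-
- .. code:: bash
-
- memory = 8GB - 2GB = 6GB
- STORAGE_ENGINE_CACHE_SIZE = 6GB / 2 = 3GB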
-
- * Create the MongoDB StatefulSet using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml
-
- * It might take up to 10 minutes for the disks, specified in the Persistent
- Volume Claims above, to be created and attached to the pod.
- The UI might show that the pod has errored with the message
- "timeout expired waiting for volumes to attach/mount". Use the CLI below
- to check the status of the pod in this case, instead of the UI.
- This happens due to a bug in Azure ACS.
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get pods -w
-
-
-.. _configure-users-and-access-control-mongodb-tmt:
-
-Step 14: Configure Users and Access Control for MongoDB
--------------------------------------------------------
-
- * In this step, you will create a user on MongoDB with authorization
- to create more users and assign
- roles to them.
- Note: You need to do this only when setting up the first MongoDB node of
- the cluster.
-
- * Find out the name of your MongoDB pod by reading the output
- of the ``kubectl ... get pods`` command at the end of the last step.
- It should be something like ``mdb-instance-0-ss-0``.
-
- * Log in to the MongoDB pod using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 exec -it <name-of-mongodb-pod> bash
-
- * Open a mongo shell using the certificates
- already present at ``/etc/mongod/ssl/``:
-
- .. code:: bash
-
- $ mongo --host localhost --port 27017 --verbose --ssl \
- --sslCAFile /etc/mongod/ca/ca.pem \
- --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
-
- * Create a user ``adminUser`` on the ``admin`` database with the
- authorization to create other users. This will only work the first time you
- log in to the mongo shell. For further details, see `localhost
- exception `_
- in MongoDB.
-
- .. code:: bash
-
- PRIMARY> use admin
- PRIMARY> db.createUser( {
- user: "adminUser",
- pwd: "superstrongpassword",
- roles: [ { role: "userAdminAnyDatabase", db: "admin" },
- { role: "clusterManager", db: "admin"} ]
- } )
-
- * Exit and restart the mongo shell using the above command.
- Authenticate as the ``adminUser`` we created earlier:
-
- .. code:: bash
-
- PRIMARY> use admin
- PRIMARY> db.auth("adminUser", "superstrongpassword")
-
- ``db.auth()`` returns 0 when authentication is not successful,
- and 1 when successful.
-
- * We need to specify the user name *as seen in the certificate* issued to
- the BigchainDB instance in order to authenticate correctly. Use
- the following ``openssl`` command to extract the user name from the
- certificate:
-
- .. code:: bash
-
- $ openssl x509 -in <certificate-filename> \
- -inform PEM -subject -nameopt RFC2253
-
- You should see an output line that resembles:
-
- .. code:: bash
-
- subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
-
- The ``subject`` line states the complete user name we need to use for
- creating the user on the mongo shell as follows:
-
- .. code:: bash
-
- PRIMARY> db.getSiblingDB("$external").runCommand( {
- createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
- writeConcern: { w: 'majority' , wtimeout: 5000 },
- roles: [
- { role: 'clusterAdmin', db: 'admin' },
- { role: 'readWriteAnyDatabase', db: 'admin' }
- ]
- } )
-
- * You can similarly create a user for the MongoDB Monitoring Agent. For example:
-
- .. code:: bash
-
- PRIMARY> db.getSiblingDB("$external").runCommand( {
- createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-mon-ssl,OU=MongoDB-Mon-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
- writeConcern: { w: 'majority' , wtimeout: 5000 },
- roles: [
- { role: 'clusterMonitor', db: 'admin' }
- ]
- } )
-
-
-.. _create-kubernetes-storage-class-tmt:
-
-Step 15: Create Kubernetes Storage Classes for Tendermint
-----------------------------------------------------------
-
-Tendermint needs somewhere to store its data persistently; it uses
-LevelDB as the persistent storage layer.
-
-The Kubernetes template for configuration of Storage Class is located in the
-file ``tendermint/tendermint-sc.yaml``.
-
-Details about how to create an Azure Storage account and how Kubernetes Storage Classes work
-are already covered in this document: :ref:`create-kubernetes-storage-class-mdb-tmt`.
-
-Create the required storage classes using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml
-
-
-You can check if it worked using ``kubectl get storageclasses``.
-
-.. _create-kubernetes-persistent-volume-claim-tmt:
-
-Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
-------------------------------------------------------------------
-
-Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
-``tendermint-config-db-claim``.
-
-This configuration is located in the file ``tendermint/tendermint-pvc.yaml``.
-
-Details about Kubernetes Persistent Volumes, Persistent Volume Claims
-and how they work with Azure are already covered in this
-document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`.
-
-Create the required Persistent Volume Claims using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml
-
-You can check its status using:
-
-.. code::
-
- kubectl get pvc -w
-
-
-.. _create-kubernetes-stateful-set-tmt:
-
-Step 17: Start a Kubernetes StatefulSet for Tendermint
-------------------------------------------------------
-
- * This configuration is located in the file ``tendermint/tendermint-ss.yaml``.
-
- * Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
- the ConfigMap.
- For example, if the value set in the ``tm-instance-name``
- is ``tm-instance-0``, set the field to ``tm-instance-0``.
-
- * Set ``metadata.name``, ``spec.template.metadata.name`` and
- ``spec.template.metadata.labels.app`` to the value set in
- ``tm-instance-name`` in the ConfigMap, followed by
- ``-ss``.
- For example, if the value set in the
- ``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
- ``tm-instance-0-ss``.
-
- * Note how the Tendermint container uses the ``tendermint-db-claim`` and the
- ``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and
- ``/tendermint_node_data`` directories (mount paths).
-
- * As we gain more experience running Tendermint in testing and production, we
- will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
-
-We deploy Tendermint and NGINX together in a single pod: Tendermint is used as the consensus
-engine, while NGINX is used to serve the public key of the Tendermint instance.
-
- * For the NGINX container, set the port to be exposed from the container in the
- ``spec.containers[0].ports[0]`` section. Set it to the value specified
- for ``tm-pub-key-access`` in the ConfigMap.
-
- * For the Tendermint container, set the ports to be exposed from the container in the
- ``spec.containers[1].ports`` section. We currently expose two Tendermint ports.
- Set them to the values specified for ``tm-p2p-port`` and ``tm-rpc-port``
- in the ConfigMap, respectively.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``tm-pub-key-access``
- - ``tm-seeds``
- - ``tm-validator-power``
- - ``tm-validators``
- - ``tm-genesis-time``
- - ``tm-chain-id``
- - ``tm-abci-port``
- - ``bdb-instance-name``
-
- * Create the Tendermint StatefulSet using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml
-
- * It might take up to 10 minutes for the disks, specified in the Persistent
- Volume Claims above, to be created and attached to the pod.
- The UI might show that the pod has errored with the message
- "timeout expired waiting for volumes to attach/mount". Use the CLI below
- to check the status of the pod in this case, instead of the UI.
- This happens due to a bug in Azure ACS.
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get pods -w
-
-.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt:
-
-Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent
--------------------------------------------------------------------
-
- * This configuration is located in the file
- ``mongodb-monitoring-agent/mongo-mon-dep.yaml``.
-
- * Set ``metadata.name``, ``spec.template.metadata.name`` and
- ``spec.template.metadata.labels.app`` to the value set in
- ``mdb-mon-instance-name`` in the ConfigMap, followed by
- ``-dep``.
- For example, if the value set in the
- ``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
- value ``mdb-mon-instance-0-dep``.
-
- * The configuration uses the following values set in the Secret:
-
- - ``mdb-mon-certs``
- - ``ca-auth``
- - ``cloud-manager-credentials``
-
- * Start the Kubernetes Deployment using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
-
-
-.. _start-kubernetes-deployment-bdb-tmt:
-
-Step 19: Start a Kubernetes Deployment for BigchainDB
------------------------------------------------------
-
- * This configuration is located in the file
- ``bigchaindb/bigchaindb-dep-tm.yaml``.
-
- * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
- value set in ``bdb-instance-name`` in the ConfigMap, followed by
- ``-dep``.
- For example, if the value set in the
- ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
- value ``bdb-instance-0-dep``.
-
- * As we gain more experience running BigchainDB in testing and production,
- we will tweak the ``resources.limits`` values for CPU and memory, and as
- richer monitoring and probing becomes available in BigchainDB, we will
- tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 3 ports -
- ``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``. Set them to the
- values specified in the ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``mdb-instance-name``
- - ``mongodb-backend-port``
- - ``mongodb-replicaset-name``
- - ``bigchaindb-database-name``
- - ``bigchaindb-server-bind``
- - ``bigchaindb-ws-interface``
- - ``cluster-fqdn``
- - ``bigchaindb-ws-port``
- - ``cluster-frontend-port``
- - ``bigchaindb-wsserver-advertised-scheme``
- - ``bdb-public-key``
- - ``bigchaindb-backlog-reassign-delay``
- - ``bigchaindb-database-maxtries``
- - ``bigchaindb-database-connection-timeout``
- - ``bigchaindb-log-level``
- - ``bdb-user``
- - ``tm-instance-name``
- - ``tm-rpc-port``
-
- * The configuration uses the following values set in the Secret:
-
- - ``bdb-certs``
- - ``ca-auth``
-
- * Create the BigchainDB Deployment using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml
-
-
- * You can check its status using the command ``kubectl get deployments -w``
-
-
-.. _start-kubernetes-deployment-openresty-tmt:
-
-Step 20: Start a Kubernetes Deployment for OpenResty
-----------------------------------------------------
-
- * This configuration is located in the file
- ``nginx-openresty/nginx-openresty-dep.yaml``.
-
- * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
- value set in ``openresty-instance-name`` in the ConfigMap, followed by
- ``-dep``.
- For example, if the value set in the
- ``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
- the value ``openresty-instance-0-dep``.
-
- * Set the port to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose the port at
- which OpenResty is listening for requests, ``openresty-backend-port`` in
- the above ConfigMap.
-
- * The configuration uses the following values set in the Secret:
-
- - ``threescale-credentials``
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``cluster-dns-server-ip``
- - ``openresty-backend-port``
- - ``ngx-bdb-instance-name``
- - ``bigchaindb-api-port``
-
- * Create the OpenResty Deployment using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-dep.yaml
-
-
- * You can check its status using the command ``kubectl get deployments -w``
-
-
-Step 21: Configure the MongoDB Cloud Manager
---------------------------------------------
-
-Refer to the
-:doc:`documentation <../production-deployment-template/cloud-manager>`
-for details on how to configure the MongoDB Cloud Manager to enable
-monitoring and backup.
-
-
-.. _verify-and-test-bdb-tmt:
-
-Step 22: Verify the BigchainDB Node Setup
------------------------------------------
-
-Step 22.1: Testing Internally
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To test the setup of your BigchainDB node, you could use a Docker container
-that provides utilities like ``nslookup``, ``curl`` and ``dig``.
-For example, you could use a container based on our
-`bigchaindb/toolbox `_ image.
-(The corresponding
-`Dockerfile `_
-is in the ``bigchaindb/bigchaindb`` repository on GitHub.)
-You can use it as below to get started immediately:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 \
- run -it toolbox \
- --image bigchaindb/toolbox \
- --image-pull-policy=Always \
- --restart=Never --rm
-
-It will drop you into a shell prompt.
-
-To test the MongoDB instance:
-
-.. code:: bash
-
- $ nslookup mdb-instance-0
-
- $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
-
- $ curl -X GET http://mdb-instance-0:27017
-
-The ``nslookup`` command should output the configured IP address of the service
-(in the cluster).
-The ``dig`` command should return the configured port numbers.
-The ``curl`` command tests the availability of the service.
-
-To test the BigchainDB instance:
-
-.. code:: bash
-
- $ nslookup bdb-instance-0
-
- $ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
-
- $ curl -X GET http://bdb-instance-0:9984
-
- $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
-
-To test the Tendermint instance:
-
-.. code:: bash
-
- $ nslookup tm-instance-0
-
- $ dig +noall +answer _p2p._tcp.tm-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _rpc._tcp.tm-instance-0.default.svc.cluster.local SRV
-
- $ curl -X GET http://tm-instance-0:9986/pub_key.json
-
-
-To test the OpenResty instance:
-
-.. code:: bash
-
- $ nslookup openresty-instance-0
-
- $ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV
-
-To verify whether the OpenResty instance forwards requests properly, send a ``POST``
-transaction to OpenResty at port ``80`` and check the response from the backend
-BigchainDB instance.
-
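-For example, a minimal sketch, assuming a signed transaction prepared in a file
-``tx.json`` and valid 3scale credentials (the header values are illustrative):
-
-.. code:: bash
-
- $ curl -X POST http://openresty-instance-0:80/api/v1/transactions \
- -H "Content-Type: application/json" \
- -H "app_id: <your-app-id>" \
- -H "app_key: <your-app-key>" \
- -d @tx.json
-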
-
-To test the vanilla NGINX instance:
-
-.. code:: bash
-
- $ nslookup ngx-http-instance-0
-
- $ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
-
- $ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions
-
- $ curl -X GET http://ngx-http-instance-0:27017
-
-The above curl command should result in the response
-``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
-
-
-
-To test the NGINX instance with HTTPS and 3scale integration:
-
-.. code:: bash
-
- $ nslookup ngx-instance-0
-
- $ dig +noall +answer _public-secure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _public-insecure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
-
- $ wsc -er wss:///api/v1/streams/valid_transactions
-
- $ curl -X GET http://:27017
-
-The above curl command should result in the response
-``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
-
-
-Step 22.2: Testing Externally
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Check the MongoDB monitoring agent on the MongoDB Cloud Manager
-portal to verify it is working fine.
-
-If you are using NGINX with HTTP support, accessing the URL
-``http://:cluster-frontend-port``
-in your browser should result in a JSON response that shows the BigchainDB
-server version, among other things.
-If you are using NGINX with HTTPS support, use ``https`` instead of
-``http`` above.
-
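-For example (the hostname and port below are illustrative):
-
-.. code:: bash
-
- $ curl -X GET http://mynode.mycorp.com:80
-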
-Use the Python Driver to send some transactions to the BigchainDB node and
-verify that your node or cluster works as expected.
-
-Next, you can set up log analytics and monitoring, by following our templates:
-
-* :doc:`../production-deployment-template/log-analytics`.
diff --git a/docs/server/source/production-deployment-template-tendermint/workflow.rst b/docs/server/source/production-deployment-template-tendermint/workflow.rst
deleted file mode 100644
index 5e55ab4c..00000000
--- a/docs/server/source/production-deployment-template-tendermint/workflow.rst
+++ /dev/null
@@ -1,188 +0,0 @@
-Overview
-========
-
-This page summarizes the steps *we* go through
-to set up a production BigchainDB + Tendermint cluster.
-We are constantly improving them.
-You can modify them to suit your needs.
-
-.. Note::
- With our BigchainDB + Tendermint deployment model, we use a standalone
- MongoDB instance (without a Replica Set); BFT replication is handled by Tendermint.
-
-
-1. Set Up a Self-Signed Certificate Authority
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-We use SSL/TLS and self-signed certificates
-for MongoDB authentication (and message encryption).
-The certificates are signed by the organization managing the :ref:`bigchaindb-node`.
-If your organization already has a process
-for signing certificates
-(i.e. an internal self-signed certificate authority [CA]),
-then you can skip this step.
-Otherwise, your organization must
-:ref:`set up its own self-signed certificate authority `.
-
-
-.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt:
-
-2. Register a Domain and Get an SSL Certificate for It
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS,
-so the organization running the cluster
-should choose an FQDN for their API (e.g. api.organization-x.com),
-register the domain name,
-and buy an SSL/TLS certificate for the FQDN.
-
-.. _things-each-node-operator-must-do-tmt:
-
-Things Each Node Operator Must Do
----------------------------------
-
-Use a standard and unique naming convention for all instances.
-
-☐ Name of the MongoDB instance (``mdb-instance-*``)
-
-☐ Name of the BigchainDB instance (``bdb-instance-*``)
-
-☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
-
-☐ Name of the OpenResty instance (``openresty-instance-*``)
-
-☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)
-
-☐ Name of the Tendermint instance (``tendermint-instance-*``)
-
-Example
-^^^^^^^
-
-.. code:: text
-
- {
- "MongoDB": [
- "mdb-instance-1",
- "mdb-instance-2",
- "mdb-instance-3",
- "mdb-instance-4"
- ],
- "BigchainDB": [
- "bdb-instance-1",
- "bdb-instance-2",
- "bdb-instance-3",
- "bdb-instance-4"
- ],
- "NGINX": [
- "ngx-instance-1",
- "ngx-instance-2",
- "ngx-instance-3",
- "ngx-instance-4"
- ],
- "OpenResty": [
- "openresty-instance-1",
- "openresty-instance-2",
- "openresty-instance-3",
- "openresty-instance-4"
- ],
- "MongoDB_Monitoring_Agent": [
- "mdb-mon-instance-1",
- "mdb-mon-instance-2",
- "mdb-mon-instance-3",
- "mdb-mon-instance-4"
- ],
- "Tendermint": [
- "tendermint-instance-1",
- "tendermint-instance-2",
- "tendermint-instance-3",
- "tendermint-instance-4"
- ]
- }
-
-
-☐ Generate three keys and corresponding certificate signing requests (CSRs):
-
-#. Server Certificate for the MongoDB instance
-#. Client Certificate for BigchainDB Server to identify itself to MongoDB
-#. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB
-
-Use the self-signed CA to sign those three CSRs. You should end up with:
-
-* Three certificates (one for each CSR).
-
-For help, see the pages:
-
-* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>`
-* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>`
-
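-As a rough illustration only (the linked pages cover the exact procedure and
-the subject fields to use), a key and CSR pair could be generated with:
-
-.. code:: bash
-
- $ openssl req -newkey rsa:4096 -nodes \
- -keyout mdb-instance-0.key -out mdb-instance-0.csr \
- -subj "/C=DE/ST=Berlin/L=Berlin/O=MyCorp/OU=MongoDB-Instance/CN=mdb-instance-0"
-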
-☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``).
-Make sure you've registered the associated domain name (e.g. ``mycorp.com``),
-and have an SSL certificate for the FQDN.
-(You can get an SSL certificate from any SSL certificate provider.)
-
-☐ Ask the managing organization for the user name to use for authenticating to
-MongoDB.
-
-☐ If the cluster uses 3scale for API authentication, monitoring and billing,
-you must ask the managing organization for all relevant 3scale credentials -
-secret token, service ID, version header and API service token.
-
-☐ If the cluster uses MongoDB Cloud Manager for monitoring,
-you must ask the managing organization for the ``Project ID`` and the
-``Agent API Key``.
-(Each Cloud Manager "Project" has its own ``Project ID``, and a ``Project ID``
-can contain a number of ``Agent API Keys``. Both can be found under
-**Settings**. The ``Agent API Key`` was recently added to the Cloud Manager
-to allow easier periodic rotation of the key while keeping a constant
-``Project ID``.)
-
-
-.. _generate-the-blockchain-id-and-genesis-time:
-
-3. Generate the Blockchain ID and Genesis Time
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Tendermint nodes require two parameters that need to be common and shared between all the
-participants in the network.
-
-* ``chain_id`` : ID of the blockchain. This must be unique for every blockchain.
-
- * Example: ``test-chain-9gHylg``
-
-* ``genesis_time`` : Official time of blockchain start.
-
- * Example: ``0001-01-01T00:00:00Z``
-
-These parameters can be generated using the ``tendermint init`` command.
-You will need to `install Tendermint `_,
-run ``tendermint init``, and verify that a ``genesis.json`` file is created
-under the `Root Directory
-`_. You can use
-the ``genesis_time`` and ``chain_id`` from this ``genesis.json``.
-
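-For example, a minimal sketch, assuming Tendermint's default root directory
-``$HOME/.tendermint`` and that ``jq`` is installed:
-
-.. code:: bash
-
- $ tendermint init
-
- $ jq '{genesis_time, chain_id}' $HOME/.tendermint/genesis.json
-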
-Sample ``genesis.json``:
-
-.. code:: json
-
- {
- "genesis_time": "0001-01-01T00:00:00Z",
- "chain_id": "test-chain-9gHylg",
- "validators": [
- {
- "pub_key": {
- "type": "ed25519",
- "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468"
- },
- "power": 10,
- "name": ""
- }
- ],
- "app_hash": ""
- }
-
-
-
-☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.
-
-☐ You can now proceed to set up your :ref:`BigchainDB node
-`.
diff --git a/docs/server/source/production-deployment-template/architecture.rst b/docs/server/source/production-deployment-template/architecture.rst
index beb03d7e..7778e45f 100644
--- a/docs/server/source/production-deployment-template/architecture.rst
+++ b/docs/server/source/production-deployment-template/architecture.rst
@@ -1,19 +1,144 @@
-Architecture of an IPDB Node
-============================
+Architecture of a BigchainDB Node
+==================================
-An IPDB Production deployment is hosted on a Kubernetes cluster and includes:
+A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes:
-* NGINX, OpenResty, BigchainDB and MongoDB
+* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
`Kubernetes Services `_.
-* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent
+* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent.
`Kubernetes Deployments `_.
-* MongoDB `Kubernetes StatefulSet `_.
+* MongoDB and Tendermint `Kubernetes StatefulSet `_.
* Third party services like `3scale `_,
`MongoDB Cloud Manager `_ and the
`Azure Operations Management Suite
`_.
-.. image:: ../_static/arch.jpg
+
+.. _bigchaindb-node:
+
+BigchainDB Node
+---------------
+
+.. aafig::
+ :aspect: 60
+ :scale: 100
+ :background: #rgb
+ :proportional:
+
+ + +
+ +--------------------------------------------------------------------------------------------------------------------------------------+
+ | | | |
+ | | | |
+ | | | |
+ | | | |
+ | | | |
+ | | | |
+ | "BigchainDB API" | | "Tendermint P2P" |
+ | | | "Communication/" |
+ | | | "Public Key Exchange" |
+ | | | |
+ | | | |
+ | v v |
+ | |
+ | +------------------+ |
+ | |"NGINX Service" | |
+ | +-------+----------+ |
+ | | |
+ | v |
+ | |
+ | +------------------+ |
+ | | "NGINX" | |
+ | | "Deployment" | |
+ | | | |
+ | +-------+----------+ |
+ | | |
+ | | |
+ | | |
+ | v |
+ | |
+ | "443" +----------+ "46656/9986" |
+ | | "Rate" | |
+ | +---------------------------+"Limiting"+-----------------------+ |
+ | | | "Logic" | | |
+ | | +----+-----+ | |
+ | | | | |
+ | | | | |
+ | | | | |
+ | | | | |
+ | | | | |
+ | | "27017" | | |
+ | v | v |
+ | +-------------+ | +------------+ |
+ | |"HTTPS" | | +------------------> |"Tendermint"| |
+ | |"Termination"| | | "9986" |"Service" | "46656" |
+ | | | | | +-------+ | <----+ |
+ | +-----+-------+ | | | +------------+ | |
+ | | | | | | |
+ | | | | v v |
+ | | | | +------------+ +------------+ |
+ | | | | |"NGINX" | |"Tendermint"| |
+ | | | | |"Deployment"| |"Stateful" | |
+ | | | | |"Pub-Key-Ex"| |"Set" | |
+ | ^ | | +------------+ +------------+ |
+ | +-----+-------+ | | |
+ | "POST" |"Analyze" | "GET" | | |
+ | |"Request" | | | |
+ | +-----------+ +--------+ | | |
+ | | +-------------+ | | | |
+ | | | | | "Bi+directional, communication between" |
+ | | | | | "BigchainDB(APP) and Tendermint" |
+ | | | | | "BFT consensus Engine" |
+ | | | | | |
+ | v v | | |
+ | | | |
+ | +-------------+ +--------------+ +----+-------------------> +--------------+ |
+ | | "OpenResty" | | "BigchainDB" | | | "MongoDB" | |
+ | | "Service" | | "Service" | | | "Service" | |
+ | | | +----->| | | +-------> | | |
+ | +------+------+ | +------+-------+ | | +------+-------+ |
+ | | | | | | | |
+ | | | | | | | |
+ | v | v | | v |
+ | +-------------+ | +-------------+ | | +----------+ |
+ | | | | | | <------------+ | |"MongoDB" | |
+ | |"OpenResty" | | | "BigchainDB"| | |"Stateful"| |
+ | |"Deployment" | | | "Deployment"| | |"Set" | |
+ | | | | | | | +-----+----+ |
+ | | | | | +---------------------------+ | |
+ | | | | | | | |
+ | +-----+-------+ | +-------------+ | |
+ | | | | |
+ | | | | |
+ | v | | |
+ | +-----------+ | v |
+ | | "Auth" | | +------------+ |
+ | | "Logic" |----------+ |"MongoDB" | |
+ | | | |"Monitoring"| |
+ | | | |"Agent" | |
+ | +---+-------+ +-----+------+ |
+ | | | |
+ | | | |
+ | | | |
+ | | | |
+ | | | |
+ | | | |
+ +---------------+---------------------------------------------------------------------------------------+------------------------------+
+ | |
+ | |
+ | |
+ v v
+ +------------------------------------+ +------------------------------------+
+ | | | |
+ | | | |
+ | | | |
+ | "3Scale" | | "MongoDB Cloud" |
+ | | | |
+ | | | |
+ | | | |
+ +------------------------------------+ +------------------------------------+
+
+
+
.. note::
The arrows in the diagram represent the client-server communication. For
@@ -22,8 +147,8 @@ An IPDB Production deployment is hosted on a Kubernetes cluster and includes:
fully duplex.
-NGINX
------
+NGINX: Entrypoint and Gateway
+-----------------------------
We use an NGINX as HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:
@@ -51,8 +176,8 @@ entrypoint for:
public api port), the connection is proxied to the MongoDB Service.
-OpenResty
----------
+OpenResty: API Management, Authentication and Authorization
+-----------------------------------------------------------
We use `OpenResty `_ to perform authorization checks
with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.
@@ -63,13 +188,23 @@ on the LuaJIT compiler to execute the functions to authenticate the ``app_id``
and ``app_key`` with the 3scale backend.
-MongoDB
--------
+MongoDB: Standalone
+-------------------
We use MongoDB as the backend database for BigchainDB.
-In a multi-node deployment, MongoDB members communicate with each other via the
-public port exposed by the NGINX Service.
We achieve security by avoiding DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.
+
+Tendermint: BFT consensus engine
+--------------------------------
+
+We use Tendermint as the backend consensus engine for BFT replication of BigchainDB.
+In a multi-node deployment, Tendermint nodes/peers communicate with each other via
+the public ports exposed by the NGINX gateway.
+
+We use port **9986** (configurable) to allow Tendermint nodes to access the public keys
+of the peers and port **46656** (configurable) for the rest of the communications between
+the peers.
+
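+For example, a peer could fetch another node's public key with (the hostname
+is illustrative):
+
+.. code:: bash
+
+ $ curl http://tm-instance-0:9986/pub_key.json
+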
diff --git a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst b/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst
similarity index 99%
rename from docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst
rename to docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst
index 4781fccc..ed6c5433 100644
--- a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst
@@ -218,7 +218,7 @@ the :doc:`deployment steps for each node ` N number of times
the number of participants in the network.
In our Kubernetes deployment template for a single BigchainDB node, we covered the basic configurations
-settings :ref:`here `.
+settings :ref:`here `.
Since we index the ConfigMap and Secret Keys for the single-site deployment, we need to update
all the Kubernetes components to reflect the corresponding changes, i.e. for each Kubernetes Service,
diff --git a/docs/server/source/production-deployment-template/cloud-manager.rst b/docs/server/source/production-deployment-template/cloud-manager.rst
index c438afaf..fb0512df 100644
--- a/docs/server/source/production-deployment-template/cloud-manager.rst
+++ b/docs/server/source/production-deployment-template/cloud-manager.rst
@@ -1,10 +1,10 @@
-.. _configure-mongodb-cloud-manager-for-monitoring-and-backup:
+.. _configure-mongodb-cloud-manager-for-monitoring:
-Configure MongoDB Cloud Manager for Monitoring and Backup
-=========================================================
+Configure MongoDB Cloud Manager for Monitoring
+==============================================
This document details the steps required to configure MongoDB Cloud Manager to
-enable monitoring and backup of data in a MongoDB Replica Set.
+enable monitoring of data in a MongoDB Replica Set.
Configure MongoDB Cloud Manager for Monitoring
@@ -60,39 +60,3 @@ Configure MongoDB Cloud Manager for Monitoring
* Verify on the UI that data is being sent by the monitoring agent to the
Cloud Manager. It may take up to 5 minutes for data to appear on the UI.
-
-
-Configure MongoDB Cloud Manager for Backup
-------------------------------------------
-
- * Once the Backup Agent is up and running, open
- `MongoDB Cloud Manager `_.
-
- * Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
- Manager.
-
- * Select the group from the dropdown box on the page.
-
- * Click ``Backup`` tab.
-
- * Hover over the ``Status`` column of your backup and click ``Start``
- to start the backup.
-
- * Select the replica set on the side pane.
-
- * If you have authentication enabled, select the authentication mechanism as
- per your deployment. The default BigchainDB production deployment currently
- supports ``X.509 Client Certificate`` as the authentication mechanism.
-
- * If you have TLS enabled, select the checkbox ``Replica set allows TLS/SSL
- connections``. This should be selected by default in case you selected
- ``X.509 Client Certificate`` as the auth mechanism above.
-
- * Choose the ``WiredTiger`` storage engine.
-
- * Verify the details of your MongoDB instance and click on ``Start``.
-
- * It may take up to 5 minutes for the backup process to start.
- During this process, the UI will show the status of the backup process.
-
- * Verify that data is being backed up on the UI.
diff --git a/docs/server/source/production-deployment-template/index.rst b/docs/server/source/production-deployment-template/index.rst
index aa966677..64a834db 100644
--- a/docs/server/source/production-deployment-template/index.rst
+++ b/docs/server/source/production-deployment-template/index.rst
@@ -1,10 +1,10 @@
Production Deployment Template
==============================
-This section outlines how *we* deploy production BigchainDB nodes and clusters
-on Microsoft Azure
-using Kubernetes.
-We improve it constantly.
+This section outlines how *we* deploy production BigchainDB clusters,
+integrated with Tendermint (the backend for BFT consensus),
+on Microsoft Azure using Kubernetes.
+We improve it constantly.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.
@@ -25,8 +25,7 @@ Feel free change things to suit your needs or preferences.
cloud-manager
easy-rsa
upgrade-on-kubernetes
- add-node-on-kubernetes
- restore-from-mongodb-cloud-manager
+ bigchaindb-network-on-kubernetes
tectonic-azure
troubleshoot
architecture
diff --git a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst
index 5140e8d6..7ee9d01a 100644
--- a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst
+++ b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst
@@ -11,7 +11,7 @@ and ``secret.yaml`` (a set of Secrets).
They are stored in the Kubernetes cluster's key-value store (etcd).
Make sure you did all the things listed in the section titled
-:ref:`things-each-node-operator-must-do`
+:ref:`things-each-node-operator-must-do-tmt`
(including generation of all the SSL certificates needed
for MongoDB auth).
@@ -35,7 +35,7 @@ vars.cluster-fqdn
~~~~~~~~~~~~~~~~~
The ``cluster-fqdn`` field specifies the domain you would have
-:ref:`registered before `.
+:ref:`registered before `.
vars.cluster-frontend-port
@@ -71,15 +71,8 @@ of naming instances, so the instances in your BigchainDB node
should conform to that standard (i.e. you can't just make up some names).
There are some things worth noting about the ``mdb-instance-name``:
-* MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica
- set to resolve the hostname provided to the ``rs.initiate()`` command.
- It needs to ensure that the replica set is being initialized in the same
- instance where the MongoDB instance is running.
-* We use the value in the ``mdb-instance-name`` field to achieve this.
* This field will be the DNS name of your MongoDB instance, and Kubernetes
maps this name to its internal DNS.
-* This field will also be used by other MongoDB instances when forming a
- MongoDB replica set.
* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
documentation. Your BigchainDB cluster may use a different naming convention.
@@ -145,27 +138,6 @@ There's another :doc:`page with a complete listing of all the BigchainDB Server
configuration settings <../server-reference/configuration>`.
-bdb-config.bdb-keyring
-~~~~~~~~~~~~~~~~~~~~~~~
-
-This lists the BigchainDB public keys
-of all *other* nodes in your BigchainDB cluster
-(not including the public key of your BigchainDB node). Cases:
-
-* If you're deploying the first node in the cluster,
- the value should be ``""`` (an empty string).
-* If you're deploying the second node in the cluster,
- the value should be the BigchainDB public key of the first/original
- node in the cluster.
- For example,
- ``"EPQk5i5yYpoUwGVM8VKZRjM8CYxB6j8Lu8i8SG7kGGce"``
-* If there are two or more other nodes already in the cluster,
- the value should be a colon-separated list
- of the BigchainDB public keys
- of those other nodes.
- For example,
- ``"DPjpKbmbPYPKVAuf6VSkqGCf5jzrEh69Ldef6TrLwsEQ:EPQk5i5yYpoUwGVM8VKZRjM8CYxB6j8Lu8i8SG7kGGce"``
-
bdb-config.bdb-user
~~~~~~~~~~~~~~~~~~~
@@ -176,16 +148,16 @@ We need to specify the user name *as seen in the certificate* issued to
the BigchainDB instance in order to authenticate correctly. Use
the following ``openssl`` command to extract the user name from the
certificate:
-
+
.. code:: bash
$ openssl x509 -in \
-inform PEM -subject -nameopt RFC2253
-
+
You should see an output line that resembles:
-
+
.. code:: bash
-
+
subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
The ``subject`` line states the complete user name we need to use for this
@@ -196,6 +168,137 @@ field (``bdb-config.bdb-user``), i.e.
emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
+tendermint-config.tm-instance-name
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Your BigchainDB cluster organization should have a standard way
+of naming instances, so the instances in your BigchainDB node
+should conform to that standard. There are some things worth noting
+about the ``tm-instance-name``:
+
+* This field will be the DNS name of your Tendermint instance, and Kubernetes
+ maps this name to its internal DNS; in a network/multi-node deployment, all
+ peer-to-peer communication depends on it.
+* This parameter is also used to access the public key of a particular node.
+* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our
+ documentation. Your BigchainDB cluster may use a different naming convention.
+
+
+tendermint-config.ngx-tm-instance-name
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+NGINX needs the FQDN of the servers inside the cluster to be able to forward
+traffic.
+``ngx-tm-instance-name`` is the FQDN of the Tendermint
+instance in this Kubernetes cluster.
+In Kubernetes, this is usually the name of the service specified in the
+corresponding ``tendermint-config.*-instance-name`` followed by
+``.svc.cluster.local``. For example, if you run Tendermint in
+the default Kubernetes namespace, this will be
+``.default.svc.cluster.local``.
+
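+For example, if ``tm-instance-name`` is ``tm-instance-0`` and you use the
+default namespace, the FQDN would be:
+
+.. code::
+
+ tm-instance-0.default.svc.cluster.local
+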
+
+tendermint-config.tm-seeds
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-seeds`` is the initial set of peers to connect to. It is a comma-separated
+list of all the peers in the cluster.
+
+If you are deploying a stand-alone BigchainDB node, the value should be the same
+as ````. If you are deploying a network, this parameter will look
+like this:
+
+.. code::
+
+ ,,,
+
+
+tendermint-config.tm-validators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-validators`` is the initial set of validators in the network. It is a
+comma-separated list of all the participant validator nodes.
+
+If you are deploying a stand-alone BigchainDB node, the value should be the same
+as ````. If you are deploying a network, this parameter will look like
+this:
+
+.. code::
+
+ ,,,
+
+
+tendermint-config.tm-validator-power
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-validator-power`` represents the voting power of each validator. It is a
+comma-separated list of all the participants in the network.
+
+**Note**: The order of the validator power list should be the same as the ``tm-validators`` list.
+
+.. code::
+
+ tm-validators: ,,,
+
+For the above list of validators the ``tm-validator-power`` list should look like this:
+
+.. code::
+
+ tm-validator-power: ,,,
+
+
+tendermint-config.tm-genesis-time
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-genesis-time`` represents the official time of blockchain start. Details
+on how to generate this parameter are covered :ref:`here `.
+
+
+tendermint-config.tm-chain-id
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-chain-id`` represents the ID of the blockchain. This must be unique for
+every blockchain. Details on how to generate this parameter are covered
+:ref:`here `.
+
+
+tendermint-config.tm-abci-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-abci-port`` has a default value of ``46658``, which is used by Tendermint Core
+for ABCI (Application BlockChain Interface) traffic. BigchainDB nodes use this port
+internally to communicate with Tendermint Core.
+
+
+tendermint-config.tm-p2p-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-p2p-port`` has a default value of ``46656``, which is used by Tendermint Core
+for peer-to-peer communication.
+
+For a multi-node/zone deployment, this port needs to be available publicly for P2P
+communication between Tendermint nodes.
+
+
+tendermint-config.tm-rpc-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-rpc-port`` has a default value of ``46657``, which is used by Tendermint Core
+for RPC traffic. BigchainDB nodes use this port as the RPC listen address.
+
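+For example, from inside the cluster, the RPC endpoint could be probed with
+(the hostname is illustrative):
+
+.. code:: bash
+
+ $ curl http://tm-instance-0:46657/status
+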
+
+tendermint-config.tm-pub-key-access
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-pub-key-access`` has a default value of ``9986``, which is used to discover
+the public key of a Tendermint node. Each Tendermint StatefulSet (a pod running
+Tendermint + NGINX) hosts its public key.
+
+.. code::
+
+ http://tendermint-instance-1:9986/pub_key.json
+
+
Edit secret.yaml
----------------
diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
index d45df83a..2989492f 100644
--- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
@@ -1,20 +1,17 @@
-.. _kubernetes-template-deploy-a-single-node-bigchaindb:
+.. _kubernetes-template-deploy-a-single-bigchaindb-node:
Kubernetes Template: Deploy a Single BigchainDB Node
====================================================
-This page describes how to deploy the first BigchainDB node
-in a BigchainDB cluster, or a stand-alone BigchainDB node,
+This page describes how to deploy a stand-alone BigchainDB + Tendermint node,
+or a static network of BigchainDB + Tendermint nodes,
using `Kubernetes `_.
It assumes you already have a running Kubernetes cluster.
-If you want to add a new BigchainDB node to an existing BigchainDB cluster,
-refer to :doc:`the page about that `.
-
Below, we refer to many files by their directory and filename,
-such as ``configuration/config-map.yaml``. Those files are files in the
-`bigchaindb/bigchaindb repository on GitHub
-`_ in the ``k8s/`` directory.
+such as ``configuration/config-map-tm.yaml``. Those files are files in the
+`bigchaindb/bigchaindb repository on GitHub `_
+in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.
@@ -32,7 +29,8 @@ The default location of the kubectl configuration file is ``~/.kube/config``.
If you don't have that file, then you need to get it.
**Azure.** If you deployed your Kubernetes cluster on Azure
-using the Azure CLI 2.0 (as per :doc:`our template `),
+using the Azure CLI 2.0 (as per :doc:`our template
+<../production-deployment-template/template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:
.. code:: bash
@@ -109,7 +107,8 @@ Step 3: Configure Your BigchainDB Node
See the page titled :ref:`how-to-configure-a-bigchaindb-node`.
-.. _start-the-nginx-service:
+
+.. _start-the-nginx-service-tmt:
Step 4: Start the NGINX Service
-------------------------------
@@ -126,7 +125,7 @@ Step 4: Start the NGINX Service
Step 4.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^
- * This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``.
+ * This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``ngx-instance-name`` in the ConfigMap above.
@@ -140,11 +139,21 @@ Step 4.1: Vanilla NGINX
``cluster-frontend-port`` in the ConfigMap above. This is the
``public-cluster-port`` in the file which is the ingress in to the cluster.
+ * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
+ ``tm-pub-access-port`` in the ConfigMap above. This is the
+ ``tm-pub-key-access`` in the file which specifies where the public key for
+ the Tendermint instance is available.
+
+ * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+ ``tm-p2p-port`` in the ConfigMap above. This is the
+ ``tm-p2p-port`` in the file which is used for P2P communication for Tendermint
+ nodes.
+
* Start the Kubernetes Service:
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml
Step 4.2: NGINX with HTTPS
@@ -156,7 +165,7 @@ Step 4.2: NGINX with HTTPS
* You should have already created the necessary Kubernetes Secrets in the previous
step (i.e. ``https-certs``).
- * This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``.
+ * This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``ngx-instance-name`` in the ConfigMap above.
@@ -175,14 +184,25 @@ Step 4.2: NGINX with HTTPS
``public-mdb-port`` in the file which specifies where MongoDB is
available.
+ * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+ ``tm-pub-access-port`` in the ConfigMap above. This is the
+ ``tm-pub-key-access`` in the file which specifies where the public key for
+ the Tendermint instance is available.
+
+ * Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the
+ ``tm-p2p-port`` in the ConfigMap above. This is the
+ ``tm-p2p-port`` in the file which is used for P2P communication between Tendermint
+ nodes.
+
+
* Start the Kubernetes Service:
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml
-.. _assign-dns-name-to-the-nginx-public-ip:
+.. _assign-dns-name-to-nginx-public-ip-tmt:
Step 5: Assign DNS Name to the NGINX Public IP
----------------------------------------------
@@ -221,16 +241,16 @@ changes to be applied.
To verify the DNS setting is operational, you can run ``nslookup `` from your local Linux shell.
-This will ensure that when you scale the replica set later, other MongoDB
-members in the replica set can reach this instance.
+This will ensure that when you scale to different geographical zones, other Tendermint
+nodes in the network can reach this instance.
-.. _start-the-mongodb-kubernetes-service:
+.. _start-the-mongodb-kubernetes-service-tmt:
Step 6: Start the MongoDB Kubernetes Service
--------------------------------------------
- * This configuration is located in the file ``mongodb/mongo-svc.yaml``.
+ * This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``mdb-instance-name`` in the ConfigMap above.
@@ -249,15 +269,15 @@ Step 6: Start the MongoDB Kubernetes Service
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml
-.. _start-the-bigchaindb-kubernetes-service:
+.. _start-the-bigchaindb-kubernetes-service-tmt:
Step 7: Start the BigchainDB Kubernetes Service
-----------------------------------------------
- * This configuration is located in the file ``bigchaindb/bigchaindb-svc.yaml``.
+ * This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``bdb-instance-name`` in the ConfigMap above.
@@ -277,19 +297,24 @@ Step 7: Start the BigchainDB Kubernetes Service
This is the ``bdb-ws-port`` in the file which specifies where BigchainDB
listens for Websocket connections.
+ * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+ ``tm-abci-port`` in the ConfigMap above.
+ This is the ``tm-abci-port`` in the file which specifies the port used
+ for ABCI communication.
+
* Start the Kubernetes Service:
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml
-.. _start-the-openresty-kubernetes-service:
+.. _start-the-openresty-kubernetes-service-tmt:
Step 8: Start the OpenResty Kubernetes Service
----------------------------------------------
- * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc.yaml``.
+ * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``openresty-instance-name`` in the ConfigMap above.
@@ -303,26 +328,64 @@ Step 8: Start the OpenResty Kubernetes Service
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml
-.. _start-the-nginx-kubernetes-deployment:
+.. _start-the-tendermint-kubernetes-service-tmt:
-Step 9: Start the NGINX Kubernetes Deployment
----------------------------------------------
+Step 9: Start the Tendermint Kubernetes Service
+-----------------------------------------------
- * NGINX is used as a proxy to OpenResty, BigchainDB and MongoDB instances in
+ * This configuration is located in the file ``tendermint/tendermint-svc.yaml``.
+
+ * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
+ set in ``tm-instance-name`` in the ConfigMap above.
+
+ * Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in
+ the ConfigMap followed by ``-ss``. For example, if the value set in the
+ ``tm-instance-name`` is ``tm-instance-0``, set the
+ ``spec.selector.app`` to ``tm-instance-0-ss``.
+
+ * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
+ ``tm-p2p-port`` in the ConfigMap above.
+ This is the ``p2p`` in the file which specifies where Tendermint peers
+ communicate.
+
+ * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
+ ``tm-rpc-port`` in the ConfigMap above.
+ This is the ``rpc`` in the file which specifies the port used by Tendermint core
+ for RPC traffic.
+
+ * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+ ``tm-pub-key-access`` in the ConfigMap above.
+ This is the ``pub-key-access`` in the file which specifies the port to host/distribute
+ the public key for the Tendermint node.
+
+ * Start the Kubernetes Service:
+
+ .. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml
+
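+ * You can check that the Service was created using, for example:
+
+ .. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 get services
+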
+
+.. _start-the-nginx-deployment-tmt:
+
+Step 10: Start the NGINX Kubernetes Deployment
+----------------------------------------------
+
+ * NGINX is used as a proxy to OpenResty, BigchainDB, Tendermint and MongoDB instances in
the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port``
- to the corresponding OpenResty or BigchainDB backend, and TCP connections
- on ``mongodb-frontend-port`` to the MongoDB backend.
+ to the corresponding OpenResty or BigchainDB backend, and TCP connections
+ on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
+ to MongoDB and Tendermint, respectively.
* As in step 4, you have the option to use vanilla NGINX without HTTPS or
NGINX with HTTPS support.
-Step 9.1: Vanilla NGINX
-^^^^^^^^^^^^^^^^^^^^^^^
+Step 10.1: Vanilla NGINX
+^^^^^^^^^^^^^^^^^^^^^^^^
- * This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``.
+ * This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``.
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
@@ -330,9 +393,10 @@ Step 9.1: Vanilla NGINX
``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.
* Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 3 ports -
- ``mongodb-frontend-port``, ``cluster-frontend-port`` and
- ``cluster-health-check-port``. Set them to the values specified in the
+ ``spec.containers[0].ports`` section. We currently expose 5 ports -
+ ``mongodb-frontend-port``, ``cluster-frontend-port``,
+ ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
+ Set them to the values specified in the
ConfigMap.
* The configuration uses the following values set in the ConfigMap:
@@ -346,19 +410,22 @@ Step 9.1: Vanilla NGINX
- ``ngx-bdb-instance-name``
- ``bigchaindb-api-port``
- ``bigchaindb-ws-port``
+ - ``ngx-tm-instance-name``
+ - ``tm-pub-key-access``
+ - ``tm-p2p-port``
* Start the Kubernetes Deployment:
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml
-Step 9.2: NGINX with HTTPS
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+Step 10.2: NGINX with HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file
- ``nginx-https/nginx-https-dep.yaml``.
+ ``nginx-https/nginx-https-dep-tm.yaml``.
* Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
@@ -366,9 +433,10 @@ Step 9.2: NGINX with HTTPS
``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.
* Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 3 ports -
- ``mongodb-frontend-port``, ``cluster-frontend-port`` and
- ``cluster-health-check-port``. Set them to the values specified in the
+ ``spec.containers[0].ports`` section. We currently expose 5 ports -
+ ``mongodb-frontend-port``, ``cluster-frontend-port``,
+ ``cluster-health-check-port``, ``tm-pub-key-access`` and
+ ``tm-p2p-port``. Set them to the values specified in the
ConfigMap.
* The configuration uses the following values set in the ConfigMap:
@@ -385,6 +453,9 @@ Step 9.2: NGINX with HTTPS
- ``ngx-bdb-instance-name``
- ``bigchaindb-api-port``
- ``bigchaindb-ws-port``
+ - ``ngx-tm-instance-name``
+ - ``tm-pub-key-access``
+ - ``tm-p2p-port``
* The configuration uses the following values set in the Secret:
@@ -394,12 +465,12 @@ Step 9.2: NGINX with HTTPS
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml
-.. _create-kubernetes-storage-classes-for-mongodb:
+.. _create-kubernetes-storage-class-mdb-tmt:
-Step 10: Create Kubernetes Storage Classes for MongoDB
+Step 11: Create Kubernetes Storage Classes for MongoDB
------------------------------------------------------
MongoDB needs somewhere to store its data persistently,
@@ -428,7 +499,7 @@ The first thing to do is create the Kubernetes storage classes.
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
-(as per :doc:`our template `),
+(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
then the `az acs create` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
@@ -440,7 +511,7 @@ in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
You can create a `storage account `_
-for Premium storage and associate it with your Azure resource group.
+for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create `_.
@@ -456,7 +527,7 @@ specify the location you are using in Azure.
If you want to use a custom storage account with the Storage Class, you
can also update `parameters.storageAccount` and provide the Azure storage
-account name.
+account name.
Create the required storage classes using:
@@ -468,10 +539,10 @@ Create the required storage classes using:
You can check if it worked using ``kubectl get storageclasses``.
-.. _create-kubernetes-persistent-volume-claims:
+.. _create-kubernetes-persistent-volume-claim-mdb-tmt:
-Step 11: Create Kubernetes Persistent Volume Claims
----------------------------------------------------
+Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
+---------------------------------------------------------------
Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
``mongo-configdb-claim``.
@@ -517,18 +588,18 @@ but it should become "Bound" fairly quickly.
* Run the following command to update a PV's reclaim policy to
.. Code:: bash
-
+
$ kubectl --context k8s-bdb-test-cluster-0 patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
For notes on recreating a private volume from a released Azure disk resource, consult
- :ref:`cluster-troubleshooting`.
+ :doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
-.. _start-a-kubernetes-statefulset-for-mongodb:
+.. _start-kubernetes-stateful-set-mongodb-tmt:
-Step 12: Start a Kubernetes StatefulSet for MongoDB
+Step 13: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
- * This configuration is located in the file ``mongodb/mongo-ss.yaml``.
+ * This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``.
* Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
the ConfigMap.
@@ -570,9 +641,8 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
* The configuration uses the following values set in the ConfigMap:
- ``mdb-instance-name``
- - ``mongodb-replicaset-name``
- ``mongodb-backend-port``
-
+
* The configuration uses the following values set in the Secret:
- ``mdb-certs``
@@ -595,7 +665,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml
* It might take up to 10 minutes for the disks, specified in the Persistent
Volume Claims above, to be created and attached to the pod.
@@ -608,9 +678,10 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
$ kubectl --context k8s-bdb-test-cluster-0 get pods -w
-.. _configure-users-and-access-control-for-mongodb:
-Step 13: Configure Users and Access Control for MongoDB
+.. _configure-users-and-access-control-mongodb-tmt:
+
+Step 14: Configure Users and Access Control for MongoDB
-------------------------------------------------------
* In this step, you will create a user on MongoDB with authorization
@@ -638,28 +709,6 @@ Step 13: Configure Users and Access Control for MongoDB
--sslCAFile /etc/mongod/ca/ca.pem \
--sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
- * Initialize the replica set using:
-
- .. code:: bash
-
- > rs.initiate( {
- _id : "bigchain-rs",
- members: [ {
- _id : 0,
- host :":27017"
- } ]
- } )
-
- The ``hostname`` in this case will be the value set in
- ``mdb-instance-name`` in the ConfigMap.
- For example, if the value set in the ``mdb-instance-name`` is
- ``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``.
-
- * The instance should be voted as the ``PRIMARY`` in the replica set (since
- this is the only instance in the replica set till now).
- This can be observed from the mongo shell prompt,
- which will read ``PRIMARY>``.
-
* Create a user ``adminUser`` on the ``admin`` database with the
authorization to create other users. This will only work the first time you
log in to the mongo shell. For further details, see `localhost
@@ -717,8 +766,7 @@ Step 13: Configure Users and Access Control for MongoDB
]
} )
- * You can similarly create users for MongoDB Monitoring Agent and MongoDB
- Backup Agent. For example:
+ * You can similarly create a user for MongoDB Monitoring Agent. For example:
.. code:: bash
@@ -730,18 +778,127 @@ Step 13: Configure Users and Access Control for MongoDB
]
} )
- PRIMARY> db.getSiblingDB("$external").runCommand( {
- createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-bak-ssl,OU=MongoDB-Bak-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
- writeConcern: { w: 'majority' , wtimeout: 5000 },
- roles: [
- { role: 'backup', db: 'admin' }
- ]
- } )
+
+.. _create-kubernetes-storage-class-tmt:
+
+Step 15: Create Kubernetes Storage Classes for Tendermint
+----------------------------------------------------------
+
+Tendermint needs somewhere to store its data persistently; it uses
+LevelDB as the persistent storage layer.
+
+The Kubernetes template for the Storage Class configuration is located in the
+file ``tendermint/tendermint-sc.yaml``.
+
+Details about how to create an Azure Storage account and how Kubernetes Storage
+Classes work are already covered in this document: :ref:`create-kubernetes-storage-class-mdb-tmt`.
+
+Create the required storage classes using:
+
+.. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml
-.. _start-a-kubernetes-deployment-for-mongodb-monitoring-agent:
+You can check if it worked using ``kubectl get storageclasses``.
-Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
+.. _create-kubernetes-persistent-volume-claim-tmt:
+
+Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
+------------------------------------------------------------------
+
+Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
+``tendermint-config-db-claim``.
+
+This configuration is located in the file ``tendermint/tendermint-pvc.yaml``.
+
+Details about Kubernetes Persistent Volumes, Persistent Volume Claims
+and how they work with Azure are already covered in this
+document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`.
+
+Create the required Persistent Volume Claims using:
+
+.. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml
+
+You can check its status using:
+
+.. code:: bash
+
+ $ kubectl get pvc -w
+
+
+.. _create-kubernetes-stateful-set-tmt:
+
+Step 17: Start a Kubernetes StatefulSet for Tendermint
+------------------------------------------------------
+
+ * This configuration is located in the file ``tendermint/tendermint-ss.yaml``.
+
+ * Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
+ the ConfigMap.
+ For example, if the value set in the ``tm-instance-name``
+ is ``tm-instance-0``, set the field to ``tm-instance-0``.
+
+ * Set ``metadata.name``, ``spec.template.metadata.name`` and
+ ``spec.template.metadata.labels.app`` to the value set in
+ ``tm-instance-name`` in the ConfigMap, followed by
+ ``-ss``.
+ For example, if the value set in the
+ ``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
+ ``tm-instance-0-ss``.
+
+ * Note how the Tendermint container uses the ``tendermint-db-claim`` and the
+ ``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and
+ ``/tendermint_node_data`` directories (mount paths).
+
+ * As we gain more experience running Tendermint in testing and production, we
+ will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
+
+We deploy Tendermint as a single pod (Tendermint + NGINX): Tendermint is used as
+the consensus engine, while NGINX serves the public key of the Tendermint instance.
+
+ * For the NGINX container, set the port to be exposed from the container in the
+ ``spec.containers[0].ports[0]`` section. Set it to the value specified
+ for ``tm-pub-key-access`` in the ConfigMap.
+
+ * For the Tendermint container, set the ports to be exposed from the container in the
+ ``spec.containers[1].ports`` section. We currently expose two Tendermint ports.
+ Set them to the values specified for ``tm-p2p-port`` and ``tm-rpc-port``
+ in the ConfigMap, respectively.
+
+ * The configuration uses the following values set in the ConfigMap:
+
+ - ``tm-pub-key-access``
+ - ``tm-seeds``
+ - ``tm-validator-power``
+ - ``tm-validators``
+ - ``tm-genesis-time``
+ - ``tm-chain-id``
+ - ``tm-abci-port``
+ - ``bdb-instance-name``
+
+ * Create the Tendermint StatefulSet using:
+
+ .. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml
+
+ * It might take up to 10 minutes for the disks, specified in the Persistent
+ Volume Claims above, to be created and attached to the pod.
+ The UI might show that the pod has errored with the message
+ "timeout expired waiting for volumes to attach/mount". Use the CLI below
+ to check the status of the pod in this case, instead of the UI.
+ This happens due to a bug in Azure ACS.
+
+ .. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 get pods -w
+
+.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt:
+
+Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------
* This configuration is located in the file
@@ -768,40 +925,13 @@ Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
-.. _start-a-kubernetes-deployment-for-mongodb-backup-agent:
+.. _start-kubernetes-deployment-bdb-tmt:
-Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
----------------------------------------------------------------
-
- * This configuration is located in the file
- ``mongodb-backup-agent/mongo-backup-dep.yaml``.
-
- * Set ``metadata.name``, ``spec.template.metadata.name`` and
- ``spec.template.metadata.labels.app`` to the value set in
- ``mdb-bak-instance-name`` in the ConfigMap, followed by
- ``-dep``.
- For example, if the value set in the
- ``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the
- value ``mdb-bak-instance-0-dep``.
-
- * The configuration uses the following values set in the Secret:
-
- - ``mdb-bak-certs``
- - ``ca-auth``
- - ``cloud-manager-credentials``
-
- * Start the Kubernetes Deployment using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml
-
-
-Step 16: Start a Kubernetes Deployment for BigchainDB
+Step 19: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------
* This configuration is located in the file
- ``bigchaindb/bigchaindb-dep.yaml``.
+ ``bigchaindb/bigchaindb-dep-tm.yaml``.
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
value set in ``bdb-instance-name`` in the ConfigMap, followed by
@@ -810,21 +940,14 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
value ``bdb-instance-0-dep``.
- * Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
- (In the future, we'd like to pull the BigchainDB private key from
- the Secret named ``bdb-private-key``,
- but a Secret can only be mounted as a file,
- so BigchainDB Server would have to be modified to look for it
- in a file.)
-
* As we gain more experience running BigchainDB in testing and production,
we will tweak the ``resources.limits`` values for CPU and memory, and as
richer monitoring and probing becomes available in BigchainDB, we will
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
* Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 2 ports -
- ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
+ ``spec.containers[0].ports`` section. We currently expose 3 ports -
+ ``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``. Set them to the
values specified in the ConfigMap.
* The configuration uses the following values set in the ConfigMap:
@@ -845,6 +968,8 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
- ``bigchaindb-database-connection-timeout``
- ``bigchaindb-log-level``
- ``bdb-user``
+ - ``tm-instance-name``
+ - ``tm-rpc-port``
* The configuration uses the following values set in the Secret:
@@ -855,15 +980,15 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
.. code:: bash
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml
+ $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml
* You can check its status using the command ``kubectl get deployments -w``
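+
+ * To confirm that BigchainDB started and connected to MongoDB and
+ Tendermint, you could also tail the logs of the BigchainDB pod. This is
+ a sketch; ``<bdb-pod-name>`` stands in for the actual pod name reported
+ by ``kubectl get pods``:
+
+ .. code:: bash
+
+ $ kubectl --context k8s-bdb-test-cluster-0 logs -f <bdb-pod-name>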
-.. _start-a-kubernetes-deployment-for-openresty:
+.. _start-kubernetes-deployment-openresty-tmt:
-Step 17: Start a Kubernetes Deployment for OpenResty
+Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------
* This configuration is located in the file
@@ -902,21 +1027,21 @@ Step 17: Start a Kubernetes Deployment for OpenResty
* You can check its status using the command ``kubectl get deployments -w``
-Step 18: Configure the MongoDB Cloud Manager
+Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------
Refer to the
-:ref:`documentation `
+:doc:`documentation <../production-deployment-template/cloud-manager>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring.
-.. _verify-the-bigchaindb-node-setup:
+.. _verify-and-test-bdb-tmt:
-Step 19: Verify the BigchainDB Node Setup
+Step 22: Verify the BigchainDB Node Setup
-----------------------------------------
-Step 19.1: Testing Internally
+Step 22.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To test the setup of your BigchainDB node, you could use a Docker container
@@ -967,6 +1092,18 @@ To test the BigchainDB instance:
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
+To test the Tendermint instance:
+
+.. code:: bash
+
+ $ nslookup tm-instance-0
+
+ $ dig +noall +answer _tm-p2p-port._tcp.tm-instance-0.default.svc.cluster.local SRV
+
+ $ dig +noall +answer _tm-rpc-port._tcp.tm-instance-0.default.svc.cluster.local SRV
+
+ $ curl -X GET http://tm-instance-0:9986/pub_key.json
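+
+To dig deeper, you could also query the Tendermint RPC API. The sketch below
+assumes the RPC port is ``46657`` (the Tendermint default); use the value you
+set in ``tm-rpc-port`` in the ConfigMap:
+
+.. code:: bash
+
+ $ curl -X GET http://tm-instance-0:46657/status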
+
To test the OpenResty instance:
@@ -1020,10 +1157,10 @@ The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
-Step 19.2: Testing Externally
+Step 22.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Check the MongoDB monitoring and backup agent on the MongoDB Cloud Manager
+Check the MongoDB monitoring agent on the MongoDB Cloud Manager
portal to verify it is working fine.
If you are using NGINX with HTTP support, accessing the URL
@@ -1035,3 +1172,7 @@ If you are using the NGINX with HTTPS support, use ``https`` instead of
Use the Python Driver to send some transactions to the BigchainDB node and
verify that your node or cluster works as expected.
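+
+For a quick external check of the HTTP API, you could also use curl. This is
+a sketch; replace the FQDN below with the one registered for your node:
+
+.. code:: bash
+
+ $ curl -X GET https://mynode.mycorp.com/api/v1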
+
+Next, you can set up log analytics and monitoring by following our template:
+
+* :doc:`../production-deployment-template/log-analytics`.
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/tectonic-azure.rst b/docs/server/source/production-deployment-template/tectonic-azure.rst
index 68b0afd9..f9d58074 100644
--- a/docs/server/source/production-deployment-template/tectonic-azure.rst
+++ b/docs/server/source/production-deployment-template/tectonic-azure.rst
@@ -123,8 +123,6 @@ Next, you can follow one of our following deployment templates:
* :doc:`node-on-kubernetes`.
-* :doc:`../production-deployment-template-tendermint/node-on-kubernetes`
-
Tectonic References
-------------------
diff --git a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
index 7d43fafc..d57abe27 100644
--- a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
+++ b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
@@ -224,6 +224,5 @@ CAUTION: You might end up deleting resources other than the ACS cluster.
--name
-Next, you can :doc:`run a BigchainDB node(Non-BFT) ` or :doc:`run a BigchainDB
-node/cluster(BFT) <../production-deployment-template-tendermint/node-on-kubernetes>`
+Next, you can :doc:`run a BigchainDB node/cluster (BFT) <node-on-kubernetes>`
on your new Kubernetes cluster.
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst
index a790a619..0a35d65b 100644
--- a/docs/server/source/production-deployment-template/workflow.rst
+++ b/docs/server/source/production-deployment-template/workflow.rst
@@ -6,28 +6,13 @@ to set up a production BigchainDB cluster.
We are constantly improving them.
You can modify them to suit your needs.
-
-Things the Managing Organization Must Do First
-----------------------------------------------
+.. note::
+    We use a standalone MongoDB instance (without a Replica Set); BFT replication is handled by Tendermint.
-1. Set Up a Self-Signed Certificate Authority
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt:
-We use SSL/TLS and self-signed certificates
-for MongoDB authentication (and message encryption).
-The certificates are signed by the organization managing the cluster.
-If your organization already has a process
-for signing certificates
-(i.e. an internal self-signed certificate authority [CA]),
-then you can skip this step.
-Otherwise, your organization must
-:ref:`set up its own self-signed certificate authority `.
-
-
-.. _register-a-domain-and-get-an-ssl-certificate-for-it:
-
-2. Register a Domain and Get an SSL Certificate for It
+1. Register a Domain and Get an SSL Certificate for It
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS,
@@ -36,83 +21,149 @@ should choose an FQDN for their API (e.g. api.organization-x.com),
register the domain name,
and buy an SSL/TLS certificate for the FQDN.
-.. _things-each-node-operator-must-do:
+
+.. _generate-the-blockchain-id-and-genesis-time:
+
+2. Generate the Blockchain ID and Genesis Time
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Tendermint nodes require two parameters that must be the same for all
+participants in the network.
+
+* ``chain_id``: ID of the blockchain. This must be unique for every blockchain.
+
+ * Example: ``test-chain-9gHylg``
+
+* ``genesis_time``: Official time of blockchain start.
+
+ * Example: ``0001-01-01T00:00:00Z``
+
+The preceding parameters can be generated using the ``tendermint init`` command.
+To `initialize `_, you will need to `install Tendermint `_
+and verify that a ``genesis.json`` file is created under the `Root Directory
+`_. You can use
+the ``genesis_time`` and ``chain_id`` from this example ``genesis.json`` file:
+
+.. code:: json
+
+ {
+ "genesis_time": "0001-01-01T00:00:00Z",
+ "chain_id": "test-chain-9gHylg",
+ "validators": [
+ {
+ "pub_key": {
+ "type": "ed25519",
+ "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468"
+ },
+ "power": 10,
+ "name": ""
+ }
+ ],
+ "app_hash": ""
+ }
+
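+A minimal sketch of generating these values yourself, assuming Tendermint is
+installed and uses its default root directory ``$HOME/.tendermint``, with
+``jq`` used only to print the two fields of interest:
+
+.. code:: bash
+
+ $ tendermint init
+
+ $ jq '{chain_id, genesis_time}' $HOME/.tendermint/genesis.json
+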
+.. _things-each-node-operator-must-do-tmt:
Things Each Node Operator Must Do
---------------------------------
-☐ Every MongoDB instance in the cluster must have a unique (one-of-a-kind) name.
-Ask the organization managing your cluster if they have a standard
-way of naming instances in the cluster.
-For example, maybe they assign a unique number to each node,
-so that if you're operating node 12, your MongoDB instance would be named
-``mdb-instance-12``.
-Similarly, other instances must also have unique names in the cluster.
+☐ Set Up a Self-Signed Certificate Authority
-#. Name of the MongoDB instance (``mdb-instance-*``)
-#. Name of the BigchainDB instance (``bdb-instance-*``)
-#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
-#. Name of the OpenResty instance (``openresty-instance-*``)
-#. Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)
-#. Name of the MongoDB backup agent instance (``mdb-bak-instance-*``)
+We use SSL/TLS and self-signed certificates
+for MongoDB authentication (and message encryption).
+The certificates are signed by the organization managing the :ref:`bigchaindb-node`.
+If your organization already has a process
+for signing certificates
+(i.e. an internal self-signed certificate authority [CA]),
+then you can skip this step.
+Otherwise, your organization must
+:ref:`set up its own self-signed certificate authority `.
-☐ Generate four keys and corresponding certificate signing requests (CSRs):
+☐ Follow a Standard and Unique Naming Convention
-#. Server Certificate (a.k.a. Member Certificate) for the MongoDB instance
+ ☐ Name of the MongoDB instance (``mdb-instance-*``)
+
+ ☐ Name of the BigchainDB instance (``bdb-instance-*``)
+
+ ☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
+
+ ☐ Name of the OpenResty instance (``openresty-instance-*``)
+
+ ☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)
+
+ ☐ Name of the Tendermint instance (``tm-instance-*``)
+
+**Example**
+
+
+.. code:: json
+
+ {
+ "MongoDB": [
+ "mdb-instance-1",
+ "mdb-instance-2",
+ "mdb-instance-3",
+ "mdb-instance-4"
+ ],
+ "BigchainDB": [
+ "bdb-instance-1",
+ "bdb-instance-2",
+ "bdb-instance-3",
+ "bdb-instance-4"
+ ],
+ "NGINX": [
+ "ngx-instance-1",
+ "ngx-instance-2",
+ "ngx-instance-3",
+ "ngx-instance-4"
+ ],
+ "OpenResty": [
+ "openresty-instance-1",
+ "openresty-instance-2",
+ "openresty-instance-3",
+ "openresty-instance-4"
+ ],
+ "MongoDB_Monitoring_Agent": [
+ "mdb-mon-instance-1",
+ "mdb-mon-instance-2",
+ "mdb-mon-instance-3",
+ "mdb-mon-instance-4"
+ ],
+ "Tendermint": [
+ "tendermint-instance-1",
+ "tendermint-instance-2",
+ "tendermint-instance-3",
+ "tendermint-instance-4"
+ ]
+ }
+
+
+☐ Generate three keys and corresponding certificate signing requests (CSRs):
+
+#. Server Certificate for the MongoDB instance
#. Client Certificate for BigchainDB Server to identify itself to MongoDB
#. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB
-#. Client Certificate for MongoDB Backup Agent to identify itself to MongoDB
-Ask the managing organization to use its self-signed CA to sign those four CSRs.
-They should send you:
-
-* Four certificates (one for each CSR you sent them).
-* One ``ca.crt`` file: their CA certificate.
-* One ``crl.pem`` file: a certificate revocation list.
-
-For help, see the pages:
-
-* :ref:`how-to-generate-a-server-certificate-for-mongodb`
-* :ref:`how-to-generate-a-client-certificate-for-mongodb`
-
-
-☐ Every node in a BigchainDB cluster needs its own
-BigchainDB keypair (i.e. a public key and corresponding private key).
-You can generate a BigchainDB keypair for your node, for example,
-using the `BigchainDB Python Driver `_.
-
-.. code:: python
-
- from bigchaindb_driver.crypto import generate_keypair
- print(generate_keypair())
-
-
-☐ Share your BigchaindB *public* key with all the other nodes
-in the BigchainDB cluster.
-Don't share your private key.
-
-
-☐ Get the BigchainDB public keys of all the other nodes in the cluster.
-That list of public keys is known as the BigchainDB "keyring."
+Use the self-signed CA to sign those three CSRs. For help, see the pages:
+
+* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>`
+* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>`
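+
+As an illustration only (the pages above are authoritative), a private key and
+CSR can be generated with OpenSSL along these lines; the file names and subject
+fields are placeholders:
+
+.. code:: bash
+
+ $ openssl req -newkey rsa:4096 -nodes \
+ -keyout mdb-instance-0.key -out mdb-instance-0.csr \
+ -subj "/O=MyOrganization/CN=mdb-instance-0"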
☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``).
Make sure you've registered the associated domain name (e.g. ``mycorp.com``),
and have an SSL certificate for the FQDN.
(You can get an SSL certificate from any SSL certificate provider.)
-
-☐ Ask the managing organization for the user name to use for authenticating to
+☐ Ask the BigchainDB node operator/owner for the username to use for authenticating to
MongoDB.
-
☐ If the cluster uses 3scale for API authentication, monitoring and billing,
-you must ask the managing organization for all relevant 3scale credentials -
+you must ask the BigchainDB node operator/owner for all relevant 3scale credentials -
secret token, service ID, version header and API service token.
-
-☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,
+☐ If the cluster uses MongoDB Cloud Manager for monitoring,
you must ask the managing organization for the ``Project ID`` and the
``Agent API Key``.
(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can
@@ -122,11 +173,7 @@ allow easier periodic rotation of the ``Agent API Key`` with a constant
``Project ID``)
-☐ :doc:`Deploy a Kubernetes cluster on Azure `.
+☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.
-
-☐ You can now proceed to set up your BigchainDB node based on whether it is the
-:ref:`first node in a new cluster
-` or a
-:ref:`node that will be added to an existing cluster
-`.
+☐ You can now proceed to set up your :ref:`BigchainDB node
+`.
diff --git a/setup.py b/setup.py
index 49325444..6fd910e4 100644
--- a/setup.py
+++ b/setup.py
@@ -41,6 +41,7 @@ docs_require = [
'sphinx-rtd-theme>=0.1.9',
'sphinxcontrib-httpdomain>=1.5.0',
'sphinxcontrib-napoleon>=0.4.4',
+ 'aafigure>=0.6',
]
tests_require = [