Merge pull request #1992 from bigchaindb/tendermint-docs-k8s-dep
Template for BigchainDB + Tendermint Kubernetes Deployment
commit 53019ff02a
@@ -4,3 +4,4 @@ sphinx-rtd-theme>=0.1.9
sphinxcontrib-napoleon>=0.4.4
sphinxcontrib-httpdomain>=1.5.0
pyyaml>=3.12
+aafigure>=0.6

@@ -10,6 +10,7 @@ The following ports should expect unsolicited inbound traffic:

1. **Port 9984** can expect inbound HTTP (TCP) traffic from BigchainDB clients sending transactions to the BigchainDB HTTP API.
1. **Port 9985** can expect inbound WebSocket traffic from BigchainDB clients.
1. **Port 46656** can expect inbound Tendermint P2P traffic from other Tendermint peers.
1. **Port 9986** can expect inbound HTTP (TCP) traffic from clients accessing the public key of a Tendermint instance.

All other ports should only get inbound traffic in response to specific requests from inside the node.

@@ -49,6 +50,12 @@ You may want to have Gunicorn and the reverse proxy running on different servers

Port 9985 is the default port for the [BigchainDB WebSocket Event Stream API](../websocket-event-stream-api.html).

## Port 9986

Port 9986 is the default port for accessing the public key of a Tendermint instance. It is served by an NGINX instance that runs in the same Pod as the Tendermint instance and hosts only the public key.

## Port 46656

Port 46656 is the default port used by Tendermint Core to communicate with other instances of Tendermint Core (peers).
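As a rough illustration, opening exactly these four ports on a Linux host might look like the following (a sketch only, assuming `ufw` is in use; on Azure you would typically express the same rules in a Network Security Group instead):

```bash
sudo ufw allow 9984/tcp   # BigchainDB HTTP API
sudo ufw allow 9985/tcp   # BigchainDB WebSocket Event Stream API
sudo ufw allow 9986/tcp   # Tendermint public key exchange
sudo ufw allow 46656/tcp  # Tendermint P2P
```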
@@ -48,7 +48,7 @@ extensions = [
    'sphinx.ext.todo',
    'sphinx.ext.napoleon',
    'sphinxcontrib.httpdomain',
    'sphinx.ext.autosectionlabel',
    'aafigure.sphinxext',
    # Below are actually build steps made to look like sphinx extensions.
    # It was the easiest way to get it running with ReadTheDocs.
    'generate_http_server_api_documentation',

@@ -1,3 +1,5 @@
.. _the-block-model:

The Block Model
===============

@@ -27,5 +29,5 @@ Since the blockchain height increases monotonically the height of block can be r

**transactions**

-   A list of the :ref:`transactions <The Transaction Model>` included in the block.
+   A list of the :ref:`transactions <the-transaction-model>` included in the block.
    (Each transaction is a JSON object.)

@@ -1,3 +1,5 @@
.. _the-transaction-model:

The Transaction Model
=====================

@@ -1,3 +1,5 @@
.. _the-websocket-event-stream-api:

The WebSocket Event Stream API
==============================

@@ -24,7 +26,7 @@ Determining Support for the Event Stream API

It's a good idea to make sure that the node you're connecting with
has advertised support for the Event Stream API. To do so, send an HTTP GET
-request to the node's :ref:`API Root Endpoint`
+request to the node's :ref:`api-root-endpoint`
(e.g. ``http://localhost:9984/api/v1/``) and check that the
response contains a ``streams`` property:
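For a quick check from the command line (a sketch; it assumes ``curl`` and ``jq`` are installed, the node listens on the default port, and the response has the shape described above):

.. code:: bash

    $ curl -s http://localhost:9984/api/v1/ | jq .streams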
@@ -61,7 +63,7 @@ Streams will always be under the WebSocket protocol (so ``ws://`` or

API root URL (for example, `validated transactions <#valid-transactions>`_
would be accessible under ``/api/v1/streams/valid_transactions``). If you're
running your own BigchainDB instance and need help determining its root URL,
-then see the page titled :ref:`Determining the API Root URL`.
+then see the page titled :ref:`determining-the-api-root-url`.

All messages sent in a stream are in the JSON format.

@@ -1,3 +1,5 @@
.. _the-http-client-server-api:

The HTTP Client-Server API
==========================

@@ -26,8 +28,10 @@ with something like the following in the body:

:language: http


.. _api-root-endpoint:

API Root Endpoint
-----------------

If you send an HTTP GET request to the API Root Endpoint
e.g. ``http://localhost:9984/api/v1/``
@@ -40,7 +44,7 @@ that allows you to discover the BigchainDB API endpoints:


Transactions
------------

.. http:get:: /api/v1/transactions/{transaction_id}

@@ -130,10 +134,10 @@ Transactions

<http://tendermint.readthedocs.io/projects/tools/en/master/using-tendermint.html#broadcast-api>`_ with three different modes to post transactions.
By setting the mode, a new transaction can be pushed with a different mode than the default. The default mode is ``async``, which
returns immediately, without waiting to see whether the transaction is valid. The ``sync`` mode returns after the transaction is validated, while ``commit``
returns after the transaction is committed to a block.
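A minimal sketch of posting a transaction with an explicit mode (``tx.json`` is a hypothetical file holding a fully signed transaction):

.. code:: bash

    $ curl -X POST -H "Content-Type: application/json" \
        -d @tx.json \
        "http://localhost:9984/api/v1/transactions?mode=commit"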
.. note::

    The posted `transaction
    <https://docs.bigchaindb.com/projects/server/en/latest/data-models/transaction-model.html>`_
    should be structurally valid and must not spend an already-spent output.

@@ -155,9 +159,10 @@ Transactions

:language: http

.. note::

    If the server returns a ``202`` HTTP status code when ``mode=async`` or ``mode=sync``, then the
    transaction has been accepted for processing. The client can subscribe to the
-   :ref:`WebSocket Event Stream API <The WebSocket Event Stream API>` to listen for comitted transactions.
+   :ref:`WebSocket Event Stream API <the-websocket-event-stream-api>` to listen for committed transactions.

:resheader Content-Type: ``application/json``

@@ -618,7 +623,7 @@ so you can access it from the same machine,

but it won't be directly accessible from the outside world.
(The outside world could connect via a SOCKS proxy or whatnot.)

-The documentation about BigchainDB Server :any:`Configuration Settings`
+The documentation about BigchainDB Server :doc:`Configuration Settings <server-reference/configuration>`
has a section about how to set ``server.bind`` so as to make
the HTTP API publicly accessible.

@@ -1,383 +0,0 @@
Kubernetes Template: Add a BigchainDB Node to an Existing BigchainDB Cluster
============================================================================

This page describes how to deploy a BigchainDB node using Kubernetes,
and how to add that node to an existing BigchainDB cluster.
It assumes you already have a running Kubernetes cluster
where you can deploy the new BigchainDB node.

If you want to deploy the first BigchainDB node in a BigchainDB cluster,
or a stand-alone BigchainDB node,
then see :doc:`the page about that <node-on-kubernetes>`.


Terminology Used
----------------

``existing cluster`` will refer to one of the existing Kubernetes clusters
hosting one of the existing BigchainDB nodes.

``ctx-1`` will refer to the kubectl context of the existing cluster.

``new cluster`` will refer to the new Kubernetes cluster that will run a new
BigchainDB node (including a BigchainDB instance and a MongoDB instance).

``ctx-2`` will refer to the kubectl context of the new cluster.

``new MongoDB instance`` will refer to the MongoDB instance in the new cluster.

``existing MongoDB instance`` will refer to the MongoDB instance in the
existing cluster.

``new BigchainDB instance`` will refer to the BigchainDB instance in the new
cluster.

``existing BigchainDB instance`` will refer to the BigchainDB instance in the
existing cluster.

Below, we refer to multiple files by their directory and filename,
such as ``mongodb/mongo-ext-conn-svc.yaml``. Those files are in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.

Step 1: Prerequisites
---------------------

* :ref:`A list of all the things to be done by each node operator <Things Each Node Operator Must Do>`.

* The public key should be shared offline with the other existing BigchainDB
  nodes in the existing BigchainDB cluster.

* You will need the public keys of all the existing BigchainDB nodes.

* A new Kubernetes cluster set up, with kubectl configured to access it.

* Some familiarity with deploying a BigchainDB node on Kubernetes.
  See our :doc:`other docs about that <node-on-kubernetes>`.

Note: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts that
are available to the local kubectl.
To target a specific cluster, add a ``--context`` flag to the kubectl CLI. For
example:

.. code:: bash

    $ kubectl --context ctx-1 apply -f example.yaml
    $ kubectl --context ctx-2 apply -f example.yaml
    $ kubectl --context ctx-1 proxy --port 8001
    $ kubectl --context ctx-2 proxy --port 8002

Step 2: Configure the BigchainDB Node
-------------------------------------

See the section on how to :ref:`configure your BigchainDB node <How to Configure a BigchainDB Node>`.


Step 3: Start the NGINX Service
-------------------------------

Please see the following section:

* :ref:`Start NGINX service <Step 4: Start the NGINX Service>`.


Step 4: Assign DNS Name to the NGINX Public IP
----------------------------------------------

Please see the following section:

* :ref:`Assign DNS to NGINX Public IP <Step 5: Assign DNS Name to the NGINX Public IP>`.


Step 5: Start the MongoDB Kubernetes Service
--------------------------------------------

Please see the following section:

* :ref:`Start the MongoDB Kubernetes Service <Step 6: Start the MongoDB Kubernetes Service>`.


Step 6: Start the BigchainDB Kubernetes Service
-----------------------------------------------

Please see the following section:

* :ref:`Start the BigchainDB Kubernetes Service <Step 7: Start the BigchainDB Kubernetes Service>`.


Step 7: Start the OpenResty Kubernetes Service
----------------------------------------------

Please see the following section:

* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.


Step 8: Start the NGINX Kubernetes Deployment
---------------------------------------------

Please see the following section:

* :ref:`Run NGINX deployment <Step 9: Start the NGINX Kubernetes Deployment>`.


Step 9: Create Kubernetes Storage Classes for MongoDB
-----------------------------------------------------

Please see the following section:

* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`.


Step 10: Create Kubernetes Persistent Volume Claims
---------------------------------------------------

Please see the following section:

* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`.


Step 11: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

Please see the following section:

* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`.

Step 12: Verify network connectivity between the MongoDB instances
------------------------------------------------------------------

Make sure your MongoDB instances can access each other over the network. *If* you are deploying
the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container
Service, you will have to set up networking between the two clusters using `Kubernetes
Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.

Assume we have an existing MongoDB instance ``mdb-instance-0`` residing in the Azure data center location ``westeurope`` and we
want to add a new MongoDB instance ``mdb-instance-1``, located in the Azure data center location ``eastus``, to the existing MongoDB
replica set. Unless you have already explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and
vice versa, you will have to add a Kubernetes Service in each cluster in order to set up the
MongoDB replica set.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.

* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.

* Set ``metadata.name`` to the host name of the MongoDB instance you are trying to connect to.
  For instance, if you are configuring this Service on the cluster with ``mdb-instance-0``, then ``metadata.name`` will
  be ``mdb-instance-1``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap of the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying to connect to,
  as sketched below. For more information about the FQDN please refer to :ref:`Assign DNS Name to the NGINX Public
  IP <Step 5: Assign DNS Name to the NGINX Public IP>`.
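A sketch of what such an ``ExternalName`` Service could look like (the FQDN and port are illustrative assumptions; take the real values from the other cluster's ConfigMap and the repository's ``mongodb/mongo-ext-conn-svc.yaml`` template):

.. code:: text

    apiVersion: v1
    kind: Service
    metadata:
      name: mdb-instance-1   # the *other* MongoDB instance's name
      namespace: default
    spec:
      type: ExternalName
      externalName: test-cluster-1.eastus.cloudapp.azure.com  # assumed FQDN of the other cluster's NGINX gateway
      ports:
      - port: 27017   # mongodb-backend-port of the other cluster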
.. note::
   This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
   we need to communicate with.

If you are not the system administrator of the cluster, you have to get in
touch with the system administrator/s of the other ``n-1`` clusters and
share with them your instance name (``mdb-instance-name`` in the ConfigMap)
and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).

Step 13: Add the New MongoDB Instance to the Existing Replica Set
-----------------------------------------------------------------

Note that by ``replica set``, we are referring to the MongoDB replica set,
not a Kubernetes ``ReplicaSet``.

If you are not the administrator of an existing BigchainDB node, you
will have to coordinate offline with an existing administrator so that they can
add the new MongoDB instance to the replica set.

Add the new MongoDB instance from an existing instance by accessing the
``mongo`` shell and authenticating as the ``adminUser`` created for the existing MongoDB instance, OR
contact the admin of the PRIMARY MongoDB node:

.. code:: bash

    $ kubectl --context ctx-1 exec -it <existing mongodb-instance-name> bash
    $ mongo --host <existing mongodb-instance-name> --port 27017 --verbose --ssl \
        --sslCAFile /etc/mongod/ssl/ca.pem \
        --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem

    PRIMARY> use admin
    PRIMARY> db.auth("adminUser", "superstrongpassword")

You can only add members to a replica set from the ``PRIMARY`` instance.
The ``mongo`` shell prompt should state that this is the primary member in the
replica set.
If not, then you can use the ``rs.status()`` command to find out which member is the
primary, and log in to the ``mongo`` shell on the primary.

Run the ``rs.add()`` command with the FQDN and port number of the new instance:

.. code:: bash

    PRIMARY> rs.add("<new mdb-instance-name>:<port>")

Step 14: Verify the Replica Set Membership
------------------------------------------

You can use the ``rs.conf()`` and ``rs.status()`` commands available in the
``mongo`` shell to verify the replica set membership.

The new MongoDB instance should be listed in the membership information
displayed.
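For example, a compact membership check from the ``mongo`` shell might look like this (a sketch; ``name`` and ``stateStr`` are standard fields of the ``rs.status()`` output):

.. code:: bash

    PRIMARY> rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })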
Step 15: Configure Users and Access Control for MongoDB
-------------------------------------------------------

* Create the users in MongoDB with the appropriate roles assigned to them. This
  will enable the new BigchainDB instance, the new MongoDB Monitoring Agent
  instance and the new MongoDB Backup Agent instance to function correctly.

* Please refer to
  :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure
  Users and Access Control for MongoDB>` to create and configure the new
  BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the
  cluster.

.. note::
   You will not have to create the MongoDB replica set or create the admin user, as they already exist.

If you do not have access to the ``PRIMARY`` member of the replica set, you
need to get in touch with the administrator who can create the users in the
MongoDB cluster.


Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------

Please see the following section:

* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent; having
   multiple agents provides high availability and failover, in case one goes
   down. For more information about Monitoring and Backup Agents please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.


Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------

Please see the following section:

* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent; having
   multiple agents provides high availability and failover, in case one goes
   down. For more information about Monitoring and Backup Agents please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.

Step 18: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``bdb-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in
  ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
  value ``bdb-instance-0-dep``.

* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
  (In the future, we'd like to pull the BigchainDB private key from
  the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file,
  so BigchainDB Server would have to be modified to look for it
  in a file.)

* As we gain more experience running BigchainDB in testing and production,
  we will tweak the ``resources.limits`` values for CPU and memory, and as
  richer monitoring and probing becomes available in BigchainDB, we will
  tweak the ``livenessProbe`` and ``readinessProbe`` parameters.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose two ports:
  ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
  values specified in the ConfigMap.

* Uncomment the env var ``BIGCHAINDB_KEYRING``; it will pick up the
  ``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap, as sketched below.
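A sketch of that env var entry (the ConfigMap name and key are assumptions based on the ``bdb-config.bdb-keyring`` setting described in the node configuration docs):

.. code:: text

    - name: BIGCHAINDB_KEYRING
      valueFrom:
        configMapKeyRef:
          name: bdb-config   # assumed ConfigMap name
          key: bdb-keyring   # colon-delimited public keys of the other nodes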
Create the required Deployment using:

.. code:: bash

    $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml

You can check its status using the command ``kubectl --context ctx-2 get deploy -w``.

Step 19: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------

* Add the public key of the new BigchainDB instance to the
  ``bdb-keyring`` ConfigMap variable of all the existing BigchainDB instances.
  Update all the existing ConfigMaps using:

.. code:: bash

    $ kubectl --context ctx-1 apply -f configuration/config-map.yaml

* Uncomment the ``BIGCHAINDB_KEYRING`` variable in
  ``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the
  ConfigMap.
  Update the running BigchainDB instance using:

.. code:: bash

    $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
    $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml

See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
ConfigMap configuration.

You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.
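For instance (a sketch; replace the pod-name placeholder with an actual BigchainDB pod in that cluster):

.. code:: bash

    $ kubectl --context ctx-1 exec -it <existing bigchaindb-pod-name> -- bigchaindb show-config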
Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

Please see the following section:

* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`.


Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------

* MongoDB Cloud Manager auto-detects the members of the replica set and
  configures the agents to act as master/slave accordingly.

* You can verify that the new MongoDB instance is detected by the
  Monitoring and Backup Agent using the Cloud Manager UI.


Step 22: Test Your New BigchainDB Node
--------------------------------------

* Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
  Node Setup>` to verify that your new BigchainDB node is working as expected.

@@ -1,20 +1,144 @@
-Architecture of a Testnet Node
-==============================
+Architecture of a BigchainDB Node
+=================================

-Each node in the `BigchainDB Testnet <https://testnet.bigchaindb.com/>`_
-is hosted on a Kubernetes cluster and includes:
+A BigchainDB production deployment is hosted on a Kubernetes cluster and includes:

-* NGINX, OpenResty, BigchainDB and MongoDB
+* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
  `Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
-* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent
+* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent
  `Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
-* MongoDB `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
+* MongoDB and Tendermint `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* Third party services like `3scale <https://3scale.net>`_,
  `MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
  `Azure Operations Management Suite
  <https://docs.microsoft.com/en-us/azure/operations-management-suite/>`_.

.. image:: ../_static/arch.jpg

.. _bigchaindb-node:

BigchainDB Node
---------------

.. aafig::
   :aspect: 60
   :scale: 100
   :background: #rgb
   :proportional:

[ASCII architecture diagram. It shows external BigchainDB API traffic (port 443)
and Tendermint P2P communication / public key exchange traffic (ports 46656/9986)
entering through the NGINX Service and NGINX Deployment (rate-limiting logic,
HTTPS termination, request analysis); POST requests routed through the OpenResty
Service and Deployment (auth logic, backed by 3scale) and GET requests routed to
the BigchainDB Service and Deployment; bidirectional communication between
BigchainDB (the app) and the Tendermint BFT consensus engine (Tendermint Service
and StatefulSet, plus an NGINX public-key-exchange Deployment on port 9986); a
MongoDB Service and StatefulSet (port 27017) behind BigchainDB; and a MongoDB
Monitoring Agent reporting to MongoDB Cloud Manager.]

.. note::
   The arrows in the diagram represent the client-server communication. For

@@ -23,8 +147,8 @@ is hosted on a Kubernetes cluster and includes:

   fully duplex.


-NGINX
------
+NGINX: Entrypoint and Gateway
+-----------------------------

We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:

@@ -52,8 +176,8 @@ entrypoint for:

public api port), the connection is proxied to the MongoDB Service.


-OpenResty
----------
+OpenResty: API Management, Authentication and Authorization
+-----------------------------------------------------------

We use `OpenResty <https://openresty.org/>`_ to perform authorization checks
with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.

@@ -64,13 +188,23 @@ on the LuaJIT compiler to execute the functions to authenticate the ``app_id``

and ``app_key`` with the 3scale backend.


-MongoDB
--------
+MongoDB: Standalone
+-------------------

We use MongoDB as the backend database for BigchainDB.
-In a multi-node deployment, MongoDB members communicate with each other via the
-public port exposed by the NGINX Service.

We achieve security by avoiding DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.


Tendermint: BFT consensus engine
--------------------------------

We use Tendermint as the backend consensus engine for BFT replication of BigchainDB.
In a multi-node deployment, Tendermint nodes/peers communicate with each other via
the public ports exposed by the NGINX gateway.

We use port **9986** (configurable) to allow Tendermint nodes to access the public keys
of their peers, and port **46656** (configurable) for the rest of the communication between
the peers.

@@ -0,0 +1,546 @@
.. _kubernetes-template-deploy-bigchaindb-network:

Kubernetes Template: Deploying a BigchainDB network
===================================================

This page describes how to deploy a static BigchainDB + Tendermint network.

If you want to deploy a single BigchainDB node in a BigchainDB cluster,
or a stand-alone BigchainDB node,
then see :doc:`the page about that <node-on-kubernetes>`.

We can use this guide to deploy a BigchainDB network in the following scenarios:

* Single Azure Kubernetes Site.
* Multiple Azure Kubernetes Sites (geographically dispersed).


Terminology Used
----------------

``BigchainDB node`` is a set of Kubernetes components that join together to
form a single BigchainDB node. Please refer to the :doc:`architecture diagram <architecture>`
for more details.

``BigchainDB network`` will refer to a collection of nodes working together
to form a network.

Below, we refer to multiple files by their directory and filename,
such as ``tendermint/tendermint-ext-conn-svc.yaml``. Those files are located in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.

.. note::

   This deployment strategy is currently used for testing purposes only,
   operated by a single stakeholder or tightly coupled stakeholders.

.. note::

   Currently, we only support a static set of participants in the network.
   Once a BigchainDB network is started with a certain number of validators
   and a genesis file, users cannot add new validator nodes dynamically.
   You can track the progress of this functionality in our
   `GitHub repository <https://github.com/bigchaindb/bigchaindb/milestones>`_.


.. _pre-reqs-bdb-network:

Prerequisites
-------------

The deployment methodology is similar to the one covered in :doc:`node-on-kubernetes`, but
we need to tweak some configurations depending on your choice of deployment.

The operator needs to follow a consistent naming convention for all the components
covered :ref:`here <things-each-node-operator-must-do>`.

Let's assume we are deploying a 4-node cluster; your naming convention could look like this:

.. code::

    {
      "MongoDB": [
        "mdb-instance-1",
        "mdb-instance-2",
        "mdb-instance-3",
        "mdb-instance-4"
      ],
      "BigchainDB": [
        "bdb-instance-1",
        "bdb-instance-2",
        "bdb-instance-3",
        "bdb-instance-4"
      ],
      "NGINX": [
        "ngx-instance-1",
        "ngx-instance-2",
        "ngx-instance-3",
        "ngx-instance-4"
      ],
      "OpenResty": [
        "openresty-instance-1",
        "openresty-instance-2",
        "openresty-instance-3",
        "openresty-instance-4"
      ],
      "MongoDB_Monitoring_Agent": [
        "mdb-mon-instance-1",
        "mdb-mon-instance-2",
        "mdb-mon-instance-3",
        "mdb-mon-instance-4"
      ],
      "Tendermint": [
        "tendermint-instance-1",
        "tendermint-instance-2",
        "tendermint-instance-3",
        "tendermint-instance-4"
      ]
    }

.. note::

   Blockchain Genesis ID and Time will be shared across all nodes.
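For orientation, a Tendermint genesis file has roughly this shape (an illustrative sketch only; the real file is generated for your network, and the exact fields depend on your Tendermint version):

.. code::

    {
      "genesis_time": "0001-01-01T00:00:00Z",
      "chain_id": "test-chain-rDlYSN",
      "validators": [
        { "pub_key": { "type": "ed25519", "data": "..." }, "power": 10, "name": "tendermint-instance-1" }
      ],
      "app_hash": ""
    }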
Edit config.yaml and secret.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Make N (number of nodes) copies of ``configuration/config-map.yaml`` and ``configuration/secret.yaml``.

.. code:: text

    # For config-map.yaml
    config-map-node-1.yaml
    config-map-node-2.yaml
    config-map-node-3.yaml
    config-map-node-4.yaml

    # For secret.yaml
    secret-node-1.yaml
    secret-node-2.yaml
    secret-node-3.yaml
    secret-node-4.yaml

Edit the data values as described in :doc:`this document <node-config-map-and-secrets>`, based
on the naming convention described :ref:`above <pre-reqs-bdb-network>`.

**Only for single site deployments**: Since all the configuration files use the
same ConfigMap and Secret keys, i.e.
``metadata.name -> vars, bdb-config and tendermint-config`` and
``metadata.name -> cloud-manager-credentials, mdb-certs, mdb-mon-certs, bdb-certs,``
``https-certs, three-scale-credentials, ca-auth`` respectively, each file
will overwrite the configuration of the previously deployed one.
We want each node to have its own unique configuration.
One way to achieve this is to edit the ConfigMap and Secret keys using the
:ref:`naming convention above <pre-reqs-bdb-network>`.

.. code:: text

    # For config-map-node-1.yaml
    metadata.name: vars -> vars-node-1
    metadata.name: bdb-config -> bdb-config-node-1
    metadata.name: tendermint-config -> tendermint-config-node-1

    # For secret-node-1.yaml
    metadata.name: cloud-manager-credentials -> cloud-manager-credentials-node-1
    metadata.name: mdb-certs -> mdb-certs-node-1
    metadata.name: mdb-mon-certs -> mdb-mon-certs-node-1
    metadata.name: bdb-certs -> bdb-certs-node-1
    metadata.name: https-certs -> https-certs-node-1
    metadata.name: threescale-credentials -> threescale-credentials-node-1
    metadata.name: ca-auth -> ca-auth-node-1

    # Repeat for the remaining files.

Deploy all your ConfigMaps and Secrets:

.. code:: bash

    kubectl apply -f configuration/config-map-node-1.yaml
    kubectl apply -f configuration/config-map-node-2.yaml
    kubectl apply -f configuration/config-map-node-3.yaml
    kubectl apply -f configuration/config-map-node-4.yaml
    kubectl apply -f configuration/secret-node-1.yaml
    kubectl apply -f configuration/secret-node-2.yaml
    kubectl apply -f configuration/secret-node-3.yaml
    kubectl apply -f configuration/secret-node-4.yaml
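Equivalently, a small shell loop (assuming the four-node naming convention above):

.. code:: bash

    for i in 1 2 3 4; do
        kubectl apply -f configuration/config-map-node-$i.yaml
        kubectl apply -f configuration/secret-node-$i.yaml
    done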
.. note::

   Similar to what we did with ``config-map.yaml`` and ``secret.yaml``, i.e. indexing them
   per node, we have to do the same for each Kubernetes component,
   i.e. Services, StorageClasses, PersistentVolumeClaims, StatefulSets, Deployments etc.

.. code:: text

    # For Services
    *-node-1-svc.yaml
    *-node-2-svc.yaml
    *-node-3-svc.yaml
    *-node-4-svc.yaml

    # For StorageClasses
    *-node-1-sc.yaml
    *-node-2-sc.yaml
    *-node-3-sc.yaml
    *-node-4-sc.yaml

    # For PersistentVolumeClaims
    *-node-1-pvc.yaml
    *-node-2-pvc.yaml
    *-node-3-pvc.yaml
    *-node-4-pvc.yaml

    # For StatefulSets
    *-node-1-ss.yaml
    *-node-2-ss.yaml
    *-node-3-ss.yaml
    *-node-4-ss.yaml

    # For Deployments
    *-node-1-dep.yaml
    *-node-2-dep.yaml
    *-node-3-dep.yaml
    *-node-4-dep.yaml

.. _single-site-network:

Single Site: Single Azure Kubernetes Cluster
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the deployment of a BigchainDB network under a single cluster, we need to replicate
the :doc:`deployment steps for each node <node-on-kubernetes>` N times, N being
the number of participants in the network.

In our Kubernetes deployment template for a single BigchainDB node, we covered the basic configuration
settings :ref:`here <how-to-configure-a-bigchaindb-node>`.

Since we index the ConfigMap and Secret keys for the single site deployment, we need to update
all the Kubernetes components to reflect the corresponding changes, i.e. for each Kubernetes Service,
StatefulSet, PersistentVolumeClaim, Deployment, and StorageClass, we need to update the respective
``*.yaml`` file and update the ``configMapKeyRef.name`` or ``secret.secretName``.

Example
"""""""

Assume we are deploying the MongoDB StatefulSet for Node 1. We need to update
``mongo-node-1-ss.yaml`` and update the corresponding ``configMapKeyRef.name`` and ``secret.secretName`` values.

.. code:: text

    ########################################################################
    # This YAML file describes a StatefulSet with a service for running and
    # exposing a MongoDB instance.
    # It depends on the configdb and db k8s pvc.
    ########################################################################

    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: mdb-instance-0-ss
      namespace: default
    spec:
      serviceName: mdb-instance-0
      replicas: 1
      template:
        metadata:
          name: mdb-instance-0-ss
          labels:
            app: mdb-instance-0-ss
        spec:
          terminationGracePeriodSeconds: 10
          containers:
          - name: mongodb
            image: bigchaindb/mongodb:3.2
            imagePullPolicy: IfNotPresent
            env:
            - name: MONGODB_FQDN
              valueFrom:
                configMapKeyRef:
                  name: vars-1 # Changed from ``vars``
                  key: mdb-instance-name
            - name: MONGODB_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MONGODB_PORT
              valueFrom:
                configMapKeyRef:
                  name: vars-1 # Changed from ``vars``
                  key: mongodb-backend-port
            - name: STORAGE_ENGINE_CACHE_SIZE
              valueFrom:
                configMapKeyRef:
                  name: vars-1 # Changed from ``vars``
                  key: storage-engine-cache-size
            args:
            - --mongodb-port
            - $(MONGODB_PORT)
            - --mongodb-key-file-path
            - /etc/mongod/ssl/mdb-instance.pem
            - --mongodb-ca-file-path
            - /etc/mongod/ca/ca.pem
            - --mongodb-crl-file-path
            - /etc/mongod/ca/crl.pem
            - --mongodb-fqdn
            - $(MONGODB_FQDN)
            - --mongodb-ip
            - $(MONGODB_POD_IP)
            - --storage-engine-cache-size
            - $(STORAGE_ENGINE_CACHE_SIZE)
            securityContext:
              capabilities:
                add:
                - FOWNER
            ports:
            - containerPort: "<mongodb-backend-port from ConfigMap>"
              protocol: TCP
              name: mdb-api-port
            volumeMounts:
            - name: mdb-db
              mountPath: /data/db
            - name: mdb-configdb
              mountPath: /data/configdb
            - name: mdb-certs
              mountPath: /etc/mongod/ssl/
              readOnly: true
            - name: ca-auth
              mountPath: /etc/mongod/ca/
              readOnly: true
            resources:
              limits:
                cpu: 200m
                memory: 5G
            livenessProbe:
              tcpSocket:
                port: mdb-api-port
              initialDelaySeconds: 15
              successThreshold: 1
              failureThreshold: 3
              periodSeconds: 15
              timeoutSeconds: 10
          restartPolicy: Always
          volumes:
          - name: mdb-db
            persistentVolumeClaim:
              claimName: mongo-db-claim-1 # Changed from ``mongo-db-claim``
          - name: mdb-configdb
            persistentVolumeClaim:
              claimName: mongo-configdb-claim-1 # Changed from ``mongo-configdb-claim``
          - name: mdb-certs
            secret:
              secretName: mdb-certs-1 # Changed from ``mdb-certs``
              defaultMode: 0400
          - name: ca-auth
            secret:
              secretName: ca-auth-1 # Changed from ``ca-auth``
              defaultMode: 0400

The above example is meant to be repeated for all the Kubernetes components of a BigchainDB node.

* ``nginx-http/nginx-http-node-X-svc.yaml`` or ``nginx-https/nginx-https-node-X-svc.yaml``

* ``nginx-http/nginx-http-node-X-dep.yaml`` or ``nginx-https/nginx-https-node-X-dep.yaml``

* ``mongodb/mongodb-node-X-svc.yaml``

* ``mongodb/mongodb-node-X-sc.yaml``

* ``mongodb/mongodb-node-X-pvc.yaml``

* ``mongodb/mongodb-node-X-ss.yaml``

* ``tendermint/tendermint-node-X-svc.yaml``

* ``tendermint/tendermint-node-X-sc.yaml``

* ``tendermint/tendermint-node-X-pvc.yaml``

* ``tendermint/tendermint-node-X-ss.yaml``

* ``bigchaindb/bigchaindb-node-X-svc.yaml``

* ``bigchaindb/bigchaindb-node-X-dep.yaml``

* ``nginx-openresty/nginx-openresty-node-X-svc.yaml``

* ``nginx-openresty/nginx-openresty-node-X-dep.yaml``

Multi Site: Multiple Azure Kubernetes Clusters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the multi site deployment of a BigchainDB network with geographically dispersed
nodes, we need to replicate the :doc:`deployment steps for each node <node-on-kubernetes>` N times,
N being the number of participants in the network.

The operator needs to follow the consistent naming convention :ref:`already
discussed in this document <pre-reqs-bdb-network>`.

.. note::

   Assuming we are using independent Kubernetes clusters, the ConfigMap and Secret keys
   do not need to be updated, unlike :ref:`single-site-network`, and we also do not
   need to update the corresponding ConfigMap/Secret imports in the Kubernetes components.


Deploy Kubernetes Services
--------------------------

Deploy the following services for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:

* :ref:`Start the NGINX Service <start-the-nginx-service>`.

* :ref:`Assign DNS Name to the NGINX Public IP <assign-dns-name-to-nginx-public-ip>`.

* :ref:`Start the MongoDB Kubernetes Service <start-the-mongodb-kubernetes-service>`.

* :ref:`Start the BigchainDB Kubernetes Service <start-the-bigchaindb-kubernetes-service>`.

* :ref:`Start the OpenResty Kubernetes Service <start-the-openresty-kubernetes-service>`.

* :ref:`Start the Tendermint Kubernetes Service <start-the-tendermint-kubernetes-service>`.


Only for multi site deployments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We need to make sure that the clusters are able
to talk to each other, specifically for the communication between the
Tendermint peers. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.

Assume we have a Tendermint instance ``tendermint-instance-1`` residing in the Azure data center location ``westeurope`` and we
want to connect to ``tendermint-instance-2``, ``tendermint-instance-3``, and ``tendermint-instance-4``, located in the Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you have already explicitly set up networking for
``tendermint-instance-1`` to communicate with ``tendermint-instance-2/3/4`` and
vice versa, you will have to add a Kubernetes Service in each cluster in order to set up the
Tendermint P2P network.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``tendermint-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.

* This configuration is located in the file ``tendermint/tendermint-ext-conn-svc.yaml``.

* Set ``metadata.name`` to the host name of the Tendermint instance you are trying to connect to.
  For instance, if you are configuring this Service on the cluster with ``tendermint-instance-1``, then ``metadata.name`` will
  be ``tendermint-instance-2``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``tm-p2p-port`` from the ConfigMap of the other cluster.

* Set ``spec.ports.port[1]`` to the ``tm-rpc-port`` from the ConfigMap of the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying to connect to,
  as sketched below. For more information about the FQDN please refer to :ref:`Assign DNS name to NGINX Public
  IP <assign-dns-name-to-nginx-public-ip>`.
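A sketch of what such an ``ExternalName`` Service could look like (the FQDN and port numbers are illustrative assumptions; take the real values from the other cluster's ConfigMap and the repository's ``tendermint/tendermint-ext-conn-svc.yaml`` template):

.. code:: text

    apiVersion: v1
    kind: Service
    metadata:
      name: tendermint-instance-2   # the peer you want to reach
      namespace: default
    spec:
      type: ExternalName
      externalName: test-cluster-2.eastus.cloudapp.azure.com  # assumed FQDN of the other cluster's NGINX gateway
      ports:
      - name: tm-p2p-port
        port: 46656   # tm-p2p-port of the other cluster
      - name: tm-rpc-port
        port: 46657   # tm-rpc-port of the other cluster (assumed default)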
.. note::
   This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
   we need to communicate with.

If you are not the system administrator of the cluster, you have to get in
touch with the system administrator/s of the other ``n-1`` clusters and
share with them your instance name (``tendermint-instance-name`` in the ConfigMap)
and the FQDN of the NGINX instance acting as Gateway (set in :ref:`Assign DNS name to NGINX
Public IP <assign-dns-name-to-nginx-public-ip>`).

Start NGINX Kubernetes deployments
----------------------------------

Start the NGINX deployment that serves as a Gateway for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start the NGINX Kubernetes Deployment <start-the-nginx-deployment>`.


Deploy Kubernetes StorageClasses for MongoDB and Tendermint
-----------------------------------------------------------

Deploy the following StorageClasses for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:

* :ref:`Create Kubernetes Storage Classes for MongoDB <create-kubernetes-storage-class-mdb>`.

* :ref:`Create Kubernetes Storage Classes for Tendermint <create-kubernetes-storage-class>`.


Deploy Kubernetes PersistentVolumeClaims for MongoDB and Tendermint
--------------------------------------------------------------------

Deploy the following PersistentVolumeClaims for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:

* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB <create-kubernetes-persistent-volume-claim-mdb>`.

* :ref:`Create Kubernetes Persistent Volume Claims for Tendermint <create-kubernetes-persistent-volume-claim>`.


Deploy MongoDB Kubernetes StatefulSet
--------------------------------------

Deploy the MongoDB StatefulSet (standalone MongoDB) for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>` and referring to the following section:

* :ref:`Start a Kubernetes StatefulSet for MongoDB <start-kubernetes-stateful-set-mongodb>`.


Configure Users and Access Control for MongoDB
----------------------------------------------

Configure users and access control for each MongoDB instance
in the network by referring to the following section:

* :ref:`Configure Users and Access Control for MongoDB <configure-users-and-access-control-mongodb>`.


Deploy Tendermint Kubernetes StatefulSet
----------------------------------------

Deploy the Tendermint StatefulSet for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`create-kubernetes-stateful-set`.


Start Kubernetes Deployment for MongoDB Monitoring Agent
---------------------------------------------------------

Start the MongoDB Monitoring Agent Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <start-kubernetes-deployment-for-mdb-mon-agent>`.


Start Kubernetes Deployment for BigchainDB
------------------------------------------

Start the BigchainDB Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start a Kubernetes Deployment for BigchainDB <start-kubernetes-deployment-bdb>`.


Start Kubernetes Deployment for OpenResty
------------------------------------------

Start the OpenResty Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start a Kubernetes Deployment for OpenResty <start-kubernetes-deployment-openresty>`.


Verify and Test
---------------

Verify and test your setup by referring to the following instructions:

* :ref:`Verify the BigchainDB Node Setup <verify-and-test-bdb>`.

@@ -1,3 +1,5 @@
.. _how-to-set-up-a-self-signed-certificate-authority:

How to Set Up a Self-Signed Certificate Authority
=================================================

@@ -18,7 +20,7 @@ First create a directory for the CA and cd into it:

cd bdb-cluster-ca

-Then :ref:`install and configure Easy-RSA in that directory <How to Install & Configure Easy-RSA>`.
+Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.


Step 2: Create a Self-Signed CA

@@ -1,3 +1,5 @@
.. _how-to-generate-a-client-certificate-for-mongodb:

How to Generate a Client Certificate for MongoDB
================================================

@@ -17,7 +19,7 @@ First create a directory for the client certificate and cd into it:

cd client-cert

-Then :ref:`install and configure Easy-RSA in that directory <How to Install & Configure Easy-RSA>`.
+Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.


Step 2: Create the Client Private Key and CSR

@@ -1,8 +1,10 @@
-Configure MongoDB Cloud Manager for Monitoring and Backup
-=========================================================
+.. _configure-mongodb-cloud-manager-for-monitoring:
+
+Configure MongoDB Cloud Manager for Monitoring
+==============================================

This document details the steps required to configure MongoDB Cloud Manager to
-enable monitoring and backup of data in a MongoDB Replica Set.
+enable monitoring of data in a MongoDB Replica Set.

Configure MongoDB Cloud Manager for Monitoring

@@ -58,39 +60,3 @@ Configure MongoDB Cloud Manager for Monitoring

* Verify on the UI that data is being sent by the monitoring agent to the
  Cloud Manager. It may take up to 5 minutes for data to appear on the UI.

-Configure MongoDB Cloud Manager for Backup
-------------------------------------------
-
-* Once the Backup Agent is up and running, open
-  `MongoDB Cloud Manager <https://cloud.mongodb.com>`_.
-
-* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
-  Manager.
-
-* Select the group from the dropdown box on the page.
-
-* Click the ``Backup`` tab.
-
-* Hover over the ``Status`` column of your backup and click ``Start``
-  to start the backup.
-
-* Select the replica set on the side pane.
-
-* If you have authentication enabled, select the authentication mechanism as
-  per your deployment. The default BigchainDB production deployment currently
-  supports ``X.509 Client Certificate`` as the authentication mechanism.
-
-* If you have TLS enabled, select the checkbox ``Replica set allows TLS/SSL
-  connections``. This should be selected by default in case you selected
-  ``X.509 Client Certificate`` as the auth mechanism above.
-
-* Choose the ``WiredTiger`` storage engine.
-
-* Verify the details of your MongoDB instance and click on ``Start``.
-
-* It may take up to 5 minutes for the backup process to start.
-  During this process, the UI will show the status of the backup process.
-
-* Verify that data is being backed up on the UI.

@@ -1,3 +1,5 @@
.. _how-to-install-and-configure-easyrsa:

How to Install & Configure Easy-RSA
===================================

@@ -1,10 +1,10 @@
Production Deployment Template
==============================

-This section outlines how *we* deploy production BigchainDB nodes and clusters
-on Microsoft Azure
-using Kubernetes.
-We improve it constantly.
+This section outlines how *we* deploy production BigchainDB clusters,
+integrated with Tendermint (the backend for BFT consensus),
+on Microsoft Azure using
+Kubernetes. We improve it constantly.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.

@@ -25,8 +25,7 @@ Feel free change things to suit your needs or preferences.

   cloud-manager
   easy-rsa
   upgrade-on-kubernetes
-   add-node-on-kubernetes
   restore-from-mongodb-cloud-manager
+   bigchaindb-network-on-kubernetes
   tectonic-azure
   troubleshoot
   architecture

@ -1,3 +1,5 @@
|
||||
.. _how-to-configure-a-bigchaindb-node:
|
||||
|
||||
How to Configure a BigchainDB Node
|
||||
==================================
|
||||
|
||||
@ -9,7 +11,7 @@ and ``secret.yaml`` (a set of Secrets).
|
||||
They are stored in the Kubernetes cluster's key-value store (etcd).
|
||||
|
||||
Make sure you did all the things listed in the section titled
|
||||
:ref:`Things Each Node Operator Must Do`
|
||||
:ref:`things-each-node-operator-must-do`
|
||||
(including generation of all the SSL certificates needed
|
||||
for MongoDB auth).
|
||||
|
||||
@ -33,7 +35,7 @@ vars.cluster-fqdn
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ``cluster-fqdn`` field specifies the domain you would have
|
||||
:ref:`registered before <2. Register a Domain and Get an SSL Certificate for It>`.
|
||||
:ref:`registered before <register-a-domain-and-get-an-ssl-certificate-for-it>`.
|
||||
|
||||
|
||||
vars.cluster-frontend-port
|
||||
@ -69,15 +71,8 @@ of naming instances, so the instances in your BigchainDB node
|
||||
should conform to that standard (i.e. you can't just make up some names).
|
||||
There are some things worth noting about the ``mdb-instance-name``:
|
||||
|
||||
* MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica
|
||||
set to resolve the hostname provided to the ``rs.initiate()`` command.
|
||||
It needs to ensure that the replica set is being initialized in the same
|
||||
instance where the MongoDB instance is running.
|
||||
* We use the value in the ``mdb-instance-name`` field to achieve this.
|
||||
* This field will be the DNS name of your MongoDB instance, and Kubernetes
|
||||
maps this name to its internal DNS.
|
||||
* This field will also be used by other MongoDB instances when forming a
|
||||
MongoDB replica set.
|
||||
* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
|
||||
documentation. Your BigchainDB cluster may use a different naming convention.
|
||||
|
||||
@ -139,31 +134,10 @@ listening for HTTP requests. Currently set to ``9984`` by default.
|
||||
The ``bigchaindb-ws-port`` is the port number on which BigchainDB is
|
||||
listening for Websocket requests. Currently set to ``9985`` by default.
|
||||
|
||||
There's another :ref:`page with a complete listing of all the BigchainDB Server
|
||||
configuration settings <Configuration Settings>`.
|
||||
There's another :doc:`page with a complete listing of all the BigchainDB Server
|
||||
configuration settings <../server-reference/configuration>`.
|
||||
|
||||
|
||||
bdb-config.bdb-keyring
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This lists the BigchainDB public keys
|
||||
of all *other* nodes in your BigchainDB cluster
|
||||
(not including the public key of your BigchainDB node). Cases:
|
||||
|
||||
* If you're deploying the first node in the cluster,
|
||||
the value should be ``""`` (an empty string).
|
||||
* If you're deploying the second node in the cluster,
|
||||
the value should be the BigchainDB public key of the first/original
|
||||
node in the cluster.
|
||||
For example,
|
||||
``"EPQk5i5yYpoUwGVM8VKZRjM8CYxB6j8Lu8i8SG7kGGce"``
|
||||
* If there are two or more other nodes already in the cluster,
|
||||
the value should be a colon-separated list
|
||||
of the BigchainDB public keys
|
||||
of those other nodes.
|
||||
For example,
|
||||
``"DPjpKbmbPYPKVAuf6VSkqGCf5jzrEh69Ldef6TrLwsEQ:EPQk5i5yYpoUwGVM8VKZRjM8CYxB6j8Lu8i8SG7kGGce"``
|
||||
|
||||
bdb-config.bdb-user
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
@ -174,16 +148,16 @@ We need to specify the user name *as seen in the certificate* issued to
|
||||
the BigchainDB instance in order to authenticate correctly. Use
|
||||
the following ``openssl`` command to extract the user name from the
|
||||
certificate:
|
||||
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ openssl x509 -in <path to the bigchaindb certificate> \
|
||||
-inform PEM -subject -nameopt RFC2253
|
||||
|
||||
|
||||
You should see an output line that resembles:
|
||||
|
||||
|
||||
.. code:: bash
|
||||
|
||||
|
||||
subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
|
||||
|
||||
The ``subject`` line states the complete user name we need to use for this
|
||||
@ -194,6 +168,137 @@ field (``bdb-config.bdb-user``), i.e.
|
||||
emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
|
||||
|
||||
|
||||
tendermint-config.tm-instance-name
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Your BigchainDB cluster organization should have a standard way
|
||||
of naming instances, so the instances in your BigchainDB node
|
||||
should conform to that standard. There are some things worth noting
|
||||
about the ``tm-instance-name``:
|
||||
|
||||
* This field will be the DNS name of your Tendermint instance, and Kubernetes
|
||||
maps this name to its internal DNS, so all peer-to-peer communication
|
||||
depends on this name in a network/multi-node deployment.
|
||||
* This parameter is also used to access the public key of a particular node.
|
||||
* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our
|
||||
documentation. Your BigchainDB cluster may use a different naming convention.
|
||||
|
||||
|
||||
tendermint-config.ngx-tm-instance-name
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
NGINX needs the FQDN of the servers inside the cluster to be able to forward
|
||||
traffic.
|
||||
``ngx-tm-instance-name`` is the FQDN of the Tendermint
|
||||
instance in this Kubernetes cluster.
|
||||
In Kubernetes, this is usually the name specified in the
|
||||
corresponding ``tendermint-config.*-instance-name`` followed by
|
||||
``<namespace name>.svc.cluster.local``. For example, if you run Tendermint in
|
||||
the default Kubernetes namespace, this will be
|
||||
``<tendermint-config.tm-instance-name>.default.svc.cluster.local``.
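For instance, if ``tm-instance-name`` is ``tm-instance-0`` and you deploy into the
``default`` namespace, the FQDN would be ``tm-instance-0.default.svc.cluster.local``.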
|
||||
|
||||
|
||||
tendermint-config.tm-seeds
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-seeds`` is the initial set of peers to connect to. It is a comma-separated
|
||||
list of all the peers that are part of the cluster.
|
||||
|
||||
If you are deploying a stand-alone BigchainDB node, the value should be the same as
|
||||
``<tm-instance-name>``. If you are deploying a network, this parameter will look
|
||||
like this:
|
||||
|
||||
.. code::
|
||||
|
||||
<tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
|
||||
|
||||
|
||||
tendermint-config.tm-validators
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-validators`` is the initial set of validators in the network. It is a comma-separated list
|
||||
of all the participant validator nodes.
|
||||
|
||||
If you are deploying a stand-alone BigchainDB node, the value should be the same as
|
||||
``<tm-instance-name>``. If you are deploying a network, this parameter will look like
|
||||
this:
|
||||
|
||||
.. code::
|
||||
|
||||
<tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
|
||||
|
||||
|
||||
tendermint-config.tm-validator-power
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-validator-power`` represents the voting power of each validator. It is a comma-separated
|
||||
list of the voting powers of all the participants in the network.
|
||||
|
||||
**Note**: The order of the ``tm-validator-power`` list should match the order of the ``tm-validators`` list.
|
||||
|
||||
.. code::
|
||||
|
||||
tm-validators: <tm-instance-1>,<tm-instance-2>,<tm-instance-3>,<tm-instance-4>
|
||||
|
||||
For the above list of validators the ``tm-validator-power`` list should look like this:
|
||||
|
||||
.. code::
|
||||
|
||||
tm-validator-power: <tm-instance-1-power>,<tm-instance-2-power>,<tm-instance-3-power>,<tm-instance-4-power>
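As a concrete illustration (the instance names and powers here are hypothetical
placeholders), a four-node network in which every validator carries equal weight
could set:

.. code::

   tm-validators: tm-instance-1,tm-instance-2,tm-instance-3,tm-instance-4
   tm-validator-power: 10,10,10,10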
|
||||
|
||||
|
||||
tendermint-config.tm-genesis-time
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-genesis-time`` represents the official time of blockchain start. Details on how to generate
|
||||
this parameter are covered :ref:`here <generate-the-blockchain-id-and-genesis-time>`.
|
||||
|
||||
|
||||
tendermint-config.tm-chain-id
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-chain-id`` represents the ID of the blockchain. This must be unique for every blockchain.
|
||||
Details on how to generate this parameter are covered
|
||||
:ref:`here <generate-the-blockchain-id-and-genesis-time>`.
|
||||
|
||||
|
||||
tendermint-config.tm-abci-port
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-abci-port`` has a default value of ``46658``, which is used by Tendermint Core for
|
||||
ABCI (Application BlockChain Interface) traffic. BigchainDB nodes use this port
|
||||
internally to communicate with Tendermint Core.
|
||||
|
||||
|
||||
tendermint-config.tm-p2p-port
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-p2p-port`` has a default value of ``46656``, which is used by Tendermint Core for
|
||||
peer-to-peer communication.
|
||||
|
||||
For a multi-node/zone deployment, this port needs to be available publicly for P2P
|
||||
communication between Tendermint nodes.
|
||||
|
||||
|
||||
tendermint-config.tm-rpc-port
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-rpc-port`` has a default value of ``46657``, which is used by Tendermint Core for RPC
|
||||
traffic. BigchainDB nodes use this port to communicate with Tendermint's RPC interface.
|
||||
|
||||
|
||||
tendermint-config.tm-pub-key-access
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
``tm-pub-key-access`` has a default value of ``9986``, which is used to discover the public
|
||||
key of a Tendermint node. Each Tendermint StatefulSet (a Pod with Tendermint + NGINX) hosts its
|
||||
public key.
|
||||
|
||||
.. code::
|
||||
|
||||
http://tendermint-instance-1:9986/pub_key.json
|
||||
|
||||
|
||||
Edit secret.yaml
|
||||
----------------
|
||||
|
||||
|
@ -1,18 +1,16 @@
|
||||
.. _kubernetes-template-deploy-a-single-bigchaindb-node:
|
||||
|
||||
Kubernetes Template: Deploy a Single BigchainDB Node
|
||||
====================================================
|
||||
|
||||
This page describes how to deploy the first BigchainDB node
|
||||
in a BigchainDB cluster, or a stand-alone BigchainDB node,
|
||||
This page describes how to deploy a stand-alone BigchainDB + Tendermint node
|
||||
using `Kubernetes <https://kubernetes.io/>`_.
|
||||
It assumes you already have a running Kubernetes cluster.
|
||||
|
||||
If you want to add a new BigchainDB node to an existing BigchainDB cluster,
|
||||
refer to :doc:`the page about that <add-node-on-kubernetes>`.
|
||||
|
||||
Below, we refer to many files by their directory and filename,
|
||||
such as ``configuration/config-map.yaml``. Those files are files in the
|
||||
`bigchaindb/bigchaindb repository on GitHub
|
||||
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
|
||||
`bigchaindb/bigchaindb repository on GitHub <https://github.com/bigchaindb/bigchaindb/>`_
|
||||
in the ``k8s/`` directory.
|
||||
Make sure you're getting those files from the appropriate Git branch on
|
||||
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
|
||||
cluster is using.
|
||||
@ -30,7 +28,8 @@ The default location of the kubectl configuration file is ``~/.kube/config``.
|
||||
If you don't have that file, then you need to get it.
|
||||
|
||||
**Azure.** If you deployed your Kubernetes cluster on Azure
|
||||
using the Azure CLI 2.0 (as per :doc:`our template <template-kubernetes-azure>`),
|
||||
using the Azure CLI 2.0 (as per :doc:`our template
|
||||
<../production-deployment-template/template-kubernetes-azure>`),
|
||||
then you can get the ``~/.kube/config`` file using:
|
||||
|
||||
.. code:: bash
|
||||
@ -105,9 +104,11 @@ That means you can visit the dashboard in your web browser at
|
||||
Step 3: Configure Your BigchainDB Node
|
||||
--------------------------------------
|
||||
|
||||
See the page titled :ref:`How to Configure a BigchainDB Node`.
|
||||
See the page titled :ref:`how-to-configure-a-bigchaindb-node`.
|
||||
|
||||
|
||||
.. _start-the-nginx-service:
|
||||
|
||||
Step 4: Start the NGINX Service
|
||||
-------------------------------
|
||||
|
||||
@ -137,6 +138,16 @@ Step 4.1: Vanilla NGINX
|
||||
``cluster-frontend-port`` in the ConfigMap above. This is the
|
||||
``public-cluster-port`` in the file, which is the ingress into the cluster.
|
||||
|
||||
* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
|
||||
``tm-pub-access-port`` in the ConfigMap above. This is the
|
||||
``tm-pub-key-access`` in the file, which specifies where the Public Key for
|
||||
the Tendermint instance is available.
|
||||
|
||||
* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
|
||||
``tm-p2p-port`` in the ConfigMap above. This is the
|
||||
``tm-p2p-port`` in the file, which is used for P2P communication between Tendermint
|
||||
nodes.
|
||||
|
||||
* Start the Kubernetes Service:
|
||||
|
||||
.. code:: bash
|
||||
@ -172,6 +183,17 @@ Step 4.2: NGINX with HTTPS
|
||||
``public-mdb-port`` in the file which specifies where MongoDB is
|
||||
available.
|
||||
|
||||
* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
|
||||
``tm-pub-access-port`` in the ConfigMap above. This is the
|
||||
``tm-pub-key-access`` in the file, which specifies where the Public Key for
|
||||
the Tendermint instance is available.
|
||||
|
||||
* Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the
|
||||
``tm-p2p-port`` in the ConfigMap above. This is the
|
||||
``tm-p2p-port`` in the file, which is used for P2P communication between Tendermint
|
||||
nodes.
|
||||
|
||||
|
||||
* Start the Kubernetes Service:
|
||||
|
||||
.. code:: bash
|
||||
@ -179,6 +201,8 @@ Step 4.2: NGINX with HTTPS
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml
|
||||
|
||||
|
||||
.. _assign-dns-name-to-nginx-public-ip:
|
||||
|
||||
Step 5: Assign DNS Name to the NGINX Public IP
|
||||
----------------------------------------------
|
||||
|
||||
@ -216,10 +240,12 @@ changes to be applied.
|
||||
To verify the DNS setting is operational, you can run ``nslookup <DNS
|
||||
name added in Azure configuration>`` from your local Linux shell.
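For example, if the DNS name you entered were
``bdb-test-cluster-0.westeurope.cloudapp.azure.com`` (a made-up value), the check
would be:

.. code:: bash

   $ nslookup bdb-test-cluster-0.westeurope.cloudapp.azure.com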
|
||||
|
||||
This will ensure that when you scale the replica set later, other MongoDB
|
||||
members in the replica set can reach this instance.
|
||||
This will ensure that when you scale to different geographical zones, other Tendermint
|
||||
nodes in the network can reach this instance.
|
||||
|
||||
|
||||
.. _start-the-mongodb-kubernetes-service:
|
||||
|
||||
Step 6: Start the MongoDB Kubernetes Service
|
||||
--------------------------------------------
|
||||
|
||||
@ -245,6 +271,8 @@ Step 6: Start the MongoDB Kubernetes Service
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml
|
||||
|
||||
|
||||
.. _start-the-bigchaindb-kubernetes-service:
|
||||
|
||||
Step 7: Start the BigchainDB Kubernetes Service
|
||||
-----------------------------------------------
|
||||
|
||||
@ -268,6 +296,11 @@ Step 7: Start the BigchainDB Kubernetes Service
|
||||
This is the ``bdb-ws-port`` in the file which specifies where BigchainDB
|
||||
listens for Websocket connections.
|
||||
|
||||
* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
|
||||
``tm-abci-port`` in the ConfigMap above.
|
||||
This is the ``tm-abci-port`` in the file which specifies the port used
|
||||
for ABCI communication.
|
||||
|
||||
* Start the Kubernetes Service:
|
||||
|
||||
.. code:: bash
|
||||
@ -275,6 +308,8 @@ Step 7: Start the BigchainDB Kubernetes Service
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml
|
||||
|
||||
|
||||
.. _start-the-openresty-kubernetes-service:
|
||||
|
||||
Step 8: Start the OpenResty Kubernetes Service
|
||||
----------------------------------------------
|
||||
|
||||
@ -288,6 +323,9 @@ Step 8: Start the OpenResty Kubernetes Service
|
||||
``openresty-instance-name`` is ``openresty-instance-0``, set the
|
||||
``spec.selector.app`` to ``openresty-instance-0-dep``.
|
||||
|
||||
* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
|
||||
``openresty-backend-port`` in the ConfigMap.
|
||||
|
||||
* Start the Kubernetes Service:
|
||||
|
||||
.. code:: bash
|
||||
@ -295,19 +333,56 @@ Step 8: Start the OpenResty Kubernetes Service
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml
|
||||
|
||||
|
||||
Step 9: Start the NGINX Kubernetes Deployment
|
||||
---------------------------------------------
|
||||
.. _start-the-tendermint-kubernetes-service:
|
||||
|
||||
* NGINX is used as a proxy to OpenResty, BigchainDB and MongoDB instances in
|
||||
Step 9: Start the Tendermint Kubernetes Service
|
||||
-----------------------------------------------
|
||||
|
||||
* This configuration is located in the file ``tendermint/tendermint-svc.yaml``.
|
||||
|
||||
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
|
||||
set in ``tm-instance-name`` in the ConfigMap above.
|
||||
|
||||
* Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in
|
||||
the ConfigMap followed by ``-ss``. For example, if the value set in the
|
||||
``tm-instance-name`` is ``tm-instance-0``, set the
|
||||
``spec.selector.app`` to ``tm-instance-0-ss``.
|
||||
|
||||
* Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
|
||||
``tm-p2p-port`` in the ConfigMap above.
|
||||
It specifies where Tendermint peers communicate.
|
||||
|
||||
* Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
|
||||
``tm-rpc-port`` in the ConfigMap above.
|
||||
It specifies the port used by Tendermint core for RPC traffic.
|
||||
|
||||
* Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
|
||||
``tm-pub-key-access`` in the ConfigMap above.
|
||||
It specifies the port to host/distribute the public key for the Tendermint node.
|
||||
|
||||
* Start the Kubernetes Service:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml
|
||||
|
||||
|
||||
.. _start-the-nginx-deployment:
|
||||
|
||||
Step 10: Start the NGINX Kubernetes Deployment
|
||||
----------------------------------------------
|
||||
|
||||
* NGINX is used as a proxy to OpenResty, BigchainDB, Tendermint and MongoDB instances in
|
||||
the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port``
|
||||
to the corresponding OpenResty or BigchainDB backend, and TCP connections
|
||||
on ``mongodb-frontend-port`` to the MongoDB backend.
|
||||
to the corresponding OpenResty or BigchainDB backend, TCP connections
|
||||
on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
|
||||
to MongoDB and Tendermint respectively.
|
||||
|
||||
* As in step 4, you have the option to use vanilla NGINX without HTTPS or
|
||||
NGINX with HTTPS support.
|
||||
|
||||
Step 9.1: Vanilla NGINX
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Step 10.1: Vanilla NGINX
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
* This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``.
|
||||
|
||||
@ -317,9 +392,10 @@ Step 9.1: Vanilla NGINX
|
||||
``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.
|
||||
|
||||
* Set the ports to be exposed from the pod in the
|
||||
``spec.containers[0].ports`` section. We currently expose 3 ports -
|
||||
``mongodb-frontend-port``, ``cluster-frontend-port`` and
|
||||
``cluster-health-check-port``. Set them to the values specified in the
|
||||
``spec.containers[0].ports`` section. We currently expose 5 ports -
|
||||
``mongodb-frontend-port``, ``cluster-frontend-port``,
|
||||
``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
|
||||
Set them to the values specified in the
|
||||
ConfigMap.
|
||||
|
||||
* The configuration uses the following values set in the ConfigMap:
|
||||
@ -333,6 +409,9 @@ Step 9.1: Vanilla NGINX
|
||||
- ``ngx-bdb-instance-name``
|
||||
- ``bigchaindb-api-port``
|
||||
- ``bigchaindb-ws-port``
|
||||
- ``ngx-tm-instance-name``
|
||||
- ``tm-pub-key-access``
|
||||
- ``tm-p2p-port``
|
||||
|
||||
* Start the Kubernetes Deployment:
|
||||
|
||||
@ -341,8 +420,8 @@ Step 9.1: Vanilla NGINX
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml
|
||||
|
||||
|
||||
Step 9.2: NGINX with HTTPS
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Step 10.2: NGINX with HTTPS
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
* This configuration is located in the file
|
||||
``nginx-https/nginx-https-dep.yaml``.
|
||||
@ -353,9 +432,10 @@ Step 9.2: NGINX with HTTPS
|
||||
``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.
|
||||
|
||||
* Set the ports to be exposed from the pod in the
|
||||
``spec.containers[0].ports`` section. We currently expose 3 ports -
|
||||
``mongodb-frontend-port``, ``cluster-frontend-port`` and
|
||||
``cluster-health-check-port``. Set them to the values specified in the
|
||||
``spec.containers[0].ports`` section. We currently expose 5 ports -
|
||||
``mongodb-frontend-port``, ``cluster-frontend-port``,
|
||||
``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
|
||||
Set them to the values specified in the
|
||||
ConfigMap.
|
||||
|
||||
* The configuration uses the following values set in the ConfigMap:
|
||||
@ -372,6 +452,9 @@ Step 9.2: NGINX with HTTPS
|
||||
- ``ngx-bdb-instance-name``
|
||||
- ``bigchaindb-api-port``
|
||||
- ``bigchaindb-ws-port``
|
||||
- ``ngx-tm-instance-name``
|
||||
- ``tm-pub-key-access``
|
||||
- ``tm-p2p-port``
|
||||
|
||||
* The configuration uses the following values set in the Secret:
|
||||
|
||||
@ -384,7 +467,9 @@ Step 9.2: NGINX with HTTPS
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml
|
||||
|
||||
|
||||
Step 10: Create Kubernetes Storage Classes for MongoDB
|
||||
.. _create-kubernetes-storage-class-mdb:
|
||||
|
||||
Step 11: Create Kubernetes Storage Classes for MongoDB
|
||||
------------------------------------------------------
|
||||
|
||||
MongoDB needs somewhere to store its data persistently,
|
||||
@ -394,10 +479,10 @@ Our MongoDB Docker container
|
||||
exports two volume mounts with correct
|
||||
permissions from inside the container:
|
||||
|
||||
* The directory where the mongod instance stores its data: ``/data/db``.
|
||||
* The directory where the MongoDB instance stores its data: ``/data/db``.
|
||||
There's more explanation in the MongoDB docs about `storage.dbpath <https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPath>`_.
|
||||
|
||||
* The directory where the mongodb instance stores the metadata for a sharded
|
||||
* The directory where the MongoDB instance stores the metadata for a sharded
|
||||
cluster: ``/data/configdb/``.
|
||||
There's more explanation in the MongoDB docs about `sharding.configDB <https://docs.mongodb.com/manual/reference/configuration-options/#sharding.configDB>`_.
|
||||
|
||||
@ -413,7 +498,7 @@ The first thing to do is create the Kubernetes storage classes.
|
||||
First, you need an Azure storage account.
|
||||
If you deployed your Kubernetes cluster on Azure
|
||||
using the Azure CLI 2.0
|
||||
(as per :doc:`our template <template-kubernetes-azure>`),
|
||||
(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
|
||||
then the `az acs create` command already created a
|
||||
storage account in the same location and resource group
|
||||
as your Kubernetes cluster.
|
||||
@ -425,7 +510,7 @@ in the same data center.
|
||||
Premium storage is higher-cost and higher-performance.
|
||||
It uses solid state drives (SSD).
|
||||
You can create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
|
||||
for Premium storage and associate it with your Azure resource group.
|
||||
for Premium storage and associate it with your Azure resource group.
|
||||
For future reference, the command to create a storage account is
|
||||
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
|
||||
|
||||
@ -433,7 +518,7 @@ For future reference, the command to create a storage account is
|
||||
Please refer to `Azure documentation <https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage>`_
|
||||
for the list of VMs that are supported by Premium Storage.
|
||||
|
||||
The Kubernetes template for configuration of Storage Class is located in the
|
||||
The Kubernetes template for configuration of the MongoDB Storage Class is located in the
|
||||
file ``mongodb/mongo-sc.yaml``.
|
||||
|
||||
You may have to update the ``parameters.location`` field in the file to
|
||||
@ -441,7 +526,7 @@ specify the location you are using in Azure.
|
||||
|
||||
If you want to use a custom storage account with the Storage Class, you
|
||||
can also update `parameters.storageAccount` and provide the Azure storage
|
||||
account name.
|
||||
account name.
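As a rough sketch (the field values below are illustrative assumptions, not the
exact contents of ``mongodb/mongo-sc.yaml``), an Azure disk Storage Class
generally looks like:

.. code:: yaml

   kind: StorageClass
   apiVersion: storage.k8s.io/v1beta1
   metadata:
     name: <your storage class name>      # assumed placeholder
   provisioner: kubernetes.io/azure-disk
   parameters:
     skuName: Premium_LRS                 # Premium (SSD-backed) storage
     location: <your Azure location>
     # storageAccount: <custom storage account name>   # optional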
|
||||
|
||||
Create the required storage classes using:
|
||||
|
||||
@ -453,8 +538,10 @@ Create the required storage classes using:
|
||||
You can check if it worked using ``kubectl get storageclasses``.
|
||||
|
||||
|
||||
Step 11: Create Kubernetes Persistent Volume Claims
|
||||
---------------------------------------------------
|
||||
.. _create-kubernetes-persistent-volume-claim-mdb:
|
||||
|
||||
Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
|
||||
---------------------------------------------------------------
|
||||
|
||||
Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
|
||||
``mongo-configdb-claim``.
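A minimal sketch of one such claim follows; the storage-class annotation and the
requested size are illustrative assumptions, and the authoritative file is
``mongodb/mongo-pvc.yaml``:

.. code:: yaml

   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: mongo-db-claim
     annotations:
       volume.beta.kubernetes.io/storage-class: <your storage class>  # assumed
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 20Gi  # pick a size suited to your expected data volume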
|
||||
@ -500,13 +587,15 @@ but it should become "Bound" fairly quickly.
|
||||
* Run the following command to update a PV's reclaim policy to ``Retain``:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
|
||||
|
||||
For notes on recreating a persistent volume from a released Azure disk resource, consult
|
||||
:ref:`the page about cluster troubleshooting <Cluster Troubleshooting>`.
|
||||
:doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
|
||||
|
||||
Step 12: Start a Kubernetes StatefulSet for MongoDB
|
||||
.. _start-kubernetes-stateful-set-mongodb:
|
||||
|
||||
Step 13: Start a Kubernetes StatefulSet for MongoDB
|
||||
---------------------------------------------------
|
||||
|
||||
* This configuration is located in the file ``mongodb/mongo-ss.yaml``.
|
||||
@ -551,9 +640,8 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
|
||||
* The configuration uses the following values set in the ConfigMap:
|
||||
|
||||
- ``mdb-instance-name``
|
||||
- ``mongodb-replicaset-name``
|
||||
- ``mongodb-backend-port``
|
||||
|
||||
|
||||
* The configuration uses the following values set in the Secret:
|
||||
|
||||
- ``mdb-certs``
|
||||
@ -590,7 +678,9 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 get pods -w
|
||||
|
||||
|
||||
Step 13: Configure Users and Access Control for MongoDB
|
||||
.. _configure-users-and-access-control-mongodb:
|
||||
|
||||
Step 14: Configure Users and Access Control for MongoDB
|
||||
-------------------------------------------------------
|
||||
|
||||
* In this step, you will create a user on MongoDB with authorization
|
||||
@ -618,28 +708,6 @@ Step 13: Configure Users and Access Control for MongoDB
|
||||
--sslCAFile /etc/mongod/ca/ca.pem \
|
||||
--sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
|
||||
|
||||
* Initialize the replica set using:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
> rs.initiate( {
|
||||
_id : "bigchain-rs",
|
||||
members: [ {
|
||||
_id : 0,
|
||||
host :"<hostname>:27017"
|
||||
} ]
|
||||
} )
|
||||
|
||||
The ``hostname`` in this case will be the value set in
|
||||
``mdb-instance-name`` in the ConfigMap.
|
||||
For example, if the value set in the ``mdb-instance-name`` is
|
||||
``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``.
|
||||
|
||||
* The instance should be voted as the ``PRIMARY`` in the replica set (since
|
||||
this is the only instance in the replica set till now).
|
||||
This can be observed from the mongo shell prompt,
|
||||
which will read ``PRIMARY>``.
|
||||
|
||||
* Create a user ``adminUser`` on the ``admin`` database with the
|
||||
authorization to create other users. This will only work the first time you
|
||||
log in to the mongo shell. For further details, see `localhost
|
||||
@ -697,8 +765,7 @@ Step 13: Configure Users and Access Control for MongoDB
|
||||
]
|
||||
} )
|
||||
|
||||
* You can similarly create users for MongoDB Monitoring Agent and MongoDB
|
||||
Backup Agent. For example:
|
||||
* You can similarly create a user for the MongoDB Monitoring Agent. For example:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
@ -710,16 +777,127 @@ Step 13: Configure Users and Access Control for MongoDB
|
||||
]
|
||||
} )
|
||||
|
||||
PRIMARY> db.getSiblingDB("$external").runCommand( {
|
||||
createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-bak-ssl,OU=MongoDB-Bak-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
|
||||
writeConcern: { w: 'majority' , wtimeout: 5000 },
|
||||
roles: [
|
||||
{ role: 'backup', db: 'admin' }
|
||||
]
|
||||
} )
|
||||
|
||||
.. _create-kubernetes-storage-class:
|
||||
|
||||
Step 15: Create Kubernetes Storage Classes for Tendermint
|
||||
----------------------------------------------------------
|
||||
|
||||
Tendermint needs somewhere to store its data persistently; it uses
|
||||
LevelDB as the persistent storage layer.
|
||||
|
||||
The Kubernetes template for configuration of Storage Class is located in the
|
||||
file ``tendermint/tendermint-sc.yaml``.
|
||||
|
||||
Details about how to create an Azure Storage account and how Kubernetes Storage Classes work
|
||||
are already covered in this document: :ref:`create-kubernetes-storage-class-mdb`.
|
||||
|
||||
Create the required storage classes using:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml
|
||||
|
||||
|
||||
Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
|
||||
You can check if it worked using ``kubectl get storageclasses``.
|
||||
|
||||
.. _create-kubernetes-persistent-volume-claim:
|
||||
|
||||
Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
|
||||
------------------------------------------------------------------
|
||||
|
||||
Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
|
||||
``tendermint-config-db-claim``.
|
||||
|
||||
This configuration is located in the file ``tendermint/tendermint-pvc.yaml``.
|
||||
|
||||
Details about Kubernetes Persistent Volumes, Persistent Volume Claims
|
||||
and how they work with Azure are already covered in this
|
||||
document: :ref:`create-kubernetes-persistent-volume-claim-mdb`.
|
||||
|
||||
Create the required Persistent Volume Claims using:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml
|
||||
|
||||
You can check its status using:
|
||||
|
||||
.. code::
|
||||
|
||||
kubectl get pvc -w
|
||||
|
||||
|
||||
.. _create-kubernetes-stateful-set:
|
||||
|
||||
Step 17: Start a Kubernetes StatefulSet for Tendermint
|
||||
------------------------------------------------------
|
||||
|
||||
* This configuration is located in the file ``tendermint/tendermint-ss.yaml``.
|
||||
|
||||
* Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
|
||||
the ConfigMap.
|
||||
For example, if the value set in the ``tm-instance-name``
|
||||
is ``tm-instance-0``, set the field to ``tm-instance-0``.
|
||||
|
||||
* Set ``metadata.name``, ``spec.template.metadata.name`` and
|
||||
``spec.template.metadata.labels.app`` to the value set in
|
||||
``tm-instance-name`` in the ConfigMap, followed by
|
||||
``-ss``.
|
||||
For example, if the value set in the
|
||||
``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
|
||||
``tm-instance-0-ss``.
|
||||
|
||||
* Note how the Tendermint container uses the ``tendermint-db-claim`` and the
|
||||
``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and
|
||||
``/tendermint_node_data`` directories (mount paths).
|
||||
|
||||
* As we gain more experience running Tendermint in testing and production, we
|
||||
will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
|
||||
|
||||
We deploy Tendermint as a Pod (Tendermint + NGINX); Tendermint is used as the consensus
|
||||
engine, while NGINX is used to serve the public key of the Tendermint instance.
|
||||
|
||||
* For the NGINX container, set the port to be exposed from the container in the
|
||||
``spec.containers[0].ports[0]`` section. Set it to the value specified
|
||||
for ``tm-pub-key-access`` in the ConfigMap.
|
||||
|
||||
* For the Tendermint container, set the ports to be exposed from the container in the
|
||||
``spec.containers[1].ports`` section. We currently expose two Tendermint ports.
|
||||
Set them to the values specified for ``tm-p2p-port`` and ``tm-rpc-port``
|
||||
in the ConfigMap, respectively (a minimal port-layout sketch follows at the end of this step).
|
||||
|
||||
* The configuration uses the following values set in the ConfigMap:
|
||||
|
||||
- ``tm-pub-key-access``
|
||||
- ``tm-seeds``
|
||||
- ``tm-validator-power``
|
||||
- ``tm-validators``
|
||||
- ``tm-genesis-time``
|
||||
- ``tm-chain-id``
|
||||
- ``tm-abci-port``
|
||||
- ``bdb-instance-name``
|
||||
|
||||
* Create the Tendermint StatefulSet using:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml
|
||||
|
||||
* It might take up to 10 minutes for the disks, specified in the Persistent
|
||||
Volume Claims above, to be created and attached to the pod.
|
||||
The UI might show that the pod has errored with the message
|
||||
"timeout expired waiting for volumes to attach/mount". Use the CLI below
|
||||
to check the status of the pod in this case, instead of the UI.
|
||||
This happens due to a bug in Azure ACS.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 get pods -w
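As noted above, here is a minimal sketch of the port layout of the two containers
(the port numbers are the defaults quoted earlier on this page; the authoritative
manifest is ``tendermint/tendermint-ss.yaml``):

.. code:: yaml

   containers:
     - name: nginx                  # serves the Tendermint public key
       ports:
         - containerPort: 9986      # tm-pub-key-access
     - name: tendermint             # consensus engine
       ports:
         - containerPort: 46656     # tm-p2p-port
         - containerPort: 46657     # tm-rpc-port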
|
||||
|
||||
.. _start-kubernetes-deployment-for-mdb-mon-agent:
|
||||
|
||||
Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent
|
||||
-------------------------------------------------------------------
|
||||
|
||||
* This configuration is located in the file
|
||||
@ -746,34 +924,9 @@ Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
|
||||
|
||||
|
||||
Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
|
||||
---------------------------------------------------------------
|
||||
.. _start-kubernetes-deployment-bdb:
|
||||
|
||||
* This configuration is located in the file
|
||||
``mongodb-backup-agent/mongo-backup-dep.yaml``.
|
||||
|
||||
* Set ``metadata.name``, ``spec.template.metadata.name`` and
|
||||
``spec.template.metadata.labels.app`` to the value set in
|
||||
``mdb-bak-instance-name`` in the ConfigMap, followed by
|
||||
``-dep``.
|
||||
For example, if the value set in the
|
||||
``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the
|
||||
value ``mdb-bak-instance-0-dep``.
|
||||
|
||||
* The configuration uses the following values set in the Secret:
|
||||
|
||||
- ``mdb-bak-certs``
|
||||
- ``ca-auth``
|
||||
- ``cloud-manager-credentials``
|
||||
|
||||
* Start the Kubernetes Deployment using:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml
|
||||
|
||||
|
||||
Step 16: Start a Kubernetes Deployment for BigchainDB
|
||||
Step 19: Start a Kubernetes Deployment for BigchainDB
|
||||
-----------------------------------------------------
|
||||
|
||||
* This configuration is located in the file
|
||||
@ -786,21 +939,14 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
|
||||
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
|
||||
value ``bdb-instance-0-dep``.
|
||||
|
||||
* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded); a sketch of the relevant snippet appears at the end of this step.
|
||||
(In the future, we'd like to pull the BigchainDB private key from
|
||||
the Secret named ``bdb-private-key``,
|
||||
but a Secret can only be mounted as a file,
|
||||
so BigchainDB Server would have to be modified to look for it
|
||||
in a file.)
|
||||
|
||||
* As we gain more experience running BigchainDB in testing and production,
|
||||
we will tweak the ``resources.limits`` values for CPU and memory, and as
|
||||
richer monitoring and probing becomes available in BigchainDB, we will
|
||||
tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
|
||||
|
||||
* Set the ports to be exposed from the pod in the
|
||||
``spec.containers[0].ports`` section. We currently expose 2 ports -
|
||||
``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
|
||||
``spec.containers[0].ports`` section. We currently expose 3 ports -
|
||||
``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``. Set them to the
|
||||
values specified in the ConfigMap.
|
||||
|
||||
* The configuration uses the following values set in the ConfigMap:
|
||||
@ -821,6 +967,8 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
|
||||
- ``bigchaindb-database-connection-timeout``
|
||||
- ``bigchaindb-log-level``
|
||||
- ``bdb-user``
|
||||
- ``tm-instance-name``
|
||||
- ``tm-rpc-port``
|
||||
|
||||
* The configuration uses the following values set in the Secret:
|
||||
|
||||
@ -837,7 +985,9 @@ Step 16: Start a Kubernetes Deployment for BigchainDB
|
||||
* You can check its status using the command ``kubectl get deployments -w``
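For reference, the ``BIGCHAINDB_KEYPAIR_PRIVATE`` setting from earlier in this step
is just an environment variable on the BigchainDB container. A minimal sketch (the
value shown is a placeholder; never put a real private key in a file you share):

.. code:: yaml

   env:
     - name: BIGCHAINDB_KEYPAIR_PRIVATE
       value: "<your BigchainDB private key, not base64-encoded>"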
|
||||
|
||||
|
||||
Step 17: Start a Kubernetes Deployment for OpenResty
|
||||
.. _start-kubernetes-deployment-openresty:
|
||||
|
||||
Step 20: Start a Kubernetes Deployment for OpenResty
|
||||
----------------------------------------------------
|
||||
|
||||
* This configuration is located in the file
|
||||
@ -876,19 +1026,21 @@ Step 17: Start a Kubernetes Deployment for OpenResty
|
||||
* You can check its status using the command ``kubectl get deployments -w``
|
||||
|
||||
|
||||
Step 18: Configure the MongoDB Cloud Manager
|
||||
Step 21: Configure the MongoDB Cloud Manager
|
||||
--------------------------------------------
|
||||
|
||||
Refer to the
|
||||
:ref:`documentation <Configure MongoDB Cloud Manager for Monitoring and Backup>`
|
||||
:doc:`documentation <../production-deployment-template/cloud-manager>`
|
||||
for details on how to configure the MongoDB Cloud Manager to enable
|
||||
monitoring and backup.
|
||||
|
||||
|
||||
Step 19: Verify the BigchainDB Node Setup
|
||||
.. _verify-and-test-bdb:
|
||||
|
||||
Step 22: Verify the BigchainDB Node Setup
|
||||
-----------------------------------------
|
||||
|
||||
Step 19.1: Testing Internally
|
||||
Step 22.1: Testing Internally
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To test the setup of your BigchainDB node, you could use a Docker container
|
||||
@ -939,6 +1091,18 @@ To test the BigchainDB instance:
|
||||
|
||||
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
|
||||
|
||||
To test the Tendermint instance:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ nslookup tm-instance-0
|
||||
|
||||
$ dig +noall +answer _bdb-api-port._tcp.tm-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
$ dig +noall +answer _bdb-ws-port._tcp.tm-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
$ curl -X GET http://tm-instance-0:9986/pub_key.json
|
||||
|
||||
|
||||
To test the OpenResty instance:
|
||||
|
||||
@ -992,10 +1156,10 @@ The above curl command should result in the response
|
||||
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
|
||||
|
||||
|
||||
Step 19.2: Testing Externally
|
||||
Step 22.2: Testing Externally
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Check the MongoDB monitoring and backup agent on the MongoDB Cloud Manager
|
||||
Check the MongoDB monitoring agent on the MongoDB Cloud Manager
|
||||
portal to verify it is working fine.
|
||||
|
||||
If you are using the NGINX with HTTP support, accessing the URL
|
||||
@ -1007,3 +1171,7 @@ If you are using the NGINX with HTTPS support, use ``https`` instead of
|
||||
|
||||
Use the Python Driver to send some transactions to the BigchainDB node and
|
||||
verify that your node or cluster works as expected.
|
||||
|
||||
Next, you can set up log analytics and monitoring by following our templates:
|
||||
|
||||
* :doc:`../production-deployment-template/log-analytics`.
|
||||
|
@ -1,146 +0,0 @@
|
||||
How to Restore Data Backed On MongoDB Cloud Manager
|
||||
===================================================
|
||||
|
||||
This page describes how to restore data backed up on
|
||||
`MongoDB Cloud Manager <https://cloud.mongodb.com/>`_ by
|
||||
the backup agent when using a single instance MongoDB replica set.
|
||||
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
- You can restore to either new hardware or existing hardware. We cover
|
||||
restoring data to an existing MongoDB Kubernetes StatefulSet using a
|
||||
Kubernetes Persistent Volume Claim below as described
|
||||
:doc:`here <node-on-kubernetes>`.
|
||||
|
||||
- If the backup and destination database storage engines or settings do not
|
||||
match, mongod cannot start once the backup is restored.
|
||||
|
||||
- If the backup and destination database do not belong to the same MongoDB
|
||||
Cloud Manager group, then the database will start but never initialize
|
||||
properly.
|
||||
|
||||
- The backup restore file includes a metadata file, restoreInfo.txt. This file
|
||||
captures the options the database used when the snapshot was taken. The
|
||||
database must be run with the listed options after it has been restored. It
|
||||
contains:
|
||||
1. Group name
|
||||
2. Replica Set name
|
||||
3. Cluster Id (if applicable)
|
||||
4. Snapshot timestamp (as Timestamp at UTC)
|
||||
5. Last Oplog applied (as a BSON Timestamp at UTC)
|
||||
6. MongoDB version
|
||||
7. Storage engine type
|
||||
8. mongod startup options used on the database when the snapshot was taken
|
||||
|
||||
|
||||
Step 1: Get the Backup/Archived Data from Cloud Manager
|
||||
-------------------------------------------------------
|
||||
|
||||
- Log in to the Cloud Manager.
|
||||
|
||||
- Select the Group that you want to restore data from.
|
||||
|
||||
- Click Backup. Hover over the Status column, click on the
|
||||
``Restore Or Download`` button.
|
||||
|
||||
- Select the appropriate SNAPSHOT, and click Next.
|
||||
|
||||
.. note::
|
||||
|
||||
We currently do not support restoring data using the ``POINT IN TIME`` and
|
||||
``OPLOG TIMESTAMP`` method.
|
||||
|
||||
- Select 'Pull via Secure HTTP'. Select the number of times the link can be
|
||||
used to download data in the dropdown box. We select ``Once``.
|
||||
Select the link expiration time - the time till the download link is active.
|
||||
We usually select ``1 hour``.
|
||||
|
||||
- Check for the email from MongoDB.
|
||||
|
||||
.. note::
|
||||
|
||||
This can take some time as the Cloud Manager needs to prepare an archive of
|
||||
the backed up data.
|
||||
|
||||
- Once you receive the email, click on the link to open the
|
||||
``restore jobs page``. Follow the instructions to download the backup data.
|
||||
|
||||
.. note::
|
||||
|
||||
You will be shown a link to download the back up archive. You can either
|
||||
click on the ``Download`` button to download it using the browser.
|
||||
Under rare circumstances, the download is interrupted and errors out; I have
|
||||
no idea why.
|
||||
An alternative is to copy the download link and use the ``wget`` tool on
|
||||
Linux systems to download the data.
|
||||
|
||||
Step 2: Copy the archive to the MongoDB Instance
|
||||
------------------------------------------------
|
||||
|
||||
- Once you have the archive, you can copy it to the MongoDB instance running
|
||||
on a Kubernetes cluster using something similar to:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context ctx-1 cp bigchain-rs-XXXX.tar.gz mdb-instance-name:/
|
||||
|
||||
where ``bigchain-rs-XXXX.tar.gz`` is the archive downloaded from Cloud
|
||||
Manager, and ``mdb-instance-name`` is the name of your MongoDB instance.
|
||||
|
||||
|
||||
Step 3: Prepare the MongoDB Instance for Restore
|
||||
------------------------------------------------
|
||||
|
||||
- Log in to the MongoDB instance using something like:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context ctx-1 exec -it mdb-instance-name bash
|
||||
|
||||
- Extract the archive that we have copied to the instance at the proper
|
||||
location using:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ mv /bigchain-rs-XXXX.tar.gz /data/db
|
||||
|
||||
$ cd /data/db
|
||||
|
||||
$ tar xzvf bigchain-rs-XXXX.tar.gz
|
||||
|
||||
|
||||
- Rename the directories on the disk, so that MongoDB can find the correct
|
||||
data after we restart it.
|
||||
|
||||
- The current database will be located in the ``/data/db/main`` directory.
|
||||
We simply rename the old directory to ``/data/db/main.BAK`` and rename the
|
||||
backup directory ``bigchain-rs-XXXX`` to ``main``.
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ mv main main.BAK
|
||||
|
||||
$ mv bigchain-rs-XXXX main
|
||||
|
||||
.. note::
|
||||
|
||||
Ensure that there are no connections to MongoDB from any client, in our
|
||||
case, BigchainDB. This can be done in multiple ways - iptable rules,
|
||||
shutting down BigchainDB, stop sending any transactions to BigchainDB, etc.
|
||||
The simplest way to do it is to stop the MongoDB Kubernetes Service.
|
||||
BigchainDB has a retry mechanism built in, and it will keep trying to
|
||||
connect to MongoDB backend repeatedly till it succeeds.
|
||||
|
||||
Step 4: Restart the MongoDB Instance
|
||||
------------------------------------
|
||||
|
||||
- This can be achieved using something like:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl --context ctx-1 delete -f k8s/mongo/mongo-ss.yaml
|
||||
|
||||
$ kubectl --context ctx-1 apply -f k8s/mongo/mongo-ss.yaml
|
||||
|
@ -10,7 +10,7 @@ Step 1: Revoke a Certificate
|
||||
----------------------------
|
||||
|
||||
Since we used Easy-RSA version 3 to
|
||||
:ref:`set up the CA <How to Set Up a Self-Signed Certificate Authority>`,
|
||||
:ref:`set up the CA <how-to-set-up-a-self-signed-certificate-authority>`,
|
||||
we use it to revoke certificates too.
|
||||
|
||||
Go to the following directory (associated with the self-signed CA):
|
||||
|
@ -1,3 +1,5 @@
|
||||
.. _how-to-generate-a-server-certificate-for-mongodb:
|
||||
|
||||
How to Generate a Server Certificate for MongoDB
|
||||
================================================
|
||||
|
||||
@ -19,7 +21,7 @@ First create a directory for the server certificate (member cert) and cd into it
|
||||
|
||||
cd member-cert
|
||||
|
||||
Then :ref:`install and configure Easy-RSA in that directory <How to Install & Configure Easy-RSA>`.
|
||||
Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.
|
||||
|
||||
|
||||
Step 2: Create the Server Private Key and CSR
|
||||
|
@ -14,10 +14,10 @@ Step 1: Prerequisites for Deploying Tectonic Cluster
|
||||
----------------------------------------------------
|
||||
|
||||
Get an Azure account. Refer to
|
||||
:ref:`this step in our docs <Step 1: Get a Pay-As-You-Go Azure Subscription>`.
|
||||
:ref:`this step in our docs <get-a-pay-as-you-go-azure-subscription>`.
|
||||
|
||||
Create an SSH Key pair for the new Tectonic cluster. Refer to
|
||||
:ref:`this step in our docs <Step 2: Create an SSH Key Pair>`.
|
||||
:ref:`this step in our docs <create-an-ssh-key-pair>`.
|
||||
|
||||
|
||||
Step 2: Get a Tectonic Subscription
|
||||
@ -119,8 +119,9 @@ Step 4: Configure kubectl
|
||||
|
||||
$ export KUBECONFIG=/path/to/config/kubectl-config
|
||||
|
||||
Next, you can :doc:`run a BigchainDB node on your new
|
||||
Kubernetes cluster <node-on-kubernetes>`.
|
||||
Next, you can follow one of the following deployment templates:
|
||||
|
||||
* :doc:`node-on-kubernetes`.
|
||||
|
||||
|
||||
Tectonic References
|
||||
@ -128,5 +129,4 @@ Tectonic References
|
||||
|
||||
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/install.html
|
||||
#. https://coreos.com/tectonic/docs/latest/troubleshooting/installer-terraform.html
|
||||
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html
|
||||
|
||||
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html
|
@ -6,6 +6,8 @@ cluster.
|
||||
This page describes one way to deploy a Kubernetes cluster on Azure.
|
||||
|
||||
|
||||
.. _get-a-pay-as-you-go-azure-subscription:
|
||||
|
||||
Step 1: Get a Pay-As-You-Go Azure Subscription
|
||||
----------------------------------------------
|
||||
|
||||
@ -18,6 +20,8 @@ You may find that you have to sign up for a Free Trial subscription first.
|
||||
That's okay: you can have many subscriptions.
|
||||
|
||||
|
||||
.. _create-an-ssh-key-pair:
|
||||
|
||||
Step 2: Create an SSH Key Pair
|
||||
------------------------------
|
||||
|
||||
@ -28,7 +32,8 @@ but it's probably a good idea to make a new SSH key pair
|
||||
for your Kubernetes VMs and nothing else.)
|
||||
|
||||
See the
|
||||
:ref:`page about how to generate a key pair for SSH <Generate a Key Pair for SSH>`.
|
||||
:doc:`page about how to generate a key pair for SSH
|
||||
<../appendices/generate-key-pair-for-ssh>`.
|
||||
|
||||
|
||||
Step 3: Deploy an Azure Container Service (ACS)
|
||||
@ -99,7 +104,7 @@ Finally, you can deploy an ACS using something like:
|
||||
--master-count 3 \
|
||||
--agent-count 2 \
|
||||
--admin-username ubuntu \
|
||||
--agent-vm-size Standard_D2_v2 \
|
||||
--agent-vm-size Standard_L4s \
|
||||
--dns-prefix <make up a name> \
|
||||
--ssh-key-value ~/.ssh/<name>.pub \
|
||||
--orchestrator-type kubernetes \
|
||||
@ -135,6 +140,8 @@ and click on the one you created
|
||||
to see all the resources in it.
|
||||
|
||||
|
||||
.. _ssh-to-your-new-kubernetes-cluster-nodes:
|
||||
|
||||
Optional: SSH to Your New Kubernetes Cluster Nodes
|
||||
--------------------------------------------------
|
||||
|
||||
@ -217,5 +224,5 @@ CAUTION: You might end up deleting resources other than the ACS cluster.
|
||||
--name <name of resource group containing the cluster>
|
||||
|
||||
|
||||
Next, you can :doc:`run a BigchainDB node on your new
|
||||
Kubernetes cluster <node-on-kubernetes>`.
|
||||
Next, you can :doc:`run a BigchainDB node/cluster (BFT) <node-on-kubernetes>`
|
||||
on your new Kubernetes cluster.
|
@ -1,3 +1,5 @@
|
||||
.. _cluster-troubleshooting:
|
||||
|
||||
Cluster Troubleshooting
|
||||
=======================
|
||||
|
||||
|
@ -32,7 +32,7 @@ as the host (master and agent) operating system.
|
||||
You can upgrade Ubuntu and Docker on Azure
|
||||
by SSHing into each of the hosts,
|
||||
as documented on
|
||||
:ref:`another page <Optional: SSH to Your New Kubernetes Cluster Nodes>`.
|
||||
:ref:`another page <ssh-to-your-new-kubernetes-cluster-nodes>`.
|
||||
|
||||
In general, you can SSH to each host in your Kubernetes Cluster
|
||||
to update the OS and Docker.
|
||||
|
@ -6,27 +6,14 @@ to set up a production BigchainDB cluster.
|
||||
We are constantly improving them.
|
||||
You can modify them to suit your needs.
|
||||
|
||||
|
||||
Things the Managing Organization Must Do First
|
||||
----------------------------------------------
|
||||
.. note::
|
||||
We use standalone MongoDB (without a Replica Set); BFT replication is handled by Tendermint.
|
||||
|
||||
|
||||
1. Set Up a Self-Signed Certificate Authority
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
.. _register-a-domain-and-get-an-ssl-certificate-for-it:
|
||||
|
||||
We use SSL/TLS and self-signed certificates
|
||||
for MongoDB authentication (and message encryption).
|
||||
The certificates are signed by the organization managing the cluster.
|
||||
If your organization already has a process
|
||||
for signing certificates
|
||||
(i.e. an internal self-signed certificate authority [CA]),
|
||||
then you can skip this step.
|
||||
Otherwise, your organization must
|
||||
:ref:`set up its own self-signed certificate authority <How to Set Up a Self-Signed Certificate Authority>`.
|
||||
|
||||
|
||||
2. Register a Domain and Get an SSL Certificate for It
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
1. Register a Domain and Get an SSL Certificate for It
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS,
|
||||
so the organization running the cluster
|
||||
@ -35,81 +22,148 @@ register the domain name,
|
||||
and buy an SSL/TLS certificate for the FQDN.
|
||||
|
||||
|
||||
.. _generate-the-blockchain-id-and-genesis-time:
|
||||
|
||||
2. Generate the Blockchain ID and Genesis Time
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Tendermint nodes require two parameters that need to be common and shared between all the
|
||||
participants in the network.
|
||||
|
||||
* ``chain_id`` : ID of the blockchain. This must be unique for every blockchain.
|
||||
|
||||
* Example: ``test-chain-9gHylg``
|
||||
|
||||
* ``genesis_time`` : Official time of blockchain start.
|
||||
|
||||
* Example: ``0001-01-01T00:00:00Z``
|
||||
|
||||
The preceding parameters can be generated using the ``tendermint init`` command.
|
||||
To `initialize <https://tendermint.readthedocs.io/en/master/using-tendermint.html#initialize>`_,
|
||||
you will need to `install Tendermint <https://tendermint.readthedocs.io/en/master/install.html>`_
|
||||
and verify that a ``genesis.json`` file is created under the `Root Directory
|
||||
<https://tendermint.readthedocs.io/en/master/using-tendermint.html#directory-root>`_. You can use
|
||||
the ``genesis_time`` and ``chain_id`` from this example ``genesis.json`` file:
|
||||
|
||||
.. code:: json
|
||||
|
||||
{
|
||||
"genesis_time": "0001-01-01T00:00:00Z",
|
||||
"chain_id": "test-chain-9gHylg",
|
||||
"validators": [
|
||||
{
|
||||
"pub_key": {
|
||||
"type": "ed25519",
|
||||
"data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468"
|
||||
},
|
||||
"power": 10,
|
||||
"name": ""
|
||||
}
|
||||
],
|
||||
"app_hash": ""
|
||||
}
|
||||
|
||||
.. _things-each-node-operator-must-do:
|
||||
|
||||
Things Each Node Operator Must Do
|
||||
---------------------------------
|
||||
|
||||
☐ Every MongoDB instance in the cluster must have a unique (one-of-a-kind) name.
|
||||
Ask the organization managing your cluster if they have a standard
|
||||
way of naming instances in the cluster.
|
||||
For example, maybe they assign a unique number to each node,
|
||||
so that if you're operating node 12, your MongoDB instance would be named
|
||||
``mdb-instance-12``.
|
||||
Similarly, other instances must also have unique names in the cluster.
|
||||
☐ Set Up a Self-Signed Certificate Authority
|
||||
|
||||
#. Name of the MongoDB instance (``mdb-instance-*``)
|
||||
#. Name of the BigchainDB instance (``bdb-instance-*``)
|
||||
#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
|
||||
#. Name of the OpenResty instance (``openresty-instance-*``)
|
||||
#. Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)
|
||||
#. Name of the MongoDB backup agent instance (``mdb-bak-instance-*``)
|
||||
We use SSL/TLS and self-signed certificates
for MongoDB authentication (and message encryption).
The certificates are signed by the organization managing the :ref:`bigchaindb-node`.
If your organization already has a process
for signing certificates
(i.e. an internal self-signed certificate authority [CA]),
then you can skip this step.
Otherwise, your organization must
:ref:`set up its own self-signed certificate authority <how-to-set-up-a-self-signed-certificate-authority>`.


☐ Generate four keys and corresponding certificate signing requests (CSRs):
☐ Follow Standard and Unique Naming Convention

#. Server Certificate (a.k.a. Member Certificate) for the MongoDB instance
☐ Name of the MongoDB instance (``mdb-instance-*``)

☐ Name of the BigchainDB instance (``bdb-instance-*``)

☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)

☐ Name of the OpenResty instance (``openresty-instance-*``)

☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)

☐ Name of the Tendermint instance (``tm-instance-*``)

**Example**

.. code:: text

    {
        "MongoDB": [
            "mdb-instance-1",
            "mdb-instance-2",
            "mdb-instance-3",
            "mdb-instance-4"
        ],
        "BigchainDB": [
            "bdb-instance-1",
            "bdb-instance-2",
            "bdb-instance-3",
            "bdb-instance-4"
        ],
        "NGINX": [
            "ngx-instance-1",
            "ngx-instance-2",
            "ngx-instance-3",
            "ngx-instance-4"
        ],
        "OpenResty": [
            "openresty-instance-1",
            "openresty-instance-2",
            "openresty-instance-3",
            "openresty-instance-4"
        ],
        "MongoDB_Monitoring_Agent": [
            "mdb-mon-instance-1",
            "mdb-mon-instance-2",
            "mdb-mon-instance-3",
            "mdb-mon-instance-4"
        ],
        "Tendermint": [
            "tm-instance-1",
            "tm-instance-2",
            "tm-instance-3",
            "tm-instance-4"
        ]
    }


☐ Generate three keys and corresponding certificate signing requests (CSRs):

#. Server Certificate for the MongoDB instance
#. Client Certificate for BigchainDB Server to identify itself to MongoDB
#. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB
#. Client Certificate for MongoDB Backup Agent to identify itself to MongoDB

Ask the managing organization to use its self-signed CA to sign those four CSRs.
They should send you:

* Four certificates (one for each CSR you sent them).
* One ``ca.crt`` file: their CA certificate.
* One ``crl.pem`` file: a certificate revocation list.

For help, see the pages:

* :ref:`How to Generate a Server Certificate for MongoDB`
* :ref:`How to Generate a Client Certificate for MongoDB`
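
Purely as an illustration of what one of those key-and-CSR pairs is (the pages
above describe the supported workflow), here is a sketch using the Python
``cryptography`` package (a recent version, where no ``backend`` argument is
needed); the instance name is a placeholder:

.. code:: python

    # Illustrative sketch only: generate one RSA key and a matching CSR.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            # Placeholder common name; use your instance's unique name
            x509.NameAttribute(NameOID.COMMON_NAME, u'mdb-instance-12'),
        ]))
        .sign(key, hashes.SHA256())
    )
    with open('mdb-instance-12.csr', 'wb') as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))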


☐ Every node in a BigchainDB cluster needs its own
BigchainDB keypair (i.e. a public key and corresponding private key).
You can generate a BigchainDB keypair for your node, for example,
using the `BigchainDB Python Driver <http://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_.

.. code:: python

    from bigchaindb_driver.crypto import generate_keypair

    # Prints a CryptoKeypair namedtuple with private_key and public_key fields
    print(generate_keypair())


☐ Share your BigchainDB *public* key with all the other nodes
in the BigchainDB cluster.
Don't share your private key.


☐ Get the BigchainDB public keys of all the other nodes in the cluster.
That list of public keys is known as the BigchainDB "keyring."
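
As a sketch only (the values below are made-up placeholders, and the exact
configuration layout can vary between BigchainDB versions), the keyring is
just a JSON array of the *other* nodes' public keys:

.. code:: python

    import json

    # Placeholder public keys collected from the other nodes in the
    # cluster; your own public key is not part of your keyring.
    keyring = [
        'public-key-of-node-2',  # placeholder
        'public-key-of-node-3',  # placeholder
    ]
    print(json.dumps({'keyring': keyring}, indent=4))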

Use the self-signed CA to sign those three CSRs. For help, see the pages:

* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>`
* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>`

☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``).
Make sure you've registered the associated domain name (e.g. ``mycorp.com``),
and have an SSL certificate for the FQDN.
(You can get an SSL certificate from any SSL certificate provider.)
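
Once the certificate is installed, one quick sanity check (a sketch;
``mynode.mycorp.com`` is the placeholder FQDN from above, and the HTTPS
endpoint is assumed to listen on port 443) is to ask Python's ``ssl`` module
for the certificate's expiry date:

.. code:: python

    import socket
    import ssl

    hostname = 'mynode.mycorp.com'  # placeholder FQDN
    context = ssl.create_default_context()
    # The TLS handshake fails if the certificate doesn't match the FQDN
    # or isn't signed by a CA the client trusts.
    with context.wrap_socket(socket.socket(), server_hostname=hostname) as sock:
        sock.connect((hostname, 443))
        print(sock.getpeercert()['notAfter'])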


☐ Ask the managing organization for the user name to use for authenticating to
☐ Ask the BigchainDB node operator/owner for the username to use for authenticating to
MongoDB.


☐ If the cluster uses 3scale for API authentication, monitoring and billing,
you must ask the managing organization for all relevant 3scale credentials -
you must ask the BigchainDB node operator/owner for all relevant 3scale credentials -
secret token, service ID, version header and API service token.


☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,
☐ If the cluster uses MongoDB Cloud Manager for monitoring,
you must ask the managing organization for the ``Project ID`` and the
``Agent API Key``.
(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can
@ -119,11 +173,7 @@ allow easier periodic rotation of the ``Agent API Key`` with a constant
``Project ID``)


☐ :doc:`Deploy a Kubernetes cluster on Azure <template-kubernetes-azure>`.
☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.


☐ You can now proceed to set up your BigchainDB node based on whether it is the
:ref:`first node in a new cluster
<Kubernetes Template: Deploy a Single BigchainDB Node>` or a
:ref:`node that will be added to an existing cluster
<Kubernetes Template: Add a BigchainDB Node to an Existing BigchainDB Cluster>`.
☐ You can now proceed to set up your :ref:`BigchainDB node
<kubernetes-template-deploy-a-single-bigchaindb-node>`.