Step 1: Prerequisites
---------------------

* A public/private key pair for the new BigchainDB instance.

* :ref:`List of all the things to be done by each node operator <Things Each Node Operator Must Do>`.

* The public key should be shared offline with the other existing BigchainDB
  nodes in the existing BigchainDB cluster.

* You will need the public keys of all the existing BigchainDB nodes.

* A client certificate for the new BigchainDB Server to identify itself to the cluster.

* A new Kubernetes cluster set up with kubectl configured to access it.

* Some familiarity with deploying a BigchainDB node on Kubernetes.
  See our :doc:`other docs about that <node-on-kubernetes>`.

* You will need a client certificate for each MongoDB Monitoring and Backup Agent.

Note: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts that
are available to the local kubectl.
To target a specific cluster, add a ``--context`` flag to the kubectl CLI. For
example:

.. code:: bash

   $ kubectl --context ctx-2 proxy --port 8002


Step 2: Configure the BigchainDB Node
-------------------------------------

See the section on how to :ref:`configure your BigchainDB node <How to Configure a BigchainDB Node>`.


Step 3: Start the NGINX Service
-------------------------------

Please see the following section:

* :ref:`Start NGINX service <Step 4: Start the NGINX Service>`.


Step 4: Assign DNS Name to the NGINX Public IP
----------------------------------------------

Please see the following section:

* :ref:`Assign DNS to NGINX Public IP <Step 5: Assign DNS Name to the NGINX Public IP>`.


Step 5: Start the MongoDB Kubernetes Service
--------------------------------------------

Please see the following section:

* :ref:`Start the MongoDB Kubernetes Service <Step 6: Start the MongoDB Kubernetes Service>`.


Step 6: Start the BigchainDB Kubernetes Service
-----------------------------------------------

Please see the following section:

* :ref:`Start the BigchainDB Kubernetes Service <Step 7: Start the BigchainDB Kubernetes Service>`.


Step 7: Start the OpenResty Kubernetes Service
----------------------------------------------

Please see the following section:

* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.


Step 8: Start the NGINX Kubernetes Deployment
---------------------------------------------

Please see the following section:

* :ref:`Run NGINX deployment <Step 9: Start the NGINX Kubernetes Deployment>`.


Step 9: Create Kubernetes Storage Classes for MongoDB
-----------------------------------------------------

Please see the following section:

* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`.


Step 10: Create Kubernetes Persistent Volume Claims
---------------------------------------------------

Please see the following section:

* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`.


Step 11: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

Please see the following section:

* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`.


Step 12: Verify network connectivity between the MongoDB instances
------------------------------------------------------------------

Make sure your MongoDB instances can access each other over the network. *If* you are deploying
the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container
Service, you will have to set up networking between the clusters.

Suppose you have an existing MongoDB instance ``mdb-instance-0`` and you
want to add a new MongoDB instance ``mdb-instance-1``, located in another Azure data center, to the existing MongoDB
replica set. Unless you have already explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and
vice versa, we will have to add a Kubernetes Service in each cluster to accomplish this goal, in order to set up the
MongoDB replica set.

It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.

* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.

* Set ``metadata.name`` to the host name of the MongoDB instance you are trying to connect to.
  For instance, if you are configuring this Service on the cluster with ``mdb-instance-0``, then the ``metadata.name`` will
  be ``mdb-instance-1``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap of the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying to connect to.
  For more information about the FQDN, please refer to: :ref:`Assign DNS Name to the NGINX Public
  IP <Step 5: Assign DNS Name to the NGINX Public IP>`.

This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
we need to communicate with.

If you are not the system administrator of the cluster, you have to get in
touch with the system administrators of the other ``n-1`` clusters and
share with them your instance name (``mdb-instance-name`` in the ConfigMap)
and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).
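
The bullets above can be sketched as a single ``ExternalName`` Service. Note that the
instance names, port and FQDN below are hypothetical placeholders, not values from this
guide; substitute the real values from your ConfigMap and from
``mongodb/mongo-ext-conn-svc.yaml``:

.. code:: yaml

   # Sketch: an ExternalName Service on the cluster hosting mdb-instance-0,
   # pointing at the remote cluster that hosts mdb-instance-1.
   apiVersion: v1
   kind: Service
   metadata:
     name: mdb-instance-1          # host name of the *remote* MongoDB instance
     namespace: default
   spec:
     type: ExternalName
     # hypothetical FQDN mapped to the other cluster's NGINX Public IP
     externalName: bdb-cluster-1.northeurope.cloudapp.azure.com
     ports:
     - port: 27017                 # mongodb-backend-port from the other cluster's ConfigMap
       targetPort: 27017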

Step 13: Add the New MongoDB Instance to the Existing Replica Set
-----------------------------------------------------------------

Note that by ``replica set``, we are referring to the MongoDB replica set,
not a Kubernetes ``ReplicaSet``.

Add the new MongoDB instance to the replica set from the ``mongo`` shell of the
``PRIMARY`` member. If you do not have access to it, you will have to
contact the admin of the PRIMARY MongoDB node:

.. code:: bash

   $ kubectl --context ctx-1 exec -it <existing mongodb-instance-name> bash
   $ mongo --host <existing mongodb-instance-name> --port 27017 --verbose --ssl \
       --sslCAFile /etc/mongod/ssl/ca.pem \
       --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem

Run the ``rs.add()`` command with the FQDN and port number of the other instance:

.. code:: bash

   PRIMARY> rs.add("<new mdb-instance-name>:<port>")

Step 14: Verify the Replica Set Membership
------------------------------------------

You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
mongo shell to verify the replica set membership.

The new MongoDB instance should be listed in the membership information
displayed.
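
For example, a sketch run in the same mongo shell session as above (output
formatting may differ across MongoDB versions) that prints each member's host
and state:

.. code:: bash

   PRIMARY> rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr) })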

Step 15: Configure Users and Access Control for MongoDB
-------------------------------------------------------

* Create the users in MongoDB with the appropriate roles assigned to them. This
  will enable the new BigchainDB instance, the new MongoDB Monitoring Agent
  instance and the new MongoDB Backup Agent instance to function correctly.

* Please refer to
  :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure
  Users and Access Control for MongoDB>` to create and configure the new
  BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the
  cluster.

.. note::
   You will not have to create the MongoDB replica set or the admin user, as they already exist.

   If you do not have access to the ``PRIMARY`` member of the replica set, you
   will need to get in touch with the administrator who can create the users in the
   MongoDB cluster.
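
For orientation only, x.509-authenticated users are created against the
``$external`` database on the ``PRIMARY``. The placeholders below are to be
replaced with the subject as seen in the client certificate and the roles given
in the section linked above; this is a sketch, not the exact command:

.. code:: bash

   PRIMARY> db.getSiblingDB("$external").runCommand({
   ...        createUser: "<user name as seen in the client certificate>",
   ...        roles: [ { role: "<role>", db: "<database>" } ]
   ...      })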

Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------

Please see the following section:

* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent, and having
   multiple agents provides high availability and failover in case one goes
   down. For more information about Monitoring and Backup Agents, please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.

Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------

Please see the following section:

* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent, and having
   multiple agents provides high availability and failover in case one goes
   down. For more information about Monitoring and Backup Agents, please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.

Step 18: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------

Get the file ``bigchaindb-dep.yaml`` from GitHub using:

.. code:: bash

   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``bdb-instance-name`` in the ConfigMap, followed by
  ``-dep``.

* Uncomment the env var ``BIGCHAINDB_KEYRING``. It will pick up the
  ``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.

* Authenticate the new BigchainDB instance with MongoDB using its client x.509 certificate. We need to specify the
  user name *as seen in the certificate* issued to the BigchainDB instance in order to authenticate correctly.
  Please refer to: :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure Users and Access Control for MongoDB>`.

Create the required Deployment using:

.. code:: bash

   $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml

You can check its status using the command ``kubectl --context ctx-2 get deploy -w``.

Step 19: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------

* Add the public key of the new BigchainDB instance to the ConfigMap
  ``bdb-keyring`` variable of all the existing BigchainDB instances.
  Update all the existing ConfigMaps using:

.. code:: bash

   $ kubectl --context ctx-1 apply -f configuration/config-map.yaml

* Uncomment the ``BIGCHAINDB_KEYRING`` variable in
  ``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the
  ConfigMap.
  Update the running BigchainDB instance using:

.. code:: bash

   $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
   $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml

See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
ConfigMap configuration.

This will create a "rolling deployment" in Kubernetes, where a new instance of
BigchainDB will be created, and if the health check on the new instance is
successful, the earlier one will be terminated. This ensures that there is
zero downtime during updates.

You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.
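
Alternatively, a sketch of the same check via kubectl instead of SSH, where
``<bdb-instance-name>`` stands for the pod name of an existing BigchainDB
instance (a hypothetical placeholder, not a value from this guide):

.. code:: bash

   $ kubectl --context ctx-1 exec -it <bdb-instance-name> -- bigchaindb show-config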

Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

Please see the following section:

* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`.


Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------

* MongoDB Cloud Manager auto-detects the members of the replica set and
  configures the agents to act as a master/slave accordingly.

* You can verify that the new MongoDB instance is detected by the
  Monitoring and Backup Agent using the Cloud Manager UI.

Step 22: Test Your New BigchainDB Node
--------------------------------------

* Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
  Node Setup>` to verify that your new BigchainDB node is working as expected.