.. _kubernetes-template-add-a-bigchaindb-node-to-an-existing-cluster:

Kubernetes Template: Add a BigchainDB Node to an Existing BigchainDB Cluster
============================================================================

This page describes how to deploy a BigchainDB node using Kubernetes,
and how to add that node to an existing BigchainDB cluster.
It assumes you already have a running Kubernetes cluster
where you can deploy the new BigchainDB node.

If you want to deploy the first BigchainDB node in a BigchainDB cluster,
or a stand-alone BigchainDB node,
then see :doc:`the page about that <node-on-kubernetes>`.


Terminology Used
----------------

``existing cluster`` will refer to one of the existing Kubernetes clusters
hosting one of the existing BigchainDB nodes.

``ctx-1`` will refer to the kubectl context of the existing cluster.

``new cluster`` will refer to the new Kubernetes cluster that will run a new
BigchainDB node (including a BigchainDB instance and a MongoDB instance).

``ctx-2`` will refer to the kubectl context of the new cluster.

``new MongoDB instance`` will refer to the MongoDB instance in the new cluster.

``existing MongoDB instance`` will refer to the MongoDB instance in the
existing cluster.

``new BigchainDB instance`` will refer to the BigchainDB instance in the new
cluster.

``existing BigchainDB instance`` will refer to the BigchainDB instance in the
existing cluster.

Below, we refer to multiple files by their directory and filename,
such as ``mongodb/mongo-ext-conn-svc.yaml``. Those files are in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.

Step 1: Prerequisites
---------------------

* :ref:`List of all the things to be done by each node operator <things-each-node-operator-must-do>`.

* Your node's public key should be shared offline with the other existing BigchainDB
  nodes in the existing BigchainDB cluster.

* You will need the public keys of all the existing BigchainDB nodes.

* A new Kubernetes cluster, set up with kubectl configured to access it.

* Some familiarity with deploying a BigchainDB node on Kubernetes.
  See our :doc:`other docs about that <node-on-kubernetes>`.

Note: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts that
are available to the local kubectl.
To target a specific cluster, add a ``--context`` flag to the kubectl CLI. For
example:

.. code:: bash

   $ kubectl --context ctx-1 apply -f example.yaml
   $ kubectl --context ctx-2 apply -f example.yaml
   $ kubectl --context ctx-1 proxy --port 8001
   $ kubectl --context ctx-2 proxy --port 8002

Step 2: Configure the BigchainDB Node
-------------------------------------

See the section on :ref:`how-to-configure-a-bigchaindb-node`.

Step 3: Start the NGINX Service
--------------------------------

Please see the following section:

* :ref:`start-the-nginx-service`.


Step 4: Assign DNS Name to the NGINX Public IP
----------------------------------------------

Please see the following section:

* :ref:`assign-dns-name-to-the-nginx-public-ip`.


Step 5: Start the MongoDB Kubernetes Service
--------------------------------------------

Please see the following section:

* :ref:`start-the-mongodb-kubernetes-service`.


Step 6: Start the BigchainDB Kubernetes Service
-----------------------------------------------

Please see the following section:

* :ref:`start-the-bigchaindb-kubernetes-service`.


Step 7: Start the OpenResty Kubernetes Service
----------------------------------------------

Please see the following section:

* :ref:`start-the-openresty-kubernetes-service`.


Step 8: Start the NGINX Kubernetes Deployment
---------------------------------------------

Please see the following section:

* :ref:`start-the-nginx-kubernetes-deployment`.


Step 9: Create Kubernetes Storage Classes for MongoDB
-----------------------------------------------------

Please see the following section:

* :ref:`create-kubernetes-storage-classes-for-mongodb`.


Step 10: Create Kubernetes Persistent Volume Claims
---------------------------------------------------

Please see the following section:

* :ref:`create-kubernetes-persistent-volume-claims`.


Step 11: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

Please see the following section:

* :ref:`start-a-kubernetes-statefulset-for-mongodb`.

Step 12: Verify Network Connectivity Between the MongoDB Instances
------------------------------------------------------------------

Make sure your MongoDB instances can access each other over the network. *If* you are deploying
the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container
Service, you will have to set up networking between the two clusters using `Kubernetes
Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.

Suppose we have an existing MongoDB instance ``mdb-instance-0`` residing in the Azure data center location ``westeurope``, and we
want to add a new MongoDB instance ``mdb-instance-1``, located in the Azure data center location ``eastus``, to the existing MongoDB
replica set. Unless you have already explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and
vice versa, we will have to add a Kubernetes Service in each cluster to accomplish this.
This is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.

* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.
  A minimal sketch of such a Service is shown after this list.

* Set ``metadata.name`` to the hostname of the MongoDB instance you are trying to connect to.
  For instance, if you are configuring this Service on the cluster with ``mdb-instance-0``, then ``metadata.name`` will
  be ``mdb-instance-1``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap of the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX public IP of the cluster you are trying to connect to.
  For more information about the FQDN, please refer to :ref:`assign-dns-name-to-the-nginx-public-ip`.

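For illustration, here is a minimal sketch of what such a Service might look like on the
cluster hosting ``mdb-instance-0`` (i.e. context ``ctx-1``). The FQDN and port number shown
are placeholder assumptions; take the real values from the other cluster's DNS setup and
ConfigMap:

.. code:: bash

   $ cat <<EOF | kubectl --context ctx-1 apply -f -
   apiVersion: v1
   kind: Service
   metadata:
     name: mdb-instance-1   # hostname of the *other* MongoDB instance
   spec:
     type: ExternalName
     # Placeholder FQDN; use the DNS name assigned to the other cluster's NGINX public IP.
     externalName: bdb-cluster-1.eastus.cloudapp.azure.com
     ports:
     - port: 27017          # placeholder; use the other cluster's mongodb-backend-port
   EOF
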
.. note::
   For an ``n``-node cluster, this Service needs to be created ``n-1`` times on each node,
   once for each of the FQDNs we need to communicate with.

If you are not the system administrator of the cluster, you have to get in
touch with the system administrator/s of the other ``n-1`` clusters and
share with them your instance name (``mdb-instance-name`` in the ConfigMap)
and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).

Step 13: Add the New MongoDB Instance to the Existing Replica Set
-----------------------------------------------------------------

Note that by ``replica set``, we are referring to the MongoDB replica set,
not a Kubernetes ``ReplicaSet``.

If you are not the administrator of an existing BigchainDB node, you
will have to coordinate offline with an existing administrator so that they can
add the new MongoDB instance to the replica set.

Add the new MongoDB instance from an existing instance by accessing the
``mongo`` shell and authenticating as the ``adminUser`` we created for the existing
MongoDB instance, or by contacting the admin of the ``PRIMARY`` MongoDB node:

.. code:: bash

   $ kubectl --context ctx-1 exec -it <existing mongodb-instance-name> bash
   $ mongo --host <existing mongodb-instance-name> --port 27017 --verbose --ssl \
       --sslCAFile /etc/mongod/ssl/ca.pem \
       --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem

   PRIMARY> use admin
   PRIMARY> db.auth("adminUser", "superstrongpassword")

One can only add members to a replica set from the ``PRIMARY`` instance.
The ``mongo`` shell prompt should state that this is the primary member in the
replica set.
If not, then you can use the ``rs.status()`` command to find out who the
primary is, and log in to the ``mongo`` shell on the primary.

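For example, assuming the replica set is healthy, the following command in the
``mongo`` shell prints the host and port of the current primary:

.. code:: bash

   PRIMARY> rs.isMaster().primary
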
Run the ``rs.add()`` command with the FQDN and port number of the new instance:

.. code:: bash

   PRIMARY> rs.add("<new mdb-instance-name>:<port>")

Step 14: Verify the Replica Set Membership
------------------------------------------

You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
``mongo`` shell to verify the replica set membership.

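For example, in the ``mongo`` shell on the ``PRIMARY``:

.. code:: bash

   PRIMARY> rs.conf()
   PRIMARY> rs.status()
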
The new MongoDB instance should be listed in the membership information
displayed.

Step 15: Configure Users and Access Control for MongoDB
-------------------------------------------------------

* Create the users in MongoDB with the appropriate roles assigned to them. This
  will enable the new BigchainDB instance, the new MongoDB Monitoring Agent
  instance and the new MongoDB Backup Agent instance to function correctly.

* Please refer to
  :ref:`configure-users-and-access-control-for-mongodb` to create and
  configure the new BigchainDB, MongoDB Monitoring Agent and MongoDB Backup
  Agent users on the cluster.

.. note::
   You will not have to create the MongoDB replica set or the admin user, as they already exist.

If you do not have access to the ``PRIMARY`` member of the replica set, you
need to get in touch with the administrator who can create the users in the
MongoDB cluster.

Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------

Please see the following section:

* :ref:`start-a-kubernetes-deployment-for-mongodb-monitoring-agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent; having
   multiple agents provides high availability and failover, in case one goes
   down. For more information about Monitoring and Backup Agents, please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.

Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------

Please see the following section:

* :ref:`start-a-kubernetes-deployment-for-mongodb-backup-agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent; having
   multiple agents provides high availability and failover, in case one goes
   down. For more information about Monitoring and Backup Agents, please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.

Step 18: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``bdb-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in
  ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
  value ``bdb-instance-0-dep``.

* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded).
  (In the future, we'd like to pull the BigchainDB private key from
  the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file,
  so BigchainDB Server would have to be modified to look for it
  in a file.)

* As we gain more experience running BigchainDB in testing and production,
  we will tweak the ``resources.limits`` values for CPU and memory, and as
  richer monitoring and probing becomes available in BigchainDB, we will
  tweak the ``livenessProbe`` and ``readinessProbe`` parameters.

* Set the ports to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose two ports:
  ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the
  values specified in the ConfigMap.

* Uncomment the env var ``BIGCHAINDB_KEYRING``; it will pick up the
  ``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.

Create the required Deployment using:

.. code:: bash

   $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml

You can check its status using the command ``kubectl --context ctx-2 get deploy -w``.

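As a quick sanity check, one way to confirm that the new pod picked up the environment
variables is to inspect its environment (the pod name below is a placeholder; use the
one reported by ``kubectl get pods``):

.. code:: bash

   $ kubectl --context ctx-2 exec <bdb-instance-0-dep pod name> -- env | grep BIGCHAINDB
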
Step 19: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------

* Add the public key of the new BigchainDB instance to the
  ``bdb-keyring`` variable in the ConfigMap of all the existing BigchainDB instances.
  Update all the existing ConfigMaps using:

  .. code:: bash

     $ kubectl --context ctx-1 apply -f configuration/config-map.yaml

* Uncomment the ``BIGCHAINDB_KEYRING`` variable in
  ``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the
  ConfigMap.
  Update the running BigchainDB instance using:

  .. code:: bash

     $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
     $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml

See the page titled :ref:`how-to-configure-a-bigchaindb-node`
for more information about ConfigMap configuration.

You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.

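For example, assuming kubectl access to an existing pod (the pod name is a placeholder),
check the ``keyring`` field in the output of:

.. code:: bash

   $ kubectl --context ctx-1 exec -it <existing bigchaindb pod name> bash
   $ bigchaindb show-config
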
Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

Please see the following section:

* :ref:`start-a-kubernetes-deployment-for-openresty`.


Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------

* MongoDB Cloud Manager auto-detects the members of the replica set and
  configures the agents to act as a master/slave accordingly.

* You can verify that the new MongoDB instance is detected by the
  Monitoring and Backup Agent using the Cloud Manager UI.


Step 22: Test Your New BigchainDB Node
--------------------------------------

* Please refer to the testing steps :ref:`here
  <verify-the-bigchaindb-node-setup>` to verify that your new BigchainDB
  node is working as expected.

How to Restore Data Backed Up on MongoDB Cloud Manager
=======================================================

This page describes how to restore data backed up on
`MongoDB Cloud Manager <https://cloud.mongodb.com/>`_ by
the backup agent when using a single-instance MongoDB replica set.

Prerequisites
-------------

- You can restore to either new hardware or existing hardware. Below, we cover
  restoring data to an existing MongoDB Kubernetes StatefulSet using a
  Kubernetes Persistent Volume Claim, as described
  :doc:`here <node-on-kubernetes>`.

- If the backup and destination database storage engines or settings do not
  match, mongod cannot start once the backup is restored.

- If the backup and destination database do not belong to the same MongoDB
  Cloud Manager group, then the database will start but never initialize
  properly.

- The backup restore file includes a metadata file, ``restoreInfo.txt``. This file
  captures the options the database used when the snapshot was taken. The
  database must be run with the listed options after it has been restored. It
  contains:

  1. Group name
  2. Replica Set name
  3. Cluster Id (if applicable)
  4. Snapshot timestamp (as Timestamp at UTC)
  5. Last Oplog applied (as a BSON Timestamp at UTC)
  6. MongoDB version
  7. Storage engine type
  8. mongod startup options used on the database when the snapshot was taken

Step 1: Get the Backup/Archived Data from Cloud Manager
-------------------------------------------------------

- Log in to the Cloud Manager.

- Select the Group that you want to restore data from.

- Click Backup. Hover over the Status column and click the
  ``Restore Or Download`` button.

- Select the appropriate SNAPSHOT, and click Next.

  .. note::
     We currently do not support restoring data using the ``POINT IN TIME`` and
     ``OPLOG TIMESTAMP`` methods.

- Select ``Pull via Secure HTTP``. In the dropdown box, select the number of times the
  link can be used to download data; we select ``Once``.
  Select the link expiration time, i.e. how long the download link stays active;
  we usually select ``1 hour``.

- Check for the email from MongoDB.

  .. note::
     This can take some time, as the Cloud Manager needs to prepare an archive of
     the backed-up data.

- Once you receive the email, click on the link to open the
  ``restore jobs page``. Follow the instructions to download the backup data.

  .. note::
     You will be shown a link to download the backup archive. You can either
     click on the ``Download`` button to download it using the browser, or
     copy the download link and use the ``wget`` tool on Linux systems to
     download the data. The latter is handy because, under rare circumstances,
     a browser download is interrupted and errors out.

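     For example (the download link is the one from the email; ``wget``'s ``-c`` flag
     lets you resume an interrupted download):

     .. code:: bash

        $ wget -c "<download link from the email>" -O bigchain-rs-XXXX.tar.gz
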
Step 2: Copy the Archive to the MongoDB Instance
------------------------------------------------

- Once you have the archive, you can copy it to the MongoDB instance running
  on a Kubernetes cluster using something similar to:

  .. code:: bash

     $ kubectl --context ctx-1 cp bigchain-rs-XXXX.tar.gz mdb-instance-name:/

  where ``bigchain-rs-XXXX.tar.gz`` is the archive downloaded from Cloud
  Manager, and ``mdb-instance-name`` is the name of your MongoDB instance.

Step 3: Prepare the MongoDB Instance for Restore
------------------------------------------------

- Log in to the MongoDB instance using something like:

  .. code:: bash

     $ kubectl --context ctx-1 exec -it mdb-instance-name bash

- Move the archive we copied to the instance to the proper location and
  extract it using:

  .. code:: bash

     $ mv /bigchain-rs-XXXX.tar.gz /data/db
     $ cd /data/db
     $ tar xzvf bigchain-rs-XXXX.tar.gz

- Rename the directories on the disk, so that MongoDB can find the correct
  data after we restart it.

- The current database will be located in the ``/data/db/main`` directory.
  We simply rename the old directory to ``/data/db/main.BAK`` and rename the
  backup directory ``bigchain-rs-XXXX`` to ``main``:

  .. code:: bash

     $ mv main main.BAK
     $ mv bigchain-rs-XXXX main

.. note::
   Ensure that there are no connections to MongoDB from any client, in our
   case, BigchainDB. This can be done in multiple ways: iptables rules,
   shutting down BigchainDB, stopping the sending of any transactions to
   BigchainDB, etc. The simplest way to do it is to stop the MongoDB
   Kubernetes Service. BigchainDB has a built-in retry mechanism, and it will
   keep trying to connect to the MongoDB backend until it succeeds.

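   For example, one way to stop the MongoDB Kubernetes Service (the Service name
   is a placeholder; use the name from your own deployment):

   .. code:: bash

      $ kubectl --context ctx-1 delete svc <mdb-instance-name>
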
Step 4: Restart the MongoDB Instance
------------------------------------

- This can be achieved using something like:

  .. code:: bash

     $ kubectl --context ctx-1 delete -f k8s/mongo/mongo-ss.yaml
     $ kubectl --context ctx-1 apply -f k8s/mongo/mongo-ss.yaml