Merge pull request #1295 from bigchaindb/copyedit-docs-1st-node-on-k8s

Copyedited two docs pages re/ BigchainDB on Kubernetes
This commit is contained in:
Troy McConaghy 2017-03-17 12:06:51 +01:00 committed by GitHub
commit d213eacd57
2 changed files with 73 additions and 105 deletions

View File

@ -1,25 +1,26 @@
Kubernetes Template: Add a BigchainDB Node to an Existing BigchainDB Cluster
============================================================================

This page describes how to deploy a BigchainDB node using Kubernetes,
and how to add that node to an existing BigchainDB cluster.
It assumes you already have a running Kubernetes cluster
where you can deploy the new BigchainDB node.

If you want to deploy the first BigchainDB node in a BigchainDB cluster,
or a stand-alone BigchainDB node,
then see :doc:`the page about that <node-on-kubernetes>`.
Terminology Used
----------------

``existing cluster`` will refer to one of the existing Kubernetes clusters
hosting one of the existing BigchainDB nodes.

``ctx-1`` will refer to the kubectl context of the existing cluster.

``new cluster`` will refer to the new Kubernetes cluster that will run a new
BigchainDB node (including a BigchainDB instance and a MongoDB instance).

``ctx-2`` will refer to the kubectl context of the new cluster.
@ -38,26 +39,19 @@ existing cluster.
Step 1: Prerequisites
---------------------

* A public/private key pair for the new BigchainDB instance
  (a way to generate one is sketched after this list).

* The public key should be shared offline with the other existing BigchainDB
  nodes in the existing BigchainDB cluster.

* You will need the public keys of all the existing BigchainDB nodes.

* A new Kubernetes cluster set up with kubectl configured to access it.

* Some familiarity with deploying a BigchainDB node on Kubernetes.
  See our :doc:`other docs about that <node-on-kubernetes>`.
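One way to generate a key pair is with BigchainDB's own crypto helpers
(a minimal sketch; the module path is assumed from the BigchainDB server
codebase and may differ in your installed version):

.. code:: bash

    # Print a new (private_key, public_key) pair.
    # Assumes the bigchaindb Python package is installed locally.
    $ python -c "from bigchaindb.common.crypto import generate_key_pair; print(generate_key_pair())"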
Note: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts that
are available for the local kubectl.
To target a specific cluster, add a ``--context`` flag to the kubectl CLI. For
@ -71,9 +65,10 @@ example:
    $ kubectl --context ctx-2 proxy --port 8002
Step 2: Prepare the New Kubernetes Cluster
------------------------------------------

Follow the steps in the sections to set up Storage Classes and Persistent Volume
Claims, and to run MongoDB in the new cluster:

1. :ref:`Add Storage Classes <Step 3: Create Storage Classes>`
@ -84,13 +79,13 @@ Claims, and to run MongoDB in the new cluster:
Step 3: Add the New MongoDB Instance to the Existing Replica Set
----------------------------------------------------------------

Note that by ``replica set``, we are referring to the MongoDB replica set,
not a Kubernetes ``ReplicaSet``.

If you are not the administrator of an existing BigchainDB node, you
will have to coordinate offline with an existing administrator so that they can
add the new MongoDB instance to the replica set.

Add the new instance of MongoDB from an existing instance by accessing the
``mongo`` shell.
@ -100,7 +95,7 @@ Add the new instance of MongoDB from an existing instance by accessing the
    $ kubectl --context ctx-1 exec -it mdb-0 -c mongodb -- /bin/bash
    root@mdb-0# mongo --port 27017

One can only add members to a replica set from the ``PRIMARY`` instance.
The ``mongo`` shell prompt should state that this is the primary member in the
replica set.
If not, then you can use the ``rs.status()`` command to find out who the
@ -113,7 +108,7 @@ Run the ``rs.add()`` command with the FQDN and port number of the other instance
    PRIMARY> rs.add("<fqdn>:<port>")
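For example (a sketch; the FQDN shown is hypothetical and follows the
naming convention used elsewhere in this guide):

.. code:: bash

    PRIMARY> rs.add("mdb-instance-1.westeurope.cloudapp.azure.com:27017")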
Step 4: Verify the Replica Set Membership
-----------------------------------------

You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
@ -123,7 +118,7 @@ The new MongoDB instance should be listed in the membership information
displayed.
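For example, from the ``mongo`` shell on the primary (a minimal sketch):

.. code:: bash

    PRIMARY> rs.conf()    // the new member should appear in the members array
    PRIMARY> rs.status()  // its state should eventually become SECONDARY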
Step 5: Start the New BigchainDB Instance
-----------------------------------------

Get the file ``bigchaindb-dep.yaml`` from GitHub using:
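By analogy with the other files fetched in this guide, the command is
likely of the following form (the ``k8s/bigchaindb/`` path is an
assumption to verify against the repository):

.. code:: bash

    $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml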
@ -149,20 +144,20 @@ Create the required Deployment using:
You can check its status using the command ``kubectl get deploy -w``
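Since the new BigchainDB instance runs in the new cluster, remember to
target the right context (a sketch, using the ``ctx-2`` context defined
above):

.. code:: bash

    $ kubectl --context ctx-2 get deploy -w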
Step 6: Restart the Existing BigchainDB Instance(s)
---------------------------------------------------

Add the public key of the new BigchainDB instance to the keyring of all the
existing BigchainDB instances and update the BigchainDB instances using:

.. code:: bash

    $ kubectl --context ctx-1 replace -f bigchaindb-dep.yaml
This will create a "rolling deployment" in Kubernetes where a new instance of
BigchainDB will be created, and if the health check on the new instance is
successful, the earlier one will be terminated. This ensures that there is
zero downtime during updates.

You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.
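On Kubernetes, an alternative to SSH is ``kubectl exec`` (a sketch; the
pod name is a placeholder you must replace with yours):

.. code:: bash

    $ kubectl --context ctx-1 exec -it <bdb-pod-name> -- bigchaindb show-config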

View File

@ -1,15 +1,13 @@
Kubernetes Template: Deploy a Single BigchainDB Node
====================================================

This page describes how to deploy the first BigchainDB node
in a BigchainDB cluster, or a stand-alone BigchainDB node,
using `Kubernetes <https://kubernetes.io/>`_.
It assumes you already have a running Kubernetes cluster.

If you want to add a new BigchainDB node to an existing BigchainDB cluster,
refer to :doc:`the page about that <add-node-on-kubernetes>`.
Step 1: Install kubectl
@ -49,18 +47,17 @@ Step 3: Create Storage Classes
MongoDB needs somewhere to store its data persistently,
outside the container where MongoDB is running.

Our MongoDB Docker container
(based on the official MongoDB Docker container)
exports two volume mounts with correct
permissions from inside the container:

* The directory where the mongod instance stores its data: ``/data/db``.
  There's more explanation in the MongoDB docs about `storage.dbpath <https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPath>`_.

* The directory where the mongodb instance stores the metadata for a sharded
  cluster: ``/data/configdb/``.
  There's more explanation in the MongoDB docs about `sharding.configDB <https://docs.mongodb.com/manual/reference/configuration-options/#sharding.configDB>`_.
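You can see those volume declarations on the image itself (a quick check,
using the official image as a stand-in for ours):

.. code:: bash

    # Should print {"/data/configdb":{},"/data/db":{}}
    $ docker image inspect --format '{{ json .Config.Volumes }}' mongo:3.4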
Explaining how Kubernetes handles persistent volumes,
and the associated terminology,
@ -69,9 +66,6 @@ see `the Kubernetes docs about persistent volumes
<https://kubernetes.io/docs/user-guide/persistent-volumes>`_.

The first thing to do is create the Kubernetes storage classes.

**Azure.** First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
@ -85,7 +79,6 @@ Standard storage is lower-cost and lower-performance.
It uses hard disk drives (HDD).
LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
At the time of writing,
@ -102,11 +95,10 @@ Get the file ``mongo-sc.yaml`` from GitHub using:
    $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-sc.yaml

You may have to update the ``parameters.location`` field in both files to
specify the location you are using in Azure.
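For example (a sketch; ``westeurope`` is a placeholder for your Azure
location):

.. code:: bash

    $ sed -i 's/location: .*/location: westeurope/' mongo-sc.yaml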
Create the required storage classes using:

.. code:: bash
@ -115,7 +107,7 @@ Create the required storage classes using
You can check if it worked using ``kubectl get storageclasses``.
**Azure.** Note that there is no line of the form
``storageAccount: <azure storage account name>``
under ``parameters:``. When we included one
and then created a PersistentVolumeClaim based on it,
@ -128,9 +120,8 @@ with the specified skuName and location.
Step 4: Create Persistent Volume Claims
---------------------------------------

Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
``mongo-configdb-claim``.

Get the file ``mongo-pvc.yaml`` from GitHub using:

.. code:: bash
@ -166,15 +157,14 @@ Step 5: Create the Config Map - Optional
This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_.

MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica set
to resolve the hostname provided to the ``rs.initiate()`` command. It needs to
ensure that the replica set is being initialized in the same instance where
the MongoDB instance is running.

To achieve this, you will create a ConfigMap with the FQDN of the MongoDB instance
and populate the ``/etc/hosts`` file with this value so that a replica set can
be created seamlessly.
@ -188,35 +178,29 @@ You may want to update the ``data.fqdn`` field in the file before creating the
ConfigMap. The ``data.fqdn`` field will be the DNS name of your MongoDB instance.
This will be used by other MongoDB instances when forming a MongoDB
replica set. It should resolve to the MongoDB instance in your cluster when
you are done with the setup. This will help when you are adding more MongoDB
instances to the replica set in the future.
**Azure.**
In Kubernetes on ACS, the name you populate in the ``data.fqdn`` field
will be used to configure a DNS name for the public IP assigned to the
Kubernetes Service that is the frontend for the MongoDB instance.
We suggest using a name that will already be available in Azure.
We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in this document,
which gives us ``mdb-instance-0.<azure location>.cloudapp.azure.com``,
``mdb-instance-1.<azure location>.cloudapp.azure.com``, etc. as the FQDNs.
The ``<azure location>`` is the Azure datacenter location you are using,
which can also be obtained using the ``az account list-locations`` command.
You can also try to assign a name to a Public IP in Azure before starting
the process, or use ``nslookup`` with the name you have in mind to check
if it's available for use.
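For example (a sketch; the FQDN shown is hypothetical):

.. code:: bash

    $ nslookup mdb-instance-0.westeurope.cloudapp.azure.com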
In the rare chance that the name in the ``data.fqdn`` field is not available,
you must create a ConfigMap with a unique name and restart the
MongoDB instance.
**Kubernetes on bare-metal or other cloud providers.**
You need to provide the name resolution function
by other means (using DNS providers like GoDaddy, CloudFlare or your own
private DNS server). The DNS setup for other environments is currently
beyond the scope of this document.
@ -231,10 +215,9 @@ Create the required ConfigMap using:
You can check its status using: ``kubectl get cm``

Now you are ready to run MongoDB and BigchainDB on your Kubernetes cluster.
Step 6: Run MongoDB as a StatefulSet
------------------------------------
@ -250,12 +233,10 @@ Note how the MongoDB container uses the ``mongo-db-claim`` and the
``/data/configdb`` directories (mount path). Note also that we use the pod's
``securityContext.capabilities.add`` specification to add the ``FOWNER``
capability to the container.
That is because the MongoDB container has the user ``mongodb``, with uid ``999``
and group ``mongodb``, with gid ``999``.
When this container runs on a host with a mounted disk, the writes fail when
there is no user with uid ``999``.
To avoid this, we use the Docker feature of ``--cap-add=FOWNER``.
This bypasses the uid and gid permission checks during writes and allows data
to be persisted to disk.
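Once the pod is running, you can confirm those uid/gid values inside the
container (a quick check, assuming the pod is named ``mdb-0`` as elsewhere
in this guide):

.. code:: bash

    # Should report uid=999(mongodb) gid=999(mongodb)
    $ kubectl exec -it mdb-0 -c mongodb -- id mongodb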
@ -277,9 +258,9 @@ Create the required StatefulSet using:
You can check its status using the commands ``kubectl get statefulsets -w``
and ``kubectl get svc -w``

You may have to wait for up to 10 minutes for the disk to be created
and attached on the first run. The pod can fail several times with the message
saying that the timeout for mounting the disk was exceeded.
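If the pod keeps failing, you can inspect the volume-attach events
(a sketch, again assuming the pod is named ``mdb-0``):

.. code:: bash

    $ kubectl describe pod mdb-0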
Step 7: Initialize a MongoDB Replica Set - Optional
@ -287,8 +268,7 @@ Step 7: Initialize a MongoDB Replica Set - Optional
This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_.

Log in to the running MongoDB instance and access the mongo shell using:
@ -298,7 +278,7 @@ Login to the running MongoDB instance and access the mongo shell using:
    $ kubectl exec -it mdb-0 -c mongodb -- /bin/bash
    root@mdb-0:/# mongo --port 27017

You will initiate the replica set by using the ``rs.initiate()`` command from the
mongo shell. Its syntax is:

.. code:: bash
@ -335,19 +315,13 @@ Step 8: Create a DNS record - Optional
This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_.

**Azure.** Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the MongoDB instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the ``Public IP`` resource that is attached to your service (it should
have the Kubernetes cluster name along with a random string),
select ``Configuration``, add the DNS name that was added in the
@ -356,7 +330,6 @@ ConfigMap earlier, click ``Save``, and wait for the changes to be applied.
To verify the DNS setting is operational, you can run ``nslookup <dns
name added in ConfigMap>`` from your local Linux shell.

This will ensure that when you scale the replica set later, other MongoDB
members in the replica set can reach this instance.
@ -420,7 +393,7 @@ on the cluster and query the internal DNS and IP endpoints.
    $ kubectl run -it toolbox --image=<docker image to run> --restart=Never --rm

It will drop you to the shell prompt.
Now you can query for the ``mdb`` and ``bdb`` service details.
.. code:: bash