commit e446c31a41 (parent 0cf46b331f)

More changes to multi-node deployment guide

- Integrating changes suggested by Krish.
- Addressing comments on initial commit.
@@ -47,22 +47,18 @@ cluster is using.
 Step 1: Prerequisites
 ---------------------
 
-* A public/private key pair for the new BigchainDB instance.
+* :ref:`List of all the things to be done by each node operator <Things Each Node Operator Must Do>`.
 
 * The public key should be shared offline with the other existing BigchainDB
   nodes in the existing BigchainDB cluster.
 
 * You will need the public keys of all the existing BigchainDB nodes.
 
-* Client Certificate for the new BigchainDB Server to identify itself to the cluster.
-
 * A new Kubernetes cluster setup with kubectl configured to access it.
 
 * Some familiarity with deploying a BigchainDB node on Kubernetes.
   See our :doc:`other docs about that <node-on-kubernetes>`.
 
-* You will need a client certificate for each MongoDB monitoring and backup agent.
-
 Note: If you are managing multiple Kubernetes clusters, from your local
 system, you can run ``kubectl config view`` to list all the contexts that
 are available for the local kubectl.
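
For example, to see which contexts are available and run one proxy per cluster (a sketch assuming the contexts are named ``ctx-1`` and ``ctx-2``, as in the surrounding examples):

.. code:: bash

    $ kubectl config get-contexts                  # compact list of the available contexts
    $ kubectl --context ctx-1 proxy --port 8001    # proxy to the first cluster
    $ kubectl --context ctx-2 proxy --port 8002    # proxy to the second cluster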
@@ -77,33 +73,86 @@ example:
     $ kubectl --context ctx-2 proxy --port 8002
 
 
-Step 2: Prepare the New Kubernetes Cluster
-------------------------------------------
+Step 2: Configure the BigchainDB Node
+-------------------------------------
 
-Follow the steps in the sections to set up Storage Classes and Persistent Volume
-Claims, and to run MongoDB in the new cluster:
-
-1. :ref:`Add Storage Classes <Step 10: Create Kubernetes Storage Classes for MongoDB>`.
-2. :ref:`Add Persistent Volume Claims <Step 11: Create Kubernetes Persistent Volume Claims>`.
-3. :ref:`Create the Config Map <Step 3: Configure Your BigchainDB Node>`.
-4. :ref:`Prepare the Kubernetes Secrets <Step 3: Configure Your BigchainDB Node>`, as per your
-   requirement i.e. if you do not want a certain functionality, just remove it from the
-   ``configuration/secret.yaml``.
-5. :ref:`Run MongoDB instance <Step 12: Start a Kubernetes StatefulSet for MongoDB>`.
+See the section on how to :ref:`configure your BigchainDB node <How to Configure a BigchainDB Node>`.
 
 
-Step 3: Start NGINX service, Assign DNS to NGINX Pubic IP and run NGINX deployment
-----------------------------------------------------------------------------------
+Step 3: Start the NGINX Service
+--------------------------------
 
-Please see the following pages:
+Please see the following section:
 
 * :ref:`Start NGINX service <Step 4: Start the NGINX Service>`.
 
 
+Step 4: Assign DNS Name to the NGINX Public IP
+----------------------------------------------
+
+Please see the following section:
+
 * :ref:`Assign DNS to NGINX Public IP <Step 5: Assign DNS Name to the NGINX Public IP>`.
 
 
+Step 5: Start the MongoDB Kubernetes Service
+--------------------------------------------
+
+Please see the following section:
+
+* :ref:`Start the MongoDB Kubernetes Service <Step 6: Start the MongoDB Kubernetes Service>`.
+
+
+Step 6: Start the BigchainDB Kubernetes Service
+-----------------------------------------------
+
+Please see the following section:
+
+* :ref:`Start the BigchainDB Kubernetes Service <Step 7: Start the BigchainDB Kubernetes Service>`.
+
+
+Step 7: Start the OpenResty Kubernetes Service
+----------------------------------------------
+
+Please see the following section:
+
+* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.
+
+
+Step 8: Start the NGINX Kubernetes Deployment
+---------------------------------------------
+
+Please see the following section:
+
 * :ref:`Run NGINX deployment <Step 9: Start the NGINX Kubernetes Deployment>`.
 
 
-Step 4: Verify network connectivity between the MongoDB instances
------------------------------------------------------------------
+Step 9: Create Kubernetes Storage Classes for MongoDB
+-----------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`.
+
+
+Step 10: Create Kubernetes Persistent Volume Claims
+---------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`.
+
+
+Step 11: Start a Kubernetes StatefulSet for MongoDB
+---------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`.
+
+
+Step 12: Verify network connectivity between the MongoDB instances
+------------------------------------------------------------------
 
 Make sure your MongoDB instances can access each other over the network. *If* you are deploying
 the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container
@@ -115,14 +164,18 @@ want to add a new MongoDB instance ``mdb-instance-1`` located in Azure data cent
 replica set. Unless you already have explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and
 vice versa, we will have to add a Kubernetes Service in each cluster to accomplish this goal in order to set up a
 MongoDB replica set.
+It is similar to ensuring that there is a ``CNAME`` record in the DNS
+infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
+We can do this in Kubernetes using a Kubernetes Service of ``type``
+``ExternalName``.
 
 * This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.
 
 * Set the name of the ``metadata.name`` to the host name of the MongoDB instance you are trying to connect to.
-  For instance if you are configuring this service on cluster with `mdb-instance-0` then the ``metadata.name`` will
+  For instance, if you are configuring this service on the cluster with ``mdb-instance-0`` then the ``metadata.name`` will
   be ``mdb-instance-1`` and vice versa.
 
-* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap.
+* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap for the other cluster.
 
 * Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to.
   For more information about the FQDN please refer to: :ref:`Assign DNS Name to the NGINX Public
@@ -132,9 +185,14 @@ MongoDB replica set.
 This operation needs to be replicated ``n-1`` times per node for a ``n`` node cluster, with the respective FQDNs
 we need to communicate with.
 
+If you are not the system administrator of the cluster, you have to get in
+touch with the system administrator/s of the other ``n-1`` clusters and
+share with them your instance name (``mdb-instance-name`` in the ConfigMap)
+and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).
+
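
A sketch of what such an ``ExternalName`` Service can look like, applied inline with ``kubectl``; the instance name, FQDN and port below are illustrative placeholders, not values prescribed by this guide:

.. code:: bash

    $ kubectl --context ctx-1 apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: mdb-instance-1                  # name of the *other* MongoDB instance
      namespace: default
    spec:
      type: ExternalName
      externalName: cluster-1.example.com   # placeholder FQDN mapped to the other cluster's NGINX public IP
      ports:
      - port: 27017                         # placeholder for the other cluster's mongodb-backend-port
    EOF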
-Step 5: Add the New MongoDB Instance to the Existing Replica Set
-----------------------------------------------------------------
+Step 13: Add the New MongoDB Instance to the Existing Replica Set
+-----------------------------------------------------------------
 
 Note that by ``replica set``, we are referring to the MongoDB replica set,
 not a Kubernetes' ``ReplicaSet``.
@@ -149,8 +207,8 @@ contact the admin of the PRIMARY MongoDB node:
 
 .. code:: bash
 
-    $ kubectl --context ctx-1 exec -it <existing-mongodb-host-name> -c mongodb -- /bin/bash
-    $ mongo --host <existing-mongodb-host-name> --port 27017 --verbose --ssl \
+    $ kubectl --context ctx-1 exec -it <existing mongodb-instance-name> bash
+    $ mongo --host <existing mongodb-instance-name> --port 27017 --verbose --ssl \
       --sslCAFile /etc/mongod/ssl/ca.pem \
       --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
 
@@ -167,11 +225,11 @@ Run the ``rs.add()`` command with the FQDN and port number of the other instance
 
 .. code:: bash
 
-    PRIMARY> rs.add("<fqdn>:<port>")
+    PRIMARY> rs.add("<new mdb-instance-name>:<port>")
 
 
-Step 6: Verify the Replica Set Membership
------------------------------------------
+Step 14: Verify the Replica Set Membership
+------------------------------------------
 
 You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
 mongo shell to verify the replica set membership.
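
For example, one way to list each replica set member with its state from the mongo shell (a sketch; the member names depend on your cluster):

.. code:: bash

    PRIMARY> rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })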
@@ -180,15 +238,60 @@ The new MongoDB instance should be listed in the membership information
 displayed.
 
 
-Step 7: Start the New BigchainDB Instance
------------------------------------------
+Step 15: Configure Users and Access Control for MongoDB
+-------------------------------------------------------
 
-Get the file ``bigchaindb-dep.yaml`` from GitHub using:
+* Create the users in MongoDB with the appropriate roles assigned to them. This
+  will enable the new BigchainDB instance, new MongoDB Monitoring Agent
+  instance and the new MongoDB Backup Agent instance to function correctly.
 
-.. code:: bash
+* Please refer to
+  :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure
+  Users and Access Control for MongoDB>` to create and configure the new
+  BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the
+  cluster.
 
-    $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml
+.. note::
+   You will not have to create the MongoDB replica set or the admin user, as they already exist.
+
+   If you do not have access to the ``PRIMARY`` member of the replica set, you
+   need to get in touch with the administrator who can create the users in the
+   MongoDB cluster.
+
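
As an illustrative sketch of what creating such a user can look like with x.509 client-certificate authentication (the subject string, role and database name are placeholders; use the values from your own certificates and configuration):

.. code:: bash

    PRIMARY> db.getSiblingDB("$external").runCommand({
        createUser: "<subject string of the client certificate>",
        roles: [ { role: "readWrite", db: "<database name>" } ]
    })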
+
+Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
+-------------------------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`.
+
+.. note::
+   Every MMS group has only one active Monitoring Agent and one active Backup
+   Agent; having multiple agents provides high availability and failover in
+   case one goes down. For more information about Monitoring and Backup Agents
+   please consult the `official MongoDB documentation
+   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.
+
+
+Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
+---------------------------------------------------------------
+
+Please see the following section:
+
+* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`.
+
+.. note::
+   Every MMS group has only one active Monitoring Agent and one active Backup
+   Agent; having multiple agents provides high availability and failover in
+   case one goes down. For more information about Monitoring and Backup Agents
+   please consult the `official MongoDB documentation
+   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.
+
+
+Step 18: Start a Kubernetes Deployment for BigchainDB
+-----------------------------------------------------
+
 * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
   value set in ``bdb-instance-name`` in the ConfigMap, followed by
@@ -216,72 +319,65 @@ Get the file ``bigchaindb-dep.yaml`` from GitHub using:
 * Uncomment the env var ``BIGCHAINDB_KEYRING``, it will pick up the
   ``:`` delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.
 
-* Authenticate the new BigchainDB instance using the client x.509 certificate with MongoDB. We need to specify the
-  user name *as seen in the certificate* issued to the BigchainDB instance in order to authenticate correctly.
-  Please refer to: :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure Users and Access Control for MongoDB>`
-
 Create the required Deployment using:
 
 .. code:: bash
 
     $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml
 
-You can check its status using the command ``kubectl get deploy -w``
+You can check its status using the command ``kubectl --context ctx-2 get deploy -w``.
 
 
-Step 8: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------
+Step 19: Restart the Existing BigchainDB Instance(s)
+----------------------------------------------------
 
-Add the public key of the new BigchainDB instance to the ConfigMap ``bdb-keyring``
-variable of existing BigchainDB instances, update the ConfigMap of the existing
-BigchainDB instances and update the instances respectively:
+* Add the public key of the new BigchainDB instance to the ConfigMap
+  ``bdb-keyring`` variable of all the existing BigchainDB instances.
+  Update all the existing ConfigMaps using:
 
 .. code:: bash
 
     $ kubectl --context ctx-1 apply -f configuration/config-map.yaml
-    $ kubectl --context ctx-1 replace -f bigchaindb/bigchaindb-dep.yaml --force
+
+* Uncomment the ``BIGCHAINDB_KEYRING`` variable from the
+  ``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the
+  ConfigMap.
+  Update the running BigchainDB instance using:
+
+.. code:: bash
+
+    $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
+    $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml
 
 
 See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
 ConfigMap configuration.
 
-This will create a "rolling deployment" in Kubernetes where a new instance of
-BigchainDB will be created, and if the health check on the new instance is
-successful, the earlier one will be terminated. This ensures that there is
-zero downtime during updates.
-
 You can SSH to an existing BigchainDB instance and run the ``bigchaindb
 show-config`` command to check that the keyring is updated.
 
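
For example, a quick way to run that check from your local machine (a sketch; ``kubectl exec`` stands in for an SSH login and the pod name is a placeholder):

.. code:: bash

    $ kubectl --context ctx-1 exec -it <bdb-pod-name> -- bigchaindb show-config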
-Step 9: Deploy MongoDB Monitoring and Backup Agent
---------------------------------------------------
+Step 20: Start a Kubernetes Deployment for OpenResty
+----------------------------------------------------
 
-To Deploy MongoDB monitoring and backup agent for the new cluster, you have to authenticate each agent using its
-unique client certificate. For more information on how to authenticate and add users to MongoDB please refer to:
+Please see the following section:
 
-* :ref:`Configure Users and Access Control for MongoDB<Step 13: Configure Users and Access Control for MongoDB>`
+* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`.
 
-After authentication, start the Kubernetes Deployments:
-
-* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent>`.
-* :ref:`Start a Kubernetes Deployment for MongoDB Backup Agent <Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent>`.
-
-.. note::
-   Every MMS group has only one active Monitoring and Backup agent and having multiple agents provides High availability and failover, in case
-   one goes down. For more information about Monitoring and Backup Agents please consult the `official MongoDB documenation <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.
 
-
-Step 10: Start OpenResty Service and Deployment
----------------------------------------------------------
+Step 21: Configure the MongoDB Cloud Manager
+--------------------------------------------
 
-Please refer to the following instructions:
+* MongoDB Cloud Manager auto-detects the members of the replica set and
+  configures the agents to act as a master/slave accordingly.
 
-* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.
-* :ref:`Start a Kubernetes Deployment for OpenResty <Step 17: Start a Kubernetes Deployment for OpenResty>`.
+* You can verify that the new MongoDB instance is detected by the
+  Monitoring and Backup Agent using the Cloud Manager UI.
 
 
-Step 11: Test Your New BigchainDB Node
+Step 22: Test Your New BigchainDB Node
 --------------------------------------
 
-Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
+* Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
 Node Setup>` to verify that your new BigchainDB node is working as expected.
 
@@ -83,9 +83,21 @@ The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.
 Step 4: Generate the Consolidated Client PEM File
 -------------------------------------------------
 
-MongoDB requires a single, consolidated file containing both the public and
-private keys.
+.. note::
+   This step can be skipped for the BigchainDB client certificate, as BigchainDB
+   uses the PyMongo driver, which accepts separate certificate and key files.
+
+MongoDB, MongoDB Backup Agent and MongoDB Monitoring Agent require a single,
+consolidated file containing both the public and private keys.
 
 .. code:: bash
 
-    cat /path/to/bdb-instance-0.crt /path/to/bdb-instance-0.key > bdb-instance-0.pem
+    cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem
+
+    OR
+
+    cat /path/to/mdb-mon-instance-0.crt /path/to/mdb-mon-instance-0.key > mdb-mon-instance-0.pem
+
+    OR
+
+    cat /path/to/mdb-bak-instance-0.crt /path/to/mdb-bak-instance-0.key > mdb-bak-instance-0.pem
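
To check which MongoDB user name a consolidated PEM file will map to, you can print the subject of the certificate it contains (a sketch using a standard OpenSSL invocation; the path is a placeholder):

.. code:: bash

    $ openssl x509 -in /path/to/mdb-instance-0.pem -noout -subject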
@@ -29,7 +29,6 @@ where all data values must be base64-encoded.
 This is true of all Kubernetes ConfigMaps and Secrets.)
 
 
-
 vars.cluster-fqdn
 ~~~~~~~~~~~~~~~~~
 
@@ -83,7 +82,7 @@ There are some things worth noting about the ``mdb-instance-name``:
 documentation. Your BigchainDB cluster may use a different naming convention.
 
 
-vars.ngx-ndb-instance-name and Similar
+vars.ngx-mdb-instance-name and Similar
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 NGINX needs the FQDN of the servers inside the cluster to be able to forward
@@ -117,7 +117,7 @@ Step 4: Start the NGINX Service
 public IP to be assigned.
 
 * You have the option to use vanilla NGINX without HTTPS support or an
-  NGINX with HTTPS support integrated with 3scale API Gateway.
+  NGINX with HTTPS support.
 
 
 Step 4.1: Vanilla NGINX
@@ -144,14 +144,14 @@ Step 4.1: Vanilla NGINX
     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml
 
 
-Step 4.2: NGINX with HTTPS + 3scale
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Step 4.2: NGINX with HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 * You have to enable HTTPS for this one and will need an HTTPS certificate
   for your domain.
 
 * You should have already created the necessary Kubernetes Secrets in the previous
-  step (e.g. ``https-certs`` and ``threescale-credentials``).
+  step (i.e. ``https-certs``).
 
 * This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``.
 
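
As a sketch, one common way to create such a Secret from an existing certificate and key (the file paths and key names here are illustrative assumptions, not values prescribed by this guide):

.. code:: bash

    $ kubectl --context k8s-bdb-test-cluster-0 create secret generic https-certs \
        --from-file=cert.pem=/path/to/cert.pem \
        --from-file=cert.key=/path/to/cert.key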
@@ -304,7 +304,7 @@ Step 9: Start the NGINX Kubernetes Deployment
 on ``mongodb-frontend-port`` to the MongoDB backend.
 
 * As in step 4, you have the option to use vanilla NGINX without HTTPS or
-  NGINX with HTTPS support integrated with 3scale API Gateway.
+  NGINX with HTTPS support.
 
 Step 9.1: Vanilla NGINX
 ^^^^^^^^^^^^^^^^^^^^^^^
@@ -329,8 +329,8 @@ Step 9.1: Vanilla NGINX
     $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml
 
 
-Step 9.2: NGINX with HTTPS + 3scale
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Step 9.2: NGINX with HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 * This configuration is located in the file
   ``nginx-https/nginx-https-dep.yaml``.
@@ -888,4 +888,3 @@ If you are using the NGINX with HTTPS support, use ``https`` instead of
 
 Use the Python Driver to send some transactions to the BigchainDB node and
 verify that your node or cluster works as expected.
-
@@ -128,9 +128,9 @@ You can SSH to one of the just-deployed Kubernetes "master" nodes
 
 .. code:: bash
 
-    $ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-hostname>
+    $ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>
 
-where you can get the IP address or hostname
+where you can get the IP address or FQDN
 of a master node from the Azure Portal. For example:
 
 .. code:: bash
@@ -139,13 +139,14 @@ of a master node from the Azure Portal. For example:
 
 .. note::
 
-   All the master nodes should have the *same* public IP address and hostname
-   (also called the Master FQDN).
+   All the master nodes are accessible behind the *same* public IP address and
+   FQDN. You connect to one of the masters randomly, based on the load balancing
+   policy.
 
-The "agent" nodes shouldn't get public IP addresses or hostnames,
-so you can't SSH to them *directly*,
+The "agent" nodes shouldn't get public IP addresses or externally accessible
+FQDNs, so you can't SSH to them *directly*,
 but you can first SSH to the master
-and then SSH to an agent from there.
+and then SSH to an agent from there using its hostname.
 To do that, you could
 copy your SSH key pair to the master (a bad idea),
 or use SSH agent forwarding (better).
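
A minimal sketch of that flow, assuming your key is already loaded into ``ssh-agent`` (the host names are placeholders):

.. code:: bash

    $ ssh-add ~/.ssh/<name>                       # load your key into the local agent
    $ ssh -A ubuntu@<master-ip-address-or-fqdn>   # -A forwards the agent to the master
    $ ssh ubuntu@<agent-node-hostname>            # run on the master: hop to an agent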
@@ -168,7 +169,7 @@ then SSH agent forwarding hasn't been set up correctly.
 If you get a non-empty response,
 then SSH agent forwarding should work fine
 and you can SSH to one of the agent nodes (from a master)
-using something like:
+using:
 
 .. code:: bash
 
@@ -100,15 +100,13 @@ and have an SSL certificate for the FQDN.
 (You can get an SSL certificate from any SSL certificate provider.)
 
 
-☐ Ask the managing organization
-for the FQDN used to serve the BigchainDB APIs
-(e.g. ``api.orgname.net`` or ``bdb.clustername.com``)
-and for a copy of the associated SSL/TLS certificate.
-Also, ask for the user name to use for authenticating to MongoDB.
+☐ Ask the managing organization for the user name to use for authenticating to
+MongoDB.
 
 
 ☐ If the cluster uses 3scale for API authentication, monitoring and billing,
-you must ask the managing organization for all relevant 3scale credentials.
+you must ask the managing organization for all relevant 3scale credentials:
+secret token, service ID, version header and API service token.
 
 
 ☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,
@@ -1,17 +1,17 @@
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
-  name: ngx-http-instance-0-dep
+  name: ngx-instance-0-dep
 spec:
   replicas: 1
   template:
     metadata:
       labels:
-        app: ngx-http-instance-0-dep
+        app: ngx-instance-0-dep
     spec:
       terminationGracePeriodSeconds: 10
       containers:
-      - name: nginx-http
+      - name: nginx
         image: bigchaindb/nginx_http:1.0
         imagePullPolicy: IfNotPresent
         env:
@@ -1,17 +1,17 @@
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
-  name: ngx-https-instance-0-dep
+  name: ngx-instance-0-dep
 spec:
   replicas: 1
   template:
     metadata:
       labels:
-        app: ngx-https-instance-0-dep
+        app: ngx-instance-0-dep
     spec:
       terminationGracePeriodSeconds: 10
      containers:
-      - name: nginx-https
+      - name: nginx
         image: bigchaindb/nginx_https:1.0
         imagePullPolicy: IfNotPresent
         env:
@@ -1,17 +1,17 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: ngx-https-instance-0
+  name: ngx-instance-0
   namespace: default
   labels:
-    name: ngx-https-instance-0
+    name: ngx-instance-0
   annotations:
     # NOTE: the following annotation is a beta feature and
     # only available in GCE/GKE and Azure as of now
     service.beta.kubernetes.io/external-traffic: OnlyLocal
 spec:
   selector:
-    app: ngx-https-instance-0-dep
+    app: ngx-instance-0-dep
   ports:
   - port: "<cluster-frontend-port from ConfigMap>"
     targetPort: "<cluster-frontend-port from ConfigMap>"
@@ -12,7 +12,7 @@ spec:
       terminationGracePeriodSeconds: 10
       containers:
       - name: nginx-openresty
-        image: bigchaindb/nginx_3scale:2.0
+        image: bigchaindb/nginx_3scale:3.0
         imagePullPolicy: IfNotPresent
         env:
         - name: DNS_SERVER