Mirror of https://github.com/bigchaindb/bigchaindb.git (synced 2024-10-13 13:34:05 +00:00)

Commit e446c31a41 (parent 0cf46b331f): More changes to multi-node deployment guide

- Integrating changes suggested by Krish.
- Addressing comments on the initial commit.
@@ -47,22 +47,18 @@ cluster is using.

Step 1: Prerequisites
---------------------

* :ref:`List of all the things to be done by each node operator <Things Each Node Operator Must Do>`.

* A new Kubernetes cluster setup with kubectl configured to access it.

* Some familiarity with deploying a BigchainDB node on Kubernetes.
  See our :doc:`other docs about that <node-on-kubernetes>`.

Note: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts that
are available for the local kubectl.
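For example, a quick way to list and switch contexts (a sketch; the context names ``ctx-1`` and ``ctx-2`` are illustrative):

.. code:: bash

   # List all contexts available to the local kubectl
   $ kubectl config get-contexts

   # Switch the local kubectl to a specific cluster's context
   $ kubectl config use-context ctx-2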
@@ -77,33 +73,86 @@ example:

   $ kubectl --context ctx-2 proxy --port 8002


Step 2: Configure the BigchainDB Node
-------------------------------------

See the section on how to :ref:`configure your BigchainDB node <How to Configure a BigchainDB Node>`.


Step 3: Start the NGINX Service
--------------------------------

Please see the following section:

* :ref:`Start NGINX service <Step 4: Start the NGINX Service>`.


Step 4: Assign DNS Name to the NGINX Public IP
----------------------------------------------

Please see the following section:

* :ref:`Assign DNS to NGINX Public IP <Step 5: Assign DNS Name to the NGINX Public IP>`.


Step 5: Start the MongoDB Kubernetes Service
--------------------------------------------

Please see the following section:

* :ref:`Start the MongoDB Kubernetes Service <Step 6: Start the MongoDB Kubernetes Service>`.


Step 6: Start the BigchainDB Kubernetes Service
-----------------------------------------------

Please see the following section:

* :ref:`Start the BigchainDB Kubernetes Service <Step 7: Start the BigchainDB Kubernetes Service>`.


Step 7: Start the OpenResty Kubernetes Service
----------------------------------------------

Please see the following section:

* :ref:`Start the OpenResty Kubernetes Service <Step 8: Start the OpenResty Kubernetes Service>`.


Step 8: Start the NGINX Kubernetes Deployment
---------------------------------------------

Please see the following section:

* :ref:`Run NGINX deployment <Step 9: Start the NGINX Kubernetes Deployment>`.


Step 9: Create Kubernetes Storage Classes for MongoDB
-----------------------------------------------------

Please see the following section:

* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`.


Step 10: Create Kubernetes Persistent Volume Claims
---------------------------------------------------

Please see the following section:

* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`.


Step 11: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

Please see the following section:

* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`.


Step 12: Verify network connectivity between the MongoDB instances
------------------------------------------------------------------

Make sure your MongoDB instances can access each other over the network. *If* you are deploying
the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container
@@ -115,14 +164,18 @@ want to add a new MongoDB instance ``mdb-instance-1`` located in Azure data cent

replica set. Unless you already have explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and
vice versa, we will have to add a Kubernetes Service in each cluster to accomplish this goal in order to set up a
MongoDB replica set.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.

* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``.

* Set ``metadata.name`` to the host name of the MongoDB instance you are trying to connect to.
  For instance, if you are configuring this service on the cluster with ``mdb-instance-0``, then the ``metadata.name`` will
  be ``mdb-instance-1``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap for the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying to connect to.
  For more information about the FQDN please refer to: :ref:`Assign DNS Name to the NGINX Public

@@ -132,9 +185,14 @@ MongoDB replica set.

This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
we need to communicate with.
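For illustration, a minimal sketch of such an ``ExternalName`` Service, applied with a heredoc (the instance name, port and FQDN below are placeholders; use the values from ``mongodb/mongo-ext-conn-svc.yaml`` and your ConfigMap):

.. code:: bash

   $ kubectl --context ctx-1 apply -f - <<EOF
   apiVersion: v1
   kind: Service
   metadata:
     # name of the *remote* MongoDB instance this cluster must resolve
     name: mdb-instance-1
     namespace: default
   spec:
     type: ExternalName
     # FQDN mapped to the NGINX Public IP of the other cluster
     externalName: <cluster-1-fqdn>
     ports:
     - port: "<mongodb-backend-port from the other cluster's ConfigMap>"
   EOF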
If you are not the system administrator of the cluster, you have to get in
touch with the system administrator/s of the other ``n-1`` clusters and
share with them your instance name (``mdb-instance-name`` in the ConfigMap)
and the FQDN for your node (``cluster-fqdn`` in the ConfigMap).


Step 13: Add the New MongoDB Instance to the Existing Replica Set
-----------------------------------------------------------------

Note that by ``replica set``, we are referring to the MongoDB replica set,
not a Kubernetes ``ReplicaSet``.

@@ -144,13 +202,13 @@ will have to coordinate offline with an existing administrator so that they can

add the new MongoDB instance to the replica set.

Add the new instance of MongoDB from an existing instance by accessing the
``mongo`` shell and authenticating as the ``adminUser`` we created for the existing MongoDB instance, OR
contact the admin of the PRIMARY MongoDB node:

.. code:: bash

   $ kubectl --context ctx-1 exec -it <existing mongodb-instance-name> bash
   $ mongo --host <existing mongodb-instance-name> --port 27017 --verbose --ssl \
     --sslCAFile /etc/mongod/ssl/ca.pem \
     --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem

@@ -167,11 +225,11 @@ Run the ``rs.add()`` command with the FQDN and port number of the other instance

.. code:: bash

   PRIMARY> rs.add("<new mdb-instance-name>:<port>")


Step 14: Verify the Replica Set Membership
------------------------------------------

You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
mongo shell to verify the replica set membership.
@@ -180,15 +238,60 @@ The new MongoDB instance should be listed in the membership information

displayed.
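For example, from the mongo shell on the ``PRIMARY`` (a sketch; member host names depend on your instance names):

.. code:: bash

   PRIMARY> rs.status().members.forEach(function (m) { print(m.name, m.stateStr) })

The new instance should eventually be reported in state ``SECONDARY``, once its initial sync completes.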

Step 15: Configure Users and Access Control for MongoDB
-------------------------------------------------------

* Create the users in MongoDB with the appropriate roles assigned to them. This
  will enable the new BigchainDB instance, new MongoDB Monitoring Agent
  instance and the new MongoDB Backup Agent instance to function correctly.

* Please refer to
  :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure
  Users and Access Control for MongoDB>` to create and configure the new
  BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the
  cluster.

.. note::
   You will not have to create the MongoDB replica set or create the admin user, as they already exist.

   If you do not have access to the ``PRIMARY`` member of the replica set, you
   need to get in touch with the administrator who can create the users in the
   MongoDB cluster.
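For orientation, users for x.509 client-certificate authentication are created in the ``$external`` database on the ``PRIMARY``; a hypothetical sketch (the subject string must exactly match the one in the client certificate, and the roles shown are placeholders for those given in the referenced section):

.. code:: bash

   PRIMARY> db.getSiblingDB("$external").runCommand({
       createUser: "<subject of the new BigchainDB client certificate>",
       writeConcern: {w: "majority", wtimeout: 5000},
       roles: [<roles from the referenced section>]
   })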

Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------

Please see the following section:

* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent and having
   multiple agents provides high availability and failover, in case one goes
   down. For more information about Monitoring and Backup Agents please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.


Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent
---------------------------------------------------------------

Please see the following section:

* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`.

.. note::
   Every MMS group has only one active Monitoring and Backup Agent and having
   multiple agents provides high availability and failover, in case one goes
   down. For more information about Monitoring and Backup Agents please
   consult the `official MongoDB documentation
   <https://docs.cloudmanager.mongodb.com/tutorial/move-agent-to-new-server/>`_.


Step 18: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``bdb-instance-name`` in the ConfigMap, followed by

@@ -216,72 +319,65 @@ Get the file ``bigchaindb-dep.yaml`` from GitHub using:

* Uncomment the env var ``BIGCHAINDB_KEYRING``; it will pick up the
  ``:``-delimited list of all the public keys in the BigchainDB cluster from the ConfigMap.

* Authenticate the new BigchainDB instance using the client x.509 certificate with MongoDB. We need to specify the
  user name *as seen in the certificate* issued to the BigchainDB instance in order to authenticate correctly.
  Please refer to: :ref:`Configure Users and Access Control for MongoDB <Step 13: Configure Users and Access Control for MongoDB>`
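After uncommenting, the relevant fragment of ``bigchaindb-dep.yaml`` can be checked quickly (a sketch; the exact ConfigMap name and key are those used in your ``configuration/config-map.yaml``):

.. code:: bash

   # Show the keyring env var entry and the lines that follow it
   $ grep -A 4 BIGCHAINDB_KEYRING bigchaindb-dep.yaml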

Create the required Deployment using:

.. code:: bash

   $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml

You can check its status using the command ``kubectl --context ctx-2 get deploy -w``
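For example (a sketch; copy the actual pod name from the ``get po`` output):

.. code:: bash

   $ kubectl --context ctx-2 get deploy -w
   $ kubectl --context ctx-2 get po
   # Confirm the keyring env var is set inside the running container
   $ kubectl --context ctx-2 exec -it <bdb-pod-name> -- env | grep BIGCHAINDB_KEYRING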

Step 19: Restart the Existing BigchainDB Instance(s)
----------------------------------------------------

* Add the public key of the new BigchainDB instance to the ConfigMap
  ``bdb-keyring`` variable of all the existing BigchainDB instances.
  Update all the existing ConfigMaps using:

  .. code:: bash

     $ kubectl --context ctx-1 apply -f configuration/config-map.yaml

* Uncomment the ``BIGCHAINDB_KEYRING`` variable from the
  ``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the
  ConfigMap.
  Update the running BigchainDB instance using:

  .. code:: bash

     $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml
     $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml

See the page titled :ref:`How to Configure a BigchainDB Node` for more information about
ConfigMap configuration.

You can SSH to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to check that the keyring is updated.
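An equivalent check with ``kubectl`` (a sketch; the pod name is a placeholder):

.. code:: bash

   $ kubectl --context ctx-1 exec -it <existing-bdb-pod-name> -- bigchaindb show-config

The ``keyring`` field in the output should include the public key of the new BigchainDB instance.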

Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------

Please see the following section:

* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`.


Step 21: Configure the MongoDB Cloud Manager
--------------------------------------------

* MongoDB Cloud Manager auto-detects the members of the replica set and
  configures the agents to act as a master/slave accordingly.

* You can verify that the new MongoDB instance is detected by the
  Monitoring and Backup Agent using the Cloud Manager UI.


Step 22: Test Your New BigchainDB Node
--------------------------------------

* Please refer to the testing steps :ref:`here <Step 19: Verify the BigchainDB
  Node Setup>` to verify that your new BigchainDB node is working as expected.
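As a quick smoke test (a sketch; substitute your cluster's FQDN and frontend port, and use ``http`` if HTTPS is not enabled):

.. code:: bash

   $ curl https://<cluster-fqdn>:<cluster-frontend-port>/api/v1/

A healthy node responds with a JSON document listing the HTTP API endpoints.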

@@ -83,9 +83,21 @@ The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.

Step 4: Generate the Consolidated Client PEM File
-------------------------------------------------

.. note::
   This step can be skipped for the BigchainDB client certificate, as BigchainDB
   uses the PyMongo driver, which accepts separate certificate and key files.

MongoDB, MongoDB Backup Agent and MongoDB Monitoring Agent require a single,
consolidated file containing both the public and private keys.

.. code:: bash

   cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem

   OR

   cat /path/to/mdb-mon-instance-0.crt /path/to/mdb-mon-instance-0.key > mdb-mon-instance-0.pem

   OR

   cat /path/to/mdb-bak-instance-0.crt /path/to/mdb-bak-instance-0.key > mdb-bak-instance-0.pem
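To sanity-check a consolidated PEM file (a sketch; assumes ``openssl`` is installed and the key is RSA):

.. code:: bash

   # The certificate and its subject should be readable from the combined file
   $ openssl x509 -in mdb-instance-0.pem -noout -subject

   # The private key in the combined file should pass a consistency check
   $ openssl rsa -in mdb-instance-0.pem -noout -check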

@@ -29,7 +29,6 @@ where all data values must be base64-encoded.

This is true of all Kubernetes ConfigMaps and Secrets.)


vars.cluster-fqdn
~~~~~~~~~~~~~~~~~

@@ -83,7 +82,7 @@ There are some things worth noting about the ``mdb-instance-name``:

documentation. Your BigchainDB cluster may use a different naming convention.


vars.ngx-mdb-instance-name and Similar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NGINX needs the FQDN of the servers inside the cluster to be able to forward
@@ -117,7 +117,7 @@ Step 4: Start the NGINX Service

public IP to be assigned.

* You have the option to use vanilla NGINX without HTTPS support or an
  NGINX with HTTPS support.


Step 4.1: Vanilla NGINX

@@ -144,14 +144,14 @@ Step 4.1: Vanilla NGINX

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml


Step 4.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^

* You have to enable HTTPS for this one and will need an HTTPS certificate
  for your domain.

* You should have already created the necessary Kubernetes Secrets in the previous
  step (i.e. ``https-certs``); see the check after the next item.

* This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``.
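You can confirm that the required Secret exists before starting the Service (a quick check, using the same context name as above):

.. code:: bash

   $ kubectl --context k8s-bdb-test-cluster-0 get secret https-certs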

@@ -304,7 +304,7 @@ Step 9: Start the NGINX Kubernetes Deployment

on ``mongodb-frontend-port`` to the MongoDB backend.

* As in step 4, you have the option to use vanilla NGINX without HTTPS or
  NGINX with HTTPS support.

Step 9.1: Vanilla NGINX
^^^^^^^^^^^^^^^^^^^^^^^

@@ -329,8 +329,8 @@ Step 9.1: Vanilla NGINX

   $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml


Step 9.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^

* This configuration is located in the file
  ``nginx-https/nginx-https-dep.yaml``.

@@ -742,10 +742,10 @@ Step 17: Start a Kubernetes Deployment for OpenResty

``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
the value ``openresty-instance-0-dep``.

* Set the port to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose the port at
  which OpenResty is listening for requests, ``openresty-backend-port`` in
  the above ConfigMap.

* Create the OpenResty Deployment using:

@@ -887,5 +887,4 @@ If you are using the NGINX with HTTPS support, use ``https`` instead of

``http`` above.

Use the Python Driver to send some transactions to the BigchainDB node and
verify that your node or cluster works as expected.
@@ -49,7 +49,7 @@ If you already *have* the Azure CLI installed, you may want to update it.

.. warning::

   ``az component update`` isn't supported if you installed the CLI using some of Microsoft's provided installation instructions. See `the Microsoft docs for update instructions <https://docs.microsoft.com/en-us/cli/azure/install-az-cli2>`_.


Next, login to your account using:
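With the Azure CLI, that is:

.. code:: bash

   $ az login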

@@ -128,9 +128,9 @@ You can SSH to one of the just-deployed Kubernetes "master" nodes

.. code:: bash

   $ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>

where you can get the IP address or FQDN
of a master node from the Azure Portal. For example:

.. code:: bash

@@ -139,13 +139,14 @@ of a master node from the Azure Portal. For example:

.. note::

   All the master nodes are accessible behind the *same* public IP address and
   FQDN. You connect to one of the masters randomly based on the load balancing
   policy.

The "agent" nodes shouldn't get public IP addresses or externally accessible
FQDNs, so you can't SSH to them *directly*,
but you can first SSH to the master
and then SSH to an agent from there using its hostname.
To do that, you could
copy your SSH key pair to the master (a bad idea),
or use SSH agent forwarding (better).
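A minimal agent-forwarding flow (a sketch; ``<name>`` and the master address are placeholders):

.. code:: bash

   # On your local machine: load the key and forward the agent with -A
   $ ssh-add ~/.ssh/<name>
   $ ssh -A -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>

   # On the master: list the forwarded keys
   $ ssh-add -L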

@@ -168,14 +169,14 @@ then SSH agent forwarding hasn't been set up correctly.

If you get a non-empty response,
then SSH agent forwarding should work fine
and you can SSH to one of the agent nodes (from a master)
using:

.. code:: bash

   $ ssh ubuntu@k8s-agent-4AC80E97-0

where ``k8s-agent-4AC80E97-0`` is the name
of a Kubernetes agent node in your Kubernetes cluster.
You will have to replace it by the name
of an agent node in your cluster.

@@ -202,4 +203,4 @@ CAUTION: You might end up deleting resources other than the ACS cluster.

Next, you can :doc:`run a BigchainDB node on your new
Kubernetes cluster <node-on-kubernetes>`.
@@ -45,7 +45,7 @@ For example, maybe they assign a unique number to each node,

so that if you're operating node 12, your MongoDB instance would be named
``mdb-instance-12``.
Similarly, other instances must also have unique names in the cluster.


#. Name of the MongoDB instance (``mdb-instance-*``)
#. Name of the BigchainDB instance (``bdb-instance-*``)
#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)

@@ -80,7 +80,7 @@ You can generate a BigchainDB keypair for your node, for example,

using the `BigchainDB Python Driver <http://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_.

.. code:: python

   from bigchaindb_driver.crypto import generate_keypair
   print(generate_keypair())

@@ -100,15 +100,13 @@ and have an SSL certificate for the FQDN.

(You can get an SSL certificate from any SSL certificate provider.)

☐ Ask the managing organization for the user name to use for authenticating to
MongoDB.

☐ If the cluster uses 3scale for API authentication, monitoring and billing,
you must ask the managing organization for all relevant 3scale credentials -
secret token, service ID, version header and API service token.

☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup,
@@ -1,17 +1,17 @@

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ngx-instance-0-dep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-instance-0-dep
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: bigchaindb/nginx_http:1.0
        imagePullPolicy: IfNotPresent
        env:

@@ -1,17 +1,17 @@

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ngx-instance-0-dep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx-instance-0-dep
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: bigchaindb/nginx_https:1.0
        imagePullPolicy: IfNotPresent
        env:

@@ -1,17 +1,17 @@

apiVersion: v1
kind: Service
metadata:
  name: ngx-instance-0
  namespace: default
  labels:
    name: ngx-instance-0
  annotations:
    # NOTE: the following annotation is a beta feature and
    # only available in GCE/GKE and Azure as of now
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  selector:
    app: ngx-instance-0-dep
  ports:
  - port: "<cluster-frontend-port from ConfigMap>"
    targetPort: "<cluster-frontend-port from ConfigMap>"

@@ -12,7 +12,7 @@ spec:

      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-openresty
        image: bigchaindb/nginx_3scale:3.0
        imagePullPolicy: IfNotPresent
        env:
        - name: DNS_SERVER