From 14892fc839410fab942394c75e8308b4cb7ab786 Mon Sep 17 00:00:00 2001 From: muawiakh Date: Wed, 10 Jan 2018 14:20:32 +0100 Subject: [PATCH 01/10] Template for BigchainDB + Tendermint Kubernetes Deployment - Template doc for BigchainDB + Tendermint single node - Template doc for BigchainDB + Tendermint network - Remove autosectionlabel extension from docs/server/source/conf.py - Removed this extension because this does not allow two different documents to have same headings, because it auto-indexes - Fix and explicitly label headings and references. --- docs/server/source/appendices/vote-yaml.rst | 6 +- docs/server/source/conf.py | 2 +- .../server/source/data-models/block-model.rst | 4 +- .../source/data-models/transaction-model.rst | 2 + docs/server/source/data-models/vote-model.rst | 8 +- docs/server/source/drivers-clients/index.rst | 2 +- .../events/websocket-event-stream-api.rst | 6 +- docs/server/source/http-client-server-api.rst | 8 +- docs/server/source/index.rst | 1 + .../architecture.rst | 204 +++ .../bigchaindb-network-on-kubernetes.rst | 546 ++++++++ .../index.rst | 20 + .../node-config-map-and-secrets.rst | 356 +++++ .../node-on-kubernetes.rst | 1178 +++++++++++++++++ .../workflow.rst | 138 ++ .../add-node-on-kubernetes.rst | 49 +- .../ca-installation.rst | 4 +- .../client-tls-certificate.rst | 4 +- .../cloud-manager.rst | 2 + .../easy-rsa.rst | 2 + .../node-config-map-and-secrets.rst | 10 +- .../node-on-kubernetes.rst | 34 +- .../revoke-tls-certificate.rst | 2 +- .../server-tls-certificate.rst | 4 +- .../tectonic-azure.rst | 14 +- .../template-kubernetes-azure.rst | 14 +- .../troubleshoot.rst | 2 + .../upgrade-on-kubernetes.rst | 2 +- .../workflow.rst | 13 +- 29 files changed, 2575 insertions(+), 62 deletions(-) create mode 100644 docs/server/source/production-deployment-template-tendermint/architecture.rst create mode 100644 docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst create mode 100644 
docs/server/source/production-deployment-template-tendermint/index.rst create mode 100644 docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst create mode 100644 docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst create mode 100644 docs/server/source/production-deployment-template-tendermint/workflow.rst diff --git a/docs/server/source/appendices/vote-yaml.rst b/docs/server/source/appendices/vote-yaml.rst index 6613827e..9ddb5541 100644 --- a/docs/server/source/appendices/vote-yaml.rst +++ b/docs/server/source/appendices/vote-yaml.rst @@ -1,7 +1,9 @@ +.. _the-vote-schema-file: + The Vote Schema File ==================== -BigchainDB checks all :ref:`votes ` +BigchainDB checks all :ref:`votes ` (JSON documents) against a formal schema defined in a JSON Schema file named vote.yaml. The contents of that file are copied below. @@ -17,4 +19,4 @@ vote.yaml --------- .. literalinclude:: ../../../../bigchaindb/common/schema/vote.yaml - :language: yaml + :language: yaml \ No newline at end of file diff --git a/docs/server/source/conf.py b/docs/server/source/conf.py index e272baf4..8fc4f397 100644 --- a/docs/server/source/conf.py +++ b/docs/server/source/conf.py @@ -48,7 +48,7 @@ extensions = [ 'sphinx.ext.todo', 'sphinx.ext.napoleon', 'sphinxcontrib.httpdomain', - 'sphinx.ext.autosectionlabel', + #'sphinx.ext.autosectionlabel', # Below are actually build steps made to look like sphinx extensions. # It was the easiest way to get it running with ReadTheDocs. 'generate_http_server_api_documentation', diff --git a/docs/server/source/data-models/block-model.rst b/docs/server/source/data-models/block-model.rst index 0537bfd8..9d08af5f 100644 --- a/docs/server/source/data-models/block-model.rst +++ b/docs/server/source/data-models/block-model.rst @@ -1,3 +1,5 @@ +.. _the-block-model: + The Block Model =============== @@ -48,7 +50,7 @@ An example is ``"1507294217"``. 
**block.transactions** -A list of the :ref:`transactions ` included in the block. +A list of the :ref:`transactions ` included in the block. (Each transaction is a JSON object.) diff --git a/docs/server/source/data-models/transaction-model.rst b/docs/server/source/data-models/transaction-model.rst index 6e7dadc9..7f7721cc 100644 --- a/docs/server/source/data-models/transaction-model.rst +++ b/docs/server/source/data-models/transaction-model.rst @@ -1,3 +1,5 @@ +.. _the-transaction-model: + The Transaction Model ===================== diff --git a/docs/server/source/data-models/vote-model.rst b/docs/server/source/data-models/vote-model.rst index 7f428f56..9a0d1689 100644 --- a/docs/server/source/data-models/vote-model.rst +++ b/docs/server/source/data-models/vote-model.rst @@ -1,3 +1,5 @@ +.. _the-vote-model: + The Vote Model ============== @@ -44,7 +46,7 @@ see the `IPDB Transaction Spec page about cryptographic keys and signatures The block ID that this vote is for. It's a string. For more information about block IDs, -see the page about :ref:`blocks `. +see the page about :ref:`blocks `. **vote.previous_block** @@ -54,7 +56,7 @@ according to the node which cast this vote. It's a string. (It's possible for different nodes to see different block orders.) For more information about block IDs, -see the page about :ref:`blocks `. +see the page about :ref:`blocks `. **vote.is_block_valid** @@ -100,7 +102,7 @@ The Vote Schema --------------- BigchainDB checks all votes (JSON documents) against a formal schema -defined in a :ref:`JSON Schema file named vote.yaml `. +defined in a :ref:`JSON Schema file named vote.yaml `. 
An Example Vote diff --git a/docs/server/source/drivers-clients/index.rst b/docs/server/source/drivers-clients/index.rst index 407fe688..a08c6f1c 100644 --- a/docs/server/source/drivers-clients/index.rst +++ b/docs/server/source/drivers-clients/index.rst @@ -9,7 +9,7 @@ Libraries and Tools Maintained by the BigchainDB Team * `The Transaction CLI `_ is a command-line interface for building BigchainDB transactions. You may be able to call it from inside the language of - your choice, and then use :ref:`the HTTP API ` + your choice, and then use :ref:`the HTTP API ` to post transactions. diff --git a/docs/server/source/events/websocket-event-stream-api.rst b/docs/server/source/events/websocket-event-stream-api.rst index 7eb3f3f5..850aded0 100644 --- a/docs/server/source/events/websocket-event-stream-api.rst +++ b/docs/server/source/events/websocket-event-stream-api.rst @@ -1,3 +1,5 @@ +.. _the-websocket-event-stream-api: + The WebSocket Event Stream API ============================== @@ -24,7 +26,7 @@ Determining Support for the Event Stream API It's a good idea to make sure that the node you're connecting with has advertised support for the Event Stream API. To do so, send a HTTP GET -request to the node's :ref:`API Root Endpoint` +request to the node's :ref:`api-root-endpoint` (e.g. ``http://localhost:9984/api/v1/``) and check that the response contains a ``streams`` property: @@ -61,7 +63,7 @@ Streams will always be under the WebSocket protocol (so ``ws://`` or API root URL (for example, `validated transactions <#valid-transactions>`_ would be accessible under ``/api/v1/streams/valid_transactions``). If you're running your own BigchainDB instance and need help determining its root URL, -then see the page titled :ref:`Determining the API Root URL`. +then see the page titled :ref:`determining-the-api-root-url`. All messages sent in a stream are in the JSON format. 
diff --git a/docs/server/source/http-client-server-api.rst b/docs/server/source/http-client-server-api.rst index 58ec5617..448d28c0 100644 --- a/docs/server/source/http-client-server-api.rst +++ b/docs/server/source/http-client-server-api.rst @@ -1,3 +1,5 @@ +.. _the-http-client-server-api: + The HTTP Client-Server API ========================== @@ -26,6 +28,8 @@ with something like the following in the body: :language: http +.. _api-root-endpoint: + API Root Endpoint ------------------- @@ -153,7 +157,7 @@ Transactions transaction, poll the link to the :ref:`status monitor ` provided in the ``Location`` header or listen to server's - :ref:`WebSocket Event Stream API `. + :ref:`WebSocket Event Stream API `. :resheader Content-Type: ``application/json`` :resheader Location: Relative link to a status monitor for the submitted transaction. @@ -707,7 +711,7 @@ so you can access it from the same machine, but it won't be directly accessible from the outside world. (The outside world could connect via a SOCKS proxy or whatnot.) -The documentation about BigchainDB Server :any:`Configuration Settings` +The documentation about BigchainDB Server :doc:`Configuration Settings ` has a section about how to set ``server.bind`` so as to make the HTTP API publicly accessible. 
diff --git a/docs/server/source/index.rst b/docs/server/source/index.rst index 65bd8774..750316df 100644 --- a/docs/server/source/index.rst +++ b/docs/server/source/index.rst @@ -10,6 +10,7 @@ BigchainDB Server Documentation production-nodes/index clusters production-deployment-template/index + production-deployment-template-tendermint/index dev-and-test/index server-reference/index http-client-server-api diff --git a/docs/server/source/production-deployment-template-tendermint/architecture.rst b/docs/server/source/production-deployment-template-tendermint/architecture.rst new file mode 100644 index 00000000..528874f8 --- /dev/null +++ b/docs/server/source/production-deployment-template-tendermint/architecture.rst @@ -0,0 +1,204 @@ +Architecture of a BigchainDB Node +================================== + +A BigchainDB production deployment is hosted on a Kubernetes cluster and includes: + +* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint + `Kubernetes Services `_. +* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent + `Kubernetes Deployments `_. +* MongoDB and Tendermint `Kubernetes StatefulSets `_. +* Third-party services like `3scale `_, + `MongoDB Cloud Manager `_ and the + `Azure Operations Management Suite + `_. + + +.. 
code:: text + + + BigchainDB Node + + + + +--------------------------------------------------------------------------------------------------------------------------------------+ + | | | | + | | | | + | | | | + | | | | + | | | | + | | | | + | BigchainDB API | | Tendermint P2P | + | | | Communication/ | + | | | Public Key Exchange | + | | | | + | | | | + | v v | + | | + | +------------------+ | + | | NGINX Service | | + | +-------+----------+ | + | | | + | v | + | | + | +------------------+ | + | | NGINX | | + | | Deployment | | + | | | | + | +-------+----------+ | + | | | + | | | + | | | + | v | + | | + | 443 +----------+ 46656/9986 | + | | Rate | | + | +---------------------------+ Limiting +-----------------------+ | + | | | Logic | | | + | | +----------+ | | + | | | | + | | | | + | | | | + | | | | + | | | | + | v v | + | | + | +-----------+ +----------+ | + | |HTTPS | +------------------> |Tendermint| | + | |Termination| | 9986 |Service | 46656 | + | | | | +-------+ | <----+ | + | +-----+-----+ | | +----------+ | | + | | | v v | + | | | | + | | | +----------+ +----------+ | + | | | |NGINX | |Tendermint| | + | | | |Deployment| |Stateful | | + | | | |Pub-Key-Ex| |Set | | + | v | +----------+ +----------+ | + | +-----+-----+ | | + | POST |Analyze | GET | | + | |Request | | | + | +-----------+ +--------+ | | + | | +-----------+ | | | + | | | | Bi-directional, communication between | + | | | | BigchainDB(APP) and Tendermint | + | | | | BFT consensus Engine | + | | | | | + | v v | | + | | | + | +-------------+ +--------------+ | +--------------+ | + | | OpenResty | | BigchainDB | | | MongoDB | | + | | Service | | Service | | | Service | | + | | | +-----> | | | +-------> | | | + | +------+------+ | +------+-------+ | | +------+-------+ | + | | | | | | | | + | v | v | | v | + | | | | | + | +------------+ | +------------+ | | +----------+ | + | | | | | | <-------------+ | |MongoDB | | + | | OpenResty | | | BigchainDB | | |Stateful | | + | | Deployment | | | Deployment | 
| |Set | | + | | | | | | | +-----+----+ | + | | | | | +--------------------------+ | | + | | | | | | | | + | +-----+------+ | +------------+ | | + | | | | | + | v | | | + | | | | + | +-----------+ | | | + | | Auth | | | | + | | Logic +---------+ | | + | | | | | + | | | | | + | +---+-------+ | | + | | | | + | | | | + | | | | + | | | | + | | | | + | | | | + +--------------------------------------------------------------------------------------------------------------------------------------+ + | | + | | + v v + + +------------------------------------+ +------------------------------------+ + | | | | + | | | | + | | | | + | 3Scale | | MongoDB Cloud | + | | | | + | | | | + | | | | + +------------------------------------+ +------------------------------------+ + + + +.. note:: + The arrows in the diagram represent the client-server communication. For + example, A-->B implies that A initiates the connection to B. + It does not represent the flow of data; the communication channel is always + fully duplex. + + +NGINX: Entrypoint and Gateway +----------------------------- + +We use an NGINX as HTTP proxy on port 443 (configurable) at the cloud +entrypoint for: + +#. Rate Limiting: We configure NGINX to allow only a certain number of requests + (configurable) which prevents DoS attacks. + +#. HTTPS Termination: The HTTPS connection does not carry through all the way + to BigchainDB and terminates at NGINX for now. + +#. Request Routing: For HTTPS connections on port 443 (or the configured BigchainDB public api port), + the connection is proxied to: + + #. OpenResty Service if it is a POST request. + #. BigchainDB Service if it is a GET request. + + +We use an NGINX TCP proxy on port 27017 (configurable) at the cloud +entrypoint for: + +#. Rate Limiting: We configure NGINX to allow only a certain number of requests + (configurable) which prevents DoS attacks. + +#. 
Request Routing: For connections on port 27017 (or the configured MongoDB + public API port), the connection is proxied to the MongoDB Service. + + +OpenResty: API Management, Authentication and Authorization +----------------------------------------------------------- + +We use `OpenResty `_ to perform authorization checks +with 3scale, using the ``app_id`` and ``app_key`` headers in the HTTP request. + +OpenResty is NGINX bundled with a number of other +`components `_. We primarily rely +on the LuaJIT compiler to execute the functions that authenticate the ``app_id`` +and ``app_key`` with the 3scale backend. + + +MongoDB: Standalone +------------------- + +We use MongoDB as the backend database for BigchainDB. +In a multi-node deployment, MongoDB members communicate with each other via the +public port exposed by the NGINX Service. + +We achieve security by mitigating DoS attacks at the NGINX proxy layer and by +ensuring that MongoDB has TLS enabled for all its connections. + + +Tendermint: BFT consensus engine +-------------------------------- + +We use Tendermint as the backend consensus engine for BFT replication of BigchainDB. +In a multi-node deployment, Tendermint nodes/peers communicate with each other via +the public ports exposed by the NGINX gateway. + +We use port **9986** (configurable) to allow Tendermint nodes to access the public keys +of their peers, and port **46656** (configurable) for all other communication between +the peers. + diff --git a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst b/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst new file mode 100644 index 00000000..aea9d417 --- /dev/null +++ b/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst @@ -0,0 +1,546 @@ +.. 
_kubernetes-template-deploy-bigchaindb-network: + +Kubernetes Template: Deploying a BigchainDB network +=================================================== + +This page describes how to deploy a BigchainDB + Tendermint network. + +If you want to deploy a single BigchainDB node, whether stand-alone or as part of a cluster, +then see :doc:`the page about that `. + +We can use this guide to deploy a BigchainDB network in the following scenarios: + +* Single Azure Kubernetes Site. +* Multiple Azure Kubernetes Sites (geographically dispersed). + + +Terminology Used +---------------- + +A ``BigchainDB node`` is a set of Kubernetes components that join together to +form a single BigchainDB node. Please refer to the :doc:`architecture diagram ` +for more details. + +A ``BigchainDB network`` refers to a collection of nodes working together +to form a network. + + +Below, we refer to multiple files by their directory and filename, +such as ``tendermint/tendermint-ext-conn-svc.yaml``. Those files are located in the +`bigchaindb/bigchaindb repository on GitHub +`_ in the ``k8s/`` directory. +Make sure you're getting those files from the appropriate Git branch on +GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB +cluster is using. + +.. note:: + + This deployment strategy is currently intended for testing purposes only, + i.e. for networks operated by a single stakeholder or by tightly coupled + stakeholders. + +.. note:: + + Currently, we only support a static set of participants in the network. + Once a BigchainDB network is started with a certain number of validators + and a genesis file, new validator nodes cannot be added dynamically. + You can track the progress of this functionality in our + `GitHub repository `_. + + +.. _pre-reqs-bdb-network-tmt: + +Prerequisites +------------- + +The deployment methodology is similar to the one covered in :doc:`node-on-kubernetes`, but +we need to tweak some configurations depending on your choice of deployment.
+ +The operator needs to follow a consistent naming convention for all the components +covered :ref:`here `. + +Let's assume we are deploying a 4-node cluster; your naming convention could look like this: + +.. code:: + + { + "MongoDB": [ + "mdb-instance-1", + "mdb-instance-2", + "mdb-instance-3", + "mdb-instance-4" + ], + "BigchainDB": [ + "bdb-instance-1", + "bdb-instance-2", + "bdb-instance-3", + "bdb-instance-4" + ], + "NGINX": [ + "ngx-instance-1", + "ngx-instance-2", + "ngx-instance-3", + "ngx-instance-4" + ], + "OpenResty": [ + "openresty-instance-1", + "openresty-instance-2", + "openresty-instance-3", + "openresty-instance-4" + ], + "MongoDB_Monitoring_Agent": [ + "mdb-mon-instance-1", + "mdb-mon-instance-2", + "mdb-mon-instance-3", + "mdb-mon-instance-4" + ], + "Tendermint": [ + "tendermint-instance-1", + "tendermint-instance-2", + "tendermint-instance-3", + "tendermint-instance-4" + ] + } + +.. note:: + + The blockchain Genesis ID and Time will be shared across all nodes. + +Edit config-map.yaml and secret.yaml +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Make N (number of nodes) copies of ``configuration/config-map-tm.yaml`` and ``configuration/secret-tm.yaml``. + +.. code:: text + + # For config-map-tm.yaml + config-map-node-1.yaml + config-map-node-2.yaml + config-map-node-3.yaml + config-map-node-4.yaml + + # For secret-tm.yaml + secret-node-1.yaml + secret-node-2.yaml + secret-node-3.yaml + secret-node-4.yaml + +Edit the data values as described in :doc:`this document `, based +on the naming convention described :ref:`above `. + +**Only for single-site deployments**: Since all the configuration files use the +same ConfigMap and Secret keys, i.e. +``metadata.name -> vars, bdb-config and tendermint-config`` and +``metadata.name -> cloud-manager-credentials, mdb-certs, mdb-mon-certs, bdb-certs,`` +``https-certs, threescale-credentials, ca-auth`` respectively, each file +will overwrite the configuration of the previously deployed one.
+We want each node to have its own unique configuration. +One way to do this is to use the +:ref:`naming convention above ` when editing the ConfigMap and Secret keys. + +.. code:: text + + # For config-map-node-1.yaml + metadata.name: vars -> vars-node-1 + metadata.name: bdb-config -> bdb-config-node-1 + metadata.name: tendermint-config -> tendermint-config-node-1 + + # For secret-node-1.yaml + metadata.name: cloud-manager-credentials -> cloud-manager-credentials-node-1 + metadata.name: mdb-certs -> mdb-certs-node-1 + metadata.name: mdb-mon-certs -> mdb-mon-certs-node-1 + metadata.name: bdb-certs -> bdb-certs-node-1 + metadata.name: https-certs -> https-certs-node-1 + metadata.name: threescale-credentials -> threescale-credentials-node-1 + metadata.name: ca-auth -> ca-auth-node-1 + + # Repeat for the remaining files. + +Deploy all your ConfigMaps and Secrets. + +.. code:: bash + + kubectl apply -f configuration/config-map-node-1.yaml + kubectl apply -f configuration/config-map-node-2.yaml + kubectl apply -f configuration/config-map-node-3.yaml + kubectl apply -f configuration/config-map-node-4.yaml + kubectl apply -f configuration/secret-node-1.yaml + kubectl apply -f configuration/secret-node-2.yaml + kubectl apply -f configuration/secret-node-3.yaml + kubectl apply -f configuration/secret-node-4.yaml + +.. note:: + + Just as we indexed ``config-map.yaml`` and ``secret.yaml`` per node, we have + to do the same for each Kubernetes component, + i.e. Services, StorageClasses, PersistentVolumeClaims, StatefulSets, Deployments etc. + +.. 
code:: text + + # For Services + *-node-1-svc.yaml + *-node-2-svc.yaml + *-node-3-svc.yaml + *-node-4-svc.yaml + + # For StorageClasses + *-node-1-sc.yaml + *-node-2-sc.yaml + *-node-3-sc.yaml + *-node-4-sc.yaml + + # For PersistentVolumeClaims + *-node-1-pvc.yaml + *-node-2-pvc.yaml + *-node-3-pvc.yaml + *-node-4-pvc.yaml + + # For StatefulSets + *-node-1-ss.yaml + *-node-2-ss.yaml + *-node-3-ss.yaml + *-node-4-ss.yaml + + # For Deployments + *-node-1-dep.yaml + *-node-2-dep.yaml + *-node-3-dep.yaml + *-node-4-dep.yaml + + +.. _single-site-network-tmt: + +Single Site: Single Azure Kubernetes Cluster +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +For the deployment of a BigchainDB network under a single cluster, we need to replicate +the :doc:`deployment steps for each node ` N times, where N is +the number of participants in the network. + +In our Kubernetes deployment template for a single BigchainDB node, we covered the basic configuration +settings :ref:`here `. + +Since we index the ConfigMap and Secret keys for the single-site deployment, we need to update +all the Kubernetes components to reflect the corresponding changes, i.e. for each Kubernetes Service, +StatefulSet, PersistentVolumeClaim, Deployment, and StorageClass, we need to update the respective +``*.yaml`` file, changing its ``configMapKeyRef.name`` or ``secret.secretName``. + +Example +""""""" + +Assume we are deploying the MongoDB StatefulSet for node 1. We need to edit +``mongo-node-1-ss.yaml``, updating the corresponding ``configMapKeyRef.name`` and ``secret.secretName`` values. + +.. code:: text + + ######################################################################## + # This YAML file describes a StatefulSet with a service for running and # + # exposing a MongoDB instance. # + # It depends on the configdb and db k8s pvc.
# + ######################################################################## + + apiVersion: apps/v1beta1 + kind: StatefulSet + metadata: + name: mdb-instance-0-ss + namespace: default + spec: + serviceName: mdb-instance-0 + replicas: 1 + template: + metadata: + name: mdb-instance-0-ss + labels: + app: mdb-instance-0-ss + spec: + terminationGracePeriodSeconds: 10 + containers: + - name: mongodb + image: bigchaindb/mongodb:3.2 + imagePullPolicy: IfNotPresent + env: + - name: MONGODB_FQDN + valueFrom: + configMapKeyRef: + name: vars-1 # Changed from ``vars`` + key: mdb-instance-name + - name: MONGODB_POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: MONGODB_PORT + valueFrom: + configMapKeyRef: + name: vars-1 # Changed from ``vars`` + key: mongodb-backend-port + - name: STORAGE_ENGINE_CACHE_SIZE + valueFrom: + configMapKeyRef: + name: vars-1 # Changed from ``vars`` + key: storage-engine-cache-size + args: + - --mongodb-port + - $(MONGODB_PORT) + - --mongodb-key-file-path + - /etc/mongod/ssl/mdb-instance.pem + - --mongodb-ca-file-path + - /etc/mongod/ca/ca.pem + - --mongodb-crl-file-path + - /etc/mongod/ca/crl.pem + - --mongodb-fqdn + - $(MONGODB_FQDN) + - --mongodb-ip + - $(MONGODB_POD_IP) + - --storage-engine-cache-size + - $(STORAGE_ENGINE_CACHE_SIZE) + securityContext: + capabilities: + add: + - FOWNER + ports: + - containerPort: "" + protocol: TCP + name: mdb-api-port + volumeMounts: + - name: mdb-db + mountPath: /data/db + - name: mdb-configdb + mountPath: /data/configdb + - name: mdb-certs + mountPath: /etc/mongod/ssl/ + readOnly: true + - name: ca-auth + mountPath: /etc/mongod/ca/ + readOnly: true + resources: + limits: + cpu: 200m + memory: 5G + livenessProbe: + tcpSocket: + port: mdb-api-port + initialDelaySeconds: 15 + successThreshold: 1 + failureThreshold: 3 + periodSeconds: 15 + timeoutSeconds: 10 + restartPolicy: Always + volumes: + - name: mdb-db + persistentVolumeClaim: + claimName: mongo-db-claim-1 # Changed from ``mongo-db-claim`` + 
- name: mdb-configdb + persistentVolumeClaim: + claimName: mongo-configdb-claim-1 # Changed from ``mongo-configdb-claim`` + - name: mdb-certs + secret: + secretName: mdb-certs-1 # Changed from ``mdb-certs`` + defaultMode: 0400 + - name: ca-auth + secret: + secretName: ca-auth-1 # Changed from ``ca-auth`` + defaultMode: 0400 + +The above example is meant to be repeated for all the Kubernetes components of a BigchainDB node. + +* ``nginx-http/nginx-http-node-X-svc.yaml`` or ``nginx-https/nginx-https-node-X-svc.yaml`` + +* ``nginx-http/nginx-http-node-X-dep.yaml`` or ``nginx-https/nginx-https-node-X-dep.yaml`` + +* ``mongodb/mongodb-node-X-svc.yaml`` + +* ``mongodb/mongodb-node-X-sc.yaml`` + +* ``mongodb/mongodb-node-X-pvc.yaml`` + +* ``mongodb/mongodb-node-X-ss.yaml`` + +* ``tendermint/tendermint-node-X-svc.yaml`` + +* ``tendermint/tendermint-node-X-sc.yaml`` + +* ``tendermint/tendermint-node-X-pvc.yaml`` + +* ``tendermint/tendermint-node-X-ss.yaml`` + +* ``bigchaindb/bigchaindb-node-X-svc.yaml`` + +* ``bigchaindb/bigchaindb-node-X-dep.yaml`` + +* ``nginx-openresty/nginx-openresty-node-X-svc.yaml`` + +* ``nginx-openresty/nginx-openresty-node-X-dep.yaml`` + + +Multi Site: Multiple Azure Kubernetes Clusters +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +For the multi-site deployment of a BigchainDB network with geographically dispersed +nodes, we need to replicate the :doc:`deployment steps for each node ` N times, +where N is the number of participants in the network. + +The operator needs to follow a consistent naming convention, which has :ref:`already +been discussed in this document `. + +.. note:: + + Assuming we are using independent Kubernetes clusters, the ConfigMap and Secret keys + do not need to be updated as in the :ref:`single-site-network-tmt` case, and we also do not + need to update the corresponding ConfigMap/Secret imports in the Kubernetes components.
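As a convenience, in a single-site deployment (where one ``kubectl`` context reaches the whole cluster), the per-node ConfigMaps and Secrets can be applied in a short loop; a minimal sketch, assuming the ``*-node-X`` file naming convention used in this guide:

```shell
# Sketch: collect and apply the per-node ConfigMaps and Secrets of a
# 4-node, single-site network. The kubectl call is shown commented out
# so the loop itself can be tried without a cluster.
applied=""
for i in 1 2 3 4; do
  for kind in config-map secret; do
    f="configuration/${kind}-node-${i}.yaml"
    # kubectl apply -f "$f"
    applied="$applied $f"
  done
done
echo "$applied"
```

In a multi-site deployment, run the corresponding ``kubectl apply`` commands against each cluster's own context instead.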
+ + +Deploy Kubernetes Services +-------------------------- + +Deploy the following services for each node by following the naming convention +described :ref:`above `: + +* :ref:`Start the NGINX Service `. + +* :ref:`Assign DNS Name to the NGINX Public IP `. + +* :ref:`Start the MongoDB Kubernetes Service `. + +* :ref:`Start the BigchainDB Kubernetes Service `. + +* :ref:`Start the OpenResty Kubernetes Service `. + +* :ref:`Start the Tendermint Kubernetes Service `. + + +Only for multi-site deployments +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +We need to make sure that the clusters are able to talk to each other, +specifically so that the Tendermint peers can communicate. Set up networking between the clusters using +`Kubernetes Services `_. + +Assume we have a Tendermint instance ``tendermint-instance-1`` residing in the Azure data center location ``westeurope``, and we +want to connect it to ``tendermint-instance-2``, ``tendermint-instance-3``, and ``tendermint-instance-4``, located in the Azure data centers +``eastus``, ``centralus`` and ``westus``, respectively. Unless you have already explicitly set up networking for +``tendermint-instance-1`` to communicate with ``tendermint-instance-2/3/4`` and +vice versa, we will have to add a Kubernetes Service in each cluster to set up the +Tendermint P2P network. +This is similar to ensuring that there is a ``CNAME`` record in the DNS +infrastructure to resolve ``tendermint-instance-X`` to the host where it is actually available. +We can do this in Kubernetes using a Kubernetes Service of ``type`` +``ExternalName``. + +* This configuration is located in the file ``tendermint/tendermint-ext-conn-svc.yaml``. + +* Set ``metadata.name`` to the host name of the Tendermint instance you are trying to connect to. + For instance, if you are configuring this service on the cluster with ``tendermint-instance-1``, then the ``metadata.name`` will + be ``tendermint-instance-2``, and vice versa.
+ +* Set ``spec.ports.port[0]`` to the ``tm-p2p-port`` from the ConfigMap of the other cluster. + +* Set ``spec.ports.port[1]`` to the ``tm-rpc-port`` from the ConfigMap of the other cluster. + +* Set ``spec.externalName`` to the FQDN mapped to the NGINX public IP of the cluster you are trying to connect to. + For more information about the FQDN, please refer to :ref:`Assign DNS name to NGINX Public + IP `. + +.. note:: + This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs + we need to communicate with. + + If you are not the system administrator of the cluster, you have to get in + touch with the system administrator(s) of the other ``n-1`` clusters and + share with them your instance name (``tendermint-instance-name`` in the ConfigMap) + and the FQDN of the NGINX instance acting as a gateway (set in :ref:`Assign DNS name to NGINX + Public IP `). + + +Start NGINX Kubernetes deployments +---------------------------------- + +Start the NGINX deployment that serves as a gateway for each node by following the +naming convention described :ref:`above ` and referring to the following instructions: + +* :ref:`Start the NGINX Kubernetes Deployment `. + + +Deploy Kubernetes StorageClasses for MongoDB and Tendermint +----------------------------------------------------------- + +Deploy the following StorageClasses for each node by following the naming convention +described :ref:`above `: + +* :ref:`Create Kubernetes Storage Classes for MongoDB `. + +* :ref:`Create Kubernetes Storage Classes for Tendermint `. + + +Deploy Kubernetes PersistentVolumeClaims for MongoDB and Tendermint +-------------------------------------------------------------------- + +Deploy the following PersistentVolumeClaims for each node by following the naming convention +described :ref:`above `: + +* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB `.
+ +* :ref:`Create Kubernetes Persistent Volume Claims for Tendermint `. + + +Deploy MongoDB Kubernetes StatefulSet +-------------------------------------- + +Deploy the MongoDB StatefulSet (standalone MongoDB) for each node by following the naming convention +described :ref:`above ` and referring to the following section: + +* :ref:`Start a Kubernetes StatefulSet for MongoDB `. + + +Configure Users and Access Control for MongoDB +---------------------------------------------- + +Configure users and access control for each MongoDB instance +in the network by referring to the following section: + +* :ref:`Configure Users and Access Control for MongoDB `. + + +Deploy Tendermint Kubernetes StatefulSet +---------------------------------------- + +Deploy the Tendermint StatefulSet for each node by following the +naming convention described :ref:`above ` and referring to the following instructions: + +* :ref:`create-kubernetes-stateful-set-tmt`. + + +Start Kubernetes Deployment for MongoDB Monitoring Agent +--------------------------------------------------------- + +Start the MongoDB monitoring agent Kubernetes deployment for each node by following the +naming convention described :ref:`above ` and referring to the following instructions: + +* :ref:`Start a Kubernetes Deployment for the MongoDB Monitoring Agent `. + + +Start Kubernetes Deployment for BigchainDB +------------------------------------------ + +Start the BigchainDB Kubernetes deployment for each node by following the +naming convention described :ref:`above ` and referring to the following instructions: + +* :ref:`Start a Kubernetes Deployment for BigchainDB `. + + +Start Kubernetes Deployment for OpenResty +------------------------------------------ + +Start the OpenResty Kubernetes deployment for each node by following the +naming convention described :ref:`above ` and referring to the following instructions: + +* :ref:`Start a Kubernetes Deployment for OpenResty `.
+
+
+Verify and Test
+---------------
+
+Verify and test your setup by referring to the following instructions:
+
+* :ref:`Verify the BigchainDB Node Setup `.
+
diff --git a/docs/server/source/production-deployment-template-tendermint/index.rst b/docs/server/source/production-deployment-template-tendermint/index.rst
new file mode 100644
index 00000000..8692d180
--- /dev/null
+++ b/docs/server/source/production-deployment-template-tendermint/index.rst
@@ -0,0 +1,20 @@
+Production Deployment Template: Tendermint BFT
+==============================================
+
+This section outlines how *we* deploy production BigchainDB clusters,
+integrated with Tendermint (the backend for BFT consensus),
+on Microsoft Azure using
+Kubernetes. We improve it constantly.
+You may choose to use it as a template or reference for your own deployment,
+but *we make no claim that it is suitable for your purposes*.
+Feel free to change things to suit your needs or preferences.
+
+
+.. toctree::
+   :maxdepth: 1
+
+   workflow
+   architecture
+   node-on-kubernetes
+   node-config-map-and-secrets
+   bigchaindb-network-on-kubernetes
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst
new file mode 100644
index 00000000..2e488a38
--- /dev/null
+++ b/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst
@@ -0,0 +1,356 @@
+.. _how-to-configure-a-bigchaindb-tendermint-node:
+
+How to Configure a BigchainDB + Tendermint Node
+===============================================
+
+This page outlines the steps to set a number of configuration settings
+in your BigchainDB node.
+They are pushed to the Kubernetes cluster in two files,
+named ``config-map.yaml`` (a set of ConfigMaps)
+and ``secret.yaml`` (a set of Secrets).
+They are stored in the Kubernetes cluster's key-value store (etcd).
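One practical point up front: the data values in ``secret.yaml`` must be base64-encoded, while those in ``config-map.yaml`` must not be. A quick sketch of encoding and decoding a value with the standard ``base64`` tool (the value itself is a made-up example):

```shell
# Encode a made-up value for use in secret.yaml
# (printf '%s' avoids base64-encoding a trailing newline)
printf '%s' 'mongodb-password' | base64
# bW9uZ29kYi1wYXNzd29yZA==

# Decode it again to double-check what you pasted into secret.yaml
printf '%s' 'bW9uZ29kYi1wYXNzd29yZA==' | base64 -d
# mongodb-password
```

Decoding the value you pasted is a cheap way to catch stray newlines or truncated strings before they reach the cluster.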
+
+Make sure you did all the things listed in the section titled
+:ref:`things-each-node-operator-must-do-tmt`
+(including generation of all the SSL certificates needed
+for MongoDB auth).
+
+
+Edit config-map.yaml
+--------------------
+
+Make a copy of the file ``k8s/configuration/config-map.yaml``
+and edit the data values in the various ConfigMaps.
+That file already contains many comments to help you
+understand each data value, but we make some additional
+remarks on some of the values below.
+
+Note: None of the data values in ``config-map.yaml`` need
+to be base64-encoded. (This is unlike ``secret.yaml``,
+where all data values must be base64-encoded.
+This is true of all Kubernetes ConfigMaps and Secrets.)
+
+
+vars.cluster-fqdn
+~~~~~~~~~~~~~~~~~
+
+The ``cluster-fqdn`` field specifies the domain you would have
+:ref:`registered before `.
+
+
+vars.cluster-frontend-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``cluster-frontend-port`` field specifies the port on which your cluster
+will be available to all external clients.
+It is set to the HTTPS port ``443`` by default.
+
+
+vars.cluster-health-check-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``cluster-health-check-port`` is the port number on which health check
+probes are sent to the main NGINX instance.
+It is set to ``8888`` by default.
+
+
+vars.cluster-dns-server-ip
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``cluster-dns-server-ip`` is the IP address of the DNS server for a node.
+We use DNS for service discovery. A Kubernetes deployment always has a DNS
+server (``kube-dns``) running at ``10.0.0.10``, so this field is
+set to ``10.0.0.10`` by default.
+
+
+vars.mdb-instance-name and Similar
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Your BigchainDB cluster organization should have a standard way
+of naming instances, so the instances in your BigchainDB node
+should conform to that standard (i.e. you can't just make up some names).
+There are some things worth noting about the ``mdb-instance-name``:
+
+* This field will be the DNS name of your MongoDB instance, and Kubernetes
+  maps this name to its internal DNS.
+* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our
+  documentation. Your BigchainDB cluster may use a different naming convention.
+
+
+vars.ngx-mdb-instance-name and Similar
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+NGINX needs the FQDN of the servers inside the cluster to be able to forward
+traffic.
+The ``ngx-openresty-instance-name``, ``ngx-mdb-instance-name`` and
+``ngx-bdb-instance-name`` are the FQDNs of the OpenResty instance, the MongoDB
+instance, and the BigchainDB instance in this Kubernetes cluster respectively.
+In Kubernetes, this is usually the name specified in the
+corresponding ``vars.*-instance-name`` followed by
+``.svc.cluster.local``. For example, if you run OpenResty in
+the default Kubernetes namespace, this will be
+``.default.svc.cluster.local``.
+
+
+vars.mongodb-frontend-port and vars.mongodb-backend-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``mongodb-frontend-port`` is the port number on which external clients can
+access MongoDB. Access needs to be restricted to other MongoDB instances only,
+by enabling an authentication mechanism on the MongoDB cluster.
+It is set to ``27017`` by default.
+
+The ``mongodb-backend-port`` is the port number on which MongoDB is actually
+available/listening for requests in your cluster.
+It is also set to ``27017`` by default.
+
+
+vars.openresty-backend-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``openresty-backend-port`` is the port number on which OpenResty is
+listening for requests.
+It is used by the NGINX instance to forward requests
+destined for the OpenResty instance to the right port.
+It is also used by the OpenResty instance to bind to the correct port to
+receive requests from the NGINX instance.
+It is set to ``80`` by default.
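The FQDN convention described in the ``vars.ngx-mdb-instance-name`` section above can be sketched in plain shell; the instance name and namespace below are hypothetical examples:

```shell
# Hypothetical instance name from vars.mdb-instance-name, running in the
# default Kubernetes namespace
instance_name="mdb-instance-0"
namespace="default"

# FQDN as NGINX needs it: <instance-name>.<namespace>.svc.cluster.local
fqdn="${instance_name}.${namespace}.svc.cluster.local"
echo "$fqdn"   # mdb-instance-0.default.svc.cluster.local
```

The same pattern applies to the OpenResty, BigchainDB and Tendermint instance names; only the ``vars.*-instance-name`` part changes.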
+
+
+vars.bigchaindb-wsserver-advertised-scheme
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``bigchaindb-wsserver-advertised-scheme`` is the protocol used to access
+the WebSocket API in BigchainDB. This can be set to ``wss`` or ``ws``.
+It is set to ``wss`` by default.
+
+
+vars.bigchaindb-api-port, vars.bigchaindb-ws-port and Similar
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``bigchaindb-api-port`` is the port number on which BigchainDB is
+listening for HTTP requests. It is currently set to ``9984`` by default.
+
+The ``bigchaindb-ws-port`` is the port number on which BigchainDB is
+listening for WebSocket requests. It is currently set to ``9985`` by default.
+
+There's another :doc:`page with a complete listing of all the BigchainDB Server
+configuration settings <../server-reference/configuration>`.
+
+
+bdb-config.bdb-user
+~~~~~~~~~~~~~~~~~~~
+
+This is the user name that BigchainDB uses to authenticate itself to the
+backend MongoDB database.
+
+We need to specify the user name *as seen in the certificate* issued to
+the BigchainDB instance in order to authenticate correctly. Use
+the following ``openssl`` command to extract the user name from the
+certificate:
+
+.. code:: bash
+
+   $ openssl x509 -in \
+     -inform PEM -subject -nameopt RFC2253
+
+You should see an output line that resembles:
+
+.. code:: bash
+
+   subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
+
+The ``subject`` line states the complete user name we need to use for this
+field (``bdb-config.bdb-user``), i.e.
+
+.. code:: bash
+
+   emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
+
+
+tendermint-config.tm-instance-name
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Your BigchainDB cluster organization should have a standard way
+of naming instances, so the instances in your BigchainDB node
+should conform to that standard.
There are some things worth noting
+about the ``tm-instance-name``:
+
+* This field will be the DNS name of your Tendermint instance, and Kubernetes
+  maps this name to its internal DNS, so in a network/multi-node deployment
+  all peer-to-peer communication depends on it.
+* This parameter is also used to access the public key of a particular node.
+* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our
+  documentation. Your BigchainDB cluster may use a different naming convention.
+
+
+tendermint-config.ngx-tm-instance-name
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+NGINX needs the FQDN of the servers inside the cluster to be able to forward
+traffic.
+``ngx-tm-instance-name`` is the FQDN of the Tendermint
+instance in this Kubernetes cluster.
+In Kubernetes, this is usually the name specified in the
+corresponding ``tendermint-config.*-instance-name`` followed by
+``.svc.cluster.local``. For example, if you run Tendermint in
+the default Kubernetes namespace, this will be
+``.default.svc.cluster.local``.
+
+
+tendermint-config.tm-seeds
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-seeds`` is the initial set of peers to connect to. It is a comma-separated
+list of all the peers that are part of the cluster.
+
+If you are deploying a stand-alone BigchainDB node the value should be the same as
+````. If you are deploying a network this parameter will look
+like this:
+
+.. code::
+
+   ,,,
+
+
+tendermint-config.tm-validators
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-validators`` is the initial set of validators in the network. It is a comma-separated list
+of all the participant validator nodes.
+
+If you are deploying a stand-alone BigchainDB node the value should be the same as
+````. If you are deploying a network this parameter will look like
+this:
+
+.. code::
+
+   ,,,
+
+
+tendermint-config.tm-validator-power
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-validator-power`` represents the voting power of each validator.
It is a comma-separated
+list of all the participants in the network.
+
+**Note**: The order of the validator power list should be the same as the ``tm-validators`` list.
+
+.. code::
+
+   tm-validators: ,,,
+
+For the above list of validators the ``tm-validator-power`` list should look like this:
+
+.. code::
+
+   tm-validator-power: ,,,
+
+
+tendermint-config.tm-genesis-time
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-genesis-time`` represents the official time of blockchain start. Details regarding
+how to generate this parameter are covered :ref:`here `.
+
+
+tendermint-config.tm-chain-id
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-chain-id`` represents the ID of the blockchain. This must be unique for every blockchain.
+Details regarding how to generate this parameter are covered
+:ref:`here `.
+
+
+tendermint-config.tm-abci-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-abci-port`` has a default value ``46658`` which is used by Tendermint Core for
+ABCI (Application Blockchain Interface) traffic. BigchainDB nodes use this port
+internally to communicate with Tendermint Core.
+
+
+tendermint-config.tm-p2p-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-p2p-port`` has a default value ``46656`` which is used by Tendermint Core for
+peer-to-peer communication.
+
+For a multi-node/zone deployment, this port needs to be available publicly for P2P
+communication between Tendermint nodes.
+
+
+tendermint-config.tm-rpc-port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-rpc-port`` has a default value ``46657`` which is used by Tendermint Core for RPC
+traffic. BigchainDB nodes use this port as the RPC listen address.
+
+
+tendermint-config.tm-pub-key-access
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``tm-pub-key-access`` has a default value ``9986``, which is used to discover the public
+key of a Tendermint node. Each Tendermint StatefulSet (Pod, Tendermint + NGINX) hosts its
+public key.
+
+..
code::
+
+   http://tendermint-instance-1:9986/pub_key.json
+
+
+Edit secret.yaml
+----------------
+
+Make a copy of the file ``k8s/configuration/secret.yaml``
+and edit the data values in the various Secrets.
+That file includes many comments to explain the required values.
+**In particular, note that all values must be base64-encoded.**
+There are tips at the top of the file
+explaining how to convert values into base64-encoded values.
+
+Your BigchainDB node might not need all the Secrets.
+For example, if you plan to access the BigchainDB API over HTTP, you
+don't need the ``https-certs`` Secret.
+You can delete the Secrets you don't need,
+or set their data values to ``""``.
+
+Note that ``ca.pem`` is just another name for ``ca.crt``
+(the certificate of your BigchainDB cluster's self-signed CA).
+
+
+threescale-credentials.*
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you're not using 3scale,
+you can delete the ``threescale-credentials`` Secret
+or leave all the values blank (``""``).
+
+If you *are* using 3scale, get the values for ``secret-token``,
+``service-id``, ``version-header`` and ``service-token`` by logging in to the 3scale
+portal using your admin account, clicking **APIs** and then clicking **Integration**
+for the relevant API.
+Scroll to the bottom of the page and click the small link
+in the lower right corner, labelled **Download the NGINX Config files**.
+Unzip it (if it is a ``zip`` file). Open the ``.conf`` and the ``.lua`` files.
+You should be able to find all the values in those files.
+You have to be careful because they will have values for **all** your APIs,
+and some values vary from API to API.
+The ``version-header`` is the timestamp in a line that looks like:
+
+.. code::
+
+   proxy_set_header X-3scale-Version "2017-06-28T14:57:34Z";
+
+
+Deploy Your config-map.yaml and secret.yaml
+-------------------------------------------
+
+You can deploy your edited ``config-map.yaml`` and ``secret.yaml``
+files to your Kubernetes cluster using the commands:
+
+..
code:: bash
+
+   $ kubectl apply -f config-map.yaml
+
+   $ kubectl apply -f secret.yaml
diff --git a/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst b/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst
new file mode 100644
index 00000000..45695b9c
--- /dev/null
+++ b/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst
@@ -0,0 +1,1178 @@
+.. _kubernetes-template-deploy-a-single-bigchaindb-node-with-tendermint:
+
+Kubernetes Template: Deploy a Single BigchainDB Node with Tendermint
+====================================================================
+
+This page describes how to deploy a stand-alone BigchainDB + Tendermint node,
+or a static network of BigchainDB + Tendermint nodes,
+using `Kubernetes `_.
+It assumes you already have a running Kubernetes cluster.
+
+Below, we refer to many files by their directory and filename,
+such as ``configuration/config-map-tm.yaml``. Those files are files in the
+`bigchaindb/bigchaindb repository on GitHub `_
+in the ``k8s/`` directory.
+Make sure you're getting those files from the appropriate Git branch on
+GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
+cluster is using.
+
+
+Step 1: Install and Configure kubectl
+-------------------------------------
+
+kubectl is the Kubernetes CLI.
+If you don't already have it installed,
+then see the `Kubernetes docs to install it
+`_.
+
+The default location of the kubectl configuration file is ``~/.kube/config``.
+If you don't have that file, then you need to get it.
+
+**Azure.** If you deployed your Kubernetes cluster on Azure
+using the Azure CLI 2.0 (as per :doc:`our template
+<../production-deployment-template/template-kubernetes-azure>`),
+then you can get the ``~/.kube/config`` file using:
+
+..
code:: bash + + $ az acs kubernetes get-credentials \ + --resource-group \ + --name + +If it asks for a password (to unlock the SSH key) +and you enter the correct password, +but you get an error message, +then try adding ``--ssh-key-file ~/.ssh/`` +to the above command (i.e. the path to the private key). + +.. note:: + + **About kubectl contexts.** You might manage several + Kubernetes clusters. To make it easy to switch from one to another, + kubectl has a notion of "contexts," e.g. the context for cluster 1 or + the context for cluster 2. To find out the current context, do: + + .. code:: bash + + $ kubectl config view + + and then look for the ``current-context`` in the output. + The output also lists all clusters, contexts and users. + (You might have only one of each.) + You can switch to a different context using: + + .. code:: bash + + $ kubectl config use-context + + You can also switch to a different context for just one command + by inserting ``--context `` into any kubectl command. + For example: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 get pods + + will get a list of the pods in the Kubernetes cluster associated + with the context named ``k8s-bdb-test-cluster-0``. + +Step 2: Connect to Your Cluster's Web UI (Optional) +--------------------------------------------------- + +You can connect to your cluster's +`Kubernetes Dashboard `_ +(also called the Web UI) using: + +.. code:: bash + + $ kubectl proxy -p 8001 + + or + + $ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey + +or, if you prefer to be explicit about the context (explained above): + +.. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001 + +The output should be something like ``Starting to serve on 127.0.0.1:8001``. +That means you can visit the dashboard in your web browser at +`http://127.0.0.1:8001/ui `_. 
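The ``current-context`` lookup from the note in Step 1 can be tried without a live cluster. The sketch below writes a minimal, hypothetical kubeconfig to a temporary path and pulls the context name out of it, which is the same piece of information ``kubectl config view`` reports for the real ``~/.kube/config``:

```shell
# Minimal sample kubeconfig with a hypothetical context name
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: k8s-bdb-test-cluster-0
EOF

# Extract the current-context field, as "kubectl config view" would show it
awk '/^current-context:/ {print $2}' /tmp/sample-kubeconfig
# k8s-bdb-test-cluster-0
```

With the real file in place, ``kubectl config use-context <context-name>`` switches to whichever context this prints.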
+
+
+Step 3: Configure Your BigchainDB Node
+--------------------------------------
+
+See the page titled :ref:`how-to-configure-a-bigchaindb-tendermint-node`.
+
+
+.. _start-the-nginx-service-tmt:
+
+Step 4: Start the NGINX Service
+-------------------------------
+
+ * This will give us a public IP for the cluster.
+
+ * Once you complete this step, you might need to wait up to 10 minutes for the
+   public IP to be assigned.
+
+ * You have the option to use vanilla NGINX without HTTPS support or
+   NGINX with HTTPS support.
+
+
+Step 4.1: Vanilla NGINX
+^^^^^^^^^^^^^^^^^^^^^^^
+
+ * This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``.
+
+ * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
+   set in ``ngx-instance-name`` in the ConfigMap above.
+
+ * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
+   the ConfigMap followed by ``-dep``. For example, if the value set in the
+   ``ngx-instance-name`` is ``ngx-http-instance-0``, set the
+   ``spec.selector.app`` to ``ngx-http-instance-0-dep``.
+
+ * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
+   ``cluster-frontend-port`` in the ConfigMap above. This is the
+   ``public-cluster-port`` in the file, which is the ingress into the cluster.
+
+ * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
+   ``tm-pub-access-port`` in the ConfigMap above. This is the
+   ``tm-pub-key-access`` in the file, which specifies where the public key for
+   the Tendermint instance is available.
+
+ * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+   ``tm-p2p-port`` in the ConfigMap above. This is the
+   ``tm-p2p-port`` in the file, which is used for P2P communication between Tendermint
+   nodes.
+
+ * Start the Kubernetes Service:
+
+   ..
code:: bash
+
+      $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml
+
+
+Step 4.2: NGINX with HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ * You have to enable HTTPS for this one and will need an HTTPS certificate
+   for your domain.
+
+ * You should have already created the necessary Kubernetes Secrets in the previous
+   step (i.e. ``https-certs``).
+
+ * This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``.
+
+ * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
+   set in ``ngx-instance-name`` in the ConfigMap above.
+
+ * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in
+   the ConfigMap followed by ``-dep``. For example, if the value set in the
+   ``ngx-instance-name`` is ``ngx-https-instance-0``, set the
+   ``spec.selector.app`` to ``ngx-https-instance-0-dep``.
+
+ * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the
+   ``cluster-frontend-port`` in the ConfigMap above. This is the
+   ``public-secure-cluster-port`` in the file, which is the ingress into the cluster.
+
+ * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
+   ``mongodb-frontend-port`` in the ConfigMap above. This is the
+   ``public-mdb-port`` in the file, which specifies where MongoDB is
+   available.
+
+ * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+   ``tm-pub-access-port`` in the ConfigMap above. This is the
+   ``tm-pub-key-access`` in the file, which specifies where the public key for
+   the Tendermint instance is available.
+
+ * Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the
+   ``tm-p2p-port`` in the ConfigMap above. This is the
+   ``tm-p2p-port`` in the file, which is used for P2P communication between Tendermint
+   nodes.
+
+
+ * Start the Kubernetes Service:
+
+   .. code:: bash
+
+      $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml
+
+
+..
_assign-dns-name-to-nginx-public-ip-tmt:
+
+Step 5: Assign DNS Name to the NGINX Public IP
+----------------------------------------------
+
+ * This step is required only if you are planning to set up multiple
+   `BigchainDB nodes
+   `_ or are using
+   HTTPS certificates tied to a domain.
+
+ * The following command can help you find out if the NGINX service started
+   above has been assigned a public IP or external IP address:
+
+   .. code:: bash
+
+      $ kubectl --context k8s-bdb-test-cluster-0 get svc -w
+
+ * Once a public IP is assigned, you can map it to
+   a DNS name.
+   We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
+   so on in our documentation.
+   Let's assume that we assign the unique name of ``bdb-test-cluster-0`` here.
+
+
+**Set up DNS mapping in Azure.**
+Select the current Azure resource group and look for the ``Public IP``
+resource. You should see at least 2 entries there - one for the Kubernetes
+master and the other for the NGINX instance. You may have to ``Refresh`` the
+Azure web page listing the resources in a resource group for the latest
+changes to be reflected.
+Select the ``Public IP`` resource that is attached to your service (it should
+have the Azure DNS prefix name along with a long random string, without the
+``master-ip`` string), select ``Configuration``, add the DNS name assigned above
+(for example, ``bdb-test-cluster-0``), click ``Save``, and wait for the
+changes to be applied.
+
+To verify the DNS setting is operational, you can run ``nslookup ``
+from your local Linux shell.
+
+This will ensure that when you scale to different geographical zones, other Tendermint
+nodes in the network can reach this instance.
+
+
+.. _start-the-mongodb-kubernetes-service-tmt:
+
+Step 6: Start the MongoDB Kubernetes Service
+--------------------------------------------
+
+ * This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``.
+ + * Set the ``metadata.name`` and ``metadata.labels.name`` to the value + set in ``mdb-instance-name`` in the ConfigMap above. + + * Set the ``spec.selector.app`` to the value set in ``mdb-instance-name`` in + the ConfigMap followed by ``-ss``. For example, if the value set in the + ``mdb-instance-name`` is ``mdb-instance-0``, set the + ``spec.selector.app`` to ``mdb-instance-0-ss``. + + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the + ``mongodb-backend-port`` in the ConfigMap above. + This is the ``mdb-port`` in the file which specifies where MongoDB listens + for API requests. + + * Start the Kubernetes Service: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml + + +.. _start-the-bigchaindb-kubernetes-service-tmt: + +Step 7: Start the BigchainDB Kubernetes Service +----------------------------------------------- + + * This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``. + + * Set the ``metadata.name`` and ``metadata.labels.name`` to the value + set in ``bdb-instance-name`` in the ConfigMap above. + + * Set the ``spec.selector.app`` to the value set in ``bdb-instance-name`` in + the ConfigMap followed by ``-dep``. For example, if the value set in the + ``bdb-instance-name`` is ``bdb-instance-0``, set the + ``spec.selector.app`` to ``bdb-instance-0-dep``. + + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the + ``bigchaindb-api-port`` in the ConfigMap above. + This is the ``bdb-api-port`` in the file which specifies where BigchainDB + listens for HTTP API requests. + + * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the + ``bigchaindb-ws-port`` in the ConfigMap above. + This is the ``bdb-ws-port`` in the file which specifies where BigchainDB + listens for Websocket connections. + + * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the + ``tm-abci-port`` in the ConfigMap above. 
+ This is the ``tm-abci-port`` in the file which specifies the port used + for ABCI communication. + + * Start the Kubernetes Service: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml + + +.. _start-the-openresty-kubernetes-service-tmt: + +Step 8: Start the OpenResty Kubernetes Service +---------------------------------------------- + + * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``. + + * Set the ``metadata.name`` and ``metadata.labels.name`` to the value + set in ``openresty-instance-name`` in the ConfigMap above. + + * Set the ``spec.selector.app`` to the value set in ``openresty-instance-name`` in + the ConfigMap followed by ``-dep``. For example, if the value set in the + ``openresty-instance-name`` is ``openresty-instance-0``, set the + ``spec.selector.app`` to ``openresty-instance-0-dep``. + + * Start the Kubernetes Service: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml + + +.. _start-the-tendermint-kubernetes-service-tmt: + +Step 9: Start the Tendermint Kubernetes Service +----------------------------------------------- + + * This configuration is located in the file ``tendermint/tendermint-svc.yaml``. + + * Set the ``metadata.name`` and ``metadata.labels.name`` to the value + set in ``tm-instance-name`` in the ConfigMap above. + + * Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in + the ConfigMap followed by ``-ss``. For example, if the value set in the + ``tm-instance-name`` is ``tm-instance-0``, set the + ``spec.selector.app`` to ``tm-instance-0-ss``. + + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the + ``tm-p2p-port`` in the ConfigMap above. + This is the ``p2p`` in the file which specifies where Tendermint peers + communicate. 
+ + * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the + ``tm-rpc-port`` in the ConfigMap above. + This is the ``rpc`` in the file which specifies the port used by Tendermint core + for RPC traffic. + + * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the + ``tm-pub-key-access`` in the ConfigMap above. + This is the ``pub-key-access`` in the file which specifies the port to host/distribute + the public key for the Tendermint node. + + * Start the Kubernetes Service: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml + + +.. _start-the-nginx-deployment-tmt: + +Step 10: Start the NGINX Kubernetes Deployment +---------------------------------------------- + + * NGINX is used as a proxy to OpenResty, BigchainDB, Tendermint and MongoDB instances in + the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port`` + to the corresponding OpenResty or BigchainDB backend, TCP connections + on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access`` + to MongoDB and Tendermint respectively. + + * As in step 4, you have the option to use vanilla NGINX without HTTPS or + NGINX with HTTPS support. + +Step 10.1: Vanilla NGINX +^^^^^^^^^^^^^^^^^^^^^^^^ + + * This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``. + + * Set the ``metadata.name`` and ``spec.template.metadata.labels.app`` + to the value set in ``ngx-instance-name`` in the ConfigMap followed by a + ``-dep``. For example, if the value set in the ``ngx-instance-name`` is + ``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``. + + * Set the ports to be exposed from the pod in the + ``spec.containers[0].ports`` section. We currently expose 5 ports - + ``mongodb-frontend-port``, ``cluster-frontend-port``, + ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``. + Set them to the values specified in the + ConfigMap. 
+
+ * The configuration uses the following values set in the ConfigMap:
+
+   - ``cluster-frontend-port``
+   - ``cluster-health-check-port``
+   - ``cluster-dns-server-ip``
+   - ``mongodb-frontend-port``
+   - ``ngx-mdb-instance-name``
+   - ``mongodb-backend-port``
+   - ``ngx-bdb-instance-name``
+   - ``bigchaindb-api-port``
+   - ``bigchaindb-ws-port``
+   - ``ngx-tm-instance-name``
+   - ``tm-pub-key-access``
+   - ``tm-p2p-port``
+
+ * Start the Kubernetes Deployment:
+
+   .. code:: bash
+
+      $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml
+
+
+Step 10.2: NGINX with HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ * This configuration is located in the file
+   ``nginx-https/nginx-https-dep-tm.yaml``.
+
+ * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
+   to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
+   ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
+   ``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.
+
+ * Set the ports to be exposed from the pod in the
+   ``spec.containers[0].ports`` section. We currently expose 5 ports -
+   ``mongodb-frontend-port``, ``cluster-frontend-port``,
+   ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
+   Set them to the values specified in the ConfigMap.
+
+ * The configuration uses the following values set in the ConfigMap:
+
+   - ``cluster-frontend-port``
+   - ``cluster-health-check-port``
+   - ``cluster-fqdn``
+   - ``cluster-dns-server-ip``
+   - ``mongodb-frontend-port``
+   - ``ngx-mdb-instance-name``
+   - ``mongodb-backend-port``
+   - ``openresty-backend-port``
+   - ``ngx-openresty-instance-name``
+   - ``ngx-bdb-instance-name``
+   - ``bigchaindb-api-port``
+   - ``bigchaindb-ws-port``
+   - ``ngx-tm-instance-name``
+   - ``tm-pub-key-access``
+   - ``tm-p2p-port``
+
+ * The configuration uses the following values set in the Secret:
+
+   - ``https-certs``
+
+ * Start the Kubernetes Deployment:
+
+   ..
code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml + + +.. _create-kubernetes-storage-class-mdb-tmt: + +Step 11: Create Kubernetes Storage Classes for MongoDB +------------------------------------------------------ + +MongoDB needs somewhere to store its data persistently, +outside the container where MongoDB is running. +Our MongoDB Docker container +(based on the official MongoDB Docker container) +exports two volume mounts with correct +permissions from inside the container: + +* The directory where the mongod instance stores its data: ``/data/db``. + There's more explanation in the MongoDB docs about `storage.dbpath `_. + +* The directory where the mongodb instance stores the metadata for a sharded + cluster: ``/data/configdb/``. + There's more explanation in the MongoDB docs about `sharding.configDB `_. + +Explaining how Kubernetes handles persistent volumes, +and the associated terminology, +is beyond the scope of this documentation; +see `the Kubernetes docs about persistent volumes +`_. + +The first thing to do is create the Kubernetes storage classes. + +**Set up Storage Classes in Azure.** +First, you need an Azure storage account. +If you deployed your Kubernetes cluster on Azure +using the Azure CLI 2.0 +(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`), +then the `az acs create` command already created a +storage account in the same location and resource group +as your Kubernetes cluster. +Both should have the same "storage account SKU": ``Standard_LRS``. +Standard storage is lower-cost and lower-performance. +It uses hard disk drives (HDD). +LRS means locally-redundant storage: three replicas +in the same data center. +Premium storage is higher-cost and higher-performance. +It uses solid state drives (SSD). +You can create a `storage account `_ +for Premium storage and associate it with your Azure resource group. 
+For future reference, the command to create a storage account is +`az storage account create `_. + +.. Note:: + Please refer to `Azure documentation `_ + for the list of VMs that are supported by Premium Storage. + +The Kubernetes template for configuration of Storage Class is located in the +file ``mongodb/mongo-sc.yaml``. + +You may have to update the ``parameters.location`` field in the file to +specify the location you are using in Azure. + +If you want to use a custom storage account with the Storage Class, you +can also update ``parameters.storageAccount`` and provide the Azure storage +account name. + +Create the required storage classes using: + +.. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml + + +You can check if it worked using ``kubectl get storageclasses``. + + +.. _create-kubernetes-persistent-volume-claim-mdb-tmt: + +Step 12: Create Kubernetes Persistent Volume Claims for MongoDB +--------------------------------------------------------------- + +Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and +``mongo-configdb-claim``. + +This configuration is located in the file ``mongodb/mongo-pvc.yaml``. + +Note how there's no explicit mention of Azure, AWS or whatever. +``ReadWriteOnce`` (RWO) means the volume can be mounted as +read-write by a single Kubernetes node. +(``ReadWriteOnce`` is the *only* access mode supported +by AzureDisk.) +``storage: 20Gi`` means the volume has a size of 20 +`gibibytes `_. + +You may want to update the ``spec.resources.requests.storage`` field in both +PVCs to specify a different disk size. + +Create the required Persistent Volume Claims using: + +.. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml + + +You can check its status using: ``kubectl get pvc -w`` + +Initially, the status of persistent volume claims might be "Pending" +but it should become "Bound" fairly quickly. + +.. 
Note:: + The default Reclaim Policy for dynamically created persistent volumes is ``Delete``, + which means the PV and its associated Azure storage resource will be automatically + deleted when the PVC or PV is deleted. To prevent this from happening, perform + the following steps to change the default reclaim policy of dynamically created PVs + from ``Delete`` to ``Retain``: + + * Run the following command to list existing PVs: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 get pv + + * Run the following command to update a PV's reclaim policy to ``Retain``: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' + + For notes on recreating a persistent volume from a released Azure disk resource, consult + :doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`. + +.. _start-kubernetes-stateful-set-mongodb-tmt: + +Step 13: Start a Kubernetes StatefulSet for MongoDB +--------------------------------------------------- + + * This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``. + + * Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in + the ConfigMap. + For example, if the value set in the ``mdb-instance-name`` + is ``mdb-instance-0``, set the field to ``mdb-instance-0``. + + * Set ``metadata.name``, ``spec.template.metadata.name`` and + ``spec.template.metadata.labels.app`` to the value set in + ``mdb-instance-name`` in the ConfigMap, followed by + ``-ss``. + For example, if the value set in the + ``mdb-instance-name`` is ``mdb-instance-0``, set the fields to the value + ``mdb-instance-0-ss``. + + * Note how the MongoDB container uses the ``mongo-db-claim`` and the + ``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and + ``/data/configdb`` directories (mount paths). 
+ + * Note also that we use the pod's ``securityContext.capabilities.add`` + specification to add the ``FOWNER`` capability to the container. That is + because the MongoDB container has the user ``mongodb``, with uid ``999`` and + group ``mongodb``, with gid ``999``. + When this container runs on a host with a mounted disk, the writes fail + when there is no user with uid ``999``. To avoid this, we use the Docker + feature of ``--cap-add=FOWNER``. This bypasses the uid and gid permission + checks during writes and allows data to be persisted to disk. + Refer to the `Docker docs + `_ + for details. + + * As we gain more experience running MongoDB in testing and production, we + will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``. + + * Set the ports to be exposed from the pod in the + ``spec.containers[0].ports`` section. We currently only expose the MongoDB + backend port. Set it to the value specified for ``mongodb-backend-port`` + in the ConfigMap. + + * The configuration uses the following values set in the ConfigMap: + + - ``mdb-instance-name`` + - ``mongodb-backend-port`` + + * The configuration uses the following values set in the Secret: + + - ``mdb-certs`` + - ``ca-auth`` + + * **Optional**: You can change the value for ``STORAGE_ENGINE_CACHE_SIZE`` in the + ConfigMap ``storage-engine-cache-size``. For more information + regarding this configuration, please consult the `MongoDB Official + Documentation `_. + + * **Optional**: If you are not using the **Standard_D2_v2** virtual machines for Kubernetes agents as per the guide, + please update the ``resources`` for ``mongo-ss``. We suggest allocating ``memory`` using the following scheme + for a MongoDB StatefulSet: + + .. code:: bash + + memory = (Total_Memory_Agent_VM_GB - 2GB) + STORAGE_ENGINE_CACHE_SIZE = memory / 2 + + * Create the MongoDB StatefulSet using: + + .. 
code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml + + * It might take up to 10 minutes for the disks, specified in the Persistent + Volume Claims above, to be created and attached to the pod. + The UI might show that the pod has errored with the message + "timeout expired waiting for volumes to attach/mount". Use the CLI below + to check the status of the pod in this case, instead of the UI. + This happens due to a bug in Azure ACS. + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 get pods -w + + +.. _configure-users-and-access-control-mongodb-tmt: + +Step 14: Configure Users and Access Control for MongoDB +------------------------------------------------------- + + * In this step, you will create a user on MongoDB with authorization + to create more users and assign + roles to them. + Note: You need to do this only when setting up the first MongoDB node of + the cluster. + + * Find out the name of your MongoDB pod by reading the output + of the ``kubectl ... get pods`` command at the end of the last step. + It should be something like ``mdb-instance-0-ss-0``. + + * Log in to the MongoDB pod using: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 exec -it <mongodb-pod-name> bash + + * Open a mongo shell using the certificates + already present at ``/etc/mongod/ssl/``: + + .. code:: bash + + $ mongo --host localhost --port 27017 --verbose --ssl \ + --sslCAFile /etc/mongod/ca/ca.pem \ + --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem + + * Create a user ``adminUser`` on the ``admin`` database with the + authorization to create other users. This will only work the first time you + log in to the mongo shell. For further details, see `localhost + exception `_ + in MongoDB. + + .. 
code:: bash + + PRIMARY> use admin + PRIMARY> db.createUser( { + user: "adminUser", + pwd: "superstrongpassword", + roles: [ { role: "userAdminAnyDatabase", db: "admin" }, + { role: "clusterManager", db: "admin"} ] + } ) + + * Exit and restart the mongo shell using the above command. + Authenticate as the ``adminUser`` we created earlier: + + .. code:: bash + + PRIMARY> use admin + PRIMARY> db.auth("adminUser", "superstrongpassword") + + ``db.auth()`` returns 0 when authentication is not successful, + and 1 when successful. + + * We need to specify the user name *as seen in the certificate* issued to + the BigchainDB instance in order to authenticate correctly. Use + the following ``openssl`` command to extract the user name from the + certificate: + + .. code:: bash + + $ openssl x509 -in <certificate-file> \ + -inform PEM -subject -nameopt RFC2253 + + You should see an output line that resembles: + + .. code:: bash + + subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE + + The ``subject`` line states the complete user name we need to use for + creating the user on the mongo shell as follows: + + .. code:: bash + + PRIMARY> db.getSiblingDB("$external").runCommand( { + createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE', + writeConcern: { w: 'majority' , wtimeout: 5000 }, + roles: [ + { role: 'clusterAdmin', db: 'admin' }, + { role: 'readWriteAnyDatabase', db: 'admin' } + ] + } ) + + * You can similarly create a user for the MongoDB Monitoring Agent. For example: + + .. code:: bash + + PRIMARY> db.getSiblingDB("$external").runCommand( { + createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-mon-ssl,OU=MongoDB-Mon-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE', + writeConcern: { w: 'majority' , wtimeout: 5000 }, + roles: [ + { role: 'clusterMonitor', db: 'admin' } + ] + } ) + + +.. 
_create-kubernetes-storage-class-tmt: + +Step 15: Create Kubernetes Storage Classes for Tendermint +---------------------------------------------------------- + +Tendermint needs somewhere to store its data persistently; it uses +LevelDB as the persistent storage layer. + +The Kubernetes template for configuration of Storage Class is located in the +file ``tendermint/tendermint-sc.yaml``. + +Details about how to create an Azure Storage account and how Kubernetes Storage Classes work +are already covered in this document: :ref:`create-kubernetes-storage-class-mdb-tmt`. + +Create the required storage classes using: + +.. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml + + +You can check if it worked using ``kubectl get storageclasses``. + +.. _create-kubernetes-persistent-volume-claim-tmt: + +Step 16: Create Kubernetes Persistent Volume Claims for Tendermint +------------------------------------------------------------------ + +Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and +``tendermint-config-db-claim``. + +This configuration is located in the file ``tendermint/tendermint-pvc.yaml``. + +Details about Kubernetes Persistent Volumes, Persistent Volume Claims +and how they work with Azure are already covered in this +document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`. + +Create the required Persistent Volume Claims using: + +.. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml + +You can check its status using: + +.. code:: + + kubectl get pvc -w + + +.. _create-kubernetes-stateful-set-tmt: + +Step 17: Start a Kubernetes StatefulSet for Tendermint +------------------------------------------------------ + + * This configuration is located in the file ``tendermint/tendermint-ss.yaml``. + + * Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in + the ConfigMap. 
+ For example, if the value set in the ``tm-instance-name`` + is ``tm-instance-0``, set the field to ``tm-instance-0``. + + * Set ``metadata.name``, ``spec.template.metadata.name`` and + ``spec.template.metadata.labels.app`` to the value set in + ``tm-instance-name`` in the ConfigMap, followed by + ``-ss``. + For example, if the value set in the + ``tm-instance-name`` is ``tm-instance-0``, set the fields to the value + ``tm-instance-0-ss``. + + * Note how the Tendermint container uses the ``tendermint-db-claim`` and the + ``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and + ``/tendermint_node_data`` directories (mount paths). + + * As we gain more experience running Tendermint in testing and production, we + will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``. + +We deploy Tendermint and NGINX together in a single pod: Tendermint serves as the consensus +engine, while NGINX serves the public key of the Tendermint instance. + + * For the NGINX container, set the port to be exposed from the container in the + ``spec.containers[0].ports[0]`` section. Set it to the value specified + for ``tm-pub-key-access`` from ConfigMap. + + * For the Tendermint container, set the ports to be exposed from the container in the + ``spec.containers[1].ports`` section. We currently expose two Tendermint ports. + Set them to the values specified for ``tm-p2p-port`` and ``tm-rpc-port`` + in the ConfigMap, respectively. + + * The configuration uses the following values set in the ConfigMap: + + - ``tm-pub-key-access`` + - ``tm-seeds`` + - ``tm-validator-power`` + - ``tm-validators`` + - ``tm-genesis-time`` + - ``tm-chain-id`` + - ``tm-abci-port`` + - ``bdb-instance-name`` + + * Create the Tendermint StatefulSet using: + + .. 
code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml + + * It might take up to 10 minutes for the disks, specified in the Persistent + Volume Claims above, to be created and attached to the pod. + The UI might show that the pod has errored with the message + "timeout expired waiting for volumes to attach/mount". Use the CLI below + to check the status of the pod in this case, instead of the UI. + This happens due to a bug in Azure ACS. + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 get pods -w + +.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt: + +Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent +------------------------------------------------------------------- + + * This configuration is located in the file + ``mongodb-monitoring-agent/mongo-mon-dep.yaml``. + + * Set ``metadata.name``, ``spec.template.metadata.name`` and + ``spec.template.metadata.labels.app`` to the value set in + ``mdb-mon-instance-name`` in the ConfigMap, followed by + ``-dep``. + For example, if the value set in the + ``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the + value ``mdb-mon-instance-0-dep``. + + * The configuration uses the following values set in the Secret: + + - ``mdb-mon-certs`` + - ``ca-auth`` + - ``cloud-manager-credentials`` + + * Start the Kubernetes Deployment using: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml + + +.. _start-kubernetes-deployment-bdb-tmt: + +Step 19: Start a Kubernetes Deployment for BigchainDB +----------------------------------------------------- + + * This configuration is located in the file + ``bigchaindb/bigchaindb-dep-tm.yaml``. + + * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the + value set in ``bdb-instance-name`` in the ConfigMap, followed by + ``-dep``. 
+ For example, if the value set in the + ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the + value ``bdb-instance-0-dep``. + + * As we gain more experience running BigchainDB in testing and production, + we will tweak the ``resources.limits`` values for CPU and memory, and as + richer monitoring and probing becomes available in BigchainDB, we will + tweak the ``livenessProbe`` and ``readinessProbe`` parameters. + + * Set the ports to be exposed from the pod in the + ``spec.containers[0].ports`` section. We currently expose 3 ports - + ``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``. Set them to the + values specified in the ConfigMap. + + * The configuration uses the following values set in the ConfigMap: + + - ``mdb-instance-name`` + - ``mongodb-backend-port`` + - ``mongodb-replicaset-name`` + - ``bigchaindb-database-name`` + - ``bigchaindb-server-bind`` + - ``bigchaindb-ws-interface`` + - ``cluster-fqdn`` + - ``bigchaindb-ws-port`` + - ``cluster-frontend-port`` + - ``bigchaindb-wsserver-advertised-scheme`` + - ``bdb-public-key`` + - ``bigchaindb-backlog-reassign-delay`` + - ``bigchaindb-database-maxtries`` + - ``bigchaindb-database-connection-timeout`` + - ``bigchaindb-log-level`` + - ``bdb-user`` + - ``tm-instance-name`` + - ``tm-rpc-port`` + + * The configuration uses the following values set in the Secret: + + - ``bdb-certs`` + - ``ca-auth`` + + * Create the BigchainDB Deployment using: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml + + + * You can check its status using the command ``kubectl get deployments -w`` + + +.. _start-kubernetes-deployment-openresty-tmt: + +Step 20: Start a Kubernetes Deployment for OpenResty +---------------------------------------------------- + + * This configuration is located in the file + ``nginx-openresty/nginx-openresty-dep.yaml``. 
+ + * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the + value set in ``openresty-instance-name`` in the ConfigMap, followed by + ``-dep``. + For example, if the value set in the + ``openresty-instance-name`` is ``openresty-instance-0``, set the fields to + the value ``openresty-instance-0-dep``. + + * Set the port to be exposed from the pod in the + ``spec.containers[0].ports`` section. We currently expose the port at + which OpenResty is listening for requests, ``openresty-backend-port`` in + the above ConfigMap. + + * The configuration uses the following values set in the Secret: + + - ``threescale-credentials`` + + * The configuration uses the following values set in the ConfigMap: + + - ``cluster-dns-server-ip`` + - ``openresty-backend-port`` + - ``ngx-bdb-instance-name`` + - ``bigchaindb-api-port`` + + * Create the OpenResty Deployment using: + + .. code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-dep.yaml + + + * You can check its status using the command ``kubectl get deployments -w`` + + +Step 21: Configure the MongoDB Cloud Manager +-------------------------------------------- + +Refer to the +:doc:`documentation <../production-deployment-template/cloud-manager>` +for details on how to configure the MongoDB Cloud Manager to enable +monitoring and backup. + + +.. _verify-and-test-bdb-tmt: + +Step 22: Verify the BigchainDB Node Setup +----------------------------------------- + +Step 22.1: Testing Internally +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To test the setup of your BigchainDB node, you could use a Docker container +that provides utilities like ``nslookup``, ``curl`` and ``dig``. +For example, you could use a container based on our +`bigchaindb/toolbox `_ image. +(The corresponding +`Dockerfile `_ +is in the ``bigchaindb/bigchaindb`` repository on GitHub.) +You can use it as below to get started immediately: + +.. 
code:: bash + + $ kubectl --context k8s-bdb-test-cluster-0 \ + run -it toolbox \ + --image bigchaindb/toolbox \ + --image-pull-policy=Always \ + --restart=Never --rm + +It will drop you to the shell prompt. + +To test the MongoDB instance: + +.. code:: bash + + $ nslookup mdb-instance-0 + + $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV + + $ curl -X GET http://mdb-instance-0:27017 + +The ``nslookup`` command should output the configured IP address of the service +(in the cluster). +The ``dig`` command should return the configured port numbers. +The ``curl`` command tests the availability of the service. + +To test the BigchainDB instance: + +.. code:: bash + + $ nslookup bdb-instance-0 + + $ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV + + $ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV + + $ curl -X GET http://bdb-instance-0:9984 + + $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions + +To test the Tendermint instance: + +.. code:: bash + + $ nslookup tm-instance-0 + + $ dig +noall +answer _bdb-api-port._tcp.tm-instance-0.default.svc.cluster.local SRV + + $ dig +noall +answer _bdb-ws-port._tcp.tm-instance-0.default.svc.cluster.local SRV + + $ curl -X GET http://tm-instance-0:9986/pub_key.json + + +To test the OpenResty instance: + +.. code:: bash + + $ nslookup openresty-instance-0 + + $ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV + +To verify that the OpenResty instance forwards requests properly, send a ``POST`` +transaction to OpenResty at port ``80`` and check the response from the backend +BigchainDB instance. + + +To test the vanilla NGINX instance: + +.. 
code:: bash + + $ nslookup ngx-http-instance-0 + + $ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV + + $ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV + + $ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions + + $ curl -X GET http://ngx-http-instance-0:27017 + +The above curl command should result in the response +``It looks like you are trying to access MongoDB over HTTP on the native driver port.`` + + + +To test the NGINX instance with HTTPS and 3scale integration: + +.. code:: bash + + $ nslookup ngx-instance-0 + + $ dig +noall +answer _public-secure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV + + $ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV + + $ dig +noall +answer _public-insecure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV + + $ wsc -er wss://<cluster-fqdn>/api/v1/streams/valid_transactions + + $ curl -X GET http://<cluster-fqdn>:27017 + +The above curl command should result in the response +``It looks like you are trying to access MongoDB over HTTP on the native driver port.`` + + +Step 22.2: Testing Externally +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Check the MongoDB monitoring agent on the MongoDB Cloud Manager +portal to verify it is working properly. + +If you are using NGINX with HTTP support, accessing the URL +``http://<cluster-fqdn>:<cluster-frontend-port>`` +in your browser should result in a JSON response that shows the BigchainDB +server version, among other things. +If you are using NGINX with HTTPS support, use ``https`` instead of +``http`` above. + +Use the Python Driver to send some transactions to the BigchainDB node and +verify that your node or cluster works as expected. + +Next, you can set up log analytics and monitoring, by following our templates: + +* :doc:`../production-deployment-template/log-analytics`. 
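The external endpoints exercised above can be derived mechanically from two ConfigMap values. A minimal sketch (the FQDN and port below are placeholder assumptions; substitute your own ``cluster-fqdn`` and ``cluster-frontend-port``):

```shell
#!/bin/sh
# Sketch: derive the externally testable endpoints of a BigchainDB node.
# The two values below are assumptions -- replace them with the values
# set in your ConfigMap (cluster-fqdn and cluster-frontend-port).
cluster_fqdn="test.bigchaindb.example"
cluster_frontend_port="443"

# HTTP API root: a GET here should return the JSON home document
# that includes the BigchainDB server version.
api_root="https://${cluster_fqdn}:${cluster_frontend_port}/"

# WebSocket event stream endpoint (the one tested with wsc above).
ws_endpoint="wss://${cluster_fqdn}:${cluster_frontend_port}/api/v1/streams/valid_transactions"

echo "HTTP API root:      ${api_root}"
echo "WebSocket endpoint: ${ws_endpoint}"
```

Pointing a browser (or ``curl``) at the printed API root, and ``wsc -er`` at the printed WebSocket endpoint, reproduces the external checks described above.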
diff --git a/docs/server/source/production-deployment-template-tendermint/workflow.rst b/docs/server/source/production-deployment-template-tendermint/workflow.rst new file mode 100644 index 00000000..3cee8a94 --- /dev/null +++ b/docs/server/source/production-deployment-template-tendermint/workflow.rst @@ -0,0 +1,138 @@ +Overview +======== + +This page summarizes the steps *we* go through +to set up a production BigchainDB + Tendermint cluster. +We are constantly improving them. +You can modify them to suit your needs. + +.. Note:: + With our BigchainDB + Tendermint deployment model, we use standalone MongoDB + (without a replica set); BFT replication is handled by Tendermint. + + +1. Set Up a Self-Signed Certificate Authority +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +We use SSL/TLS and self-signed certificates +for MongoDB authentication (and message encryption). +The certificates are signed by the organization managing the cluster. +If your organization already has a process +for signing certificates +(i.e. an internal self-signed certificate authority [CA]), +then you can skip this step. +Otherwise, your organization must +:ref:`set up its own self-signed certificate authority `. + + +.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt: + +2. Register a Domain and Get an SSL Certificate for It +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS, +so the organization running the cluster +should choose an FQDN for their API (e.g. api.organization-x.com), +register the domain name, +and buy an SSL/TLS certificate for the FQDN. + +.. _things-each-node-operator-must-do-tmt: + +Things Each Node Operator Must Do +--------------------------------- + +☐ Decide on names for the various instances in your node: + +#. Name of the MongoDB instance (``mdb-instance-*``) +#. Name of the BigchainDB instance (``bdb-instance-*``) +#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``) +#. 
Name of the OpenResty instance (``openresty-instance-*``) +#. Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``) +#. Name of the Tendermint instance (``tendermint-instance-*``) + + +☐ Generate two keys and corresponding certificate signing requests (CSRs): + +#. Client Certificate for BigchainDB Server to identify itself to MongoDB +#. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB + +Ask the managing organization to use its self-signed CA to sign those two CSRs. +They should send you: + +* Two certificates (one for each CSR you sent them). +* One ``ca.crt`` file: their CA certificate. +* One ``crl.pem`` file: a certificate revocation list. + +For help, see the pages: + +* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>` + +☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``). +Make sure you've registered the associated domain name (e.g. ``mycorp.com``), +and have an SSL certificate for the FQDN. +(You can get an SSL certificate from any SSL certificate provider.) + +☐ Ask the managing organization for the user name to use for authenticating to +MongoDB. + +☐ If the cluster uses 3scale for API authentication, monitoring and billing, +you must ask the managing organization for all relevant 3scale credentials - +secret token, service ID, version header and API service token. + +☐ If the cluster uses MongoDB Cloud Manager for monitoring, +you must ask the managing organization for the ``Project ID`` and the +``Agent API Key``. +(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can +contain a number of ``Agent API Key`` s. It can be found under +**Settings**. It was recently added to the Cloud Manager to +allow easier periodic rotation of the ``Agent API Key`` with a constant +``Project ID``.) + + +.. _generate-the-blockchain-id-and-genesis-time: + +3. 
Generate the Blockchain ID and Genesis Time +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Tendermint nodes require two parameters that must be common to, and shared among, all the +participants in the network. + +* ``chain_id``: ID of the blockchain. This must be unique for every blockchain. + + * Example: ``test-chain-9gHylg`` + +* ``genesis_time``: Official time of blockchain start. + + * Example: ``0001-01-01T00:00:00Z`` + +These parameters can be generated using the ``tendermint init`` command. +You will need to `install Tendermint `_ +and verify that a ``genesis.json`` file is created under the `Root Directory +`_. You can use +the ``genesis_time`` and ``chain_id`` from this ``genesis.json``. + +Sample ``genesis.json``: + +.. code:: json + + { + "genesis_time": "0001-01-01T00:00:00Z", + "chain_id": "test-chain-9gHylg", + "validators": [ + { + "pub_key": { + "type": "ed25519", + "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468" + }, + "power": 10, + "name": "" + } + ], + "app_hash": "" + } + + + +☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`. + +☐ You can now proceed to set up your :ref:`BigchainDB node +`. diff --git a/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst b/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst index 826b7432..da2d58fa 100644 --- a/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst @@ -1,3 +1,5 @@ +.. _kubernetes-template-add-a-bigchaindb-node-to-an-existing-cluster: + Kubernetes Template: Add a BigchainDB Node to an Existing BigchainDB Cluster ============================================================================ @@ -47,7 +49,7 @@ cluster is using. 
Step 1: Prerequisites --------------------- -* :ref:`List of all the things to be done by each node operator `. +* :ref:`List of all the things to be done by each node operator `. * The public key should be shared offline with the other existing BigchainDB nodes in the existing BigchainDB cluster. @@ -76,7 +78,7 @@ example: Step 2: Configure the BigchainDB Node ------------------------------------- -See the section on how to :ref:`configure your BigchainDB node `. +See the section on how to :ref:`how-to-configure-a-bigchaindb-node`. Step 3: Start the NGINX Service @@ -84,7 +86,7 @@ Step 3: Start the NGINX Service Please see the following section: -* :ref:`Start NGINX service `. +* :ref:`start-the-nginx-service`. Step 4: Assign DNS Name to the NGINX Public IP @@ -92,7 +94,7 @@ Step 4: Assign DNS Name to the NGINX Public IP Please see the following section: -* :ref:`Assign DNS to NGINX Public IP `. +* :ref:`assign-dns-name-to-the-nginx-public-ip`. Step 5: Start the MongoDB Kubernetes Service @@ -100,7 +102,7 @@ Step 5: Start the MongoDB Kubernetes Service Please see the following section: -* :ref:`Start the MongoDB Kubernetes Service `. +* :ref:`start-the-mongodb-kubernetes-service`. Step 6: Start the BigchainDB Kubernetes Service @@ -108,7 +110,7 @@ Step 6: Start the BigchainDB Kubernetes Service Please see the following section: -* :ref:`Start the BigchainDB Kubernetes Service `. +* :ref:`start-the-bigchaindb-kubernetes-service`. Step 7: Start the OpenResty Kubernetes Service @@ -116,7 +118,7 @@ Step 7: Start the OpenResty Kubernetes Service Please see the following section: -* :ref:`Start the OpenResty Kubernetes Service `. +* :ref:`start-the-openresty-kubernetes-service`. Step 8: Start the NGINX Kubernetes Deployment @@ -124,7 +126,7 @@ Step 8: Start the NGINX Kubernetes Deployment Please see the following section: -* :ref:`Run NGINX deployment `. +* :ref:`start-the-nginx-kubernetes-deployment`. 
Step 9: Create Kubernetes Storage Classes for MongoDB @@ -132,7 +134,7 @@ Step 9: Create Kubernetes Storage Classes for MongoDB Please see the following section: -* :ref:`Step 10: Create Kubernetes Storage Classes for MongoDB`. +* :ref:`create-kubernetes-storage-classes-for-mongodb`. Step 10: Create Kubernetes Persistent Volume Claims @@ -140,7 +142,7 @@ Step 10: Create Kubernetes Persistent Volume Claims Please see the following section: -* :ref:`Step 11: Create Kubernetes Persistent Volume Claims`. +* :ref:`create-kubernetes-persistent-volume-claims`. Step 11: Start a Kubernetes StatefulSet for MongoDB @@ -148,7 +150,7 @@ Step 11: Start a Kubernetes StatefulSet for MongoDB Please see the following section: -* :ref:`Step 12: Start a Kubernetes StatefulSet for MongoDB`. +* :ref:`start-a-kubernetes-statefulset-for-mongodb`. Step 12: Verify network connectivity between the MongoDB instances @@ -178,8 +180,7 @@ We can do this in Kubernetes using a Kubernetes Service of ``type`` * Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap for the other cluster. * Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to. - For more information about the FQDN please refer to: :ref:`Assign DNS Name to the NGINX Public - IP ` + For more information about the FQDN please refer to: :ref:`assign-dns-name-to-the-nginx-public-ip`. .. note:: This operation needs to be replicated ``n-1`` times per node for a ``n`` node cluster, with the respective FQDNs @@ -246,10 +247,9 @@ Step 15: Configure Users and Access Control for MongoDB instance and the new MongoDB Backup Agent instance to function correctly. * Please refer to - :ref:`Configure Users and Access Control for MongoDB ` to create and configure the new - BigchainDB, MongoDB Monitoring Agent and MongoDB Backup Agent users on the - cluster. 
+ :ref:`configure-users-and-access-control-for-mongodb` to create and + configure the new BigchainDB, MongoDB Monitoring Agent and MongoDB Backup + Agent users on the cluster. .. note:: You will not have to create the MongoDB replica set or create the admin user, as they already exist. @@ -265,7 +265,7 @@ Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent Please see the following section: -* :ref:`Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent`. +* :ref:`start-a-kubernetes-deployment-for-mongodb-monitoring-agent`. .. note:: Every MMS group has only one active Monitoring and Backup Agent and having @@ -280,7 +280,7 @@ Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent Please see the following section: -* :ref:`Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent`. +* :ref:`start-a-kubernetes-deployment-for-mongodb-backup-agent`. .. note:: Every MMS group has only one active Monitoring and Backup Agent and having @@ -350,8 +350,8 @@ Step 19: Restart the Existing BigchainDB Instance(s) $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml -See the page titled :ref:`How to Configure a BigchainDB Node` for more information about -ConfigMap configuration. +See the page titled :ref:`how-to-configure-a-bigchaindb-node` +for more information about ConfigMap configuration. You can SSH to an existing BigchainDB instance and run the ``bigchaindb show-config`` command to check that the keyring is updated. @@ -362,7 +362,7 @@ Step 20: Start a Kubernetes Deployment for OpenResty Please see the following section: -* :ref:`Step 17: Start a Kubernetes Deployment for OpenResty`. +* :ref:`start-a-kubernetes-deployment-for-openresty`. 
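The keyring check mentioned above (``bigchaindb show-config``) reflects the ``bdb-config.bdb-keyring`` setting in the node's ConfigMap. A hypothetical fragment, in which the values are placeholders for the other nodes' real public keys:

```yaml
# Hypothetical fragment of the node's ConfigMap; the key values below are
# placeholders, not real public keys.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bdb-config
data:
  # Colon-separated list of the public keys of all *other* BigchainDB nodes
  bdb-keyring: "<public-key-of-node-2>:<public-key-of-node-3>"
```

After applying the updated ConfigMap, the BigchainDB Deployment has to be re-applied (as in Step 19 above) for the new keyring to take effect.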
Step 21: Configure the MongoDB Cloud Manager @@ -378,6 +378,7 @@ Step 21: Configure the MongoDB Cloud Manager Step 22: Test Your New BigchainDB Node -------------------------------------- -* Please refer to the testing steps :ref:`here ` to verify that your new BigchainDB node is working as expected. +* Please refer to the testing steps :ref:`here + ` to verify that your new BigchainDB + node is working as expected. diff --git a/docs/server/source/production-deployment-template/ca-installation.rst b/docs/server/source/production-deployment-template/ca-installation.rst index 146bd461..6b2644d8 100644 --- a/docs/server/source/production-deployment-template/ca-installation.rst +++ b/docs/server/source/production-deployment-template/ca-installation.rst @@ -1,3 +1,5 @@ +.. _how-to-set-up-a-self-signed-certificate-authority: + How to Set Up a Self-Signed Certificate Authority ================================================= @@ -18,7 +20,7 @@ First create a directory for the CA and cd into it: cd bdb-cluster-ca -Then :ref:`install and configure Easy-RSA in that directory `. +Then :ref:`install and configure Easy-RSA in that directory `. Step 2: Create a Self-Signed CA diff --git a/docs/server/source/production-deployment-template/client-tls-certificate.rst b/docs/server/source/production-deployment-template/client-tls-certificate.rst index d0a67006..3004d5cc 100644 --- a/docs/server/source/production-deployment-template/client-tls-certificate.rst +++ b/docs/server/source/production-deployment-template/client-tls-certificate.rst @@ -1,3 +1,5 @@ +.. _how-to-generate-a-client-certificate-for-mongodb: + How to Generate a Client Certificate for MongoDB ================================================ @@ -17,7 +19,7 @@ First create a directory for the client certificate and cd into it: cd client-cert -Then :ref:`install and configure Easy-RSA in that directory `. +Then :ref:`install and configure Easy-RSA in that directory `. 
Step 2: Create the Client Private Key and CSR diff --git a/docs/server/source/production-deployment-template/cloud-manager.rst b/docs/server/source/production-deployment-template/cloud-manager.rst index c407ceb1..c438afaf 100644 --- a/docs/server/source/production-deployment-template/cloud-manager.rst +++ b/docs/server/source/production-deployment-template/cloud-manager.rst @@ -1,3 +1,5 @@ +.. _configure-mongodb-cloud-manager-for-monitoring-and-backup: + Configure MongoDB Cloud Manager for Monitoring and Backup ========================================================= diff --git a/docs/server/source/production-deployment-template/easy-rsa.rst b/docs/server/source/production-deployment-template/easy-rsa.rst index ff268bf2..f0f609b2 100644 --- a/docs/server/source/production-deployment-template/easy-rsa.rst +++ b/docs/server/source/production-deployment-template/easy-rsa.rst @@ -1,3 +1,5 @@ +.. _how-to-install-and-configure-easyrsa: + How to Install & Configure Easy-RSA =================================== diff --git a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst index a3a33c8d..5140e8d6 100644 --- a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst +++ b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst @@ -1,3 +1,5 @@ +.. _how-to-configure-a-bigchaindb-node: + How to Configure a BigchainDB Node ================================== @@ -9,7 +11,7 @@ and ``secret.yaml`` (a set of Secrets). They are stored in the Kubernetes cluster's key-value store (etcd). Make sure you did all the things listed in the section titled -:ref:`Things Each Node Operator Must Do` +:ref:`things-each-node-operator-must-do` (including generation of all the SSL certificates needed for MongoDB auth). 
@@ -33,7 +35,7 @@ vars.cluster-fqdn ~~~~~~~~~~~~~~~~~ The ``cluster-fqdn`` field specifies the domain you would have -:ref:`registered before <2. Register a Domain and Get an SSL Certificate for It>`. +:ref:`registered before `. vars.cluster-frontend-port @@ -139,8 +141,8 @@ listening for HTTP requests. Currently set to ``9984`` by default. The ``bigchaindb-ws-port`` is the port number on which BigchainDB is listening for Websocket requests. Currently set to ``9985`` by default. -There's another :ref:`page with a complete listing of all the BigchainDB Server -configuration settings `. +There's another :doc:`page with a complete listing of all the BigchainDB Server +configuration settings <../server-reference/configuration>`. bdb-config.bdb-keyring diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst index 492d15c6..d45df83a 100644 --- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst @@ -1,3 +1,5 @@ +.. _kubernetes-template-deploy-a-single-node-bigchaindb: + Kubernetes Template: Deploy a Single BigchainDB Node ==================================================== @@ -105,8 +107,9 @@ That means you can visit the dashboard in your web browser at Step 3: Configure Your BigchainDB Node -------------------------------------- -See the page titled :ref:`How to Configure a BigchainDB Node`. +See the page titled :ref:`how-to-configure-a-bigchaindb-node`. +.. _start-the-nginx-service: Step 4: Start the NGINX Service ------------------------------- @@ -179,6 +182,8 @@ Step 4.2: NGINX with HTTPS $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml +.. 
_assign-dns-name-to-the-nginx-public-ip: + Step 5: Assign DNS Name to the NGINX Public IP ---------------------------------------------- @@ -220,6 +225,8 @@ This will ensure that when you scale the replica set later, other MongoDB members in the replica set can reach this instance. +.. _start-the-mongodb-kubernetes-service: + Step 6: Start the MongoDB Kubernetes Service -------------------------------------------- @@ -245,6 +252,8 @@ Step 6: Start the MongoDB Kubernetes Service $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml +.. _start-the-bigchaindb-kubernetes-service: + Step 7: Start the BigchainDB Kubernetes Service ----------------------------------------------- @@ -275,6 +284,8 @@ Step 7: Start the BigchainDB Kubernetes Service $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml +.. _start-the-openresty-kubernetes-service: + Step 8: Start the OpenResty Kubernetes Service ---------------------------------------------- @@ -295,6 +306,8 @@ Step 8: Start the OpenResty Kubernetes Service $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml +.. _start-the-nginx-kubernetes-deployment: + Step 9: Start the NGINX Kubernetes Deployment --------------------------------------------- @@ -384,6 +397,8 @@ Step 9.2: NGINX with HTTPS $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml +.. _create-kubernetes-storage-classes-for-mongodb: + Step 10: Create Kubernetes Storage Classes for MongoDB ------------------------------------------------------ @@ -453,6 +468,8 @@ Create the required storage classes using: You can check if it worked using ``kubectl get storageclasses``. +.. _create-kubernetes-persistent-volume-claims: + Step 11: Create Kubernetes Persistent Volume Claims --------------------------------------------------- @@ -504,7 +521,9 @@ but it should become "Bound" fairly quickly. 
$ kubectl --context k8s-bdb-test-cluster-0 patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' For notes on recreating a private volume form a released Azure disk resource consult - :ref:`the page about cluster troubleshooting `. + :ref:`cluster-troubleshooting`. + +.. _start-a-kubernetes-statefulset-for-mongodb: Step 12: Start a Kubernetes StatefulSet for MongoDB --------------------------------------------------- @@ -589,6 +608,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB $ kubectl --context k8s-bdb-test-cluster-0 get pods -w +.. _configure-users-and-access-control-for-mongodb: Step 13: Configure Users and Access Control for MongoDB ------------------------------------------------------- @@ -719,6 +739,8 @@ Step 13: Configure Users and Access Control for MongoDB } ) +.. _start-a-kubernetes-deployment-for-mongodb-monitoring-agent: + Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent ------------------------------------------------------------------- @@ -746,6 +768,8 @@ Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml +.. _start-a-kubernetes-deployment-for-mongodb-backup-agent: + Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent --------------------------------------------------------------- @@ -837,6 +861,8 @@ Step 16: Start a Kubernetes Deployment for BigchainDB * You can check its status using the command ``kubectl get deployments -w`` +.. _start-a-kubernetes-deployment-for-openresty: + Step 17: Start a Kubernetes Deployment for OpenResty ---------------------------------------------------- @@ -880,11 +906,13 @@ Step 18: Configure the MongoDB Cloud Manager -------------------------------------------- Refer to the -:ref:`documentation ` +:ref:`documentation ` for details on how to configure the MongoDB Cloud Manager to enable monitoring and backup. +.. 
_verify-the-bigchaindb-node-setup: + Step 19: Verify the BigchainDB Node Setup ----------------------------------------- diff --git a/docs/server/source/production-deployment-template/revoke-tls-certificate.rst b/docs/server/source/production-deployment-template/revoke-tls-certificate.rst index 7584ceb5..617ff82d 100644 --- a/docs/server/source/production-deployment-template/revoke-tls-certificate.rst +++ b/docs/server/source/production-deployment-template/revoke-tls-certificate.rst @@ -10,7 +10,7 @@ Step 1: Revoke a Certificate ---------------------------- Since we used Easy-RSA version 3 to -:ref:`set up the CA `, +:ref:`set up the CA `, we use it to revoke certificates too. Go to the following directory (associated with the self-signed CA): diff --git a/docs/server/source/production-deployment-template/server-tls-certificate.rst b/docs/server/source/production-deployment-template/server-tls-certificate.rst index 8444b0ab..caf0806f 100644 --- a/docs/server/source/production-deployment-template/server-tls-certificate.rst +++ b/docs/server/source/production-deployment-template/server-tls-certificate.rst @@ -1,3 +1,5 @@ +.. _how-to-generate-a-server-certificate-for-mongodb: + How to Generate a Server Certificate for MongoDB ================================================ @@ -19,7 +21,7 @@ First create a directory for the server certificate (member cert) and cd into it cd member-cert -Then :ref:`install and configure Easy-RSA in that directory `. +Then :ref:`install and configure Easy-RSA in that directory `. 
Step 2: Create the Server Private Key and CSR
diff --git a/docs/server/source/production-deployment-template/tectonic-azure.rst b/docs/server/source/production-deployment-template/tectonic-azure.rst
index 3803751e..68b0afd9 100644
--- a/docs/server/source/production-deployment-template/tectonic-azure.rst
+++ b/docs/server/source/production-deployment-template/tectonic-azure.rst
@@ -14,10 +14,10 @@ Step 1: Prerequisites for Deploying Tectonic Cluster
----------------------------------------------------
Get an Azure account. Refer to
-:ref:`this step in our docs `.
+:ref:`this step in our docs `.
Create an SSH Key pair for the new Tectonic cluster. Refer to
-:ref:`this step in our docs `.
+:ref:`this step in our docs `.
Step 2: Get a Tectonic Subscription
@@ -119,8 +119,11 @@ Step 4: Configure kubectl
$ export KUBECONFIG=/path/to/config/kubectl-config
-Next, you can :doc:`run a BigchainDB node on your new
-Kubernetes cluster `.
+Next, you can follow one of these deployment templates:
+
+* :doc:`node-on-kubernetes`
+
+* :doc:`../production-deployment-template-tendermint/node-on-kubernetes`
Tectonic References
@@ -128,5 +131,4 @@ Tectonic References
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/install.html
#. https://coreos.com/tectonic/docs/latest/troubleshooting/installer-terraform.html
-#. https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html
-
+#. https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
index 7312ba36..7d43fafc 100644
--- a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
+++ b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
@@ -6,6 +6,8 @@ cluster.
This page describes one way to deploy a Kubernetes cluster on Azure.
+..
_get-a-pay-as-you-go-azure-subscription:
+
Step 1: Get a Pay-As-You-Go Azure Subscription
----------------------------------------------
@@ -18,6 +20,8 @@ You may find that you have to sign up for a Free Trial subscription first.
That's okay: you can have many subscriptions.
+.. _create-an-ssh-key-pair:
+
Step 2: Create an SSH Key Pair
------------------------------
@@ -28,7 +32,8 @@ but it's probably a good idea to make a new
SSH key pair for your Kubernetes VMs and nothing else.)
See the
-:ref:`page about how to generate a key pair for SSH `.
+:doc:`page about how to generate a key pair for SSH
+<../appendices/generate-key-pair-for-ssh>`.
Step 3: Deploy an Azure Container Service (ACS)
@@ -135,6 +140,8 @@ and click on the one you created
to see all the resources in it.
+.. _ssh-to-your-new-kubernetes-cluster-nodes:
+
Optional: SSH to Your New Kubernetes Cluster Nodes
--------------------------------------------------
@@ -217,5 +224,6 @@ CAUTION: You might end up deleting resources other than the ACS cluster.
--name
-Next, you can :doc:`run a BigchainDB node on your new
-Kubernetes cluster `.
\ No newline at end of file
+Next, you can :doc:`run a BigchainDB node (Non-BFT) ` or :doc:`run a BigchainDB
+node/cluster (BFT) <../production-deployment-template-tendermint/node-on-kubernetes>`
+on your new Kubernetes cluster.
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/troubleshoot.rst b/docs/server/source/production-deployment-template/troubleshoot.rst
index 72e073e0..785927dd 100644
--- a/docs/server/source/production-deployment-template/troubleshoot.rst
+++ b/docs/server/source/production-deployment-template/troubleshoot.rst
@@ -1,3 +1,5 @@
+..
_cluster-troubleshooting: + Cluster Troubleshooting ======================= diff --git a/docs/server/source/production-deployment-template/upgrade-on-kubernetes.rst b/docs/server/source/production-deployment-template/upgrade-on-kubernetes.rst index ba109fbe..07d63f7b 100644 --- a/docs/server/source/production-deployment-template/upgrade-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/upgrade-on-kubernetes.rst @@ -32,7 +32,7 @@ as the host (master and agent) operating system. You can upgrade Ubuntu and Docker on Azure by SSHing into each of the hosts, as documented on -:ref:`another page `. +:ref:`another page `. In general, you can SSH to each host in your Kubernetes Cluster to update the OS and Docker. diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst index aff8d5f9..a790a619 100644 --- a/docs/server/source/production-deployment-template/workflow.rst +++ b/docs/server/source/production-deployment-template/workflow.rst @@ -22,9 +22,11 @@ for signing certificates (i.e. an internal self-signed certificate authority [CA]), then you can skip this step. Otherwise, your organization must -:ref:`set up its own self-signed certificate authority `. +:ref:`set up its own self-signed certificate authority `. +.. _register-a-domain-and-get-an-ssl-certificate-for-it: + 2. Register a Domain and Get an SSL Certificate for It ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -34,6 +36,7 @@ should choose an FQDN for their API (e.g. api.organization-x.com), register the domain name, and buy an SSL/TLS certificate for the FQDN. +.. 
_things-each-node-operator-must-do: Things Each Node Operator Must Do --------------------------------- @@ -70,8 +73,8 @@ They should send you: For help, see the pages: -* :ref:`How to Generate a Server Certificate for MongoDB` -* :ref:`How to Generate a Client Certificate for MongoDB` +* :ref:`how-to-generate-a-server-certificate-for-mongodb` +* :ref:`how-to-generate-a-client-certificate-for-mongodb` ☐ Every node in a BigchainDB cluster needs its own @@ -124,6 +127,6 @@ allow easier periodic rotation of the ``Agent API Key`` with a constant ☐ You can now proceed to set up your BigchainDB node based on whether it is the :ref:`first node in a new cluster -` or a +` or a :ref:`node that will be added to an existing cluster -`. +`. From c4e752d379254ca3ca13696b12ed5a923ae0a871 Mon Sep 17 00:00:00 2001 From: muawiakh Date: Fri, 12 Jan 2018 18:16:05 +0100 Subject: [PATCH 02/10] Address comments - Use aafigure to render text -> HTML/image - Update some docs --- docs/server/requirements.txt | 1 + .../source/appendices/firewall-notes.md | 7 + docs/server/source/conf.py | 2 +- .../architecture.rst | 186 +++++++++--------- .../bigchaindb-network-on-kubernetes.rst | 52 ++--- .../workflow.rst | 59 +++++- 6 files changed, 183 insertions(+), 124 deletions(-) diff --git a/docs/server/requirements.txt b/docs/server/requirements.txt index cd06eab9..c68268c9 100644 --- a/docs/server/requirements.txt +++ b/docs/server/requirements.txt @@ -4,4 +4,5 @@ sphinx-rtd-theme>=0.1.9 sphinxcontrib-napoleon>=0.4.4 sphinxcontrib-httpdomain>=1.5.0 pyyaml>=3.12 +aafigure>=0.6 bigchaindb diff --git a/docs/server/source/appendices/firewall-notes.md b/docs/server/source/appendices/firewall-notes.md index 35ce74bc..2a2de78a 100644 --- a/docs/server/source/appendices/firewall-notes.md +++ b/docs/server/source/appendices/firewall-notes.md @@ -10,6 +10,7 @@ The following ports should expect unsolicited inbound traffic: 1. 
**Port 9984** can expect inbound HTTP (TCP) traffic from BigchainDB clients sending transactions to the BigchainDB HTTP API.
1. **Port 9985** can expect inbound WebSocket traffic from BigchainDB clients.
1. **Port 46656** can expect inbound Tendermint P2P traffic from other Tendermint peers.
+1. **Port 9986** can expect inbound HTTP (TCP) traffic from clients accessing the Public Key of a Tendermint instance.
All other ports should only get inbound traffic in response to specific requests from inside the node.
@@ -49,6 +50,12 @@ You may want to have Gunicorn and the reverse proxy running on different servers
Port 9985 is the default port for the [BigchainDB WebSocket Event Stream API](../websocket-event-stream-api.html).
+
+## Port 9986
+
+Port 9986 is the default port for accessing the Public Key of a Tendermint instance. It is used by an NGINX instance
+that runs alongside the Tendermint instance (Pod) and hosts only the Public Key.
+
## Port 46656
Port 46656 is the default port used by Tendermint Core to communicate with other instances of Tendermint Core (peers).
diff --git a/docs/server/source/conf.py b/docs/server/source/conf.py
index 8fc4f397..2de8a3dc 100644
--- a/docs/server/source/conf.py
+++ b/docs/server/source/conf.py
@@ -48,7 +48,7 @@ extensions = [
    'sphinx.ext.todo',
    'sphinx.ext.napoleon',
    'sphinxcontrib.httpdomain',
-    #'sphinx.ext.autosectionlabel',
+    'aafigure.sphinxext',
    # Below are actually build steps made to look like sphinx extensions.
    # It was the easiest way to get it running with ReadTheDocs.
'generate_http_server_api_documentation', diff --git a/docs/server/source/production-deployment-template-tendermint/architecture.rst b/docs/server/source/production-deployment-template-tendermint/architecture.rst index 528874f8..7778e45f 100644 --- a/docs/server/source/production-deployment-template-tendermint/architecture.rst +++ b/docs/server/source/production-deployment-template-tendermint/architecture.rst @@ -5,7 +5,7 @@ A BigchainDB Production deployment is hosted on a Kubernetes cluster and include * NGINX, OpenResty, BigchainDB, MongoDB and Tendermint `Kubernetes Services `_. -* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent +* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent. `Kubernetes Deployments `_. * MongoDB and Tendermint `Kubernetes StatefulSet `_. * Third party services like `3scale `_, @@ -14,10 +14,17 @@ A BigchainDB Production deployment is hosted on a Kubernetes cluster and include `_. -.. code:: text +.. _bigchaindb-node: +BigchainDB Node +--------------- + +.. 
aafig:: + :aspect: 60 + :scale: 100 + :background: #rgb + :proportional: - BigchainDB Node + + +--------------------------------------------------------------------------------------------------------------------------------------+ | | | | @@ -26,22 +33,22 @@ A BigchainDB Production deployment is hosted on a Kubernetes cluster and include | | | | | | | | | | | | - | BigchainDB API | | Tendermint P2P | - | | | Communication/ | - | | | Public Key Exchange | + | "BigchainDB API" | | "Tendermint P2P" | + | | | "Communication/" | + | | | "Public Key Exchange" | | | | | | | | | | v v | | | | +------------------+ | - | | NGINX Service | | + | |"NGINX Service" | | | +-------+----------+ | | | | | v | | | | +------------------+ | - | | NGINX | | - | | Deployment | | + | | "NGINX" | | + | | "Deployment" | | | | | | | +-------+----------+ | | | | @@ -49,86 +56,87 @@ A BigchainDB Production deployment is hosted on a Kubernetes cluster and include | | | | v | | | - | 443 +----------+ 46656/9986 | - | | Rate | | - | +---------------------------+ Limiting +-----------------------+ | - | | | Logic | | | - | | +----------+ | | - | | | | - | | | | - | | | | - | | | | - | | | | - | v v | - | | - | +-----------+ +----------+ | - | |HTTPS | +------------------> |Tendermint| | - | |Termination| | 9986 |Service | 46656 | - | | | | +-------+ | <----+ | - | +-----+-----+ | | +----------+ | | - | | | v v | - | | | | - | | | +----------+ +----------+ | - | | | |NGINX | |Tendermint| | - | | | |Deployment| |Stateful | | - | | | |Pub-Key-Ex| |Set | | - | v | +----------+ +----------+ | - | +-----+-----+ | | - | POST |Analyze | GET | | - | |Request | | | - | +-----------+ +--------+ | | - | | +-----------+ | | | - | | | | Bi-directional, communication between | - | | | | BigchainDB(APP) and Tendermint | - | | | | BFT consensus Engine | - | | | | | - | v v | | - | | | - | +-------------+ +--------------+ | +--------------+ | - | | OpenResty | | BigchainDB | | | MongoDB | | - | | Service | | 
Service | | | Service | | - | | | +-----> | | | +-------> | | | - | +------+------+ | +------+-------+ | | +------+-------+ | - | | | | | | | | - | v | v | | v | - | | | | | - | +------------+ | +------------+ | | +----------+ | - | | | | | | <-------------+ | |MongoDB | | - | | OpenResty | | | BigchainDB | | |Stateful | | - | | Deployment | | | Deployment | | |Set | | - | | | | | | | +-----+----+ | - | | | | | +--------------------------+ | | - | | | | | | | | - | +-----+------+ | +------------+ | | - | | | | | - | v | | | - | | | | - | +-----------+ | | | - | | Auth | | | | - | | Logic +---------+ | | - | | | | | - | | | | | - | +---+-------+ | | - | | | | - | | | | - | | | | - | | | | - | | | | - | | | | - +--------------------------------------------------------------------------------------------------------------------------------------+ - | | - | | - v v + | "443" +----------+ "46656/9986" | + | | "Rate" | | + | +---------------------------+"Limiting"+-----------------------+ | + | | | "Logic" | | | + | | +----+-----+ | | + | | | | | + | | | | | + | | | | | + | | | | | + | | | | | + | | "27017" | | | + | v | v | + | +-------------+ | +------------+ | + | |"HTTPS" | | +------------------> |"Tendermint"| | + | |"Termination"| | | "9986" |"Service" | "46656" | + | | | | | +-------+ | <----+ | + | +-----+-------+ | | | +------------+ | | + | | | | | | | + | | | | v v | + | | | | +------------+ +------------+ | + | | | | |"NGINX" | |"Tendermint"| | + | | | | |"Deployment"| |"Stateful" | | + | | | | |"Pub-Key-Ex"| |"Set" | | + | ^ | | +------------+ +------------+ | + | +-----+-------+ | | | + | "POST" |"Analyze" | "GET" | | | + | |"Request" | | | | + | +-----------+ +--------+ | | | + | | +-------------+ | | | | + | | | | | "Bi+directional, communication between" | + | | | | | "BigchainDB(APP) and Tendermint" | + | | | | | "BFT consensus Engine" | + | | | | | | + | v v | | | + | | | | + | +-------------+ +--------------+ +----+-------------------> 
+--------------+ | + | | "OpenResty" | | "BigchainDB" | | | "MongoDB" | | + | | "Service" | | "Service" | | | "Service" | | + | | | +----->| | | +-------> | | | + | +------+------+ | +------+-------+ | | +------+-------+ | + | | | | | | | | + | | | | | | | | + | v | v | | v | + | +-------------+ | +-------------+ | | +----------+ | + | | | | | | <------------+ | |"MongoDB" | | + | |"OpenResty" | | | "BigchainDB"| | |"Stateful"| | + | |"Deployment" | | | "Deployment"| | |"Set" | | + | | | | | | | +-----+----+ | + | | | | | +---------------------------+ | | + | | | | | | | | + | +-----+-------+ | +-------------+ | | + | | | | | + | | | | | + | v | | | + | +-----------+ | v | + | | "Auth" | | +------------+ | + | | "Logic" |----------+ |"MongoDB" | | + | | | |"Monitoring"| | + | | | |"Agent" | | + | +---+-------+ +-----+------+ | + | | | | + | | | | + | | | | + | | | | + | | | | + | | | | + +---------------+---------------------------------------------------------------------------------------+------------------------------+ + | | + | | + | | + v v + +------------------------------------+ +------------------------------------+ + | | | | + | | | | + | | | | + | "3Scale" | | "MongoDB Cloud" | + | | | | + | | | | + | | | | + +------------------------------------+ +------------------------------------+ - +------------------------------------+ +------------------------------------+ - | | | | - | | | | - | | | | - | 3Scale | | MongoDB Cloud | - | | | | - | | | | - | | | | - +------------------------------------+ +------------------------------------+ @@ -184,8 +192,6 @@ MongoDB: Standalone ------------------- We use MongoDB as the backend database for BigchainDB. -In a multi-node deployment, MongoDB members communicate with each other via the -public port exposed by the NGINX Service. We achieve security by avoiding DoS attacks at the NGINX proxy layer and by ensuring that MongoDB has TLS enabled for all its connections. 
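The "TLS enabled for all its connections" requirement above corresponds, on the MongoDB side, to a mongod configuration roughly like the following sketch (MongoDB 3.x ``net.ssl`` option names; the file paths are placeholders for the certificates issued by the cluster's self-signed CA):

```yaml
# Sketch of the relevant mongod.conf section; all paths are placeholders.
net:
  port: 27017
  ssl:
    mode: requireSSL                              # refuse any non-TLS connection
    PEMKeyFile: /etc/mongod/ssl/mdb-instance.pem  # server certificate + private key
    CAFile: /etc/mongod/ca/ca.pem                 # the managing organization's CA certificate
    CRLFile: /etc/mongod/ca/crl.pem               # certificate revocation list
```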
diff --git a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst b/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst index aea9d417..4781fccc 100644 --- a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst @@ -63,44 +63,44 @@ Lets assume we are deploying a 4 node cluster, your naming conventions could loo .. code:: - { + { "MongoDB": [ - "mdb-instance-1", - "mdb-instance-2", - "mdb-instance-3", - "mdb-instance-4" + "mdb-instance-1", + "mdb-instance-2", + "mdb-instance-3", + "mdb-instance-4" ], "BigchainDB": [ - "bdb-instance-1", - "bdb-instance-2", - "bdb-instance-3", - "bdb-instance-4" + "bdb-instance-1", + "bdb-instance-2", + "bdb-instance-3", + "bdb-instance-4" ], "NGINX": [ - "ngx-instance-1", - "ngx-instance-2", - "ngx-instance-3", - "ngx-instance-4" + "ngx-instance-1", + "ngx-instance-2", + "ngx-instance-3", + "ngx-instance-4" ], "OpenResty": [ - "openresty-instance-1", - "openresty-instance-2", - "openresty-instance-3", - "openresty-instance-4" + "openresty-instance-1", + "openresty-instance-2", + "openresty-instance-3", + "openresty-instance-4" ], "MongoDB_Monitoring_Agent": [ - "mdb-mon-instance-1", - "mdb-mon-instance-2", - "mdb-mon-instance-3", - "mdb-mon-instance-4" + "mdb-mon-instance-1", + "mdb-mon-instance-2", + "mdb-mon-instance-3", + "mdb-mon-instance-4" ], "Tendermint": [ - "tendermint-instance-1", - "tendermint-instance-2", - "tendermint-instance-3", - "tendermint-instance-4" + "tendermint-instance-1", + "tendermint-instance-2", + "tendermint-instance-3", + "tendermint-instance-4" ] - } + } .. 
note:: diff --git a/docs/server/source/production-deployment-template-tendermint/workflow.rst b/docs/server/source/production-deployment-template-tendermint/workflow.rst index 3cee8a94..e40d861b 100644 --- a/docs/server/source/production-deployment-template-tendermint/workflow.rst +++ b/docs/server/source/production-deployment-template-tendermint/workflow.rst @@ -16,7 +16,7 @@ You can modify them to suit your needs. We use SSL/TLS and self-signed certificates for MongoDB authentication (and message encryption). -The certificates are signed by the organization managing the cluster. +The certificates are signed by the organization managing the :ref:`bigchaindb-node`. If your organization already has a process for signing certificates (i.e. an internal self-signed certificate authority [CA]), @@ -41,6 +41,8 @@ and buy an SSL/TLS certificate for the FQDN. Things Each Node Operator Must Do --------------------------------- +- [ ] Use a standard and unique naming convention for all instances. + #. Name of the MongoDB instance (``mdb-instance-*``) #. Name of the BigchainDB instance (``bdb-instance-*``) #. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``) @@ -48,21 +50,64 @@ Things Each Node Operator Must Do #. Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``) #. Name of the Tendermint instance (``tendermint-instance-*``) +Example +^^^^^^^ -☐ Generate two keys and corresponding certificate signing requests (CSRs): +.. 
code:: text + { + "MongoDB": [ + "mdb-instance-1", + "mdb-instance-2", + "mdb-instance-3", + "mdb-instance-4" + ], + "BigchainDB": [ + "bdb-instance-1", + "bdb-instance-2", + "bdb-instance-3", + "bdb-instance-4" + ], + "NGINX": [ + "ngx-instance-1", + "ngx-instance-2", + "ngx-instance-3", + "ngx-instance-4" + ], + "OpenResty": [ + "openresty-instance-1", + "openresty-instance-2", + "openresty-instance-3", + "openresty-instance-4" + ], + "MongoDB_Monitoring_Agent": [ + "mdb-mon-instance-1", + "mdb-mon-instance-2", + "mdb-mon-instance-3", + "mdb-mon-instance-4" + ], + "Tendermint": [ + "tendermint-instance-1", + "tendermint-instance-2", + "tendermint-instance-3", + "tendermint-instance-4" + ] + } + + +☐ Generate three keys and corresponding certificate signing requests (CSRs): + +#. Server Certificate for the MongoDB instance #. Client Certificate for BigchainDB Server to identify itself to MongoDB #. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB -Ask the managing organization to use its self-signed CA to sign those four CSRs. -They should send you: +Use the self-signed CA to sign those three CSRs: -* Two certificates (one for each CSR you sent them). -* One ``ca.crt`` file: their CA certificate. -* One ``crl.pem`` file: a certificate revocation list. +* Three certificates (one for each CSR). For help, see the pages: +* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>` * :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>` ☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``). 
From 93070bf9fe2ac109927ed2022f2601dee07f5f49 Mon Sep 17 00:00:00 2001 From: muawiakh Date: Fri, 12 Jan 2018 18:21:19 +0100 Subject: [PATCH 03/10] Update docs II --- .../workflow.rst | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/docs/server/source/production-deployment-template-tendermint/workflow.rst b/docs/server/source/production-deployment-template-tendermint/workflow.rst index e40d861b..5e55ab4c 100644 --- a/docs/server/source/production-deployment-template-tendermint/workflow.rst +++ b/docs/server/source/production-deployment-template-tendermint/workflow.rst @@ -41,14 +41,19 @@ and buy an SSL/TLS certificate for the FQDN. Things Each Node Operator Must Do --------------------------------- -- [ ] Use a standard and unique naming convention for all instances. +Use a standard and unique naming convention for all instances. -#. Name of the MongoDB instance (``mdb-instance-*``) -#. Name of the BigchainDB instance (``bdb-instance-*``) -#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``) -#. Name of the OpenResty instance (``openresty-instance-*``) -#. Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``) -#. 
Name of the Tendermint instance (``tendermint-instance-*``) +☐ Name of the MongoDB instance (``mdb-instance-*``) + +☐ Name of the BigchainDB instance (``bdb-instance-*``) + +☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``) + +☐ Name of the OpenResty instance (``openresty-instance-*``) + +☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``) + +☐ Name of the Tendermint instance (``tendermint-instance-*``) Example ^^^^^^^ From 03219a9371ca033436bb393abaca767d58c96eae Mon Sep 17 00:00:00 2001 From: muawiakh Date: Thu, 8 Feb 2018 11:54:57 +0100 Subject: [PATCH 04/10] Remove references from BigchainDB 1.x deployment strategy - Remove references from existing deployment model - Address comments, fix typos, minor structure changes. --- docs/server/source/index.rst | 1 - .../architecture.rst | 210 --- .../index.rst | 20 - .../node-config-map-and-secrets.rst | 356 ----- .../node-on-kubernetes.rst | 1178 ----------------- .../workflow.rst | 188 --- .../architecture.rst | 165 ++- .../bigchaindb-network-on-kubernetes.rst | 2 +- .../cloud-manager.rst | 44 +- .../production-deployment-template/index.rst | 11 +- .../node-config-map-and-secrets.rst | 171 ++- .../node-on-kubernetes.rst | 435 ++++-- .../tectonic-azure.rst | 2 - .../template-kubernetes-azure.rst | 3 +- .../workflow.rst | 207 +-- setup.py | 1 + 16 files changed, 714 insertions(+), 2280 deletions(-) delete mode 100644 docs/server/source/production-deployment-template-tendermint/architecture.rst delete mode 100644 docs/server/source/production-deployment-template-tendermint/index.rst delete mode 100644 docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst delete mode 100644 docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst delete mode 100644 docs/server/source/production-deployment-template-tendermint/workflow.rst rename docs/server/source/{production-deployment-template-tendermint => 
production-deployment-template}/bigchaindb-network-on-kubernetes.rst (99%) diff --git a/docs/server/source/index.rst b/docs/server/source/index.rst index 750316df..65bd8774 100644 --- a/docs/server/source/index.rst +++ b/docs/server/source/index.rst @@ -10,7 +10,6 @@ BigchainDB Server Documentation production-nodes/index clusters production-deployment-template/index - production-deployment-template-tendermint/index dev-and-test/index server-reference/index http-client-server-api diff --git a/docs/server/source/production-deployment-template-tendermint/architecture.rst b/docs/server/source/production-deployment-template-tendermint/architecture.rst deleted file mode 100644 index 7778e45f..00000000 --- a/docs/server/source/production-deployment-template-tendermint/architecture.rst +++ /dev/null @@ -1,210 +0,0 @@ -Architecture of a BigchainDB Node -================================== - -A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes: - -* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint - `Kubernetes Services `_. -* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent. - `Kubernetes Deployments `_. -* MongoDB and Tendermint `Kubernetes StatefulSet `_. -* Third party services like `3scale `_, - `MongoDB Cloud Manager `_ and the - `Azure Operations Management Suite - `_. - - -.. _bigchaindb-node: - -BigchainDB Node ---------------- - -.. 
aafig:: - :aspect: 60 - :scale: 100 - :background: #rgb - :proportional: - - + + - +--------------------------------------------------------------------------------------------------------------------------------------+ - | | | | - | | | | - | | | | - | | | | - | | | | - | | | | - | "BigchainDB API" | | "Tendermint P2P" | - | | | "Communication/" | - | | | "Public Key Exchange" | - | | | | - | | | | - | v v | - | | - | +------------------+ | - | |"NGINX Service" | | - | +-------+----------+ | - | | | - | v | - | | - | +------------------+ | - | | "NGINX" | | - | | "Deployment" | | - | | | | - | +-------+----------+ | - | | | - | | | - | | | - | v | - | | - | "443" +----------+ "46656/9986" | - | | "Rate" | | - | +---------------------------+"Limiting"+-----------------------+ | - | | | "Logic" | | | - | | +----+-----+ | | - | | | | | - | | | | | - | | | | | - | | | | | - | | | | | - | | "27017" | | | - | v | v | - | +-------------+ | +------------+ | - | |"HTTPS" | | +------------------> |"Tendermint"| | - | |"Termination"| | | "9986" |"Service" | "46656" | - | | | | | +-------+ | <----+ | - | +-----+-------+ | | | +------------+ | | - | | | | | | | - | | | | v v | - | | | | +------------+ +------------+ | - | | | | |"NGINX" | |"Tendermint"| | - | | | | |"Deployment"| |"Stateful" | | - | | | | |"Pub-Key-Ex"| |"Set" | | - | ^ | | +------------+ +------------+ | - | +-----+-------+ | | | - | "POST" |"Analyze" | "GET" | | | - | |"Request" | | | | - | +-----------+ +--------+ | | | - | | +-------------+ | | | | - | | | | | "Bi+directional, communication between" | - | | | | | "BigchainDB(APP) and Tendermint" | - | | | | | "BFT consensus Engine" | - | | | | | | - | v v | | | - | | | | - | +-------------+ +--------------+ +----+-------------------> +--------------+ | - | | "OpenResty" | | "BigchainDB" | | | "MongoDB" | | - | | "Service" | | "Service" | | | "Service" | | - | | | +----->| | | +-------> | | | - | +------+------+ | +------+-------+ | | +------+-------+ | - | 
| | | | | | | - | | | | | | | | - | v | v | | v | - | +-------------+ | +-------------+ | | +----------+ | - | | | | | | <------------+ | |"MongoDB" | | - | |"OpenResty" | | | "BigchainDB"| | |"Stateful"| | - | |"Deployment" | | | "Deployment"| | |"Set" | | - | | | | | | | +-----+----+ | - | | | | | +---------------------------+ | | - | | | | | | | | - | +-----+-------+ | +-------------+ | | - | | | | | - | | | | | - | v | | | - | +-----------+ | v | - | | "Auth" | | +------------+ | - | | "Logic" |----------+ |"MongoDB" | | - | | | |"Monitoring"| | - | | | |"Agent" | | - | +---+-------+ +-----+------+ | - | | | | - | | | | - | | | | - | | | | - | | | | - | | | | - +---------------+---------------------------------------------------------------------------------------+------------------------------+ - | | - | | - | | - v v - +------------------------------------+ +------------------------------------+ - | | | | - | | | | - | | | | - | "3Scale" | | "MongoDB Cloud" | - | | | | - | | | | - | | | | - +------------------------------------+ +------------------------------------+ - - - - -.. note:: - The arrows in the diagram represent the client-server communication. For - example, A-->B implies that A initiates the connection to B. - It does not represent the flow of data; the communication channel is always - fully duplex. - - -NGINX: Entrypoint and Gateway ------------------------------ - -We use an NGINX as HTTP proxy on port 443 (configurable) at the cloud -entrypoint for: - -#. Rate Limiting: We configure NGINX to allow only a certain number of requests - (configurable) which prevents DoS attacks. - -#. HTTPS Termination: The HTTPS connection does not carry through all the way - to BigchainDB and terminates at NGINX for now. - -#. Request Routing: For HTTPS connections on port 443 (or the configured BigchainDB public api port), - the connection is proxied to: - - #. OpenResty Service if it is a POST request. - #. BigchainDB Service if it is a GET request. 
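The method-based request routing described above can be sketched as a tiny shell function. This is not the actual NGINX configuration — the upstream service names and ports below are illustrative defaults:

```shell
# Sketch of the routing rule NGINX applies on the public HTTPS port:
# POST requests go through OpenResty (for 3scale auth checks),
# everything else goes straight to BigchainDB.
route_request() {
    case "$1" in
        POST) echo "openresty-service:80" ;;    # writes: auth check first
        *)    echo "bigchaindb-service:9984" ;; # reads: direct to BigchainDB
    esac
}

route_request POST   # -> openresty-service:80
route_request GET    # -> bigchaindb-service:9984
```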
- - -We use an NGINX TCP proxy on port 27017 (configurable) at the cloud -entrypoint for: - -#. Rate Limiting: We configure NGINX to allow only a certain number of requests - (configurable) which prevents DoS attacks. - -#. Request Routing: For connections on port 27017 (or the configured MongoDB - public api port), the connection is proxied to the MongoDB Service. - - -OpenResty: API Management, Authentication and Authorization ------------------------------------------------------------ - -We use `OpenResty `_ to perform authorization checks -with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request. - -OpenResty is NGINX plus a bunch of other -`components `_. We primarily depend -on the LuaJIT compiler to execute the functions to authenticate the ``app_id`` -and ``app_key`` with the 3scale backend. - - -MongoDB: Standalone -------------------- - -We use MongoDB as the backend database for BigchainDB. - -We achieve security by avoiding DoS attacks at the NGINX proxy layer and by -ensuring that MongoDB has TLS enabled for all its connections. - - -Tendermint: BFT consensus engine --------------------------------- - -We use Tendermint as the backend consensus engine for BFT replication of BigchainDB. -In a multi-node deployment, Tendermint nodes/peers communicate with each other via -the public ports exposed by the NGINX gateway. - -We use port **9986** (configurable) to allow tendermint nodes to access the public keys -of the peers and port **46656** (configurable) for the rest of the communications between -the peers. 
- diff --git a/docs/server/source/production-deployment-template-tendermint/index.rst b/docs/server/source/production-deployment-template-tendermint/index.rst deleted file mode 100644 index 8692d180..00000000 --- a/docs/server/source/production-deployment-template-tendermint/index.rst +++ /dev/null @@ -1,20 +0,0 @@ -Production Deployment Template: Tendermint BFT -============================================== - -This section outlines how *we* deploy production BigchainDB, -integrated with Tendermint(backend for BFT consensus), -clusters on Microsoft Azure using -Kubernetes. We improve it constantly. -You may choose to use it as a template or reference for your own deployment, -but *we make no claim that it is suitable for your purposes*. -Feel free change things to suit your needs or preferences. - - -.. toctree:: - :maxdepth: 1 - - workflow - architecture - node-on-kubernetes - node-config-map-and-secrets - bigchaindb-network-on-kubernetes \ No newline at end of file diff --git a/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst deleted file mode 100644 index 2e488a38..00000000 --- a/docs/server/source/production-deployment-template-tendermint/node-config-map-and-secrets.rst +++ /dev/null @@ -1,356 +0,0 @@ -.. _how-to-configure-a-bigchaindb-tendermint-node: - -How to Configure a BigchainDB + Tendermint Node -=============================================== - -This page outlines the steps to set a bunch of configuration settings -in your BigchainDB node. -They are pushed to the Kubernetes cluster in two files, -named ``config-map.yaml`` (a set of ConfigMaps) -and ``secret.yaml`` (a set of Secrets). -They are stored in the Kubernetes cluster's key-value store (etcd). 
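As a quick aside on preparing those two files: data values in a Kubernetes Secret must be base64-encoded, while ConfigMap values must not be. A minimal sketch of producing such a value on a typical Linux shell (the value ``mypassword`` is of course only an example):

```shell
# Encode a value for secret.yaml. printf (rather than echo) avoids
# accidentally encoding a trailing newline along with the value.
printf '%s' "mypassword" | base64
# bXlwYXNzd29yZA==

# Decode to double-check what you are about to paste into secret.yaml:
printf '%s' "bXlwYXNzd29yZA==" | base64 -d
# mypassword
```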
- -Make sure you did all the things listed in the section titled -:ref:`things-each-node-operator-must-do-tmt` -(including generation of all the SSL certificates needed -for MongoDB auth). - - -Edit config-map.yaml --------------------- - -Make a copy of the file ``k8s/configuration/config-map.yaml`` -and edit the data values in the various ConfigMaps. -That file already contains many comments to help you -understand each data value, but we make some additional -remarks on some of the values below. - -Note: None of the data values in ``config-map.yaml`` need -to be base64-encoded. (This is unlike ``secret.yaml``, -where all data values must be base64-encoded. -This is true of all Kubernetes ConfigMaps and Secrets.) - - -vars.cluster-fqdn -~~~~~~~~~~~~~~~~~ - -The ``cluster-fqdn`` field specifies the domain you would have -:ref:`registered before `. - - -vars.cluster-frontend-port -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``cluster-frontend-port`` field specifies the port on which your cluster -will be available to all external clients. -It is set to the HTTPS port ``443`` by default. - - -vars.cluster-health-check-port -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``cluster-healthcheck-port`` is the port number on which health check -probes are sent to the main NGINX instance. -It is set to ``8888`` by default. - - -vars.cluster-dns-server-ip -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``cluster-dns-server-ip`` is the IP of the DNS server for a node. -We use DNS for service discovery. A Kubernetes deployment always has a DNS -server (``kube-dns``) running at 10.0.0.10, and since we use Kubernetes, this is -set to ``10.0.0.10`` by default, which is the default ``kube-dns`` IP address. - - -vars.mdb-instance-name and Similar -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Your BigchainDB cluster organization should have a standard way -of naming instances, so the instances in your BigchainDB node -should conform to that standard (i.e. you can't just make up some names). 
-There are some things worth noting about the ``mdb-instance-name``: - -* This field will be the DNS name of your MongoDB instance, and Kubernetes - maps this name to its internal DNS. -* We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our - documentation. Your BigchainDB cluster may use a different naming convention. - - -vars.ngx-mdb-instance-name and Similar -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -NGINX needs the FQDN of the servers inside the cluster to be able to forward -traffic. -The ``ngx-openresty-instance-name``, ``ngx-mdb-instance-name`` and -``ngx-bdb-instance-name`` are the FQDNs of the OpenResty instance, the MongoDB -instance, and the BigchainDB instance in this Kubernetes cluster respectively. -In Kubernetes, this is usually the name of the module specified in the -corresponding ``vars.*-instance-name`` followed by the -``.svc.cluster.local``. For example, if you run OpenResty in -the default Kubernetes namespace, this will be -``.default.svc.cluster.local`` - - -vars.mongodb-frontend-port and vars.mongodb-backend-port -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``mongodb-frontend-port`` is the port number on which external clients can -access MongoDB. This needs to be restricted to only other MongoDB instances -by enabling an authentication mechanism on MongoDB cluster. -It is set to ``27017`` by default. - -The ``mongodb-backend-port`` is the port number on which MongoDB is actually -available/listening for requests in your cluster. -It is also set to ``27017`` by default. - - -vars.openresty-backend-port -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``openresty-backend-port`` is the port number on which OpenResty is -listening for requests. -This is used by the NGINX instance to forward requests -destined for the OpenResty instance to the right port. -This is also used by OpenResty instance to bind to the correct port to -receive requests from NGINX instance. -It is set to ``80`` by default. 
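The FQDN convention described above — the instance name followed by the namespace and ``.svc.cluster.local`` — can be composed in the shell like so (the instance name and namespace here are illustrative):

```shell
# Compose the in-cluster FQDN for a service from its instance name and
# its Kubernetes namespace (values are examples only).
OPENRESTY_INSTANCE_NAME="openresty-instance-0"
NAMESPACE="default"

NGX_OPENRESTY_INSTANCE_NAME="${OPENRESTY_INSTANCE_NAME}.${NAMESPACE}.svc.cluster.local"
echo "${NGX_OPENRESTY_INSTANCE_NAME}"
# openresty-instance-0.default.svc.cluster.local
```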
- - -vars.bigchaindb-wsserver-advertised-scheme -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``bigchaindb-wsserver-advertised-scheme`` is the protocol used to access -the WebSocket API in BigchainDB. This can be set to ``wss`` or ``ws``. -It is set to ``wss`` by default. - - -vars.bigchaindb-api-port, vars.bigchaindb-ws-port and Similar -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The ``bigchaindb-api-port`` is the port number on which BigchainDB is -listening for HTTP requests. Currently set to ``9984`` by default. - -The ``bigchaindb-ws-port`` is the port number on which BigchainDB is -listening for Websocket requests. Currently set to ``9985`` by default. - -There's another :doc:`page with a complete listing of all the BigchainDB Server -configuration settings <../server-reference/configuration>`. - - -bdb-config.bdb-user -~~~~~~~~~~~~~~~~~~~ - -This is the user name that BigchainDB uses to authenticate itself to the -backend MongoDB database. - -We need to specify the user name *as seen in the certificate* issued to -the BigchainDB instance in order to authenticate correctly. Use -the following ``openssl`` command to extract the user name from the -certificate: - -.. code:: bash - - $ openssl x509 -in \ - -inform PEM -subject -nameopt RFC2253 - -You should see an output line that resembles: - -.. code:: bash - - subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE - -The ``subject`` line states the complete user name we need to use for this -field (``bdb-config.bdb-user``), i.e. - -.. code:: bash - - emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE - - -tendermint-config.tm-instance-name -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Your BigchainDB cluster organization should have a standard way -of naming instances, so the instances in your BigchainDB node -should conform to that standard. 
There are some things worth noting -about the ``tm-instance-name``: - -* This field will be the DNS name of your Tendermint instance, and Kubernetes - maps this name to its internal DNS, so all the peer to peer communication - depends on this, in case of a network/multi-node deployment. -* This parameter is also used to access the public key of a particular node. -* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our - documentation. Your BigchainDB cluster may use a different naming convention. - - -tendermint-config.ngx-tm-instance-name -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -NGINX needs the FQDN of the servers inside the cluster to be able to forward -traffic. -``ngx-tm-instance-name`` is the FQDN of the Tendermint -instance in this Kubernetes cluster. -In Kubernetes, this is usually the name of the module specified in the -corresponding ``tendermint-config.*-instance-name`` followed by the -``.svc.cluster.local``. For example, if you run Tendermint in -the default Kubernetes namespace, this will be -``.default.svc.cluster.local`` - - -tendermint-config.tm-seeds -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-seeds`` is the initial set of peers to connect to. It is a comma separated -list of all the peers part of the cluster. - -If you are deploying a stand-alone BigchainDB node the value should the same as -````. If you are deploying a network this parameter will look -like this: - -.. code:: - - ,,, - - -tendermint-config.tm-validators -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-validators`` is the initial set of validators in the network. It is a comma separated list -of all the participant validator nodes. - -If you are deploying a stand-alone BigchainDB node the value should be the same as -````. If you are deploying a network this parameter will look like -this: - -.. code:: - - ,,, - - -tendermint-config.tm-validator-power -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-validator-power`` represents the voting power of each validator. 
It is a comma separated -list of all the participants in the network. - -**Note**: The order of the validator power list should be the same as the ``tm-validators`` list. - -.. code:: - - tm-validators: ,,, - -For the above list of validators the ``tm-validator-power`` list should look like this: - -.. code:: - - tm-validator-power: ,,, - - -tendermint-config.tm-genesis-time -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-genesis-time`` represents the official time of blockchain start. Details regarding, how to generate -this parameter are covered :ref:`here `. - - -tendermint-config.tm-chain-id -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-chain-id`` represents the ID of the blockchain. This must be unique for every blockchain. -Details regarding, how to generate this parameter are covered -:ref:`here `. - - -tendermint-config.tm-abci-port -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-abci-port`` has a default value ``46658`` which is used by Tendermint Core for -ABCI(Application BlockChain Interface) traffic. BigchainDB nodes use this port -internally to communicate with Tendermint Core. - - -tendermint-config.tm-p2p-port -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-p2p-port`` has a default value ``46656`` which is used by Tendermint Core for -peer to peer communication. - -For a multi-node/zone deployment, this port needs to be available publicly for P2P -communication between Tendermint nodes. - - -tendermint-config.tm-rpc-port -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-rpc-port`` has a default value ``46657`` which is used by Tendermint Core for RPC -traffic. BigchainDB nodes use this port with RPC listen address. - - -tendermint-config.tm-pub-key-access -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -``tm-pub-key-access`` has a default value ``9986``, which is used to discover the public -key of a tendermint node. Each Tendermint StatefulSet(Pod, Tendermint + NGINX) hosts its -public key. - -.. 
code:: - - http://tendermint-instance-1:9986/pub_key.json - - -Edit secret.yaml ----------------- - -Make a copy of the file ``k8s/configuration/secret.yaml`` -and edit the data values in the various Secrets. -That file includes many comments to explain the required values. -**In particular, note that all values must be base64-encoded.** -There are tips at the top of the file -explaining how to convert values into base64-encoded values. - -Your BigchainDB node might not need all the Secrets. -For example, if you plan to access the BigchainDB API over HTTP, you -don't need the ``https-certs`` Secret. -You can delete the Secrets you don't need, -or set their data values to ``""``. - -Note that ``ca.pem`` is just another name for ``ca.crt`` -(the certificate of your BigchainDB cluster's self-signed CA). - - -threescale-credentials.* -~~~~~~~~~~~~~~~~~~~~~~~~ - -If you're not using 3scale, -you can delete the ``threescale-credentials`` Secret -or leave all the values blank (``""``). - -If you *are* using 3scale, get the values for ``secret-token``, -``service-id``, ``version-header`` and ``service-token`` by logging in to 3scale -portal using your admin account, click **APIs** and click on **Integration** -for the relevant API. -Scroll to the bottom of the page and click the small link -in the lower right corner, labelled **Download the NGINX Config files**. -Unzip it(if it is a ``zip`` file). Open the ``.conf`` and the ``.lua`` file. -You should be able to find all the values in those files. -You have to be careful because it will have values for **all** your APIs, -and some values vary from API to API. -The ``version-header`` is the timestamp in a line that looks like: - -.. code:: - - proxy_set_header X-3scale-Version "2017-06-28T14:57:34Z"; - - -Deploy Your config-map.yaml and secret.yaml -------------------------------------------- - -You can deploy your edited ``config-map.yaml`` and ``secret.yaml`` -files to your Kubernetes cluster using the commands: - -.. 
code:: bash - - $ kubectl apply -f config-map.yaml - - $ kubectl apply -f secret.yaml diff --git a/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst b/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst deleted file mode 100644 index 45695b9c..00000000 --- a/docs/server/source/production-deployment-template-tendermint/node-on-kubernetes.rst +++ /dev/null @@ -1,1178 +0,0 @@ -.. _kubernetes-template-deploy-a-single-bigchaindb-node-with-tendermint: - -Kubernetes Template: Deploy a Single BigchainDB Node with Tendermint -==================================================================== - -This page describes how to deploy a stand-alone BigchainDB + Tendermint node, -or a static network of BigchainDB + Tendermint nodes. -using `Kubernetes `_. -It assumes you already have a running Kubernetes cluster. - -Below, we refer to many files by their directory and filename, -such as ``configuration/config-map-tm.yaml``. Those files are files in the -`bigchaindb/bigchaindb repository on GitHub `_ -in the ``k8s/`` directory. -Make sure you're getting those files from the appropriate Git branch on -GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB -cluster is using. - - -Step 1: Install and Configure kubectl -------------------------------------- - -kubectl is the Kubernetes CLI. -If you don't already have it installed, -then see the `Kubernetes docs to install it -`_. - -The default location of the kubectl configuration file is ``~/.kube/config``. -If you don't have that file, then you need to get it. - -**Azure.** If you deployed your Kubernetes cluster on Azure -using the Azure CLI 2.0 (as per :doc:`our template -<../production-deployment-template/template-kubernetes-azure>`), -then you can get the ``~/.kube/config`` file using: - -.. 
code:: bash - - $ az acs kubernetes get-credentials \ - --resource-group \ - --name - -If it asks for a password (to unlock the SSH key) -and you enter the correct password, -but you get an error message, -then try adding ``--ssh-key-file ~/.ssh/`` -to the above command (i.e. the path to the private key). - -.. note:: - - **About kubectl contexts.** You might manage several - Kubernetes clusters. To make it easy to switch from one to another, - kubectl has a notion of "contexts," e.g. the context for cluster 1 or - the context for cluster 2. To find out the current context, do: - - .. code:: bash - - $ kubectl config view - - and then look for the ``current-context`` in the output. - The output also lists all clusters, contexts and users. - (You might have only one of each.) - You can switch to a different context using: - - .. code:: bash - - $ kubectl config use-context - - You can also switch to a different context for just one command - by inserting ``--context `` into any kubectl command. - For example: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 get pods - - will get a list of the pods in the Kubernetes cluster associated - with the context named ``k8s-bdb-test-cluster-0``. - -Step 2: Connect to Your Cluster's Web UI (Optional) ---------------------------------------------------- - -You can connect to your cluster's -`Kubernetes Dashboard `_ -(also called the Web UI) using: - -.. code:: bash - - $ kubectl proxy -p 8001 - - or - - $ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey - -or, if you prefer to be explicit about the context (explained above): - -.. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001 - -The output should be something like ``Starting to serve on 127.0.0.1:8001``. -That means you can visit the dashboard in your web browser at -`http://127.0.0.1:8001/ui `_. 
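The kubectl contexts mentioned in Step 1 live in a plain YAML kubeconfig file (``~/.kube/config`` by default). A sketch of how the current context is recorded — the file contents below are hypothetical and heavily trimmed; a real kubeconfig also carries cluster endpoints, users and credentials:

```shell
# Write a minimal, illustrative kubeconfig to a scratch location.
cat > /tmp/kubeconfig-example <<'EOF'
apiVersion: v1
kind: Config
current-context: k8s-bdb-test-cluster-0
contexts:
- name: k8s-bdb-test-cluster-0
- name: k8s-bdb-test-cluster-1
EOF

# The value kubectl would report as the current context:
awk '/^current-context:/ {print $2}' /tmp/kubeconfig-example
# k8s-bdb-test-cluster-0
```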
- - -Step 3: Configure Your BigchainDB Node --------------------------------------- - -See the page titled :ref:`how-to-configure-a-bigchaindb-tendermint-node`. - - -.. _start-the-nginx-service-tmt: - -Step 4: Start the NGINX Service -------------------------------- - - * This will will give us a public IP for the cluster. - - * Once you complete this step, you might need to wait up to 10 mins for the - public IP to be assigned. - - * You have the option to use vanilla NGINX without HTTPS support or an - NGINX with HTTPS support. - - -Step 4.1: Vanilla NGINX -^^^^^^^^^^^^^^^^^^^^^^^ - - * This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``. - - * Set the ``metadata.name`` and ``metadata.labels.name`` to the value - set in ``ngx-instance-name`` in the ConfigMap above. - - * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in - the ConfigMap followed by ``-dep``. For example, if the value set in the - ``ngx-instance-name`` is ``ngx-http-instance-0``, set the - ``spec.selector.app`` to ``ngx-http-instance-0-dep``. - - * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the - ``cluster-frontend-port`` in the ConfigMap above. This is the - ``public-cluster-port`` in the file which is the ingress in to the cluster. - - * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the - ``tm-pub-access-port`` in the ConfigMap above. This is the - ``tm-pub-key-access`` in the file which specifies where Public Key for - the Tendermint instance is available. - - * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the - ``tm-p2p-port`` in the ConfigMap above. This is the - ``tm-p2p-port`` in the file which is used for P2P communication for Tendermint - nodes. - - * Start the Kubernetes Service: - - .. 
code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml - - -Step 4.2: NGINX with HTTPS -^^^^^^^^^^^^^^^^^^^^^^^^^^ - - * You have to enable HTTPS for this one and will need an HTTPS certificate - for your domain. - - * You should have already created the necessary Kubernetes Secrets in the previous - step (i.e. ``https-certs``). - - * This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``. - - * Set the ``metadata.name`` and ``metadata.labels.name`` to the value - set in ``ngx-instance-name`` in the ConfigMap above. - - * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name`` in - the ConfigMap followed by ``-dep``. For example, if the value set in the - ``ngx-instance-name`` is ``ngx-https-instance-0``, set the - ``spec.selector.app`` to ``ngx-https-instance-0-dep``. - - * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the - ``cluster-frontend-port`` in the ConfigMap above. This is the - ``public-secure-cluster-port`` in the file which is the ingress in to the cluster. - - * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the - ``mongodb-frontend-port`` in the ConfigMap above. This is the - ``public-mdb-port`` in the file which specifies where MongoDB is - available. - - * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the - ``tm-pub-access-port`` in the ConfigMap above. This is the - ``tm-pub-key-access`` in the file which specifies where Public Key for - the Tendermint instance is available. - - * Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the - ``tm-p2p-port`` in the ConfigMap above. This is the - ``tm-p2p-port`` in the file which is used for P2P communication between Tendermint - nodes. - - - * Start the Kubernetes Service: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml - - -.. 
_assign-dns-name-to-nginx-public-ip-tmt: - -Step 5: Assign DNS Name to the NGINX Public IP ----------------------------------------------- - - * This step is required only if you are planning to set up multiple - `BigchainDB nodes - `_ or are using - HTTPS certificates tied to a domain. - - * The following command can help you find out if the NGINX service started - above has been assigned a public IP or external IP address: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 get svc -w - - * Once a public IP is assigned, you can map it to - a DNS name. - We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and - so on in our documentation. - Let's assume that we assign the unique name of ``bdb-test-cluster-0`` here. - - -**Set up DNS mapping in Azure.** -Select the current Azure resource group and look for the ``Public IP`` -resource. You should see at least 2 entries there - one for the Kubernetes -master and the other for the NGINX instance. You may have to ``Refresh`` the -Azure web page listing the resources in a resource group for the latest -changes to be reflected. -Select the ``Public IP`` resource that is attached to your service (it should -have the Azure DNS prefix name along with a long random string, without the -``master-ip`` string), select ``Configuration``, add the DNS assigned above -(for example, ``bdb-test-cluster-0``), click ``Save``, and wait for the -changes to be applied. - -To verify the DNS setting is operational, you can run ``nslookup `` from your local Linux shell. - -This will ensure that when you scale to different geographical zones, other Tendermint -nodes in the network can reach this instance. - - -.. _start-the-mongodb-kubernetes-service-tmt: - -Step 6: Start the MongoDB Kubernetes Service --------------------------------------------- - - * This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``. 
- - * Set the ``metadata.name`` and ``metadata.labels.name`` to the value - set in ``mdb-instance-name`` in the ConfigMap above. - - * Set the ``spec.selector.app`` to the value set in ``mdb-instance-name`` in - the ConfigMap followed by ``-ss``. For example, if the value set in the - ``mdb-instance-name`` is ``mdb-instance-0``, set the - ``spec.selector.app`` to ``mdb-instance-0-ss``. - - * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the - ``mongodb-backend-port`` in the ConfigMap above. - This is the ``mdb-port`` in the file which specifies where MongoDB listens - for API requests. - - * Start the Kubernetes Service: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml - - -.. _start-the-bigchaindb-kubernetes-service-tmt: - -Step 7: Start the BigchainDB Kubernetes Service ------------------------------------------------ - - * This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``. - - * Set the ``metadata.name`` and ``metadata.labels.name`` to the value - set in ``bdb-instance-name`` in the ConfigMap above. - - * Set the ``spec.selector.app`` to the value set in ``bdb-instance-name`` in - the ConfigMap followed by ``-dep``. For example, if the value set in the - ``bdb-instance-name`` is ``bdb-instance-0``, set the - ``spec.selector.app`` to ``bdb-instance-0-dep``. - - * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the - ``bigchaindb-api-port`` in the ConfigMap above. - This is the ``bdb-api-port`` in the file which specifies where BigchainDB - listens for HTTP API requests. - - * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the - ``bigchaindb-ws-port`` in the ConfigMap above. - This is the ``bdb-ws-port`` in the file which specifies where BigchainDB - listens for Websocket connections. - - * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the - ``tm-abci-port`` in the ConfigMap above. 
- This is the ``tm-abci-port`` in the file which specifies the port used - for ABCI communication. - - * Start the Kubernetes Service: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml - - -.. _start-the-openresty-kubernetes-service-tmt: - -Step 8: Start the OpenResty Kubernetes Service ----------------------------------------------- - - * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``. - - * Set the ``metadata.name`` and ``metadata.labels.name`` to the value - set in ``openresty-instance-name`` in the ConfigMap above. - - * Set the ``spec.selector.app`` to the value set in ``openresty-instance-name`` in - the ConfigMap followed by ``-dep``. For example, if the value set in the - ``openresty-instance-name`` is ``openresty-instance-0``, set the - ``spec.selector.app`` to ``openresty-instance-0-dep``. - - * Start the Kubernetes Service: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml - - -.. _start-the-tendermint-kubernetes-service-tmt: - -Step 9: Start the Tendermint Kubernetes Service ------------------------------------------------ - - * This configuration is located in the file ``tendermint/tendermint-svc.yaml``. - - * Set the ``metadata.name`` and ``metadata.labels.name`` to the value - set in ``tm-instance-name`` in the ConfigMap above. - - * Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in - the ConfigMap followed by ``-ss``. For example, if the value set in the - ``tm-instance-name`` is ``tm-instance-0``, set the - ``spec.selector.app`` to ``tm-instance-0-ss``. - - * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the - ``tm-p2p-port`` in the ConfigMap above. - This is the ``p2p`` in the file which specifies where Tendermint peers - communicate. 
-
- * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the
- ``tm-rpc-port`` in the ConfigMap above.
- This is the ``rpc`` in the file which specifies the port used by Tendermint core
- for RPC traffic.
-
- * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
- ``tm-pub-key-access`` in the ConfigMap above.
- This is the ``pub-key-access`` in the file which specifies the port to host/distribute
- the public key for the Tendermint node.
-
- * Start the Kubernetes Service:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml
-
-
-.. _start-the-nginx-deployment-tmt:
-
-Step 10: Start the NGINX Kubernetes Deployment
----------------------------------------------
-
- * NGINX is used as a proxy to the OpenResty, BigchainDB, Tendermint and MongoDB instances in
- the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port``
- to the corresponding OpenResty or BigchainDB backend, and TCP connections
- on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
- to MongoDB and Tendermint, respectively.
-
- * As in step 4, you have the option to use vanilla NGINX without HTTPS or
- NGINX with HTTPS support.
-
-Step 10.1: Vanilla NGINX
-^^^^^^^^^^^^^^^^^^^^^^^^
-
- * This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``.
-
- * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
- to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
- ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
- ``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 5 ports -
- ``mongodb-frontend-port``, ``cluster-frontend-port``,
- ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
- Set them to the values specified in the
- ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``cluster-frontend-port``
- - ``cluster-health-check-port``
- - ``cluster-dns-server-ip``
- - ``mongodb-frontend-port``
- - ``ngx-mdb-instance-name``
- - ``mongodb-backend-port``
- - ``ngx-bdb-instance-name``
- - ``bigchaindb-api-port``
- - ``bigchaindb-ws-port``
- - ``ngx-tm-instance-name``
- - ``tm-pub-key-access``
- - ``tm-p2p-port``
-
- * Start the Kubernetes Deployment:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml
-
-
-Step 10.2: NGINX with HTTPS
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
- * This configuration is located in the file
- ``nginx-https/nginx-https-dep-tm.yaml``.
-
- * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
- to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
- ``-dep``. For example, if the value set in the ``ngx-instance-name`` is
- ``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 5 ports -
- ``mongodb-frontend-port``, ``cluster-frontend-port``,
- ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
- Set them to the values specified in the
- ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``cluster-frontend-port``
- - ``cluster-health-check-port``
- - ``cluster-fqdn``
- - ``cluster-dns-server-ip``
- - ``mongodb-frontend-port``
- - ``ngx-mdb-instance-name``
- - ``mongodb-backend-port``
- - ``openresty-backend-port``
- - ``ngx-openresty-instance-name``
- - ``ngx-bdb-instance-name``
- - ``bigchaindb-api-port``
- - ``bigchaindb-ws-port``
- - ``ngx-tm-instance-name``
- - ``tm-pub-key-access``
- - ``tm-p2p-port``
-
- * The configuration uses the following values set in the Secret:
-
- - ``https-certs``
-
- * Start the Kubernetes Deployment:
-
- .. 
code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml - - -.. _create-kubernetes-storage-class-mdb-tmt: - -Step 11: Create Kubernetes Storage Classes for MongoDB ------------------------------------------------------- - -MongoDB needs somewhere to store its data persistently, -outside the container where MongoDB is running. -Our MongoDB Docker container -(based on the official MongoDB Docker container) -exports two volume mounts with correct -permissions from inside the container: - -* The directory where the mongod instance stores its data: ``/data/db``. - There's more explanation in the MongoDB docs about `storage.dbpath `_. - -* The directory where the mongodb instance stores the metadata for a sharded - cluster: ``/data/configdb/``. - There's more explanation in the MongoDB docs about `sharding.configDB `_. - -Explaining how Kubernetes handles persistent volumes, -and the associated terminology, -is beyond the scope of this documentation; -see `the Kubernetes docs about persistent volumes -`_. - -The first thing to do is create the Kubernetes storage classes. - -**Set up Storage Classes in Azure.** -First, you need an Azure storage account. -If you deployed your Kubernetes cluster on Azure -using the Azure CLI 2.0 -(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`), -then the `az acs create` command already created a -storage account in the same location and resource group -as your Kubernetes cluster. -Both should have the same "storage account SKU": ``Standard_LRS``. -Standard storage is lower-cost and lower-performance. -It uses hard disk drives (HDD). -LRS means locally-redundant storage: three replicas -in the same data center. -Premium storage is higher-cost and higher-performance. -It uses solid state drives (SSD). -You can create a `storage account `_ -for Premium storage and associate it with your Azure resource group. 
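For orientation, the creation of such a storage account can be sketched as below. This sketch only composes and prints the ``az`` CLI invocation rather than executing it, so you can review it first; the account name, resource group and location are hypothetical placeholders that you must replace with your own values.

```shell
# Sketch only -- the account name, resource group and location below are
# hypothetical placeholders; substitute your own values before running.
ACCOUNT_NAME="bdbpremiumstorage0"   # must be globally unique, lowercase alphanumerics
RESOURCE_GROUP="bdb-test-rg"
LOCATION="westeurope"

# Compose the command instead of executing it, so it can be reviewed first.
cmd="az storage account create --name ${ACCOUNT_NAME} \
--resource-group ${RESOURCE_GROUP} --location ${LOCATION} --sku Premium_LRS"
echo "${cmd}"
```

Once you are satisfied with the composed command, run it yourself (it requires an authenticated Azure CLI session).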
-For future reference, the command to create a storage account is
-`az storage account create `_.
-
-.. Note::
- Please refer to `Azure documentation `_
- for the list of VMs that are supported by Premium Storage.
-
-The Kubernetes template for configuration of the Storage Class is located in the
-file ``mongodb/mongo-sc.yaml``.
-
-You may have to update the ``parameters.location`` field in the file to
-specify the location you are using in Azure.
-
-If you want to use a custom storage account with the Storage Class, you
-can also update ``parameters.storageAccount`` and provide the Azure storage
-account name.
-
-Create the required storage classes using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml
-
-
-You can check if it worked using ``kubectl get storageclasses``.
-
-
-.. _create-kubernetes-persistent-volume-claim-mdb-tmt:
-
-Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
---------------------------------------------------------------
-
-Next, you will create two PersistentVolumeClaim objects, ``mongo-db-claim`` and
-``mongo-configdb-claim``.
-
-This configuration is located in the file ``mongodb/mongo-pvc.yaml``.
-
-Note how there's no explicit mention of Azure, AWS or whatever.
-``ReadWriteOnce`` (RWO) means the volume can be mounted as
-read-write by a single Kubernetes node.
-(``ReadWriteOnce`` is the *only* access mode supported
-by AzureDisk.)
-``storage: 20Gi`` means the volume has a size of 20
-`gibibytes `_.
-
-You may want to update the ``spec.resources.requests.storage`` field in both
-claims to specify a different disk size.
-
-Create the required Persistent Volume Claims using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml
-
-
-You can check its status using: ``kubectl get pvc -w``
-
-Initially, the status of persistent volume claims might be "Pending"
-but it should become "Bound" fairly quickly.
-
-.. 
Note::
- The default Reclaim Policy for dynamically created persistent volumes is ``Delete``,
- which means the PV and its associated Azure storage resource will be automatically
- deleted on deletion of the PVC or PV. To prevent this from happening, take
- the following steps to change the default reclaim policy of dynamically created PVs
- from ``Delete`` to ``Retain``:
-
- * Run the following command to list existing PVs:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get pv
-
- * Run the following command to update a PV's reclaim policy to ``Retain``:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
-
- For notes on recreating a persistent volume from a released Azure disk resource, consult
- :doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
-
-.. _start-kubernetes-stateful-set-mongodb-tmt:
-
-Step 13: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
-
- * This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``.
-
- * Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in
- the ConfigMap.
- For example, if the value set in the ``mdb-instance-name``
- is ``mdb-instance-0``, set the field to ``mdb-instance-0``.
-
- * Set ``metadata.name``, ``spec.template.metadata.name`` and
- ``spec.template.metadata.labels.app`` to the value set in
- ``mdb-instance-name`` in the ConfigMap, followed by
- ``-ss``.
- For example, if the value set in the
- ``mdb-instance-name`` is ``mdb-instance-0``, set the fields to the value
- ``mdb-instance-0-ss``.
-
- * Note how the MongoDB container uses the ``mongo-db-claim`` and the
- ``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
- ``/data/configdb`` directories (mount paths).
-
- * Note also that we use the pod's ``securityContext.capabilities.add``
- specification to add the ``FOWNER`` capability to the container. That is
- because the MongoDB container has the user ``mongodb``, with uid ``999`` and
- group ``mongodb``, with gid ``999``.
- When this container runs on a host with a mounted disk, the writes fail
- when there is no user with uid ``999``. To avoid this, we use the Docker
- feature of ``--cap-add=FOWNER``. This bypasses the uid and gid permission
- checks during writes and allows data to be persisted to disk.
- Refer to the `Docker docs
- `_
- for details.
-
- * As we gain more experience running MongoDB in testing and production, we
- will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently only expose the MongoDB
- backend port. Set it to the value specified for ``mongodb-backend-port``
- in the ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``mdb-instance-name``
- - ``mongodb-backend-port``
-
- * The configuration uses the following values set in the Secret:
-
- - ``mdb-certs``
- - ``ca-auth``
-
- * **Optional**: You can change the value for ``STORAGE_ENGINE_CACHE_SIZE`` in the ConfigMap ``storage-engine-cache-size``. For more information
- regarding this configuration, please consult the `MongoDB Official
- Documentation `_.
-
- * **Optional**: If you are not using the **Standard_D2_v2** virtual machines for Kubernetes agents as per the guide,
- please update the ``resources`` for ``mongo-ss``. We suggest allocating ``memory`` using the following scheme
- for a MongoDB StatefulSet:
-
- .. code:: bash
-
- memory = (Total_Memory_Agent_VM_GB - 2GB)
- STORAGE_ENGINE_CACHE_SIZE = memory / 2
-
- * Create the MongoDB StatefulSet using:
-
- .. 
code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml
-
- * It might take up to 10 minutes for the disks, specified in the Persistent
- Volume Claims above, to be created and attached to the pod.
- The UI might show that the pod has errored with the message
- "timeout expired waiting for volumes to attach/mount". Use the CLI below
- to check the status of the pod in this case, instead of the UI.
- This happens due to a bug in Azure ACS.
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 get pods -w
-
-
-.. _configure-users-and-access-control-mongodb-tmt:
-
-Step 14: Configure Users and Access Control for MongoDB
-------------------------------------------------------
-
- * In this step, you will create a user on MongoDB with authorization
- to create more users and assign
- roles to them.
- Note: You need to do this only when setting up the first MongoDB node of
- the cluster.
-
- * Find out the name of your MongoDB pod by reading the output
- of the ``kubectl ... get pods`` command at the end of the last step.
- It should be something like ``mdb-instance-0-ss-0``.
-
- * Log in to the MongoDB pod using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 exec -it <name-of-mongodb-pod> bash
-
- * Open a mongo shell using the certificates
- already present at ``/etc/mongod/ssl/``:
-
- .. code:: bash
-
- $ mongo --host localhost --port 27017 --verbose --ssl \
- --sslCAFile /etc/mongod/ca/ca.pem \
- --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
-
- * Create a user ``adminUser`` on the ``admin`` database with the
- authorization to create other users. This will only work the first time you
- log in to the mongo shell. For further details, see the `localhost
- exception `_
- in MongoDB.
-
- .. 
code:: bash
-
- PRIMARY> use admin
- PRIMARY> db.createUser( {
- user: "adminUser",
- pwd: "superstrongpassword",
- roles: [ { role: "userAdminAnyDatabase", db: "admin" },
- { role: "clusterManager", db: "admin"} ]
- } )
-
- * Exit and restart the mongo shell using the above command.
- Authenticate as the ``adminUser`` we created earlier:
-
- .. code:: bash
-
- PRIMARY> use admin
- PRIMARY> db.auth("adminUser", "superstrongpassword")
-
- ``db.auth()`` returns 0 when authentication is not successful,
- and 1 when successful.
-
- * We need to specify the user name *as seen in the certificate* issued to
- the BigchainDB instance in order to authenticate correctly. Use
- the following ``openssl`` command to extract the user name from the
- certificate:
-
- .. code:: bash
-
- $ openssl x509 -in <path-to-bdb-instance-certificate> \
- -inform PEM -subject -nameopt RFC2253
-
- You should see an output line that resembles:
-
- .. code:: bash
-
- subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
-
- The ``subject`` line states the complete user name we need to use for
- creating the user on the mongo shell as follows:
-
- .. code:: bash
-
- PRIMARY> db.getSiblingDB("$external").runCommand( {
- createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
- writeConcern: { w: 'majority' , wtimeout: 5000 },
- roles: [
- { role: 'clusterAdmin', db: 'admin' },
- { role: 'readWriteAnyDatabase', db: 'admin' }
- ]
- } )
-
- * You can similarly create a user for the MongoDB Monitoring Agent. For example:
-
- .. code:: bash
-
- PRIMARY> db.getSiblingDB("$external").runCommand( {
- createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-mon-ssl,OU=MongoDB-Mon-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
- writeConcern: { w: 'majority' , wtimeout: 5000 },
- roles: [
- { role: 'clusterMonitor', db: 'admin' }
- ]
- } )
-
-
-.. 
_create-kubernetes-storage-class-tmt:
-
-Step 15: Create Kubernetes Storage Classes for Tendermint
----------------------------------------------------------
-
-Tendermint needs somewhere to store its data persistently; it uses
-LevelDB as the persistent storage layer.
-
-The Kubernetes template for configuration of the Storage Class is located in the
-file ``tendermint/tendermint-sc.yaml``.
-
-Details about how to create an Azure Storage account and how Kubernetes Storage Classes work
-are already covered in this document: :ref:`create-kubernetes-storage-class-mdb-tmt`.
-
-Create the required storage classes using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml
-
-
-You can check if it worked using ``kubectl get storageclasses``.
-
-.. _create-kubernetes-persistent-volume-claim-tmt:
-
-Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
------------------------------------------------------------------
-
-Next, you will create two PersistentVolumeClaim objects, ``tendermint-db-claim`` and
-``tendermint-config-db-claim``.
-
-This configuration is located in the file ``tendermint/tendermint-pvc.yaml``.
-
-Details about Kubernetes Persistent Volumes, Persistent Volume Claims
-and how they work with Azure are already covered in this
-document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`.
-
-Create the required Persistent Volume Claims using:
-
-.. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml
-
-You can check its status using:
-
-.. code::
-
- kubectl get pvc -w
-
-
-.. _create-kubernetes-stateful-set-tmt:
-
-Step 17: Start a Kubernetes StatefulSet for Tendermint
------------------------------------------------------
-
- * This configuration is located in the file ``tendermint/tendermint-ss.yaml``.
-
- * Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
- the ConfigMap.
-
- For example, if the value set in the ``tm-instance-name``
- is ``tm-instance-0``, set the field to ``tm-instance-0``.
-
- * Set ``metadata.name``, ``spec.template.metadata.name`` and
- ``spec.template.metadata.labels.app`` to the value set in
- ``tm-instance-name`` in the ConfigMap, followed by
- ``-ss``.
- For example, if the value set in the
- ``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
- ``tm-instance-0-ss``.
-
- * Note how the Tendermint container uses the ``tendermint-db-claim`` and the
- ``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and
- ``/tendermint_node_data`` directories (mount paths).
-
- * As we gain more experience running Tendermint in testing and production, we
- will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
-
-We deploy Tendermint as a single pod with two containers (Tendermint + NGINX):
-Tendermint is used as the consensus engine, while NGINX is used to serve the
-public key of the Tendermint instance.
-
- * For the NGINX container, set the port to be exposed from the container in the
- ``spec.containers[0].ports[0]`` section. Set it to the value specified
- for ``tm-pub-key-access`` in the ConfigMap.
-
- * For the Tendermint container, set the ports to be exposed from the container in the
- ``spec.containers[1].ports`` section. We currently expose two Tendermint ports.
- Set them to the values specified for ``tm-p2p-port`` and ``tm-rpc-port``
- in the ConfigMap, respectively.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``tm-pub-key-access``
- - ``tm-seeds``
- - ``tm-validator-power``
- - ``tm-validators``
- - ``tm-genesis-time``
- - ``tm-chain-id``
- - ``tm-abci-port``
- - ``bdb-instance-name``
-
- * Create the Tendermint StatefulSet using:
-
- .. 
code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml - - * It might take up to 10 minutes for the disks, specified in the Persistent - Volume Claims above, to be created and attached to the pod. - The UI might show that the pod has errored with the message - "timeout expired waiting for volumes to attach/mount". Use the CLI below - to check the status of the pod in this case, instead of the UI. - This happens due to a bug in Azure ACS. - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 get pods -w - -.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt: - -Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent -------------------------------------------------------------------- - - * This configuration is located in the file - ``mongodb-monitoring-agent/mongo-mon-dep.yaml``. - - * Set ``metadata.name``, ``spec.template.metadata.name`` and - ``spec.template.metadata.labels.app`` to the value set in - ``mdb-mon-instance-name`` in the ConfigMap, followed by - ``-dep``. - For example, if the value set in the - ``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the - value ``mdb-mon-instance-0-dep``. - - * The configuration uses the following values set in the Secret: - - - ``mdb-mon-certs`` - - ``ca-auth`` - - ``cloud-manager-credentials`` - - * Start the Kubernetes Deployment using: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml - - -.. _start-kubernetes-deployment-bdb-tmt: - -Step 19: Start a Kubernetes Deployment for BigchainDB ------------------------------------------------------ - - * This configuration is located in the file - ``bigchaindb/bigchaindb-dep-tm.yaml``. - - * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the - value set in ``bdb-instance-name`` in the ConfigMap, followed by - ``-dep``. 
-
- For example, if the value set in the
- ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the
- value ``bdb-instance-0-dep``.
-
- * As we gain more experience running BigchainDB in testing and production,
- we will tweak the ``resources.limits`` values for CPU and memory, and as
- richer monitoring and probing becomes available in BigchainDB, we will
- tweak the ``livenessProbe`` and ``readinessProbe`` parameters.
-
- * Set the ports to be exposed from the pod in the
- ``spec.containers[0].ports`` section. We currently expose 3 ports -
- ``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``. Set them to the
- values specified in the ConfigMap.
-
- * The configuration uses the following values set in the ConfigMap:
-
- - ``mdb-instance-name``
- - ``mongodb-backend-port``
- - ``mongodb-replicaset-name``
- - ``bigchaindb-database-name``
- - ``bigchaindb-server-bind``
- - ``bigchaindb-ws-interface``
- - ``cluster-fqdn``
- - ``bigchaindb-ws-port``
- - ``cluster-frontend-port``
- - ``bigchaindb-wsserver-advertised-scheme``
- - ``bdb-public-key``
- - ``bigchaindb-backlog-reassign-delay``
- - ``bigchaindb-database-maxtries``
- - ``bigchaindb-database-connection-timeout``
- - ``bigchaindb-log-level``
- - ``bdb-user``
- - ``tm-instance-name``
- - ``tm-rpc-port``
-
- * The configuration uses the following values set in the Secret:
-
- - ``bdb-certs``
- - ``ca-auth``
-
- * Create the BigchainDB Deployment using:
-
- .. code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml
-
-
- * You can check its status using the command ``kubectl get deployments -w``
-
-
-.. _start-kubernetes-deployment-openresty-tmt:
-
-Step 20: Start a Kubernetes Deployment for OpenResty
----------------------------------------------------
-
- * This configuration is located in the file
- ``nginx-openresty/nginx-openresty-dep.yaml``.
- - * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the - value set in ``openresty-instance-name`` in the ConfigMap, followed by - ``-dep``. - For example, if the value set in the - ``openresty-instance-name`` is ``openresty-instance-0``, set the fields to - the value ``openresty-instance-0-dep``. - - * Set the port to be exposed from the pod in the - ``spec.containers[0].ports`` section. We currently expose the port at - which OpenResty is listening for requests, ``openresty-backend-port`` in - the above ConfigMap. - - * The configuration uses the following values set in the Secret: - - - ``threescale-credentials`` - - * The configuration uses the following values set in the ConfigMap: - - - ``cluster-dns-server-ip`` - - ``openresty-backend-port`` - - ``ngx-bdb-instance-name`` - - ``bigchaindb-api-port`` - - * Create the OpenResty Deployment using: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-dep.yaml - - - * You can check its status using the command ``kubectl get deployments -w`` - - -Step 21: Configure the MongoDB Cloud Manager --------------------------------------------- - -Refer to the -:doc:`documentation <../production-deployment-template/cloud-manager>` -for details on how to configure the MongoDB Cloud Manager to enable -monitoring and backup. - - -.. _verify-and-test-bdb-tmt: - -Step 22: Verify the BigchainDB Node Setup ------------------------------------------ - -Step 22.1: Testing Internally -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -To test the setup of your BigchainDB node, you could use a Docker container -that provides utilities like ``nslookup``, ``curl`` and ``dig``. -For example, you could use a container based on our -`bigchaindb/toolbox `_ image. -(The corresponding -`Dockerfile `_ -is in the ``bigchaindb/bigchaindb`` repository on GitHub.) -You can use it as below to get started immediately: - -.. 
code:: bash
-
- $ kubectl --context k8s-bdb-test-cluster-0 \
- run -it toolbox \
- --image bigchaindb/toolbox \
- --image-pull-policy=Always \
- --restart=Never --rm
-
-It will drop you to the shell prompt.
-
-To test the MongoDB instance:
-
-.. code:: bash
-
- $ nslookup mdb-instance-0
-
- $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
-
- $ curl -X GET http://mdb-instance-0:27017
-
-The ``nslookup`` command should output the configured IP address of the service
-(in the cluster).
-The ``dig`` command should return the configured port numbers.
-The ``curl`` command tests the availability of the service.
-
-To test the BigchainDB instance:
-
-.. code:: bash
-
- $ nslookup bdb-instance-0
-
- $ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
-
- $ curl -X GET http://bdb-instance-0:9984
-
- $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
-
-To test the Tendermint instance:
-
-.. code:: bash
-
- $ nslookup tm-instance-0
-
- $ dig +noall +answer _p2p._tcp.tm-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _rpc._tcp.tm-instance-0.default.svc.cluster.local SRV
-
- $ curl -X GET http://tm-instance-0:9986/pub_key.json
-
-
-To test the OpenResty instance:
-
-.. code:: bash
-
- $ nslookup openresty-instance-0
-
- $ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV
-
-To verify that the OpenResty instance forwards requests properly, send a ``POST``
-transaction to OpenResty at port ``80`` and check the response from the backend
-BigchainDB instance.
-
-
-To test the vanilla NGINX instance:
-
-.. 
code:: bash
-
- $ nslookup ngx-http-instance-0
-
- $ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
-
- $ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions
-
- $ curl -X GET http://ngx-http-instance-0:27017
-
-The above curl command should result in the response
-``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
-
-
-To test the NGINX instance with HTTPS and 3scale integration:
-
-.. code:: bash
-
- $ nslookup ngx-instance-0
-
- $ dig +noall +answer _public-secure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
-
- $ dig +noall +answer _public-insecure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
-
- $ wsc -er wss://<cluster-fqdn>/api/v1/streams/valid_transactions
-
- $ curl -X GET http://<cluster-fqdn>:27017
-
-The above curl command should result in the response
-``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
-
-
-Step 22.2: Testing Externally
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Check the MongoDB monitoring agent on the MongoDB Cloud Manager
-portal to verify that it is working.
-
-If you are using NGINX with HTTP support, accessing the URL
-``http://<cluster-fqdn>:<cluster-frontend-port>``
-in your browser should result in a JSON response that shows the BigchainDB
-server version, among other things.
-If you are using NGINX with HTTPS support, use ``https`` instead of
-``http`` above.
-
-Use the Python Driver to send some transactions to the BigchainDB node and
-verify that your node or cluster works as expected.
-
-Next, you can set up log analytics and monitoring by following our templates:
-
-* :doc:`../production-deployment-template/log-analytics`.
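To recognize a healthy response during the Step 22.2 checks, it helps to know the rough shape of the JSON the cluster frontend returns. A minimal offline sketch is below; the JSON here is an abbreviated, hypothetical example of a root-endpoint response (real field values and the exact set of fields depend on your BigchainDB version), used only to show what to look for.

```shell
# Offline sketch: an abbreviated, hypothetical root-endpoint response.
# Real responses come from http://<cluster-fqdn>:<cluster-frontend-port>/
# and their fields vary by BigchainDB version.
response='{"api":{"v1":{"statuses":"/api/v1/statuses"}},"software":"BigchainDB","version":"1.3.0"}'

# A quick check that the response names the server software and a version.
echo "${response}" | grep -q '"software":"BigchainDB"' && echo "looks like a BigchainDB node"
```

In practice you would pipe the output of the ``curl`` command from Step 22.2 through the same kind of check.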
diff --git a/docs/server/source/production-deployment-template-tendermint/workflow.rst b/docs/server/source/production-deployment-template-tendermint/workflow.rst deleted file mode 100644 index 5e55ab4c..00000000 --- a/docs/server/source/production-deployment-template-tendermint/workflow.rst +++ /dev/null @@ -1,188 +0,0 @@ -Overview -======== - -This page summarizes the steps *we* go through -to set up a production BigchainDB + Tendermint cluster. -We are constantly improving them. -You can modify them to suit your needs. - -.. Note:: - With our BigchainDB + Tendermint deployment model, we use standalone MongoDB - (without Replica Set), BFT replication is handled by Tendermint. - - -1. Set Up a Self-Signed Certificate Authority -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -We use SSL/TLS and self-signed certificates -for MongoDB authentication (and message encryption). -The certificates are signed by the organization managing the :ref:`bigchaindb-node`. -If your organization already has a process -for signing certificates -(i.e. an internal self-signed certificate authority [CA]), -then you can skip this step. -Otherwise, your organization must -:ref:`set up its own self-signed certificate authority `. - - -.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt: - -2. Register a Domain and Get an SSL Certificate for It -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS, -so the organization running the cluster -should choose an FQDN for their API (e.g. api.organization-x.com), -register the domain name, -and buy an SSL/TLS certificate for the FQDN. - -.. _things-each-node-operator-must-do-tmt: - -Things Each Node Operator Must Do ---------------------------------- - -Use a standard and unique naming convention for all instances. 
- -☐ Name of the MongoDB instance (``mdb-instance-*``) - -☐ Name of the BigchainDB instance (``bdb-instance-*``) - -☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``) - -☐ Name of the OpenResty instance (``openresty-instance-*``) - -☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``) - -☐ Name of the Tendermint instance (``tendermint-instance-*``) - -Example -^^^^^^^ - -.. code:: text - - { - "MongoDB": [ - "mdb-instance-1", - "mdb-instance-2", - "mdb-instance-3", - "mdb-instance-4" - ], - "BigchainDB": [ - "bdb-instance-1", - "bdb-instance-2", - "bdb-instance-3", - "bdb-instance-4" - ], - "NGINX": [ - "ngx-instance-1", - "ngx-instance-2", - "ngx-instance-3", - "ngx-instance-4" - ], - "OpenResty": [ - "openresty-instance-1", - "openresty-instance-2", - "openresty-instance-3", - "openresty-instance-4" - ], - "MongoDB_Monitoring_Agent": [ - "mdb-mon-instance-1", - "mdb-mon-instance-2", - "mdb-mon-instance-3", - "mdb-mon-instance-4" - ], - "Tendermint": [ - "tendermint-instance-1", - "tendermint-instance-2", - "tendermint-instance-3", - "tendermint-instance-4" - ] - } - - -☐ Generate three keys and corresponding certificate signing requests (CSRs): - -#. Server Certificate for the MongoDB instance -#. Client Certificate for BigchainDB Server to identify itself to MongoDB -#. Client Certificate for MongoDB Monitoring Agent to identify itself to MongoDB - -Use the self-signed CA to sign those three CSRs: - -* Three certificates (one for each CSR). - -For help, see the pages: - -* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>` -* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>` - -☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``). -Make sure you've registered the associated domain name (e.g. ``mycorp.com``), -and have an SSL certificate for the FQDN. 
-(You can get an SSL certificate from any SSL certificate provider.) - -☐ Ask the managing organization for the user name to use for authenticating to -MongoDB. - -☐ If the cluster uses 3scale for API authentication, monitoring and billing, -you must ask the managing organization for all relevant 3scale credentials - -secret token, service ID, version header and API service token. - -☐ If the cluster uses MongoDB Cloud Manager for monitoring, -you must ask the managing organization for the ``Project ID`` and the -``Agent API Key``. -(Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can -contain a number of ``Agent API Key`` s. It can be found under -**Settings**. It was recently added to the Cloud Manager to -allow easier periodic rotation of the ``Agent API Key`` with a constant -``Project ID``) - - -.. _generate-the-blockchain-id-and-genesis-time: - -3. Generate the Blockchain ID and Genesis Time -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Tendermint nodes require two parameters that need to be common and shared between all the -participants in the network. - -* ``chain_id`` : ID of the blockchain. This must be unique for every blockchain. - - * Example: ``0001-01-01T00:00:00Z`` - -* ``genesis_time`` : Official time of blockchain start. - - * Example: ``test-chain-9gHylg`` - -The following parameters can be generated using the ``tendermint init`` command. -To `initializae `_. -You will need to `install Tendermint `_ -and verify that a ``genesis.json`` file in created under the `Root Directory -`_. You can use -the ``genesis_time`` and ``chain_id`` from this ``genesis.json``. - -Sample ``genesis.json``: - -.. 
code:: json - - { - "genesis_time": "0001-01-01T00:00:00Z", - "chain_id": "test-chain-9gHylg", - "validators": [ - { - "pub_key": { - "type": "ed25519", - "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468" - }, - "power": 10, - "name": "" - } - ], - "app_hash": "" - } - - - -☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`. - -☐ You can now proceed to set up your :ref:`BigchainDB node -`. diff --git a/docs/server/source/production-deployment-template/architecture.rst b/docs/server/source/production-deployment-template/architecture.rst index beb03d7e..7778e45f 100644 --- a/docs/server/source/production-deployment-template/architecture.rst +++ b/docs/server/source/production-deployment-template/architecture.rst @@ -1,19 +1,144 @@ -Architecture of an IPDB Node -============================ +Architecture of a BigchainDB Node +================================== -An IPDB Production deployment is hosted on a Kubernetes cluster and includes: +A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes: -* NGINX, OpenResty, BigchainDB and MongoDB +* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint `Kubernetes Services `_. -* NGINX, OpenResty, BigchainDB, Monitoring Agent and Backup Agent +* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent `Kubernetes Deployments `_. -* MongoDB `Kubernetes StatefulSet `_. +* MongoDB and Tendermint `Kubernetes StatefulSets `_. * Third party services like `3scale `_, `MongoDB Cloud Manager `_ and the `Azure Operations Management Suite `_. + +.. _bigchaindb-node: + +BigchainDB Node +--------------- + +..
aafig:: + :aspect: 60 + :scale: 100 + :background: #rgb + :proportional: + + + + + +--------------------------------------------------------------------------------------------------------------------------------------+ + | | | | + | | | | + | | | | + | | | | + | | | | + | | | | + | "BigchainDB API" | | "Tendermint P2P" | + | | | "Communication/" | + | | | "Public Key Exchange" | + | | | | + | | | | + | v v | + | | + | +------------------+ | + | |"NGINX Service" | | + | +-------+----------+ | + | | | + | v | + | | + | +------------------+ | + | | "NGINX" | | + | | "Deployment" | | + | | | | + | +-------+----------+ | + | | | + | | | + | | | + | v | + | | + | "443" +----------+ "46656/9986" | + | | "Rate" | | + | +---------------------------+"Limiting"+-----------------------+ | + | | | "Logic" | | | + | | +----+-----+ | | + | | | | | + | | | | | + | | | | | + | | | | | + | | | | | + | | "27017" | | | + | v | v | + | +-------------+ | +------------+ | + | |"HTTPS" | | +------------------> |"Tendermint"| | + | |"Termination"| | | "9986" |"Service" | "46656" | + | | | | | +-------+ | <----+ | + | +-----+-------+ | | | +------------+ | | + | | | | | | | + | | | | v v | + | | | | +------------+ +------------+ | + | | | | |"NGINX" | |"Tendermint"| | + | | | | |"Deployment"| |"Stateful" | | + | | | | |"Pub-Key-Ex"| |"Set" | | + | ^ | | +------------+ +------------+ | + | +-----+-------+ | | | + | "POST" |"Analyze" | "GET" | | | + | |"Request" | | | | + | +-----------+ +--------+ | | | + | | +-------------+ | | | | + | | | | | "Bi+directional, communication between" | + | | | | | "BigchainDB(APP) and Tendermint" | + | | | | | "BFT consensus Engine" | + | | | | | | + | v v | | | + | | | | + | +-------------+ +--------------+ +----+-------------------> +--------------+ | + | | "OpenResty" | | "BigchainDB" | | | "MongoDB" | | + | | "Service" | | "Service" | | | "Service" | | + | | | +----->| | | +-------> | | | + | +------+------+ | +------+-------+ | | +------+-------+ | + | 
| | | | | | | + | | | | | | | | + | v | v | | v | + | +-------------+ | +-------------+ | | +----------+ | + | | | | | | <------------+ | |"MongoDB" | | + | |"OpenResty" | | | "BigchainDB"| | |"Stateful"| | + | |"Deployment" | | | "Deployment"| | |"Set" | | + | | | | | | | +-----+----+ | + | | | | | +---------------------------+ | | + | | | | | | | | + | +-----+-------+ | +-------------+ | | + | | | | | + | | | | | + | v | | | + | +-----------+ | v | + | | "Auth" | | +------------+ | + | | "Logic" |----------+ |"MongoDB" | | + | | | |"Monitoring"| | + | | | |"Agent" | | + | +---+-------+ +-----+------+ | + | | | | + | | | | + | | | | + | | | | + | | | | + | | | | + +---------------+---------------------------------------------------------------------------------------+------------------------------+ + | | + | | + | | + v v + +------------------------------------+ +------------------------------------+ + | | | | + | | | | + | | | | + | "3Scale" | | "MongoDB Cloud" | + | | | | + | | | | + | | | | + +------------------------------------+ +------------------------------------+ + + + .. note:: The arrows in the diagram represent the client-server communication. For @@ -22,8 +147,8 @@ An IPDB Production deployment is hosted on a Kubernetes cluster and includes: fully duplex. -NGINX ------ +NGINX: Entrypoint and Gateway +----------------------------- We use an NGINX as HTTP proxy on port 443 (configurable) at the cloud entrypoint for: @@ -51,8 +176,8 @@ entrypoint for: public api port), the connection is proxied to the MongoDB Service. -OpenResty ---------- +OpenResty: API Management, Authentication and Authorization +----------------------------------------------------------- We use `OpenResty `_ to perform authorization checks with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request. @@ -63,13 +188,23 @@ on the LuaJIT compiler to execute the functions to authenticate the ``app_id`` and ``app_key`` with the 3scale backend. 
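As a concrete illustration of the authorization flow described above, a request arriving at OpenResty might carry headers like the following (the hostname and credential values here are made up; only the ``app_id`` and ``app_key`` header names come from the template itself):

.. code::

    POST /api/v1/transactions HTTP/1.1
    Host: api.organization-x.com
    app_id: <your-3scale-app-id>
    app_key: <your-3scale-app-key>
    Content-Type: application/json

OpenResty extracts the ``app_id`` and ``app_key`` headers, verifies them against the 3scale backend, and only then proxies the request on to the BigchainDB Service.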
-MongoDB ------- +MongoDB: Standalone +------------------- We use MongoDB as the backend database for BigchainDB. -In a multi-node deployment, MongoDB members communicate with each other via the -public port exposed by the NGINX Service. We achieve security by avoiding DoS attacks at the NGINX proxy layer and by ensuring that MongoDB has TLS enabled for all its connections. + +Tendermint: BFT Consensus Engine +-------------------------------- + +We use Tendermint as the backend consensus engine for BFT replication of BigchainDB. +In a multi-node deployment, Tendermint nodes/peers communicate with each other via +the public ports exposed by the NGINX gateway. + +We use port **9986** (configurable) to allow Tendermint nodes to access the public keys +of the peers, and port **46656** (configurable) for the rest of the communication between +the peers. + diff --git a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst b/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst similarity index 99% rename from docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst rename to docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst index 4781fccc..ed6c5433 100644 --- a/docs/server/source/production-deployment-template-tendermint/bigchaindb-network-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst @@ -218,7 +218,7 @@ the :doc:`deployment steps for each node ` N number of times the number of participants in the network. In our Kubernetes deployment template for a single BigchainDB node, we covered the basic configurations -settings :ref:`here `. +settings :ref:`here `. Since, we index the ConfigMap and Secret Keys for the single site deployment, we need to update all the Kubernetes components to reflect the corresponding changes i.e.
For each Kubernetes Service, diff --git a/docs/server/source/production-deployment-template/cloud-manager.rst b/docs/server/source/production-deployment-template/cloud-manager.rst index c438afaf..fb0512df 100644 --- a/docs/server/source/production-deployment-template/cloud-manager.rst +++ b/docs/server/source/production-deployment-template/cloud-manager.rst @@ -1,10 +1,10 @@ -.. _configure-mongodb-cloud-manager-for-monitoring-and-backup: +.. _configure-mongodb-cloud-manager-for-monitoring: -Configure MongoDB Cloud Manager for Monitoring and Backup -========================================================= +Configure MongoDB Cloud Manager for Monitoring +============================================== This document details the steps required to configure MongoDB Cloud Manager to -enable monitoring and backup of data in a MongoDB Replica Set. +enable monitoring of data in a MongoDB Replica Set. Configure MongoDB Cloud Manager for Monitoring @@ -60,39 +60,3 @@ Configure MongoDB Cloud Manager for Monitoring * Verify on the UI that data is being sent by the monitoring agent to the Cloud Manager. It may take upto 5 minutes for data to appear on the UI. - - -Configure MongoDB Cloud Manager for Backup ------------------------------------------- - - * Once the Backup Agent is up and running, open - `MongoDB Cloud Manager `_. - - * Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud - Manager. - - * Select the group from the dropdown box on the page. - - * Click ``Backup`` tab. - - * Hover over the ``Status`` column of your backup and click ``Start`` - to start the backup. - - * Select the replica set on the side pane. - - * If you have authentication enabled, select the authentication mechanism as - per your deployment. The default BigchainDB production deployment currently - supports ``X.509 Client Certificate`` as the authentication mechanism. - - * If you have TLS enabled, select the checkbox ``Replica set allows TLS/SSL - connections``. 
This should be selected by default in case you selected - ``X.509 Client Certificate`` as the auth mechanism above. - - * Choose the ``WiredTiger`` storage engine. - - * Verify the details of your MongoDB instance and click on ``Start``. - - * It may take up to 5 minutes for the backup process to start. - During this process, the UI will show the status of the backup process. - - * Verify that data is being backed up on the UI. diff --git a/docs/server/source/production-deployment-template/index.rst b/docs/server/source/production-deployment-template/index.rst index aa966677..64a834db 100644 --- a/docs/server/source/production-deployment-template/index.rst +++ b/docs/server/source/production-deployment-template/index.rst @@ -1,10 +1,10 @@ Production Deployment Template ============================== -This section outlines how *we* deploy production BigchainDB nodes and clusters -on Microsoft Azure -using Kubernetes. -We improve it constantly. +This section outlines how *we* deploy production BigchainDB clusters, +integrated with Tendermint (the backend for BFT consensus), +on Microsoft Azure using +Kubernetes. We improve it constantly. You may choose to use it as a template or reference for your own deployment, but *we make no claim that it is suitable for your purposes*. Feel free change things to suit your needs or preferences. @@ -25,8 +25,7 @@
cloud-manager easy-rsa upgrade-on-kubernetes - add-node-on-kubernetes - restore-from-mongodb-cloud-manager + bigchaindb-network-on-kubernetes tectonic-azure troubleshoot architecture diff --git a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst index 5140e8d6..7ee9d01a 100644 --- a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst +++ b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst @@ -11,7 +11,7 @@ and ``secret.yaml`` (a set of Secrets). They are stored in the Kubernetes cluster's key-value store (etcd). Make sure you did all the things listed in the section titled -:ref:`things-each-node-operator-must-do` +:ref:`things-each-node-operator-must-do-tmt` (including generation of all the SSL certificates needed for MongoDB auth). @@ -35,7 +35,7 @@ vars.cluster-fqdn ~~~~~~~~~~~~~~~~~ The ``cluster-fqdn`` field specifies the domain you would have -:ref:`registered before `. +:ref:`registered before `. vars.cluster-frontend-port @@ -71,15 +71,8 @@ of naming instances, so the instances in your BigchainDB node should conform to that standard (i.e. you can't just make up some names). There are some things worth noting about the ``mdb-instance-name``: -* MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica - set to resolve the hostname provided to the ``rs.initiate()`` command. - It needs to ensure that the replica set is being initialized in the same - instance where the MongoDB instance is running. -* We use the value in the ``mdb-instance-name`` field to achieve this. * This field will be the DNS name of your MongoDB instance, and Kubernetes maps this name to its internal DNS. -* This field will also be used by other MongoDB instances when forming a - MongoDB replica set. * We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in our documentation. 
Your BigchainDB cluster may use a different naming convention. @@ -145,27 +138,6 @@ There's another :doc:`page with a complete listing of all the BigchainDB Server configuration settings <../server-reference/configuration>`. -bdb-config.bdb-keyring -~~~~~~~~~~~~~~~~~~~~~~~ - -This lists the BigchainDB public keys -of all *other* nodes in your BigchainDB cluster -(not including the public key of your BigchainDB node). Cases: - -* If you're deploying the first node in the cluster, - the value should be ``""`` (an empty string). -* If you're deploying the second node in the cluster, - the value should be the BigchainDB public key of the first/original - node in the cluster. - For example, - ``"EPQk5i5yYpoUwGVM8VKZRjM8CYxB6j8Lu8i8SG7kGGce"`` -* If there are two or more other nodes already in the cluster, - the value should be a colon-separated list - of the BigchainDB public keys - of those other nodes. - For example, - ``"DPjpKbmbPYPKVAuf6VSkqGCf5jzrEh69Ldef6TrLwsEQ:EPQk5i5yYpoUwGVM8VKZRjM8CYxB6j8Lu8i8SG7kGGce"`` - bdb-config.bdb-user ~~~~~~~~~~~~~~~~~~~ @@ -176,16 +148,16 @@ We need to specify the user name *as seen in the certificate* issued to the BigchainDB instance in order to authenticate correctly. Use the following ``openssl`` command to extract the user name from the certificate: - + .. code:: bash $ openssl x509 -in \ -inform PEM -subject -nameopt RFC2253 - + You should see an output line that resembles: - + .. code:: bash - + subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE The ``subject`` line states the complete user name we need to use for this @@ -196,6 +168,137 @@ field (``bdb-config.bdb-user``), i.e. 
emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE +tendermint-config.tm-instance-name +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Your BigchainDB cluster organization should have a standard way +of naming instances, so the instances in your BigchainDB node +should conform to that standard. There are some things worth noting +about the ``tm-instance-name``: + +* This field will be the DNS name of your Tendermint instance, and Kubernetes + maps this name to its internal DNS, so all the peer-to-peer communication + depends on this in the case of a network/multi-node deployment. +* This parameter is also used to access the public key of a particular node. +* We use ``tm-instance-0``, ``tm-instance-1`` and so on in our + documentation. Your BigchainDB cluster may use a different naming convention. + + +tendermint-config.ngx-tm-instance-name +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +NGINX needs the FQDN of the servers inside the cluster to be able to forward +traffic. +``ngx-tm-instance-name`` is the FQDN of the Tendermint +instance in this Kubernetes cluster. +In Kubernetes, this is usually the name of the module specified in the +corresponding ``tendermint-config.*-instance-name`` followed by +``.svc.cluster.local``. For example, if you run Tendermint in +the default Kubernetes namespace, this will be +``.default.svc.cluster.local`` + + +tendermint-config.tm-seeds +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-seeds`` is the initial set of peers to connect to. It is a comma-separated +list of all the peers that are part of the cluster. + +If you are deploying a stand-alone BigchainDB node, the value should be the same as +````. If you are deploying a network, this parameter will look +like this: + +.. code:: + + ,,, + + +tendermint-config.tm-validators +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-validators`` is the initial set of validators in the network. It is a comma-separated list +of all the participating validator nodes.
+ +If you are deploying a stand-alone BigchainDB node, the value should be the same as +````. If you are deploying a network, this parameter will look like +this: + +.. code:: + + ,,, + + +tendermint-config.tm-validator-power +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-validator-power`` represents the voting power of each validator. It is a comma-separated +list of all the participants in the network. + +**Note**: The order of the validator power list should be the same as the ``tm-validators`` list. + +.. code:: + + tm-validators: ,,, + +For the above list of validators, the ``tm-validator-power`` list should look like this: + +.. code:: + + tm-validator-power: ,,, + + +tendermint-config.tm-genesis-time +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-genesis-time`` represents the official time of blockchain start. Details regarding how to generate +this parameter are covered :ref:`here `. + + +tendermint-config.tm-chain-id +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-chain-id`` represents the ID of the blockchain. This must be unique for every blockchain. +Details regarding how to generate this parameter are covered +:ref:`here `. + + +tendermint-config.tm-abci-port +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-abci-port`` has a default value ``46658``, which is used by Tendermint Core for +ABCI (Application BlockChain Interface) traffic. BigchainDB nodes use this port +internally to communicate with Tendermint Core. + + +tendermint-config.tm-p2p-port +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-p2p-port`` has a default value ``46656``, which is used by Tendermint Core for +peer-to-peer communication. + +For a multi-node/zone deployment, this port needs to be available publicly for P2P +communication between Tendermint nodes. + + +tendermint-config.tm-rpc-port +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-rpc-port`` has a default value ``46657``, which is used by Tendermint Core for RPC +traffic. BigchainDB nodes use this port as the RPC listen address.
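Taken together, the Tendermint-related settings described in this section end up as keys in the node's ConfigMap. The fragment below is only a sketch for a hypothetical four-node network — the instance names are illustrative and the port values are the defaults quoted above; the authoritative key names and structure are in ``k8s/configuration/config-map-tm.yaml`` in the bigchaindb/bigchaindb repository:

.. code:: yaml

    data:
      tm-instance-name: "tm-instance-0"
      tm-seeds: "tm-instance-0,tm-instance-1,tm-instance-2,tm-instance-3"
      tm-validators: "tm-instance-0,tm-instance-1,tm-instance-2,tm-instance-3"
      tm-validator-power: "10,10,10,10"
      tm-genesis-time: "0001-01-01T00:00:00Z"
      tm-chain-id: "test-chain-9gHylg"
      tm-abci-port: "46658"
      tm-p2p-port: "46656"
      tm-rpc-port: "46657"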
+ + +tendermint-config.tm-pub-key-access +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``tm-pub-key-access`` has a default value ``9986``, which is used to discover the public +key of a Tendermint node. Each Tendermint StatefulSet (a Pod running Tendermint + NGINX) hosts its +public key. + +.. code:: + + http://tendermint-instance-1:9986/pub_key.json + + Edit secret.yaml ---------------- diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst index d45df83a..2989492f 100644 --- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst @@ -1,20 +1,17 @@ -.. _kubernetes-template-deploy-a-single-node-bigchaindb: +.. _kubernetes-template-deploy-a-single-bigchaindb-node: Kubernetes Template: Deploy a Single BigchainDB Node ==================================================== -This page describes how to deploy the first BigchainDB node -in a BigchainDB cluster, or a stand-alone BigchainDB node, +This page describes how to deploy a stand-alone BigchainDB + Tendermint node, +or a static network of BigchainDB + Tendermint nodes, using `Kubernetes `_. It assumes you already have a running Kubernetes cluster. -If you want to add a new BigchainDB node to an existing BigchainDB cluster, -refer to :doc:`the page about that `. - Below, we refer to many files by their directory and filename, -such as ``configuration/config-map.yaml``. Those files are files in the -`bigchaindb/bigchaindb repository on GitHub -`_ in the ``k8s/`` directory. +such as ``configuration/config-map-tm.yaml``. Those files are files in the +`bigchaindb/bigchaindb repository on GitHub `_ +in the ``k8s/`` directory. Make sure you're getting those files from the appropriate Git branch on GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB cluster is using.
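For example, fetching those files could look like this (the branch name below is a placeholder — substitute the branch or tag matching the BigchainDB version your cluster runs):

.. code:: bash

    $ git clone https://github.com/bigchaindb/bigchaindb.git
    $ cd bigchaindb
    $ git checkout <branch-for-your-version>
    $ cd k8s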
@@ -32,7 +29,8 @@ The default location of the kubectl configuration file is ``~/.kube/config``. If you don't have that file, then you need to get it. **Azure.** If you deployed your Kubernetes cluster on Azure -using the Azure CLI 2.0 (as per :doc:`our template `), +using the Azure CLI 2.0 (as per :doc:`our template +<../production-deployment-template/template-kubernetes-azure>`), then you can get the ``~/.kube/config`` file using: .. code:: bash @@ -109,7 +107,8 @@ Step 3: Configure Your BigchainDB Node See the page titled :ref:`how-to-configure-a-bigchaindb-node`. -.. _start-the-nginx-service: + +.. _start-the-nginx-service-tmt: Step 4: Start the NGINX Service ------------------------------- @@ -126,7 +125,7 @@ Step 4: Start the NGINX Service Step 4.1: Vanilla NGINX ^^^^^^^^^^^^^^^^^^^^^^^ - * This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``. + * This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``ngx-instance-name`` in the ConfigMap above. @@ -140,11 +139,21 @@ Step 4.1: Vanilla NGINX ``cluster-frontend-port`` in the ConfigMap above. This is the ``public-cluster-port`` in the file which is the ingress in to the cluster. + * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the + ``tm-pub-access-port`` in the ConfigMap above. This is the + ``tm-pub-key-access`` in the file which specifies where Public Key for + the Tendermint instance is available. + + * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the + ``tm-p2p-port`` in the ConfigMap above. This is the + ``tm-p2p-port`` in the file which is used for P2P communication for Tendermint + nodes. + * Start the Kubernetes Service: .. 
code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml Step 4.2: NGINX with HTTPS @@ -156,7 +165,7 @@ Step 4.2: NGINX with HTTPS * You should have already created the necessary Kubernetes Secrets in the previous step (i.e. ``https-certs``). - * This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``. + * This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``ngx-instance-name`` in the ConfigMap above. @@ -175,14 +184,25 @@ Step 4.2: NGINX with HTTPS ``public-mdb-port`` in the file which specifies where MongoDB is available. + * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the + ``tm-pub-access-port`` in the ConfigMap above. This is the + ``tm-pub-key-access`` in the file which specifies where Public Key for + the Tendermint instance is available. + + * Set ``ports[3].port`` and ``ports[3].targetPort`` to the value set in the + ``tm-p2p-port`` in the ConfigMap above. This is the + ``tm-p2p-port`` in the file which is used for P2P communication between Tendermint + nodes. + + * Start the Kubernetes Service: .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml -.. _assign-dns-name-to-the-nginx-public-ip: +.. _assign-dns-name-to-nginx-public-ip-tmt: Step 5: Assign DNS Name to the NGINX Public IP ---------------------------------------------- @@ -221,16 +241,16 @@ changes to be applied. To verify the DNS setting is operational, you can run ``nslookup `` from your local Linux shell. -This will ensure that when you scale the replica set later, other MongoDB -members in the replica set can reach this instance. 
+This will ensure that when you scale to different geographical zones, other Tendermint +nodes in the network can reach this instance. -.. _start-the-mongodb-kubernetes-service: +.. _start-the-mongodb-kubernetes-service-tmt: Step 6: Start the MongoDB Kubernetes Service -------------------------------------------- - * This configuration is located in the file ``mongodb/mongo-svc.yaml``. + * This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``mdb-instance-name`` in the ConfigMap above. @@ -249,15 +269,15 @@ Step 6: Start the MongoDB Kubernetes Service .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml -.. _start-the-bigchaindb-kubernetes-service: +.. _start-the-bigchaindb-kubernetes-service-tmt: Step 7: Start the BigchainDB Kubernetes Service ----------------------------------------------- - * This configuration is located in the file ``bigchaindb/bigchaindb-svc.yaml``. + * This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``bdb-instance-name`` in the ConfigMap above. @@ -277,19 +297,24 @@ Step 7: Start the BigchainDB Kubernetes Service This is the ``bdb-ws-port`` in the file which specifies where BigchainDB listens for Websocket connections. + * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the + ``tm-abci-port`` in the ConfigMap above. + This is the ``tm-abci-port`` in the file which specifies the port used + for ABCI communication. + * Start the Kubernetes Service: .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml -.. _start-the-openresty-kubernetes-service: +.. 
_start-the-openresty-kubernetes-service-tmt: Step 8: Start the OpenResty Kubernetes Service ---------------------------------------------- - * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc.yaml``. + * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``openresty-instance-name`` in the ConfigMap above. @@ -303,26 +328,64 @@ Step 8: Start the OpenResty Kubernetes Service .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml -.. _start-the-nginx-kubernetes-deployment: +.. _start-the-tendermint-kubernetes-service-tmt: -Step 9: Start the NGINX Kubernetes Deployment ---------------------------------------------- +Step 9: Start the Tendermint Kubernetes Service +----------------------------------------------- - * NGINX is used as a proxy to OpenResty, BigchainDB and MongoDB instances in + * This configuration is located in the file ``tendermint/tendermint-svc.yaml``. + + * Set the ``metadata.name`` and ``metadata.labels.name`` to the value + set in ``tm-instance-name`` in the ConfigMap above. + + * Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in + the ConfigMap followed by ``-ss``. For example, if the value set in the + ``tm-instance-name`` is ``tm-instance-0``, set the + ``spec.selector.app`` to ``tm-instance-0-ss``. + + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the + ``tm-p2p-port`` in the ConfigMap above. + This is the ``p2p`` in the file which specifies where Tendermint peers + communicate. + + * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the + ``tm-rpc-port`` in the ConfigMap above. + This is the ``rpc`` in the file which specifies the port used by Tendermint core + for RPC traffic. 
+
+  * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the
+    ``tm-pub-key-access`` in the ConfigMap above.
+    This is the ``pub-key-access`` in the file which specifies the port used to
+    host/distribute the public key for the Tendermint node.
+
+  * Start the Kubernetes Service:
+
+    .. code:: bash
+
+       $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml
+
+
+.. _start-the-nginx-deployment-tmt:
+
+Step 10: Start the NGINX Kubernetes Deployment
+----------------------------------------------
+
+  * NGINX is used as a proxy to OpenResty, BigchainDB, Tendermint and MongoDB instances in
     the node. It proxies HTTP/HTTPS requests on the ``cluster-frontend-port``
-    to the corresponding OpenResty or BigchainDB backend, and TCP connections
-    on ``mongodb-frontend-port`` to the MongoDB backend.
+    to the corresponding OpenResty or BigchainDB backend, and TCP connections
+    on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
+    to the MongoDB and Tendermint backends, respectively.
 
   * As in step 4, you have the option to use vanilla NGINX without HTTPS or
     NGINX with HTTPS support.
 
-Step 9.1: Vanilla NGINX
-^^^^^^^^^^^^^^^^^^^^^^^
+Step 10.1: Vanilla NGINX
+^^^^^^^^^^^^^^^^^^^^^^^^
 
-  * This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``.
+  * This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``.
 
   * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
    to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
@@ -330,9 +393,10 @@ Step 9.1: Vanilla NGINX
    ``ngx-http-instance-0``, set the fields to ``ngx-http-instance-0-dep``.
 
   * Set the ports to be exposed from the pod in the
-    ``spec.containers[0].ports`` section. We currently expose 3 ports -
-    ``mongodb-frontend-port``, ``cluster-frontend-port`` and
-    ``cluster-health-check-port``. Set them to the values specified in the
+    ``spec.containers[0].ports`` section. 
We currently expose 5 ports -
+    ``mongodb-frontend-port``, ``cluster-frontend-port``,
+    ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
+    Set them to the values specified in the
     ConfigMap.
 
   * The configuration uses the following values set in the ConfigMap:
 
@@ -346,19 +410,22 @@ Step 9.1: Vanilla NGINX
     - ``ngx-bdb-instance-name``
     - ``bigchaindb-api-port``
     - ``bigchaindb-ws-port``
+    - ``ngx-tm-instance-name``
+    - ``tm-pub-key-access``
+    - ``tm-p2p-port``
 
   * Start the Kubernetes Deployment:
 
     .. code:: bash
 
-       $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml
+       $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml
 
 
-Step 9.2: NGINX with HTTPS
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+Step 10.2: NGINX with HTTPS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
   * This configuration is located in the file
-    ``nginx-https/nginx-https-dep.yaml``.
+    ``nginx-https/nginx-https-dep-tm.yaml``.
 
   * Set the ``metadata.name`` and ``spec.template.metadata.labels.app``
    to the value set in ``ngx-instance-name`` in the ConfigMap followed by a
@@ -366,9 +433,10 @@ Step 9.2: NGINX with HTTPS
    ``ngx-https-instance-0``, set the fields to ``ngx-https-instance-0-dep``.
 
   * Set the ports to be exposed from the pod in the
-    ``spec.containers[0].ports`` section. We currently expose 3 ports -
-    ``mongodb-frontend-port``, ``cluster-frontend-port`` and
-    ``cluster-health-check-port``. Set them to the values specified in the
+    ``spec.containers[0].ports`` section. We currently expose 5 ports -
+    ``mongodb-frontend-port``, ``cluster-frontend-port``,
+    ``cluster-health-check-port``, ``tm-pub-key-access`` and ``tm-p2p-port``.
+    Set them to the values specified in the
     ConfigMap. 
  * The configuration uses the following values set in the ConfigMap:
 
@@ -385,6 +453,9 @@ Step 9.2: NGINX with HTTPS
     - ``ngx-bdb-instance-name``
     - ``bigchaindb-api-port``
     - ``bigchaindb-ws-port``
+    - ``ngx-tm-instance-name``
+    - ``tm-pub-key-access``
+    - ``tm-p2p-port``
 
   * The configuration uses the following values set in the Secret:
 
@@ -394,12 +465,12 @@ Step 9.2: NGINX with HTTPS
 
     .. code:: bash
 
-       $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml
+       $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml
 
 
-.. _create-kubernetes-storage-classes-for-mongodb:
+.. _create-kubernetes-storage-class-mdb-tmt:
 
-Step 10: Create Kubernetes Storage Classes for MongoDB
+Step 11: Create Kubernetes Storage Classes for MongoDB
 ------------------------------------------------------
 
 MongoDB needs somewhere to store its data persistently,
@@ -428,7 +499,7 @@ The first thing to do is create the Kubernetes storage classes.
 First, you need an Azure storage account.
 If you deployed your Kubernetes cluster on Azure
 using the Azure CLI 2.0
-(as per :doc:`our template `),
+(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
 then the `az acs create` command already created a storage account
 in the same location and resource group as your Kubernetes cluster.
 in the same data center.
 Premium storage is higher-cost and higher-performance.
 It uses solid state drives (SSD).
 You can create a `storage account `_
-for Premium storage and associate it with your Azure resource group. 
+for Premium storage and associate it with your Azure resource group.
 For future reference, the command to create a storage account is
 `az storage account create `_.
@@ -456,7 +527,7 @@ specify the location you are using in Azure.
 
 If you want to use a custom storage account with the Storage Class, you
 can also update `parameters.storageAccount` and provide the Azure storage
-account name. 
+account name.
 
 Create the required storage classes using:
 
@@ -468,10 +539,10 @@ Create the required storage classes using:
 
 You can check if it worked using ``kubectl get storageclasses``.
 
-.. _create-kubernetes-persistent-volume-claims:
+.. _create-kubernetes-persistent-volume-claim-mdb-tmt:
 
-Step 11: Create Kubernetes Persistent Volume Claims
----------------------------------------------------
+Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
+---------------------------------------------------------------
 
 Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim``
 and ``mongo-configdb-claim``.
@@ -517,18 +588,18 @@ but it should become "Bound" fairly quickly.
 
   * Run the following command to update a PV's reclaim policy to ``Retain``:
 
    .. code:: bash
-      
+
       $ kubectl --context k8s-bdb-test-cluster-0 patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
 
 For notes on recreating a persistent volume from a released Azure disk resource consult
-:ref:`cluster-troubleshooting`.
+:doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
 
-.. _start-a-kubernetes-statefulset-for-mongodb:
+.. _start-kubernetes-stateful-set-mongodb-tmt:
 
-Step 12: Start a Kubernetes StatefulSet for MongoDB
+Step 13: Start a Kubernetes StatefulSet for MongoDB
 ---------------------------------------------------
 
-  * This configuration is located in the file ``mongodb/mongo-ss.yaml``.
+  * This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``.
 
   * Set the ``spec.serviceName`` to the value set in ``mdb-instance-name``
    in the ConfigMap.
@@ -570,9 +641,8 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
 
   * The configuration uses the following values set in the ConfigMap:
 
    - ``mdb-instance-name``
-    - ``mongodb-replicaset-name``
    - ``mongodb-backend-port``
-  
+
   * The configuration uses the following values set in the Secret:
 
    - ``mdb-certs``
@@ -595,7 +665,7 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB
 
    .. 
code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml * It might take up to 10 minutes for the disks, specified in the Persistent Volume Claims above, to be created and attached to the pod. @@ -608,9 +678,10 @@ Step 12: Start a Kubernetes StatefulSet for MongoDB $ kubectl --context k8s-bdb-test-cluster-0 get pods -w -.. _configure-users-and-access-control-for-mongodb: -Step 13: Configure Users and Access Control for MongoDB +.. _configure-users-and-access-control-mongodb-tmt: + +Step 14: Configure Users and Access Control for MongoDB ------------------------------------------------------- * In this step, you will create a user on MongoDB with authorization @@ -638,28 +709,6 @@ Step 13: Configure Users and Access Control for MongoDB --sslCAFile /etc/mongod/ca/ca.pem \ --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem - * Initialize the replica set using: - - .. code:: bash - - > rs.initiate( { - _id : "bigchain-rs", - members: [ { - _id : 0, - host :":27017" - } ] - } ) - - The ``hostname`` in this case will be the value set in - ``mdb-instance-name`` in the ConfigMap. - For example, if the value set in the ``mdb-instance-name`` is - ``mdb-instance-0``, set the ``hostname`` above to the value ``mdb-instance-0``. - - * The instance should be voted as the ``PRIMARY`` in the replica set (since - this is the only instance in the replica set till now). - This can be observed from the mongo shell prompt, - which will read ``PRIMARY>``. - * Create a user ``adminUser`` on the ``admin`` database with the authorization to create other users. This will only work the first time you log in to the mongo shell. For further details, see `localhost @@ -717,8 +766,7 @@ Step 13: Configure Users and Access Control for MongoDB ] } ) - * You can similarly create users for MongoDB Monitoring Agent and MongoDB - Backup Agent. 
For example:
+  * You can similarly create a user for the MongoDB Monitoring Agent. For example:
 
    .. code:: bash
 
@@ -730,18 +778,127 @@ Step 13: Configure Users and Access Control for MongoDB
          ]
        } )
 
-      PRIMARY> db.getSiblingDB("$external").runCommand( {
-        createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-bak-ssl,OU=MongoDB-Bak-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE',
-        writeConcern: { w: 'majority' , wtimeout: 5000 },
-        roles: [
-          { role: 'backup', db: 'admin' }
-        ]
-      } )
+
+.. _create-kubernetes-storage-class-tmt:
+
+Step 15: Create Kubernetes Storage Classes for Tendermint
+----------------------------------------------------------
+
+Tendermint needs somewhere to store its data persistently; it uses
+LevelDB as its persistent storage layer.
+
+The Kubernetes Storage Class configuration template is located in the
+file ``tendermint/tendermint-sc.yaml``.
+
+Details about how to create an Azure storage account and how Kubernetes Storage Classes work
+are already covered in this document: :ref:`create-kubernetes-storage-class-mdb-tmt`.
+
+Create the required storage classes using:
+
+.. code:: bash
+
+   $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-sc.yaml
 
-.. _start-a-kubernetes-deployment-for-mongodb-monitoring-agent:
+You can check if it worked using ``kubectl get storageclasses``.
 
-Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
+.. _create-kubernetes-persistent-volume-claim-tmt:
+
+Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
+------------------------------------------------------------------
+
+Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
+``tendermint-config-db-claim``.
+
+This configuration is located in the file ``tendermint/tendermint-pvc.yaml``. 
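+
+For illustration only, a PersistentVolumeClaim manifest of this kind typically
+looks like the following sketch. The storage class name and requested size shown
+here are placeholders, not the real values; the actual definitions are the ones
+in ``tendermint/tendermint-pvc.yaml``:
+
+.. code:: yaml
+
+    kind: PersistentVolumeClaim
+    apiVersion: v1
+    metadata:
+      # Name referenced by the StatefulSet's volume section
+      name: tendermint-db-claim
+      annotations:
+        # Placeholder: must match a storage class created in the previous step
+        volume.beta.kubernetes.io/storage-class: tendermint-sc
+    spec:
+      accessModes:
+        - ReadWriteOnce
+      resources:
+        requests:
+          # Placeholder size; choose a size appropriate for your node
+          storage: 20Gi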
+
+Details about Kubernetes Persistent Volumes, Persistent Volume Claims
+and how they work with Azure are already covered in this
+document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`.
+
+Create the required Persistent Volume Claims using:
+
+.. code:: bash
+
+   $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-pvc.yaml
+
+You can check their status using:
+
+.. code:: bash
+
+   $ kubectl get pvc -w
+
+
+.. _create-kubernetes-stateful-set-tmt:
+
+Step 17: Start a Kubernetes StatefulSet for Tendermint
+------------------------------------------------------
+
+  * This configuration is located in the file ``tendermint/tendermint-ss.yaml``.
+
+  * Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
+    the ConfigMap.
+    For example, if the value set in the ``tm-instance-name``
+    is ``tm-instance-0``, set the field to ``tm-instance-0``.
+
+  * Set ``metadata.name``, ``spec.template.metadata.name`` and
+    ``spec.template.metadata.labels.app`` to the value set in
+    ``tm-instance-name`` in the ConfigMap, followed by
+    ``-ss``.
+    For example, if the value set in the
+    ``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
+    ``tm-instance-0-ss``.
+
+  * Note how the Tendermint container uses the ``tendermint-db-claim`` and the
+    ``tendermint-config-db-claim`` PersistentVolumeClaims for its ``/tendermint`` and
+    ``/tendermint_node_data`` directories (mount paths).
+
+  * As we gain more experience running Tendermint in testing and production, we
+    will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
+
+We deploy Tendermint and NGINX together in a single pod: Tendermint is used as the
+consensus engine, while NGINX serves the public key of the Tendermint instance.
+
+  * For the NGINX container, set the port to be exposed from the container in the
+    ``spec.containers[0].ports[0]`` section. Set it to the value specified
+    for ``tm-pub-key-access`` in the ConfigMap. 
+
+  * For the Tendermint container, set the ports to be exposed from the container in the
+    ``spec.containers[1].ports`` section. We currently expose two Tendermint ports.
+    Set them to the values specified for ``tm-p2p-port`` and ``tm-rpc-port``
+    in the ConfigMap, respectively.
+
+  * The configuration uses the following values set in the ConfigMap:
+
+    - ``tm-pub-key-access``
+    - ``tm-seeds``
+    - ``tm-validator-power``
+    - ``tm-validators``
+    - ``tm-genesis-time``
+    - ``tm-chain-id``
+    - ``tm-abci-port``
+    - ``bdb-instance-name``
+
+  * Create the Tendermint StatefulSet using:
+
+    .. code:: bash
+
+       $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-ss.yaml
+
+  * It might take up to 10 minutes for the disks, specified in the Persistent
+    Volume Claims above, to be created and attached to the pod.
+    The UI might show that the pod has errored with the message
+    "timeout expired waiting for volumes to attach/mount". Use the CLI below
+    to check the status of the pod in this case, instead of the UI.
+    This happens due to a bug in Azure ACS.
+
+    .. code:: bash
+
+       $ kubectl --context k8s-bdb-test-cluster-0 get pods -w
+
+.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt:
+
+Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent
 -------------------------------------------------------------------
 
   * This configuration is located in the file
@@ -768,40 +925,13 @@ Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent
 
       $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
 
 
-.. _start-a-kubernetes-deployment-for-mongodb-backup-agent:
+.. _start-kubernetes-deployment-bdb-tmt:
 
-Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent
----------------------------------------------------------------
-
-  * This configuration is located in the file
-    ``mongodb-backup-agent/mongo-backup-dep.yaml``. 
- - * Set ``metadata.name``, ``spec.template.metadata.name`` and - ``spec.template.metadata.labels.app`` to the value set in - ``mdb-bak-instance-name`` in the ConfigMap, followed by - ``-dep``. - For example, if the value set in the - ``mdb-bak-instance-name`` is ``mdb-bak-instance-0``, set the fields to the - value ``mdb-bak-instance-0-dep``. - - * The configuration uses the following values set in the Secret: - - - ``mdb-bak-certs`` - - ``ca-auth`` - - ``cloud-manager-credentials`` - - * Start the Kubernetes Deployment using: - - .. code:: bash - - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml - - -Step 16: Start a Kubernetes Deployment for BigchainDB +Step 19: Start a Kubernetes Deployment for BigchainDB ----------------------------------------------------- * This configuration is located in the file - ``bigchaindb/bigchaindb-dep.yaml``. + ``bigchaindb/bigchaindb-dep-tm.yaml``. * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the value set in ``bdb-instance-name`` in the ConfigMap, followed by @@ -810,21 +940,14 @@ Step 16: Start a Kubernetes Deployment for BigchainDB ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the value ``bdb-insance-0-dep``. - * Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded). - (In the future, we'd like to pull the BigchainDB private key from - the Secret named ``bdb-private-key``, - but a Secret can only be mounted as a file, - so BigchainDB Server would have to be modified to look for it - in a file.) - * As we gain more experience running BigchainDB in testing and production, we will tweak the ``resources.limits`` values for CPU and memory, and as richer monitoring and probing becomes available in BigchainDB, we will tweak the ``livenessProbe`` and ``readinessProbe`` parameters. * Set the ports to be exposed from the pod in the - ``spec.containers[0].ports`` section. 
We currently expose 2 ports - - ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the + ``spec.containers[0].ports`` section. We currently expose 3 ports - + ``bigchaindb-api-port``, ``bigchaindb-ws-port`` and ``tm-abci-port``. Set them to the values specified in the ConfigMap. * The configuration uses the following values set in the ConfigMap: @@ -845,6 +968,8 @@ Step 16: Start a Kubernetes Deployment for BigchainDB - ``bigchaindb-database-connection-timeout`` - ``bigchaindb-log-level`` - ``bdb-user`` + - ``tm-instance-name`` + - ``tm-rpc-port`` * The configuration uses the following values set in the Secret: @@ -855,15 +980,15 @@ Step 16: Start a Kubernetes Deployment for BigchainDB .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml * You can check its status using the command ``kubectl get deployments -w`` -.. _start-a-kubernetes-deployment-for-openresty: +.. _start-kubernetes-deployment-openresty-tmt: -Step 17: Start a Kubernetes Deployment for OpenResty +Step 20: Start a Kubernetes Deployment for OpenResty ---------------------------------------------------- * This configuration is located in the file @@ -902,21 +1027,21 @@ Step 17: Start a Kubernetes Deployment for OpenResty * You can check its status using the command ``kubectl get deployments -w`` -Step 18: Configure the MongoDB Cloud Manager +Step 21: Configure the MongoDB Cloud Manager -------------------------------------------- Refer to the -:ref:`documentation ` +:doc:`documentation <../production-deployment-template/cloud-manager>` for details on how to configure the MongoDB Cloud Manager to enable monitoring and backup. -.. _verify-the-bigchaindb-node-setup: +.. 
_verify-and-test-bdb-tmt:
 
-Step 19: Verify the BigchainDB Node Setup
+Step 22: Verify the BigchainDB Node Setup
 -----------------------------------------
 
-Step 19.1: Testing Internally
+Step 22.1: Testing Internally
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 To test the setup of your BigchainDB node, you could use a Docker container
@@ -967,6 +1092,18 @@ To test the BigchainDB instance:
 
    $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
 
+To test the Tendermint instance:
+
+.. code:: bash
+
+   $ nslookup tm-instance-0
+
+   $ dig +noall +answer _p2p._tcp.tm-instance-0.default.svc.cluster.local SRV
+
+   $ dig +noall +answer _rpc._tcp.tm-instance-0.default.svc.cluster.local SRV
+
+   $ curl -X GET http://tm-instance-0:9986/pub_key.json
 
 
 To test the OpenResty instance:
@@ -1020,10 +1157,10 @@ The above curl command should result in the response
 ``It looks like you are trying to access MongoDB over HTTP on the native
 driver port.``
 
-Step 19.2: Testing Externally
+Step 22.2: Testing Externally
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Check the MongoDB monitoring and backup agent on the MongoDB Cloud Manager
-portal to verify they are working fine.
+Check the MongoDB monitoring agent on the MongoDB Cloud Manager
+portal to verify it is working fine.
 
 If you are using the NGINX with HTTP support, accessing the URL
@@ -1035,3 +1172,7 @@ If you are using the NGINX with HTTPS support, use ``https`` instead of
 
 Use the Python Driver to send some transactions to the BigchainDB node
 and verify that your node or cluster works as expected.
+
+Next, you can set up log analytics and monitoring by following our template:
+
+* :doc:`../production-deployment-template/log-analytics`. 
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/tectonic-azure.rst b/docs/server/source/production-deployment-template/tectonic-azure.rst
index 68b0afd9..f9d58074 100644
--- a/docs/server/source/production-deployment-template/tectonic-azure.rst
+++ b/docs/server/source/production-deployment-template/tectonic-azure.rst
@@ -123,8 +123,6 @@ Next, you can follow one of our following deployment templates:
 
 * :doc:`node-on-kubernetes`.
 
-* :doc:`../production-deployment-template-tendermint/node-on-kubernetes`
-
 
 Tectonic References
 -------------------
diff --git a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
index 7d43fafc..d57abe27 100644
--- a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
+++ b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst
@@ -224,6 +224,5 @@ CAUTION: You might end up deleting resources other than the ACS cluster.
    --name 
 
-Next, you can :doc:`run a BigchainDB node(Non-BFT) ` or :doc:`run a BigchainDB
-node/cluster(BFT) <../production-deployment-template-tendermint/node-on-kubernetes>`
+Next, you can :doc:`run a BigchainDB node/cluster (BFT) `
 on your new Kubernetes cluster.
\ No newline at end of file
diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst
index a790a619..0a35d65b 100644
--- a/docs/server/source/production-deployment-template/workflow.rst
+++ b/docs/server/source/production-deployment-template/workflow.rst
@@ -6,28 +6,13 @@
 to set up a production BigchainDB cluster.
 
 We are constantly improving them.
 You can modify them to suit your needs.
 
-
-Things the Managing Organization Must Do First
-----------------------------------------------
+.. 
note::
+    We use a standalone MongoDB instance (without a replica set); BFT replication is handled by Tendermint.
 
-1. Set Up a Self-Signed Certificate Authority
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt:
 
-We use SSL/TLS and self-signed certificates
-for MongoDB authentication (and message encryption).
-The certificates are signed by the organization managing the cluster.
-If your organization already has a process
-for signing certificates
-(i.e. an internal self-signed certificate authority [CA]),
-then you can skip this step.
-Otherwise, your organization must
-:ref:`set up its own self-signed certificate authority `.
-
-
-.. _register-a-domain-and-get-an-ssl-certificate-for-it:
-
-2. Register a Domain and Get an SSL Certificate for It
+1. Register a Domain and Get an SSL Certificate for It
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS,
@@ -36,83 +21,149 @@
 should choose an FQDN for their API (e.g. api.organization-x.com),
 register the domain name,
 and buy an SSL/TLS certificate for the FQDN.
 
-.. _things-each-node-operator-must-do:
+
+.. _generate-the-blockchain-id-and-genesis-time:
+
+2. Generate the Blockchain ID and Genesis Time
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Tendermint nodes require two parameters that must be common to, and shared among,
+all participants in the network:
+
+* ``chain_id``: ID of the blockchain. This must be unique for every blockchain.
+
+  * Example: ``test-chain-9gHylg``
+
+* ``genesis_time``: Official time of blockchain start.
+
+  * Example: ``0001-01-01T00:00:00Z``
+
+Both parameters can be generated using the ``tendermint init`` command.
+To `initialize `_,
+you will need to `install Tendermint `_
+and verify that a ``genesis.json`` file is created under the `Root Directory
+`_. 
You can use +the ``genesis_time`` and ``chain_id`` from this example ``genesis.json`` file: + +.. code:: json + + { + "genesis_time": "0001-01-01T00:00:00Z", + "chain_id": "test-chain-9gHylg", + "validators": [ + { + "pub_key": { + "type": "ed25519", + "data": "D12279E746D3724329E5DE33A5AC44D5910623AA6FB8CDDC63617C959383A468" + }, + "power": 10, + "name": "" + } + ], + "app_hash": "" + } + +.. _things-each-node-operator-must-do-tmt: Things Each Node Operator Must Do --------------------------------- -☐ Every MongoDB instance in the cluster must have a unique (one-of-a-kind) name. -Ask the organization managing your cluster if they have a standard -way of naming instances in the cluster. -For example, maybe they assign a unique number to each node, -so that if you're operating node 12, your MongoDB instance would be named -``mdb-instance-12``. -Similarly, other instances must also have unique names in the cluster. +☐ Set Up a Self-Signed Certificate Authority -#. Name of the MongoDB instance (``mdb-instance-*``) -#. Name of the BigchainDB instance (``bdb-instance-*``) -#. Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``) -#. Name of the OpenResty instance (``openresty-instance-*``) -#. Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``) -#. Name of the MongoDB backup agent instance (``mdb-bak-instance-*``) +We use SSL/TLS and self-signed certificates +for MongoDB authentication (and message encryption). +The certificates are signed by the organization managing the :ref:`bigchaindb-node`. +If your organization already has a process +for signing certificates +(i.e. an internal self-signed certificate authority [CA]), +then you can skip this step. +Otherwise, your organization must +:ref:`set up its own self-signed certificate authority `. -☐ Generate four keys and corresponding certificate signing requests (CSRs): +☐ Follow Standard and Unique Naming Convention -#. Server Certificate (a.k.a. 
Member Certificate) for the MongoDB instance
+   ☐ Name of the MongoDB instance (``mdb-instance-*``)
+
+   ☐ Name of the BigchainDB instance (``bdb-instance-*``)
+
+   ☐ Name of the NGINX instance (``ngx-http-instance-*`` or ``ngx-https-instance-*``)
+
+   ☐ Name of the OpenResty instance (``openresty-instance-*``)
+
+   ☐ Name of the MongoDB monitoring agent instance (``mdb-mon-instance-*``)
+
+   ☐ Name of the Tendermint instance (``tm-instance-*``)
+
+**Example**
+
+
+.. code:: text
+
+    {
+      "MongoDB": [
+        "mdb-instance-1",
+        "mdb-instance-2",
+        "mdb-instance-3",
+        "mdb-instance-4"
+      ],
+      "BigchainDB": [
+        "bdb-instance-1",
+        "bdb-instance-2",
+        "bdb-instance-3",
+        "bdb-instance-4"
+      ],
+      "NGINX": [
+        "ngx-instance-1",
+        "ngx-instance-2",
+        "ngx-instance-3",
+        "ngx-instance-4"
+      ],
+      "OpenResty": [
+        "openresty-instance-1",
+        "openresty-instance-2",
+        "openresty-instance-3",
+        "openresty-instance-4"
+      ],
+      "MongoDB_Monitoring_Agent": [
+        "mdb-mon-instance-1",
+        "mdb-mon-instance-2",
+        "mdb-mon-instance-3",
+        "mdb-mon-instance-4"
+      ],
+      "Tendermint": [
+        "tm-instance-1",
+        "tm-instance-2",
+        "tm-instance-3",
+        "tm-instance-4"
+      ]
+    }
+
+
+☐ Generate three keys and corresponding certificate signing requests (CSRs):
+
+#. Server Certificate for the MongoDB instance
 #. Client Certificate for BigchainDB Server to identify itself to MongoDB
 #. Client Certificate for MongoDB Monitoring Agent to identify itself to
    MongoDB
-#. Client Certificate for MongoDB Backup Agent to identify itself to MongoDB
 
-Ask the managing organization to use its self-signed CA to sign those four CSRs.
-They should send you:
-
-* Four certificates (one for each CSR you sent them).
-* One ``ca.crt`` file: their CA certificate.
-* One ``crl.pem`` file: a certificate revocation list. 
- -For help, see the pages: - -* :ref:`how-to-generate-a-server-certificate-for-mongodb` -* :ref:`how-to-generate-a-client-certificate-for-mongodb` - - -☐ Every node in a BigchainDB cluster needs its own -BigchainDB keypair (i.e. a public key and corresponding private key). -You can generate a BigchainDB keypair for your node, for example, -using the `BigchainDB Python Driver `_. - -.. code:: python - - from bigchaindb_driver.crypto import generate_keypair - print(generate_keypair()) - - -☐ Share your BigchaindB *public* key with all the other nodes -in the BigchainDB cluster. -Don't share your private key. - - -☐ Get the BigchainDB public keys of all the other nodes in the cluster. -That list of public keys is known as the BigchainDB "keyring." +Use the self-signed CA to sign those three CSRs. For help, see the pages: +* :doc:`How to Generate a Server Certificate for MongoDB <../production-deployment-template/server-tls-certificate>` +* :doc:`How to Generate a Client Certificate for MongoDB <../production-deployment-template/client-tls-certificate>` ☐ Make up an FQDN for your BigchainDB node (e.g. ``mynode.mycorp.com``). Make sure you've registered the associated domain name (e.g. ``mycorp.com``), and have an SSL certificate for the FQDN. (You can get an SSL certificate from any SSL certificate provider.) - -☐ Ask the managing organization for the user name to use for authenticating to +☐ Ask the BigchainDB Node operator/owner for the username to use for authenticating to MongoDB. - ☐ If the cluster uses 3scale for API authentication, monitoring and billing, -you must ask the managing organization for all relevant 3scale credentials - +you must ask the BigchainDB node operator/owner for all relevant 3scale credentials - secret token, service ID, version header and API service token. 
- -☐ If the cluster uses MongoDB Cloud Manager for monitoring and backup, +☐ If the cluster uses MongoDB Cloud Manager for monitoring, you must ask the managing organization for the ``Project ID`` and the ``Agent API Key``. (Each Cloud Manager "Project" has its own ``Project ID``. A ``Project ID`` can @@ -122,11 +173,7 @@ allow easier periodic rotation of the ``Agent API Key`` with a constant ``Project ID``) -☐ :doc:`Deploy a Kubernetes cluster on Azure `. +☐ :doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`. - -☐ You can now proceed to set up your BigchainDB node based on whether it is the -:ref:`first node in a new cluster -` or a -:ref:`node that will be added to an existing cluster -`. +☐ You can now proceed to set up your :ref:`BigchainDB node +`. diff --git a/setup.py b/setup.py index 49325444..6fd910e4 100644 --- a/setup.py +++ b/setup.py @@ -41,6 +41,7 @@ docs_require = [ 'sphinx-rtd-theme>=0.1.9', 'sphinxcontrib-httpdomain>=1.5.0', 'sphinxcontrib-napoleon>=0.4.4', + 'aafigure>=0.6', ] tests_require = [ From 6052043a03d23fb8c1f4b54681edbfac8f6b7357 Mon Sep 17 00:00:00 2001 From: muawiakh Date: Thu, 8 Feb 2018 12:17:40 +0100 Subject: [PATCH 05/10] Remove unwanted references --- .../add-node-on-kubernetes.rst | 384 ------------------ .../restore-from-mongodb-cloud-manager.rst | 146 ------- 2 files changed, 530 deletions(-) delete mode 100644 docs/server/source/production-deployment-template/add-node-on-kubernetes.rst delete mode 100644 docs/server/source/production-deployment-template/restore-from-mongodb-cloud-manager.rst diff --git a/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst b/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst deleted file mode 100644 index da2d58fa..00000000 --- a/docs/server/source/production-deployment-template/add-node-on-kubernetes.rst +++ /dev/null @@ -1,384 +0,0 @@ -.. 
_kubernetes-template-add-a-bigchaindb-node-to-an-existing-cluster: - -Kubernetes Template: Add a BigchainDB Node to an Existing BigchainDB Cluster -============================================================================ - -This page describes how to deploy a BigchainDB node using Kubernetes, -and how to add that node to an existing BigchainDB cluster. -It assumes you already have a running Kubernetes cluster -where you can deploy the new BigchainDB node. - -If you want to deploy the first BigchainDB node in a BigchainDB cluster, -or a stand-alone BigchainDB node, -then see :doc:`the page about that `. - - -Terminology Used ----------------- - -``existing cluster`` will refer to one of the existing Kubernetes clusters -hosting one of the existing BigchainDB nodes. - -``ctx-1`` will refer to the kubectl context of the existing cluster. - -``new cluster`` will refer to the new Kubernetes cluster that will run a new -BigchainDB node (including a BigchainDB instance and a MongoDB instance). - -``ctx-2`` will refer to the kubectl context of the new cluster. - -``new MongoDB instance`` will refer to the MongoDB instance in the new cluster. - -``existing MongoDB instance`` will refer to the MongoDB instance in the -existing cluster. - -``new BigchainDB instance`` will refer to the BigchainDB instance in the new -cluster. - -``existing BigchainDB instance`` will refer to the BigchainDB instance in the -existing cluster. - -Below, we refer to multiple files by their directory and filename, -such as ``mongodb/mongo-ext-conn-svc.yaml``. Those files are files in the -`bigchaindb/bigchaindb repository on GitHub -`_ in the ``k8s/`` directory. -Make sure you're getting those files from the appropriate Git branch on -GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB -cluster is using. - - -Step 1: Prerequisites ---------------------- - -* :ref:`List of all the things to be done by each node operator `. 
- -* The public key should be shared offline with the other existing BigchainDB - nodes in the existing BigchainDB cluster. - -* You will need the public keys of all the existing BigchainDB nodes. - -* A new Kubernetes cluster setup with kubectl configured to access it. - -* Some familiarity with deploying a BigchainDB node on Kubernetes. - See our :doc:`other docs about that `. - -Note: If you are managing multiple Kubernetes clusters, from your local -system, you can run ``kubectl config view`` to list all the contexts that -are available for the local kubectl. -To target a specific cluster, add a ``--context`` flag to the kubectl CLI. For -example: - -.. code:: bash - - $ kubectl --context ctx-1 apply -f example.yaml - $ kubectl --context ctx-2 apply -f example.yaml - $ kubectl --context ctx-1 proxy --port 8001 - $ kubectl --context ctx-2 proxy --port 8002 - - -Step 2: Configure the BigchainDB Node -------------------------------------- - -See the section on how to :ref:`how-to-configure-a-bigchaindb-node`. - - -Step 3: Start the NGINX Service --------------------------------- - -Please see the following section: - -* :ref:`start-the-nginx-service`. - - -Step 4: Assign DNS Name to the NGINX Public IP ----------------------------------------------- - -Please see the following section: - -* :ref:`assign-dns-name-to-the-nginx-public-ip`. - - -Step 5: Start the MongoDB Kubernetes Service --------------------------------------------- - -Please see the following section: - -* :ref:`start-the-mongodb-kubernetes-service`. - - -Step 6: Start the BigchainDB Kubernetes Service ------------------------------------------------ - -Please see the following section: - -* :ref:`start-the-bigchaindb-kubernetes-service`. - - -Step 7: Start the OpenResty Kubernetes Service ----------------------------------------------- - -Please see the following section: - -* :ref:`start-the-openresty-kubernetes-service`. 
- - -Step 8: Start the NGINX Kubernetes Deployment ---------------------------------------------- - -Please see the following section: - -* :ref:`start-the-nginx-kubernetes-deployment`. - - -Step 9: Create Kubernetes Storage Classes for MongoDB ------------------------------------------------------ - -Please see the following section: - -* :ref:`create-kubernetes-storage-classes-for-mongodb`. - - -Step 10: Create Kubernetes Persistent Volume Claims ---------------------------------------------------- - -Please see the following section: - -* :ref:`create-kubernetes-persistent-volume-claims`. - - -Step 11: Start a Kubernetes StatefulSet for MongoDB ---------------------------------------------------- - -Please see the following section: - -* :ref:`start-a-kubernetes-statefulset-for-mongodb`. - - -Step 12: Verify network connectivity between the MongoDB instances ------------------------------------------------------------------- - -Make sure your MongoDB instances can access each other over the network. *If* you are deploying -the new MongoDB node in a different cluster or geographical location using Azure Kubernetes Container -Service, you will have to set up networking between the two clusters using `Kubernetes -Services `_. - -Assuming we have an existing MongoDB instance ``mdb-instance-0`` residing in Azure data center location ``westeurope`` and we -want to add a new MongoDB instance ``mdb-instance-1`` located in Azure data center location ``eastus`` to the existing MongoDB -replica set. Unless you already have explicitly set up networking for ``mdb-instance-0`` to communicate with ``mdb-instance-1`` and -vice versa, we will have to add a Kubernetes Service in each cluster to accomplish this goal in order to set up a -MongoDB replica set. -It is similar to ensuring that there is a ``CNAME`` record in the DNS -infrastructure to resolve ``mdb-instance-X`` to the host where it is actually available. 
-We can do this in Kubernetes using a Kubernetes Service of ``type`` -``ExternalName``. - -* This configuration is located in the file ``mongodb/mongo-ext-conn-svc.yaml``. - -* Set the name of the ``metadata.name`` to the host name of the MongoDB instance you are trying to connect to. - For instance if you are configuring this service on cluster with ``mdb-instance-0`` then the ``metadata.name`` will - be ``mdb-instance-1`` and vice versa. - -* Set ``spec.ports.port[0]`` to the ``mongodb-backend-port`` from the ConfigMap for the other cluster. - -* Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to. - For more information about the FQDN please refer to: :ref:`assign-dns-name-to-the-nginx-public-ip`. - -.. note:: - This operation needs to be replicated ``n-1`` times per node for a ``n`` node cluster, with the respective FQDNs - we need to communicate with. - - If you are not the system administrator of the cluster, you have to get in - touch with the system administrator/s of the other ``n-1`` clusters and - share with them your instance name (``mdb-instance-name`` in the ConfigMap) - and the FQDN for your node (``cluster-fqdn`` in the ConfigMap). - - -Step 13: Add the New MongoDB Instance to the Existing Replica Set ------------------------------------------------------------------ - -Note that by ``replica set``, we are referring to the MongoDB replica set, -not a Kubernetes' ``ReplicaSet``. - -If you are not the administrator of an existing BigchainDB node, you -will have to coordinate offline with an existing administrator so that they can -add the new MongoDB instance to the replica set. - -Add the new instance of MongoDB from an existing instance by accessing the -``mongo`` shell and authenticate as the ``adminUser`` we created for existing MongoDB instance OR -contact the admin of the PRIMARY MongoDB node: - -.. 
code:: bash - - $ kubectl --context ctx-1 exec -it bash - $ mongo --host --port 27017 --verbose --ssl \ - --sslCAFile /etc/mongod/ssl/ca.pem \ - --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem - - PRIMARY> use admin - PRIMARY> db.auth("adminUser", "superstrongpassword") - -One can only add members to a replica set from the ``PRIMARY`` instance. -The ``mongo`` shell prompt should state that this is the primary member in the -replica set. -If not, then you can use the ``rs.status()`` command to find out who the -primary is and login to the ``mongo`` shell in the primary. - -Run the ``rs.add()`` command with the FQDN and port number of the other instances: - -.. code:: bash - - PRIMARY> rs.add(":") - - -Step 14: Verify the Replica Set Membership ------------------------------------------- - -You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the -mongo shell to verify the replica set membership. - -The new MongoDB instance should be listed in the membership information -displayed. - - -Step 15: Configure Users and Access Control for MongoDB -------------------------------------------------------- - -* Create the users in MongoDB with the appropriate roles assigned to them. This - will enable the new BigchainDB instance, new MongoDB Monitoring Agent - instance and the new MongoDB Backup Agent instance to function correctly. - -* Please refer to - :ref:`configure-users-and-access-control-for-mongodb` to create and - configure the new BigchainDB, MongoDB Monitoring Agent and MongoDB Backup - Agent users on the cluster. - -.. note:: - You will not have to create the MongoDB replica set or create the admin user, as they already exist. - - If you do not have access to the ``PRIMARY`` member of the replica set, you - need to get in touch with the administrator who can create the users in the - MongoDB cluster. 
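The primary lookup and membership check in Steps 13 and 14 come down to scanning the ``members`` array that ``rs.status()`` returns in the mongo shell. A minimal Python sketch of that check follows; the sample status document is hypothetical (only the ``name`` and ``stateStr`` fields of MongoDB's replica-set status format are used), and the hostnames are placeholders echoing the ``westeurope``/``eastus`` example above.

```python
# Minimal sketch (not part of the deployment files): scan an
# rs.status()-style document to find the PRIMARY member and to confirm
# that a newly added member appears in the replica set. A real status
# document comes from running rs.status() in the mongo shell.

def find_primary(status):
    """Return the 'host:port' of the PRIMARY member, or None."""
    for member in status["members"]:
        if member.get("stateStr") == "PRIMARY":
            return member["name"]
    return None

def is_member(status, host):
    """True if the given 'host:port' appears in the membership list."""
    return any(m["name"] == host for m in status["members"])

# Hypothetical sample data for illustration only.
sample_status = {
    "set": "bigchain-rs",
    "members": [
        {"name": "mdb-instance-0.westeurope.cloudapp.azure.com:27017",
         "stateStr": "PRIMARY"},
        {"name": "mdb-instance-1.eastus.cloudapp.azure.com:27017",
         "stateStr": "SECONDARY"},
    ],
}

print(find_primary(sample_status))
print(is_member(sample_status, "mdb-instance-1.eastus.cloudapp.azure.com:27017"))
```

If ``find_primary`` does not return the instance you are connected to, reconnect to the reported host before running ``rs.add()``, since members can only be added from the ``PRIMARY``.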
- - - -Step 16: Start a Kubernetes Deployment for MongoDB Monitoring Agent -------------------------------------------------------------------- - -Please see the following section: - -* :ref:`start-a-kubernetes-deployment-for-mongodb-monitoring-agent`. - -.. note:: - Every MMS group has only one active Monitoring and Backup Agent and having - multiple agents provides high availability and failover, in case one goes - down. For more information about Monitoring and Backup Agents please - consult the `official MongoDB documenation - `_. - - -Step 17: Start a Kubernetes Deployment for MongoDB Backup Agent ---------------------------------------------------------------- - -Please see the following section: - -* :ref:`start-a-kubernetes-deployment-for-mongodb-backup-agent`. - -.. note:: - Every MMS group has only one active Monitoring and Backup Agent and having - multiple agents provides high availability and failover, in case one goes - down. For more information about Monitoring and Backup Agents please - consult the `official MongoDB documenation - `_. - - -Step 18: Start a Kubernetes Deployment for BigchainDB ------------------------------------------------------ - -* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the - value set in ``bdb-instance-name`` in the ConfigMap, followed by - ``-dep``. - For example, if the value set in the - ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the - value ``bdb-instance-0-dep``. - -* Set the value of ``BIGCHAINDB_KEYPAIR_PRIVATE`` (not base64-encoded). - (In the future, we'd like to pull the BigchainDB private key from - the Secret named ``bdb-private-key``, but a Secret can only be mounted as a file, - so BigchainDB Server would have to be modified to look for it - in a file.) 
- -* As we gain more experience running BigchainDB in testing and production, - we will tweak the ``resources.limits`` values for CPU and memory, and as - richer monitoring and probing becomes available in BigchainDB, we will - tweak the ``livenessProbe`` and ``readinessProbe`` parameters. - -* Set the ports to be exposed from the pod in the - ``spec.containers[0].ports`` section. We currently expose 2 ports - - ``bigchaindb-api-port`` and ``bigchaindb-ws-port``. Set them to the - values specified in the ConfigMap. - -* Uncomment the env var ``BIGCHAINDB_KEYRING``, it will pick up the - ``:`` delimited list of all the public keys in the BigchainDB cluster from the ConfigMap. - -Create the required Deployment using: - -.. code:: bash - - $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml - -You can check its status using the command ``kubectl --context ctx-2 get deploy -w`` - - -Step 19: Restart the Existing BigchainDB Instance(s) ----------------------------------------------------- - -* Add the public key of the new BigchainDB instance to the ConfigMap - ``bdb-keyring`` variable of all the existing BigchainDB instances. - Update all the existing ConfigMap using: - -.. code:: bash - - $ kubectl --context ctx-1 apply -f configuration/config-map.yaml - -* Uncomment the ``BIGCHAINDB_KEYRING`` variable from the - ``bigchaindb/bigchaindb-dep.yaml`` to refer to the keyring updated in the - ConfigMap. - Update the running BigchainDB instance using: - -.. code:: bash - - $ kubectl --context ctx-1 delete -f bigchaindb/bigchaindb-dep.yaml - $ kubectl --context ctx-1 apply -f bigchaindb/bigchaindb-dep.yaml - - -See the page titled :ref:`how-to-configure-a-bigchaindb-node` -for more information about ConfigMap configuration. - -You can SSH to an existing BigchainDB instance and run the ``bigchaindb -show-config`` command to check that the keyring is updated. 
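Step 19 relies on ``BIGCHAINDB_KEYRING`` being a ``:``-delimited list of the public keys of the *other* nodes in the cluster. A minimal sketch of assembling that value is below; the key strings are placeholders, not real BigchainDB public keys.

```python
# Minimal sketch: build the colon-delimited BIGCHAINDB_KEYRING value.
# The keyring lists every node's public key except this node's own.

def build_keyring(all_public_keys, own_public_key):
    """Join the other nodes' public keys with ':' for BIGCHAINDB_KEYRING."""
    return ":".join(k for k in all_public_keys if k != own_public_key)

# Placeholder keys; real BigchainDB public keys are encoded strings
# produced by generate_keypair() in the BigchainDB Python Driver.
keys = ["PubKeyNode0", "PubKeyNode1", "PubKeyNode2"]
print(build_keyring(keys, "PubKeyNode1"))
```

The resulting string is what the ConfigMap's ``bdb-keyring`` variable should contain after the new node's public key has been added.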
- - -Step 20: Start a Kubernetes Deployment for OpenResty ----------------------------------------------------- - -Please see the following section: - -* :ref:`start-a-kubernetes-deployment-for-openresty`. - - -Step 21: Configure the MongoDB Cloud Manager --------------------------------------------- - -* MongoDB Cloud Manager auto-detects the members of the replica set and - configures the agents to act as a master/slave accordingly. - -* You can verify that the new MongoDB instance is detected by the - Monitoring and Backup Agent using the Cloud Manager UI. - - -Step 22: Test Your New BigchainDB Node --------------------------------------- - -* Please refer to the testing steps :ref:`here - ` to verify that your new BigchainDB - node is working as expected. - diff --git a/docs/server/source/production-deployment-template/restore-from-mongodb-cloud-manager.rst b/docs/server/source/production-deployment-template/restore-from-mongodb-cloud-manager.rst deleted file mode 100644 index a4f32a56..00000000 --- a/docs/server/source/production-deployment-template/restore-from-mongodb-cloud-manager.rst +++ /dev/null @@ -1,146 +0,0 @@ -How to Restore Data Backed On MongoDB Cloud Manager -=================================================== - -This page describes how to restore data backed up on -`MongoDB Cloud Manager `_ by -the backup agent when using a single instance MongoDB replica set. - - -Prerequisites -------------- - -- You can restore to either new hardware or existing hardware. We cover - restoring data to an existing MongoDB Kubernetes StatefulSet using a - Kubernetes Persistent Volume Claim below as described - :doc:`here `. - -- If the backup and destination database storage engines or settings do not - match, mongod cannot start once the backup is restored. - -- If the backup and destination database do not belong to the same MongoDB - Cloud Manager group, then the database will start but never initialize - properly. 
- -- The backup restore file includes a metadata file, restoreInfo.txt. This file - captures the options the database used when the snapshot was taken. The - database must be run with the listed options after it has been restored. It - contains: - 1. Group name - 2. Replica Set name - 3. Cluster Id (if applicable) - 4. Snapshot timestamp (as Timestamp at UTC) - 5. Last Oplog applied (as a BSON Timestamp at UTC) - 6. MongoDB version - 7. Storage engine type - 8. mongod startup options used on the database when the snapshot was taken - - -Step 1: Get the Backup/Archived Data from Cloud Manager -------------------------------------------------------- - -- Log in to the Cloud Manager. - -- Select the Group that you want to restore data from. - -- Click Backup. Hover over the Status column, click on the - ``Restore Or Download`` button. - -- Select the appropriate SNAPSHOT, and click Next. - -.. note:: - - We currently do not support restoring data using the ``POINT IN TIME`` and - ``OPLOG TIMESTAMP`` method. - -- Select 'Pull via Secure HTTP'. Select the number of times the link can be - used to download data in the dropdown box. We select ``Once``. - Select the link expiration time - the time till the download link is active. - We usually select ``1 hour``. - -- Check for the email from MongoDB. - -.. note:: - - This can take some time as the Cloud Manager needs to prepare an archive of - the backed up data. - -- Once you receive the email, click on the link to open the - ``restore jobs page``. Follow the instructions to download the backup data. - -.. note:: - - You will be shown a link to download the back up archive. You can either - click on the ``Download`` button to download it using the browser. - Under rare circumstances, the download is interrupted and errors out; I have - no idea why. - An alternative is to copy the download link and use the ``wget`` tool on - Linux systems to download the data. 
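The restored database must be started with the options recorded in ``restoreInfo.txt``, so it is worth extracting those fields before proceeding. A minimal sketch of pulling them out for inspection; the ``Key: value`` layout assumed here is hypothetical, so check the actual file shipped inside your backup archive.

```python
# Minimal sketch: parse a restoreInfo.txt-style metadata file into a
# dict so the recorded mongod options can be compared against the
# restore target. The "Key: value" layout is an assumption.

def parse_restore_info(text):
    """Map each 'Key: value' line to a dict entry; other lines are skipped."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

# Hypothetical sample contents, mirroring the fields listed above.
sample = """Group Name: bigchain-group
Replica Set Name: bigchain-rs
MongoDB Version: 3.4.10
Storage Engine: wiredTiger"""

info = parse_restore_info(sample)
print(info["Replica Set Name"])
```

Comparing ``info["Storage Engine"]`` and ``info["MongoDB Version"]`` against the destination instance catches the mismatches mentioned in the prerequisites before ``mongod`` fails to start.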
- -Step 2: Copy the archive to the MongoDB Instance ------------------------------------------------- - -- Once you have the archive, you can copy it to the MongoDB instance running - on a Kubernetes cluster using something similar to: - -.. code:: bash - - $ kubectl --context ctx-1 cp bigchain-rs-XXXX.tar.gz mdb-instance-name:/ - - where ``bigchain-rs-XXXX.tar.gz`` is the archive downloaded from Cloud - Manager, and ``mdb-instance-name`` is the name of your MongoDB instance. - - -Step 3: Prepare the MongoDB Instance for Restore ------------------------------------------------- - -- Log in to the MongoDB instance using something like: - -.. code:: bash - - $ kubectl --context ctx-1 exec -it mdb-instance-name bash - -- Extract the archive that we have copied to the instance at the proper - location using: - -.. code:: bash - - $ mv /bigchain-rs-XXXX.tar.gz /data/db - - $ cd /data/db - - $ tar xzvf bigchain-rs-XXXX.tar.gz - - -- Rename the directories on the disk, so that MongoDB can find the correct - data after we restart it. - -- The current database will be located in the ``/data/db/main`` directory. - We simply rename the old directory to ``/data/db/main.BAK`` and rename the - backup directory ``bigchain-rs-XXXX`` to ``main``. - -.. code:: bash - - $ mv main main.BAK - - $ mv bigchain-rs-XXXX main - -.. note:: - - Ensure that there are no connections to MongoDB from any client, in our - case, BigchainDB. This can be done in multiple ways - iptable rules, - shutting down BigchainDB, stop sending any transactions to BigchainDB, etc. - The simplest way to do it is to stop the MongoDB Kubernetes Service. - BigchainDB has a retry mechanism built in, and it will keep trying to - connect to MongoDB backend repeatedly till it succeeds. - -Step 4: Restart the MongoDB Instance ------------------------------------- - -- This can be achieved using something like: - -.. 
code:: bash - - $ kubectl --context ctx-1 delete -f k8s/mongo/mongo-ss.yaml - - $ kubectl --context ctx-1 apply -f k8s/mongo/mongo-ss.yaml - From 8945dd23329096f78c87e2854227375b1b4162e3 Mon Sep 17 00:00:00 2001 From: Troy McConaghy Date: Thu, 8 Feb 2018 14:20:50 +0100 Subject: [PATCH 06/10] Re-org headings in Prod. Dep. Template - Overview page --- .../source/production-deployment-template/workflow.rst | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst index 0a35d65b..4b8c6dec 100644 --- a/docs/server/source/production-deployment-template/workflow.rst +++ b/docs/server/source/production-deployment-template/workflow.rst @@ -10,10 +10,13 @@ You can modify them to suit your needs. We use standalone MongoDB (without Replica Set), BFT replication is handled by Tendermint. +Things to Do Before Deploying Any Nodes +--------------------------------------- + .. _register-a-domain-and-get-an-ssl-certificate-for-it-tmt: 1. Register a Domain and Get an SSL Certificate for It -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The BigchainDB APIs (HTTP API and WebSocket API) should be served using TLS, so the organization running the cluster @@ -25,7 +28,7 @@ and buy an SSL/TLS certificate for the FQDN. .. _generate-the-blockchain-id-and-genesis-time: 2. Generate the Blockchain ID and Genesis Time -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tendermint nodes require two parameters that need to be common and shared between all the participants in the network. From 1ba9233310a04b78aa9a76b15072f59be0f7cf2e Mon Sep 17 00:00:00 2001 From: Troy McConaghy Date: Thu, 8 Feb 2018 14:52:45 +0100 Subject: [PATCH 07/10] tendermint-instance-? --> tm-instance-? 
--- .../source/production-deployment-template/workflow.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst index 4b8c6dec..3d8771f4 100644 --- a/docs/server/source/production-deployment-template/workflow.rst +++ b/docs/server/source/production-deployment-template/workflow.rst @@ -135,10 +135,10 @@ Otherwise, your organization must "mdb-mon-instance-4" ], "Tendermint": [ - "tendermint-instance-1", - "tendermint-instance-2", - "tendermint-instance-3", - "tendermint-instance-4" + "tm-instance-1", + "tm-instance-2", + "tm-instance-3", + "tm-instance-4" ] } From b68c0ccbc01b6e36fa4230509b5bfd72a93511f7 Mon Sep 17 00:00:00 2001 From: muawiakh Date: Thu, 8 Feb 2018 15:48:45 +0100 Subject: [PATCH 08/10] Addressing comments and removing *-tm* or *-tmt* references --- .../bigchaindb-network-on-kubernetes.rst | 80 +++++++++--------- .../node-config-map-and-secrets.rst | 4 +- .../node-on-kubernetes.rst | 83 +++++++++---------- .../workflow.rst | 4 +- 4 files changed, 85 insertions(+), 86 deletions(-) diff --git a/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst b/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst index ed6c5433..19bcae5c 100644 --- a/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/bigchaindb-network-on-kubernetes.rst @@ -3,7 +3,7 @@ Kubernetes Template: Deploying a BigchainDB network =================================================== -This page describes how to deploy a BigchainDB + Tendermint network. +This page describes how to deploy a static BigchainDB + Tendermint network. If you want to deploy a stand-alone BigchainDB node in a BigchainDB cluster, or a stand-alone BigchainDB node, @@ -48,7 +48,7 @@ cluster is using. 
`github repository `_. -.. _pre-reqs-bdb-network-tmt: +.. _pre-reqs-bdb-network: Prerequisites ------------- @@ -57,7 +57,7 @@ The deployment methodology is similar to one covered with :doc:`node-on-kubernet we need to tweak some configurations depending on your choice of deployment. The operator needs to follow some consistent naming convention for all the components -covered :ref:`here `. +covered :ref:`here `. Lets assume we are deploying a 4 node cluster, your naming conventions could look like this: @@ -109,24 +109,24 @@ Lets assume we are deploying a 4 node cluster, your naming conventions could loo Edit config.yaml and secret.yaml ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Make N(number of nodes) copies of ``configuration/config-map-tm.yaml`` and ``configuration/secret-tm.yaml``. +Make N(number of nodes) copies of ``configuration/config-map.yaml`` and ``configuration/secret.yaml``. .. code:: text - # For config-map-tm.yaml + # For config-map.yaml config-map-node-1.yaml config-map-node-2.yaml config-map-node-3.yaml config-map-node-4.yaml - # For secret-tm.yaml + # For secret.yaml secret-node-1.yaml secret-node-2.yaml secret-node-3.yaml secret-node-4.yaml Edit the data values as described in :doc:`this document `, based -on the naming convention described :ref:`above `. +on the naming convention described :ref:`above `. **Only for single site deployments**: Since all the configuration files use the same ConfigMap and Secret Keys i.e. @@ -136,7 +136,7 @@ same ConfigMap and Secret Keys i.e. will overwrite the configuration of the previously deployed one. We want each node to have its own unique configurations. One way to go about it is that, using the -:ref:`naming convention above ` we edit the ConfigMap and Secret keys. +:ref:`naming convention above ` we edit the ConfigMap and Secret keys. .. code:: text @@ -208,7 +208,7 @@ Deploy all your configuration maps and secrets. *-node-4-dep.yaml -.. _single-site-network-tmt: +.. 
_single-site-network: Single Site: Single Azure Kubernetes Cluster ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -380,12 +380,12 @@ nodes, we need to replicate the :doc:`deployment steps for each node `. +discussed in this document `. .. note:: Assuming we are using independent Kubernetes clusters, the ConfigMap and Secret Keys - do not need to be updated unlike :ref:`single-site-network-tmt`, and we also do not + do not need to be updated unlike :ref:`single-site-network`, and we also do not need to update corresponding ConfigMap/Secret imports in the Kubernetes components. @@ -393,19 +393,19 @@ Deploy Kubernetes Services -------------------------- Deploy the following services for each node by following the naming convention -described :ref:`above `: +described :ref:`above `: -* :ref:`Start the NGINX Service `. +* :ref:`Start the NGINX Service `. -* :ref:`Assign DNS Name to the NGINX Public IP ` +* :ref:`Assign DNS Name to the NGINX Public IP ` -* :ref:`Start the MongoDB Kubernetes Service `. +* :ref:`Start the MongoDB Kubernetes Service `. -* :ref:`Start the BigchainDB Kubernetes Service `. +* :ref:`Start the BigchainDB Kubernetes Service `. -* :ref:`Start the OpenResty Kubernetes Service `. +* :ref:`Start the OpenResty Kubernetes Service `. -* :ref:`Start the Tendermint Kubernetes Service `. +* :ref:`Start the Tendermint Kubernetes Service `. Only for multi site deployments @@ -439,7 +439,7 @@ We can do this in Kubernetes using a Kubernetes Service of ``type`` * Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to. For more information about the FQDN please refer to: :ref:`Assign DNS name to NGINX Public - IP `. + IP `. .. 
note:: This operation needs to be replicated ``n-1`` times per node for a ``n`` node cluster, with the respective FQDNs @@ -449,47 +449,47 @@ We can do this in Kubernetes using a Kubernetes Service of ``type`` touch with the system administrator/s of the other ``n-1`` clusters and share with them your instance name (``tendermint-instance-name`` in the ConfigMap) and the FQDN of the NGINX instance acting as Gateway(set in: :ref:`Assign DNS name to NGINX - Public IP `). + Public IP `). Start NGINX Kubernetes deployments ---------------------------------- Start the NGINX deployment that serves as a Gateway for each node by following the -naming convention described :ref:`above ` and referring to the following instructions: +naming convention described :ref:`above ` and referring to the following instructions: -* :ref:`Start the NGINX Kubernetes Deployment `. +* :ref:`Start the NGINX Kubernetes Deployment `. Deploy Kubernetes StorageClasses for MongoDB and Tendermint ----------------------------------------------------------- Deploy the following StorageClasses for each node by following the naming convention -described :ref:`above `: +described :ref:`above `: -* :ref:`Create Kubernetes Storage Classes for MongoDB `. +* :ref:`Create Kubernetes Storage Classes for MongoDB `. -* :ref:`Create Kubernetes Storage Classes for Tendermint `. +* :ref:`Create Kubernetes Storage Classes for Tendermint `. Deploy Kubernetes PersistentVolumeClaims for MongoDB and Tendermint -------------------------------------------------------------------- Deploy the following services for each node by following the naming convention -described :ref:`above `: +described :ref:`above `: -* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB `. +* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB `. 
-* :ref:`Create Kubernetes Persistent Volume Claims for Tendermint ` +* :ref:`Create Kubernetes Persistent Volume Claims for Tendermint ` Deploy MongoDB Kubernetes StatefulSet -------------------------------------- Deploy the MongoDB StatefulSet (standalone MongoDB) for each node by following the naming convention -described :ref:`above `: and referring to the following section: +described :ref:`above `: and referring to the following section: -* :ref:`Start a Kubernetes StatefulSet for MongoDB `. +* :ref:`Start a Kubernetes StatefulSet for MongoDB `. Configure Users and Access Control for MongoDB @@ -498,43 +498,43 @@ Configure Users and Access Control for MongoDB Configure users and access control for each MongoDB instance in the network by referring to the following section: -* :ref:`Configure Users and Access Control for MongoDB `. +* :ref:`Configure Users and Access Control for MongoDB `. Deploy Tendermint Kubernetes StatefulSet ---------------------------------------- Deploy the Tendermint Stateful for each node by following the -naming convention described :ref:`above ` and referring to the following instructions: +naming convention described :ref:`above ` and referring to the following instructions: -* :ref:`create-kubernetes-stateful-set-tmt`. +* :ref:`create-kubernetes-stateful-set`. Start Kubernetes Deployment for MongoDB Monitoring Agent --------------------------------------------------------- Start the MongoDB monitoring agent Kubernetes deployment for each node by following the -naming convention described :ref:`above ` and referring to the following instructions: +naming convention described :ref:`above ` and referring to the following instructions: -* :ref:`Start a Kubernetes StatefulSet for Tendermint `. +* :ref:`Start a Kubernetes StatefulSet for Tendermint `. 
Start Kubernetes Deployment for BigchainDB ------------------------------------------ Start the BigchainDB Kubernetes deployment for each node by following the -naming convention described :ref:`above ` and referring to the following instructions: +naming convention described :ref:`above ` and referring to the following instructions: -* :ref:`Start a Kubernetes Deployment for BigchainDB `. +* :ref:`Start a Kubernetes Deployment for BigchainDB `. Start Kubernetes Deployment for OpenResty ------------------------------------------ Start the OpenResty Kubernetes deployment for each node by following the -naming convention described :ref:`above ` and referring to the following instructions: +naming convention described :ref:`above ` and referring to the following instructions: -* :ref:` Start a Kubernetes Deployment for OpenResty `. +* :ref:`Start a Kubernetes Deployment for OpenResty `. Verify and Test @@ -542,5 +542,5 @@ Verify and Test Verify and test your setup by referring to the following instructions: -* :ref:`Verify the BigchainDB Node Setup `. +* :ref:`Verify the BigchainDB Node Setup `. diff --git a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst index 7ee9d01a..6d7ac55c 100644 --- a/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst +++ b/docs/server/source/production-deployment-template/node-config-map-and-secrets.rst @@ -11,7 +11,7 @@ and ``secret.yaml`` (a set of Secrets). They are stored in the Kubernetes cluster's key-value store (etcd). Make sure you did all the things listed in the section titled -:ref:`things-each-node-operator-must-do-tmt` +:ref:`things-each-node-operator-must-do` (including generation of all the SSL certificates needed for MongoDB auth). @@ -35,7 +35,7 @@ vars.cluster-fqdn ~~~~~~~~~~~~~~~~~ The ``cluster-fqdn`` field specifies the domain you would have -:ref:`registered before `. 
+:ref:`registered before `. vars.cluster-frontend-port diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst index 2989492f..a17d1be6 100644 --- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst @@ -3,13 +3,12 @@ Kubernetes Template: Deploy a Single BigchainDB Node ==================================================== -This page describes how to deploy a stand-alone BigchainDB + Tendermint node, -or a static network of BigchainDB + Tendermint nodes. +This page describes how to deploy a stand-alone BigchainDB + Tendermint node using `Kubernetes `_. It assumes you already have a running Kubernetes cluster. Below, we refer to many files by their directory and filename, -such as ``configuration/config-map-tm.yaml``. Those files are files in the +such as ``configuration/config-map.yaml``. Those files are files in the `bigchaindb/bigchaindb repository on GitHub `_ in the ``k8s/`` directory. Make sure you're getting those files from the appropriate Git branch on @@ -108,7 +107,7 @@ Step 3: Configure Your BigchainDB Node See the page titled :ref:`how-to-configure-a-bigchaindb-node`. -.. _start-the-nginx-service-tmt: +.. _start-the-nginx-service: Step 4: Start the NGINX Service ------------------------------- @@ -125,7 +124,7 @@ Step 4: Start the NGINX Service Step 4.1: Vanilla NGINX ^^^^^^^^^^^^^^^^^^^^^^^ - * This configuration is located in the file ``nginx-http/nginx-http-svc-tm.yaml``. + * This configuration is located in the file ``nginx-http/nginx-http-svc.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``ngx-instance-name`` in the ConfigMap above. @@ -153,7 +152,7 @@ Step 4.1: Vanilla NGINX .. 
code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml Step 4.2: NGINX with HTTPS @@ -165,7 +164,7 @@ Step 4.2: NGINX with HTTPS * You should have already created the necessary Kubernetes Secrets in the previous step (i.e. ``https-certs``). - * This configuration is located in the file ``nginx-https/nginx-https-svc-tm.yaml``. + * This configuration is located in the file ``nginx-https/nginx-https-svc.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``ngx-instance-name`` in the ConfigMap above. @@ -199,10 +198,10 @@ Step 4.2: NGINX with HTTPS .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml -.. _assign-dns-name-to-nginx-public-ip-tmt: +.. _assign-dns-name-to-nginx-public-ip: Step 5: Assign DNS Name to the NGINX Public IP ---------------------------------------------- @@ -245,12 +244,12 @@ This will ensure that when you scale to different geographical zones, other Tend nodes in the network can reach this instance. -.. _start-the-mongodb-kubernetes-service-tmt: +.. _start-the-mongodb-kubernetes-service: Step 6: Start the MongoDB Kubernetes Service -------------------------------------------- - * This configuration is located in the file ``mongodb/mongo-svc-tm.yaml``. + * This configuration is located in the file ``mongodb/mongo-svc.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``mdb-instance-name`` in the ConfigMap above. @@ -269,15 +268,15 @@ Step 6: Start the MongoDB Kubernetes Service .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml -.. _start-the-bigchaindb-kubernetes-service-tmt: +.. 
_start-the-bigchaindb-kubernetes-service: Step 7: Start the BigchainDB Kubernetes Service ----------------------------------------------- - * This configuration is located in the file ``bigchaindb/bigchaindb-svc-tm.yaml``. + * This configuration is located in the file ``bigchaindb/bigchaindb-svc.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``bdb-instance-name`` in the ConfigMap above. @@ -306,15 +305,15 @@ Step 7: Start the BigchainDB Kubernetes Service .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml -.. _start-the-openresty-kubernetes-service-tmt: +.. _start-the-openresty-kubernetes-service: Step 8: Start the OpenResty Kubernetes Service ---------------------------------------------- - * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc-tm.yaml``. + * This configuration is located in the file ``nginx-openresty/nginx-openresty-svc.yaml``. * Set the ``metadata.name`` and ``metadata.labels.name`` to the value set in ``openresty-instance-name`` in the ConfigMap above. @@ -328,10 +327,10 @@ Step 8: Start the OpenResty Kubernetes Service .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml -.. _start-the-tendermint-kubernetes-service-tmt: +.. _start-the-tendermint-kubernetes-service: Step 9: Start the Tendermint Kubernetes Service ----------------------------------------------- @@ -368,7 +367,7 @@ Step 9: Start the Tendermint Kubernetes Service $ kubectl --context k8s-bdb-test-cluster-0 apply -f tendermint/tendermint-svc.yaml -.. _start-the-nginx-deployment-tmt: +.. 
_start-the-nginx-deployment: Step 10: Start the NGINX Kubernetes Deployment ---------------------------------------------- @@ -385,7 +384,7 @@ Step 10: Start the NGINX Kubernetes Deployment Step 10.1: Vanilla NGINX ^^^^^^^^^^^^^^^^^^^^^^^^ - * This configuration is located in the file ``nginx-http/nginx-http-dep-tm.yaml``. + * This configuration is located in the file ``nginx-http/nginx-http-dep.yaml``. * Set the ``metadata.name`` and ``spec.template.metadata.labels.app`` to the value set in ``ngx-instance-name`` in the ConfigMap followed by a @@ -418,14 +417,14 @@ Step 10.1: Vanilla NGINX .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml Step 10.2: NGINX with HTTPS ^^^^^^^^^^^^^^^^^^^^^^^^^^^ * This configuration is located in the file - ``nginx-https/nginx-https-dep-tm.yaml``. + ``nginx-https/nginx-https-dep.yaml``. * Set the ``metadata.name`` and ``spec.template.metadata.labels.app`` to the value set in ``ngx-instance-name`` in the ConfigMap followed by a @@ -465,10 +464,10 @@ Step 10.2: NGINX with HTTPS .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml -.. _create-kubernetes-storage-class-mdb-tmt: +.. _create-kubernetes-storage-class-mdb: Step 11: Create Kubernetes Storage Classes for MongoDB ------------------------------------------------------ @@ -539,7 +538,7 @@ Create the required storage classes using: You can check if it worked using ``kubectl get storageclasses``. -.. _create-kubernetes-persistent-volume-claim-mdb-tmt: +.. _create-kubernetes-persistent-volume-claim-mdb: Step 12: Create Kubernetes Persistent Volume Claims for MongoDB --------------------------------------------------------------- @@ -594,12 +593,12 @@ but it should become "Bound" fairly quickly. 
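Rather than watching ``kubectl get pvc`` by hand, the "Pending" → "Bound" transition described above can be checked with a small helper. A sketch only, assuming the default ``kubectl get pvc`` output layout (STATUS in the second column); the kubectl context name in the usage comment is the example one used throughout this page.

```shell
# Sketch: read `kubectl get pvc` output on stdin and succeed only when
# every Persistent Volume Claim reports the "Bound" status.
# Assumes the default output layout, with STATUS in the second column.
pvc_all_bound() {
  # Skip the header row, then fail if any claim is not yet Bound
  tail -n +2 | awk '$2 != "Bound" { bad = 1 } END { exit bad }'
}

# Example use against the cluster (not executed here):
#   kubectl --context k8s-bdb-test-cluster-0 get pvc | pvc_all_bound \
#       && echo "all claims bound"
```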
For notes on recreating a private volume from a released Azure disk resource consult :doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`. -.. _start-kubernetes-stateful-set-mongodb-tmt: +.. _start-kubernetes-stateful-set-mongodb: Step 13: Start a Kubernetes StatefulSet for MongoDB --------------------------------------------------- - * This configuration is located in the file ``mongodb/mongo-ss-tm.yaml``. + * This configuration is located in the file ``mongodb/mongo-ss.yaml``. * Set the ``spec.serviceName`` to the value set in ``mdb-instance-name`` in the ConfigMap. @@ -665,7 +664,7 @@ Step 13: Start a Kubernetes StatefulSet for MongoDB .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml * It might take up to 10 minutes for the disks, specified in the Persistent Volume Claims above, to be created and attached to the pod. @@ -679,7 +678,7 @@ Step 13: Start a Kubernetes StatefulSet for MongoDB $ kubectl --context k8s-bdb-test-cluster-0 get pods -w -.. _configure-users-and-access-control-mongodb-tmt: +.. _configure-users-and-access-control-mongodb: Step 14: Configure Users and Access Control for MongoDB ------------------------------------------------------- @@ -779,7 +778,7 @@ Step 14: Configure Users and Access Control for MongoDB } ) -.. _create-kubernetes-storage-class-tmt: +.. _create-kubernetes-storage-class: Step 15: Create Kubernetes Storage Classes for Tendermint ---------------------------------------------------------- @@ -791,7 +790,7 @@ The Kubernetes template for configuration of Storage Class is located in the file ``tendermint/tendermint-sc.yaml``. Details about how to create an Azure Storage account and how Kubernetes Storage Class works -are already covered in this document: :ref:`create-kubernetes-storage-class-mdb-tmt`. 
+are already covered in this document: :ref:`create-kubernetes-storage-class-mdb`. Create the required storage classes using: @@ -802,7 +801,7 @@ Create the required storage classes using: You can check if it worked using ``kubectl get storageclasses``. -.. _create-kubernetes-persistent-volume-claim-tmt: +.. _create-kubernetes-persistent-volume-claim: Step 16: Create Kubernetes Persistent Volume Claims for Tendermint ------------------------------------------------------------------ @@ -814,7 +813,7 @@ This configuration is located in the file ``tendermint/tendermint-pvc.yaml``. Details about Kubernetes Persistent Volumes, Persistent Volume Claims and how they work with Azure are already covered in this -document: :ref:`create-kubernetes-persistent-volume-claim-mdb-tmt`. +document: :ref:`create-kubernetes-persistent-volume-claim-mdb`. Create the required Persistent Volume Claims using: @@ -829,7 +828,7 @@ You can check its status using: kubectl get pvc -w -.. _create-kubernetes-stateful-set-tmt: +.. _create-kubernetes-stateful-set: Step 17: Start a Kubernetes StatefulSet for Tendermint ------------------------------------------------------ @@ -896,7 +895,7 @@ engine while NGINX is used to serve the public key of the Tendermint instance. $ kubectl --context k8s-bdb-test-cluster-0 get pods -w -.. _start-kubernetes-deployment-for-mdb-mon-agent-tmt: +.. _start-kubernetes-deployment-for-mdb-mon-agent: Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent ------------------------------------------------------------------- @@ -925,13 +924,13 @@ Step 18: Start a Kubernetes Deployment for MongoDB Monitoring Agent $ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml -.. _start-kubernetes-deployment-bdb-tmt: +.. 
_start-kubernetes-deployment-bdb: Step 19: Start a Kubernetes Deployment for BigchainDB ----------------------------------------------------- * This configuration is located in the file - ``bigchaindb/bigchaindb-dep-tm.yaml``. + ``bigchaindb/bigchaindb-dep.yaml``. * Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the value set in ``bdb-instance-name`` in the ConfigMap, followed by @@ -980,13 +979,13 @@ Step 19: Start a Kubernetes Deployment for BigchainDB .. code:: bash - $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep-tm.yaml + $ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml * You can check its status using the command ``kubectl get deployments -w`` -.. _start-kubernetes-deployment-openresty-tmt: +.. _start-kubernetes-deployment-openresty: Step 20: Start a Kubernetes Deployment for OpenResty ---------------------------------------------------- @@ -1036,7 +1035,7 @@ for details on how to configure the MongoDB Cloud Manager to enable monitoring and backup. -.. _verify-and-test-bdb-tmt: +.. _verify-and-test-bdb: Step 22: Verify the BigchainDB Node Setup ----------------------------------------- @@ -1175,4 +1174,4 @@ verify that your node or cluster works as expected. Next, you can set up log analytics and monitoring, by following our templates: -* :doc:`../production-deployment-template/log-analytics`. \ No newline at end of file +* :doc:`../production-deployment-template/log-analytics`. diff --git a/docs/server/source/production-deployment-template/workflow.rst b/docs/server/source/production-deployment-template/workflow.rst index 0a35d65b..23dd185a 100644 --- a/docs/server/source/production-deployment-template/workflow.rst +++ b/docs/server/source/production-deployment-template/workflow.rst @@ -10,7 +10,7 @@ You can modify them to suit your needs. We use standalone MongoDB (without Replica Set), BFT replication is handled by Tendermint. -.. 
_register-a-domain-and-get-an-ssl-certificate-for-it-tmt: +.. _register-a-domain-and-get-an-ssl-certificate-for-it: 1. Register a Domain and Get an SSL Certificate for It ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -63,7 +63,7 @@ the ``genesis_time`` and ``chain_id`` from this example ``genesis.json`` file: "app_hash": "" } -.. _things-each-node-operator-must-do-tmt: +.. _things-each-node-operator-must-do: Things Each Node Operator Must Do --------------------------------- From a4ac9cf30811aea468a588c0e3a6de6253a629a0 Mon Sep 17 00:00:00 2001 From: muawiakh Date: Mon, 12 Feb 2018 14:35:02 +0100 Subject: [PATCH 09/10] Address comments - Update docs for better readability --- .../node-on-kubernetes.rst | 18 +++++++++--------- .../template-kubernetes-azure.rst | 2 +- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst index a17d1be6..0772d75e 100644 --- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst +++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst @@ -323,6 +323,9 @@ Step 8: Start the OpenResty Kubernetes Service ``openresty-instance-name`` is ``openresty-instance-0``, set the ``spec.selector.app`` to ``openresty-instance-0-dep``. + * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the + ``openresty-backend-port`` in the ConfigMap. + * Start the Kubernetes Service: .. code:: bash @@ -347,18 +350,15 @@ Step 9: Start the Tendermint Kubernetes Service * Set ``ports[0].port`` and ``ports[0].targetPort`` to the value set in the ``tm-p2p-port`` in the ConfigMap above. - This is the ``p2p`` in the file which specifies where Tendermint peers - communicate. + It specifies where Tendermint peers communicate. * Set ``ports[1].port`` and ``ports[1].targetPort`` to the value set in the ``tm-rpc-port`` in the ConfigMap above. 
- This is the ``rpc`` in the file which specifies the port used by Tendermint core - for RPC traffic. + It specifies the port used by Tendermint core for RPC traffic. * Set ``ports[2].port`` and ``ports[2].targetPort`` to the value set in the ``tm-pub-key-access`` in the ConfigMap above. - This is the ``pub-key-access`` in the file which specifies the port to host/distribute - the public key for the Tendermint node. + It specifies the port to host/distribute the public key for the Tendermint node. * Start the Kubernetes Service: @@ -479,10 +479,10 @@ Our MongoDB Docker container exports two volume mounts with correct permissions from inside the container: -* The directory where the mongod instance stores its data: ``/data/db``. +* The directory where the MongoDB instance stores its data: ``/data/db``. There's more explanation in the MongoDB docs about `storage.dbpath `_. -* The directory where the mongodb instance stores the metadata for a sharded +* The directory where the MongoDB instance stores the metadata for a sharded cluster: ``/data/configdb/``. There's more explanation in the MongoDB docs about `sharding.configDB `_. @@ -518,7 +518,7 @@ For future reference, the command to create a storage account is Please refer to `Azure documentation `_ for the list of VMs that are supported by Premium Storage. -The Kubernetes template for configuration of Storage Class is located in the +The Kubernetes template for configuration of the MongoDB Storage Class is located in the file ``mongodb/mongo-sc.yaml``. 
You may have to update the ``parameters.location`` field in the file to diff --git a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst index d57abe27..7f642a18 100644 --- a/docs/server/source/production-deployment-template/template-kubernetes-azure.rst +++ b/docs/server/source/production-deployment-template/template-kubernetes-azure.rst @@ -104,7 +104,7 @@ Finally, you can deploy an ACS using something like: --master-count 3 \ --agent-count 2 \ --admin-username ubuntu \ - --agent-vm-size Standard_D2_v2 \ + --agent-vm-size Standard_L4s \ --dns-prefix \ --ssh-key-value ~/.ssh/.pub \ --orchestrator-type kubernetes \ From 5669514ee779c7aa3aef5e18800c7c71d68abb96 Mon Sep 17 00:00:00 2001 From: Ahmed Muawia Khan Date: Wed, 21 Feb 2018 12:13:45 +0100 Subject: [PATCH 10/10] Fix label docs --- docs/server/source/http-client-server-api.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/server/source/http-client-server-api.rst b/docs/server/source/http-client-server-api.rst index 461efa83..aa2653d5 100644 --- a/docs/server/source/http-client-server-api.rst +++ b/docs/server/source/http-client-server-api.rst @@ -162,7 +162,7 @@ Transactions If the server is returning a ``202`` HTTP status code when ``mode=aysnc`` or ``mode=sync``, then the transaction has been accepted for processing. The client can subscribe to the - :ref:`WebSocket Event Stream API ` to listen for comitted transactions. + :ref:`WebSocket Event Stream API ` to listen for comitted transactions. :resheader Content-Type: ``application/json``