Problem: BigchainDB and Tendermint inconsistencies because disjoint services (#2242)

Solution
Since BigchainDB and Tendermint are so tightly coupled, we need to introduce a process supervisor that makes them act like a single microservice: if BigchainDB crashes, Tendermint is brought down with it, both are restarted, and Tendermint re-establishes its connection to the proxy app.

In Kubernetes, they can be deployed as containers of a single Pod.
For BigchainDB running as a system service/process, we would need a process supervisor such as systemd.
This PR only solves the former.
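As a rough illustration of the supervision idea for the system-service case (a hypothetical sketch, not part of this PR; assumes both binaries are on PATH and bash >= 4.3 for `wait -n`):

#!/bin/bash
# Naive supervisor: treat bigchaindb + tendermint as a single unit.
while true; do
    bigchaindb start &
    tendermint node --proxy_app="tcp://localhost:46658" &
    wait -n                        # returns as soon as either process exits
    kill $(jobs -p) 2> /dev/null   # take the survivor down too
    wait                           # reap both before restarting
    sleep 1
done

systemd can express the same coupling declaratively (e.g. ``BindsTo=`` plus ``Restart=always``), which is why it is the suggested supervisor for the non-Kubernetes case.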

Changes
Upgrade deployment from Tendermint v0.12.0 to v0.19.0
Update some documentation
Fix nginx-http entrypoint issues.
Update generate-configs.sh script to handle config generation without https-certificates.
Update Dockerfile to process dependency links introduced by abci
Integrate BigchainDB and Tendermint as a single microservice.
This required exposing BigchainDB as a StatefulSet.
Introduce new liveness probe checks.
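The coupling is enforced through these probes: both the Tendermint and BigchainDB containers run a liveness check against Tendermint's RPC endpoint, so a failure on either side (Tendermint down, or its ABCI connection to BigchainDB broken) restarts the affected containers. The logic, distilled from the probe commands added below:

# Shared liveness check (sketch of the logic in bigchaindb-ss.yaml):
curl -s --fail --max-time 10 "http://${TM_INSTANCE_NAME}:${TM_RPC_PORT}/abci_info" > /dev/null || exit 1
# Tendermint may answer while BigchainDB is unreachable; that surfaces as a
# JSON-RPC internal error (-32603) and must also fail the probe:
[[ $(curl -s --max-time 10 "http://${TM_INSTANCE_NAME}:${TM_RPC_PORT}/abci_info" | jq -r ".error.code") == -32603 ]] && exit 1
exit 0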
Issues Resolved
Partially fixes #2232
This commit is contained in:
Ahmed Muawia Khan 2018-04-27 15:54:47 +02:00 committed by Shahbaz Nazir
parent 9b71026d4b
commit dbabe94887
40 changed files with 569 additions and 624 deletions

View File

@ -27,7 +27,7 @@ to form a network.
Below, we refer to multiple files by their directory and filename,
such as ``tendermint/tendermint-ext-conn-svc.yaml``. Those files are located in the
such as ``bigchaindb/bigchaindb-ext-conn-svc.yaml``. Those files are located in the
`bigchaindb/bigchaindb repository on GitHub
<https://github.com/bigchaindb/bigchaindb/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
@ -93,12 +93,6 @@ Lets assume we are deploying a 4 node cluster, your naming conventions could loo
"mdb-mon-instance-2",
"mdb-mon-instance-3",
"mdb-mon-instance-4"
],
"Tendermint": [
"tendermint-instance-1",
"tendermint-instance-2",
"tendermint-instance-3",
"tendermint-instance-4"
]
}
@ -355,17 +349,13 @@ The above example is meant to be repeated for all the Kubernetes components of a
* ``mongodb/mongodb-node-X-ss.yaml``
* ``tendermint/tendermint-node-X-svc.yaml``
* ``tendermint/tendermint-node-X-sc.yaml``
* ``tendermint/tendermint-node-X-pvc.yaml``
* ``tendermint/tendermint-node-X-ss.yaml``
* ``bigchaindb/bigchaindb-node-X-svc.yaml``
* ``bigchaindb/bigchaindb-node-X-dep.yaml``
* ``bigchaindb/bigchaindb-node-X-sc.yaml``
* ``bigchaindb/bigchaindb-node-X-pvc.yaml``
* ``bigchaindb/bigchaindb-node-X-ss.yaml``
* ``nginx-openresty/nginx-openresty-node-X-svc.yaml``
@ -405,33 +395,31 @@ described :ref:`above <pre-reqs-bdb-network>`:
* :ref:`Start the OpenResty Kubernetes Service <start-the-openresty-kubernetes-service>`.
* :ref:`Start the Tendermint Kubernetes Service <start-the-tendermint-kubernetes-service>`.
Only for multi-site deployments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We need to make sure that clusters are able
to talk to each other, specifically the communication between the
Tendermint peers. Set up networking between the clusters using
BigchainDB peers. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
Assuming we have a Tendermint instance ``tendermint-instance-1`` residing in Azure data center location ``westeurope`` and we
want to connect to ``tendermint-instance-2``, ``tendermint-instance-3``, and ``tendermint-instance-4`` located in Azure data centers
Assuming we have a BigchainDB instance ``bigchaindb-instance-1`` residing in Azure data center location ``westeurope`` and we
want to connect to ``bigchaindb-instance-2``, ``bigchaindb-instance-3``, and ``bigchaindb-instance-4`` located in Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you already have explicitly set up networking for
``tendermint-instance-1`` to communicate with ``tendermint-instance-2/3/4`` and
``bigchaindb-instance-1`` to communicate with ``bigchaindb-instance-2/3/4`` and
vice versa, we will have to add a Kubernetes Service in each cluster to set up a
Tendermint P2P network.
BigchainDB P2P network.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``tendermint-instance-X`` to the host where it is actually available.
infrastructure to resolve ``bigchaindb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.
* This configuration is located in the file ``tendermint/tendermint-ext-conn-svc.yaml``.
* This configuration is located in the file ``bigchaindb/bigchaindb-ext-conn-svc.yaml``.
* Set the name of the ``metadata.name`` to the host name of the Tendermint instance you are trying to connect to.
For instance if you are configuring this service on cluster with ``tendermint-instance-1`` then the ``metadata.name`` will
be ``tendermint-instance-2`` and vice versa.
* Set the name of the ``metadata.name`` to the host name of the BigchainDB instance you are trying to connect to.
For instance, if you are configuring this service on the cluster with ``bigchaindb-instance-1``, then the ``metadata.name`` will
be ``bigchaindb-instance-2`` and vice versa.
* Set ``spec.ports.port[0]`` to the ``tm-p2p-port`` from the ConfigMap for the other cluster.
@ -447,7 +435,7 @@ We can do this in Kubernetes using a Kubernetes Service of ``type``
If you are not the system administrator of the cluster, you have to get in
touch with the system administrator/s of the other ``n-1`` clusters and
share with them your instance name (``tendermint-instance-name`` in the ConfigMap)
share with them your instance name (``bigchaindb-instance-name`` in the ConfigMap)
and the FQDN of the NGINX instance acting as Gateway (set in: :ref:`Assign DNS name to NGINX
Public IP <assign-dns-name-to-nginx-public-ip>`).
@ -461,18 +449,18 @@ naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to
* :ref:`Start the NGINX Kubernetes Deployment <start-the-nginx-deployment>`.
Deploy Kubernetes StorageClasses for MongoDB and Tendermint
-----------------------------------------------------------
Deploy Kubernetes StorageClasses for MongoDB and BigchainDB
------------------------------------------------------------
Deploy the following StorageClasses for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:
* :ref:`Create Kubernetes Storage Classes for MongoDB <create-kubernetes-storage-class-mdb>`.
* :ref:`Create Kubernetes Storage Classes for Tendermint <create-kubernetes-storage-class>`.
* :ref:`Create Kubernetes Storage Classes for BigchainDB <create-kubernetes-storage-class>`.
Deploy Kubernetes PersistentVolumeClaims for MongoDB and Tendermint
Deploy Kubernetes PersistentVolumeClaims for MongoDB and BigchainDB
--------------------------------------------------------------------
Deploy the following services for each node by following the naming convention
@ -480,7 +468,7 @@ described :ref:`above <pre-reqs-bdb-network>`:
* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB <create-kubernetes-persistent-volume-claim-mdb>`.
* :ref:`Create Kubernetes Persistent Volume Claims for Tendermint <create-kubernetes-persistent-volume-claim>`
* :ref:`Create Kubernetes Persistent Volume Claims for BigchainDB <create-kubernetes-persistent-volume-claim>`
Deploy MongoDB Kubernetes StatefulSet
@ -501,13 +489,13 @@ in the network by referring to the following section:
* :ref:`Configure Users and Access Control for MongoDB <configure-users-and-access-control-mongodb>`.
Deploy Tendermint Kubernetes StatefulSet
----------------------------------------
Start Kubernetes StatefulSet for BigchainDB
-------------------------------------------
Deploy the Tendermint Stateful for each node by following the
Start the BigchainDB Kubernetes StatefulSet for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`create-kubernetes-stateful-set`.
* :ref:`Start a Kubernetes StatefulSet for BigchainDB <start-kubernetes-stateful-set-bdb>`.
Start Kubernetes Deployment for MongoDB Monitoring Agent
@ -516,16 +504,7 @@ Start Kubernetes Deployment for MongoDB Monitoring Agent
Start the MongoDB monitoring agent Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <start-kubernetes-deployment-for-mdb-mon-agent>`.
Start Kubernetes Deployment for BigchainDB
------------------------------------------
Start the BigchainDB Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`Start a Kubernetes Deployment for BigchainDB <start-kubernetes-deployment-bdb>`.
* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <start-kubernetes-deployment-for-mdb-mon-agent>`.
Start Kubernetes Deployment for OpenResty

View File

@ -56,15 +56,8 @@ MongoDB admin user credentials, username and password.
This user is created on the *admin* database with the authorization to create other users.
vars.TM_INSTANCE_NAME
~~~~~~~~~~~~~~~~~~~~~~
Name of tendermint instance that is part of your BigchainDB node.
This name should be unique across the cluster, for more information please refer to
:ref:`generate-the-blockchain-id-and-genesis-time`.
vars.TM_SEEDS, TM_VALIDATORS, TM_VALIDATORS_POWERS, TM_GENESIS_TIME and TM_CHAIN_ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vars.BDB_SEEDS, BDB_VALIDATORS, BDB_VALIDATORS_POWERS, BDB_GENESIS_TIME and BDB_CHAIN_ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These parameters are shared across the cluster. More information about the generation
of these parameters can be found at :ref:`generate-the-blockchain-id-and-genesis-time`.

View File

@ -137,6 +137,10 @@ Step 4: Start the NGINX Service
$ kubectl apply -f nginx-https/nginx-https-svc.yaml
OR
$ kubectl apply -f nginx-http/nginx-http-svc.yaml
.. _assign-dns-name-to-nginx-public-ip:
@ -217,30 +221,9 @@ Step 8(Optional): Start the OpenResty Kubernetes Service
$ kubectl apply -f nginx-openresty/nginx-openresty-svc.yaml
.. _start-the-tendermint-kubernetes-service:
Step 9: Start the Tendermint Kubernetes Service
-----------------------------------------------
* This configuration is located in the file ``tendermint/tendermint-svc.yaml``.
* Set the ``metadata.name`` and ``metadata.labels.name`` to the value
set in ``tm-instance-name`` in the ConfigMap above.
* Set the ``spec.selector.app`` to the value set in ``tm-instance-name`` in
the ConfigMap followed by ``-ss``. For example, if the value set in the
``tm-instance-name`` is ``tm-instance-0``, set the
``spec.selector.app`` to ``tm-instance-0-ss``.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl apply -f tendermint/tendermint-svc.yaml
.. _start-the-nginx-deployment:
Step 10: Start the NGINX Kubernetes Deployment
Step 9: Start the NGINX Kubernetes Deployment
----------------------------------------------
* NGINX is used as a proxy to the BigchainDB, Tendermint and MongoDB instances in
@ -249,12 +232,8 @@ Step 10: Start the NGINX Kubernetes Deployment
on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
to MongoDB and Tendermint respectively.
Step 10.2: NGINX with HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^
* This configuration is located in the file
``nginx-https/nginx-https-dep.yaml``.
``nginx-https/nginx-https-dep.yaml`` or ``nginx-http/nginx-http-dep.yaml``.
* Start the Kubernetes Deployment:
@ -262,10 +241,14 @@ Step 10.2: NGINX with HTTPS
$ kubectl apply -f nginx-https/nginx-https-dep.yaml
OR
$ kubectl apply -f nginx-http/nginx-http-dep.yaml
.. _create-kubernetes-storage-class-mdb:
Step 11: Create Kubernetes Storage Classes for MongoDB
Step 10: Create Kubernetes Storage Classes for MongoDB
------------------------------------------------------
MongoDB needs somewhere to store its data persistently,
@ -338,7 +321,7 @@ You can check if it worked using ``kubectl get storageclasses``.
.. _create-kubernetes-persistent-volume-claim-mdb:
Step 12: Create Kubernetes Persistent Volume Claims for MongoDB
Step 11: Create Kubernetes Persistent Volume Claims for MongoDB
---------------------------------------------------------------
Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
@ -393,7 +376,7 @@ but it should become "Bound" fairly quickly.
.. _start-kubernetes-stateful-set-mongodb:
Step 13: Start a Kubernetes StatefulSet for MongoDB
Step 12: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
* Create the MongoDB StatefulSet using:
@ -416,7 +399,7 @@ Step 13: Start a Kubernetes StatefulSet for MongoDB
.. _configure-users-and-access-control-mongodb:
Step 14: Configure Users and Access Control for MongoDB
Step 13: Configure Users and Access Control for MongoDB
-------------------------------------------------------
* In this step, you will create a user on MongoDB with authorization
@ -430,14 +413,14 @@ Step 14: Configure Users and Access Control for MongoDB
.. _create-kubernetes-storage-class:
Step 15: Create Kubernetes Storage Classes for Tendermint
Step 14: Create Kubernetes Storage Classes for BigchainDB
----------------------------------------------------------
Tendermint needs somewhere to store its data persistently, it uses
BigchainDB needs somewhere to store Tendermint data persistently; Tendermint uses
LevelDB as the persistent storage layer.
The Kubernetes template for configuration of Storage Class is located in the
file ``tendermint/tendermint-sc.yaml``.
file ``bigchaindb/bigchaindb-sc.yaml``.
Details about how to create an Azure Storage account and how Kubernetes Storage Class works
are already covered in this document: :ref:`create-kubernetes-storage-class-mdb`.
@ -446,20 +429,20 @@ Create the required storage classes using:
.. code:: bash
$ kubectl apply -f tendermint/tendermint-sc.yaml
$ kubectl apply -f bigchaindb/bigchaindb-sc.yaml
You can check if it worked using ``kubectl get storageclasses``.
.. _create-kubernetes-persistent-volume-claim:
Step 16: Create Kubernetes Persistent Volume Claims for Tendermint
Step 15: Create Kubernetes Persistent Volume Claims for BigchainDB
------------------------------------------------------------------
Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
``tendermint-config-db-claim``.
This configuration is located in the file ``tendermint/tendermint-pvc.yaml``.
This configuration is located in the file ``bigchaindb/bigchaindb-pvc.yaml``.
Details about Kubernetes Persistent Volumes, Persistent Volume Claims
and how they work with Azure are already covered in this
@ -469,7 +452,7 @@ Create the required Persistent Volume Claims using:
.. code:: bash
$ kubectl apply -f tendermint/tendermint-pvc.yaml
$ kubectl apply -f bigchaindb/bigchaindb-pvc.yaml
You can check its status using:
@ -478,56 +461,43 @@ You can check its status using:
kubectl get pvc -w
.. _create-kubernetes-stateful-set:
.. _start-kubernetes-stateful-set-bdb:
Step 17: Start a Kubernetes StatefulSet for Tendermint
Step 16: Start a Kubernetes StatefulSet for BigchainDB
------------------------------------------------------
* This configuration is located in the file ``tendermint/tendermint-ss.yaml``.
* This configuration is located in the file ``bigchaindb/bigchaindb-ss.yaml``.
* Set the ``spec.serviceName`` to the value set in ``tm-instance-name`` in
* Set the ``spec.serviceName`` to the value set in ``bdb-instance-name`` in
the ConfigMap.
For example, if the value set in the ``tm-instance-name``
is ``tm-instance-0``, set the field to ``tm-instance-0``.
For example, if the value set in the ``bdb-instance-name``
is ``bdb-instance-0``, set the field to ``bdb-instance-0``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``tm-instance-name`` in the ConfigMap, followed by
``bdb-instance-name`` in the ConfigMap, followed by
``-ss``.
For example, if the value set in the
``tm-instance-name`` is ``tm-instance-0``, set the fields to the value
``tm-insance-0-ss``.
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the value
``bdb-instance-0-ss``.
* As we gain more experience running Tendermint in testing and production, we
will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
* Create the Tendermint StatefulSet using:
* Create the BigchainDB StatefulSet using:
.. code:: bash
$ kubectl apply -f tendermint/tendermint-ss.yaml
$ kubectl apply -f bigchaindb/bigchaindb-ss.yaml
.. code:: bash
$ kubectl get pods -w
.. _start-kubernetes-deployment-bdb:
Step 18: Start a Kubernetes Deployment for BigchainDB
-----------------------------------------------------
* Create the BigchainDB Deployment using:
.. code:: bash
$ kubectl apply -f bigchaindb/bigchaindb-dep.yaml
* You can check its status using the command ``kubectl get deployments -w``
.. _start-kubernetes-deployment-for-mdb-mon-agent:
Step 19(Optional): Start a Kubernetes Deployment for MongoDB Monitoring Agent
Step 17(Optional): Start a Kubernetes Deployment for MongoDB Monitoring Agent
------------------------------------------------------------------------------
* This configuration is located in the file
@ -556,7 +526,7 @@ Step 19(Optional): Start a Kubernetes Deployment for MongoDB Monitoring Agent
.. _start-kubernetes-deployment-openresty:
Step 20(Optional): Start a Kubernetes Deployment for OpenResty
Step 18(Optional): Start a Kubernetes Deployment for OpenResty
--------------------------------------------------------------
* This configuration is located in the file
@ -595,7 +565,7 @@ Step 20(Optional): Start a Kubernetes Deployment for OpenResty
* You can check its status using the command ``kubectl get deployments -w``
Step 21(Optional): Configure the MongoDB Cloud Manager
Step 19(Optional): Configure the MongoDB Cloud Manager
------------------------------------------------------
Refer to the
@ -604,7 +574,7 @@ for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.
Step 22(Optional): Only for multi site deployments(Geographically dispersed)
Step 20(Optional): Only for multi site deployments(Geographically dispersed)
----------------------------------------------------------------------------
We need to make sure that clusters are able
@ -612,22 +582,22 @@ to talk to each other i.e. specifically the communication between the
Tendermint peers. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
Assuming we have a Tendermint instance ``tendermint-instance-1`` residing in Azure data center location ``westeurope`` and we
want to connect to ``tendermint-instance-2``, ``tendermint-instance-3``, and ``tendermint-instance-4`` located in Azure data centers
Assuming we have a BigchainDB instance ``bdb-instance-1`` residing in Azure data center location ``westeurope`` and we
want to connect to ``bdb-instance-2``, ``bdb-instance-3``, and ``bdb-instance-4`` located in Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you already have explicitly set up networking for
``tendermint-instance-1`` to communicate with ``tendermint-instance-2/3/4`` and
``bdb-instance-1`` to communicate with ``bdb-instance-2/3/4`` and
vice versa, we will have to add a Kubernetes Service in each cluster to set up a
Tendermint P2P network.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``tendermint-instance-X`` to the host where it is actually available.
infrastructure to resolve ``bdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.
* This configuration is located in the file ``tendermint/tendermint-ext-conn-svc.yaml``.
* This configuration is located in the file ``bigchaindb/bigchaindb-ext-conn-svc.yaml``.
* Set the name of the ``metadata.name`` to the host name of the Tendermint instance you are trying to connect to.
For instance if you are configuring this service on cluster with ``tendermint-instance-1`` then the ``metadata.name`` will
be ``tendermint-instance-2`` and vice versa.
* Set the name of the ``metadata.name`` to the host name of the BigchainDB instance you are trying to connect to.
For instance, if you are configuring this service on the cluster with ``bdb-instance-1``, then the ``metadata.name`` will
be ``bdb-instance-2`` and vice versa.
* Set ``spec.ports.port[0]`` to the ``tm-p2p-port`` from the ConfigMap for the other cluster.
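Once filled in, applying the Service and verifying the alias could look like this (a sketch; run the lookup from a pod inside the cluster):

.. code:: bash

$ kubectl apply -f bigchaindb/bigchaindb-ext-conn-svc.yaml
$ nslookup bdb-instance-2   # should resolve, CNAME-style, to the remote NGINX FQDN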
@ -650,10 +620,10 @@ We can do this in Kubernetes using a Kubernetes Service of ``type``
.. _verify-and-test-bdb:
Step 23: Verify the BigchainDB Node Setup
Step 21: Verify the BigchainDB Node Setup
-----------------------------------------
Step 23.1: Testing Internally
Step 21.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To test the setup of your BigchainDB node, you could use a Docker container
@ -702,20 +672,12 @@ To test the BigchainDB instance:
$ curl -X GET http://bdb-instance-0:9984
$ curl -X GET http://bdb-instance-0:9986/pub_key.json
$ curl -X GET http://bdb-instance-0:46657/abci_info
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
To test the Tendermint instance:
.. code:: bash
$ nslookup tm-instance-0
$ dig +noall +answer _bdb-api-port._tcp.tm-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _bdb-ws-port._tcp.tm-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://tm-instance-0:9986/pub_key.json
To test the OpenResty instance:
@ -769,7 +731,7 @@ The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
Step 23.2: Testing Externally
Step 21.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Check the MongoDB monitoring agent on the MongoDB Cloud Manager

View File

@ -10,10 +10,10 @@ You can modify them to suit your needs.
.. _generate-the-blockchain-id-and-genesis-time:
Generate All Shared Tendermint Setup Parameters
Generate All Shared BigchainDB Setup Parameters
-----------------------------------------------
There are some shared Tendermint setup paramters that every node operator
There are some shared BigchainDB setup parameters that every node operator
in the consortium shares
because they are properties of the Tendermint cluster.
They look like this:
@ -21,24 +21,24 @@ They look like this:
.. code::
# Tendermint data
TM_SEEDS='tm-instance-1,tm-instance-2,tm-instance-3,tm-instance-4'
TM_VALIDATORS='tm-instance-1,tm-instance-2,tm-instance-3,tm-instance-4'
TM_VALIDATOR_POWERS='10,10,10,10'
TM_GENESIS_TIME='0001-01-01T00:00:00Z'
TM_CHAIN_ID='test-chain-rwcPML'
BDB_SEEDS='bdb-instance-1,bdb-instance-2,bdb-instance-3,bdb-instance-4'
BDB_VALIDATORS='bdb-instance-1,bdb-instance-2,bdb-instance-3,bdb-instance-4'
BDB_VALIDATOR_POWERS='10,10,10,10'
BDB_GENESIS_TIME='0001-01-01T00:00:00Z'
BDB_CHAIN_ID='test-chain-rwcPML'
Those parameters only have to be generated once, by one member of the consortium.
That person will then share the results (Tendermint setup parameters)
with all the node operators.
The above example parameters are for a cluster of 4 initial (seed) nodes.
Note how ``TM_SEEDS``, ``TM_VALIDATORS`` and ``TM_VALIDATOR_POWERS`` are lists
Note how ``BDB_SEEDS``, ``BDB_VALIDATORS`` and ``BDB_VALIDATOR_POWERS`` are lists
with 4 items each.
**If your consortium has a different number of initial nodes,
then those lists should have that number of items.**
Use ``10`` for all the power values.
To generate a ``TM_GENESIS_TIME`` and a ``TM_CHAIN_ID``,
To generate a ``BDB_GENESIS_TIME`` and a ``BDB_CHAIN_ID``,
you can do this:
.. code::
@ -63,10 +63,10 @@ You should see something that looks like:
"app_hash": ""
}
The value with ``"genesis_time"`` is ``TM_GENESIS_TIME`` and
the value with ``"chain_id"`` is ``TM_CHAIN_ID``.
The value with ``"genesis_time"`` is ``BDB_GENESIS_TIME`` and
the value with ``"chain_id"`` is ``BDB_CHAIN_ID``.
Now you have all the Tendermint setup parameters and can share them
Now you have all the BigchainDB setup parameters and can share them
with all of the node operators. (They will put them in their ``vars`` file.
We'll say more about that file below.)
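If you save that output as ``genesis.json``, the two values can be extracted with ``jq`` (a convenience sketch, not a documented step):

.. code:: bash

$ jq -r ".genesis_time" genesis.json   # BDB_GENESIS_TIME
$ jq -r ".chain_id" genesis.json       # BDB_CHAIN_ID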

View File

@ -1,165 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: bdb-instance-0-dep
spec:
replicas: 1
template:
metadata:
labels:
app: bdb-instance-0-dep
spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:2.0.0-alpha2
imagePullPolicy: Always
args:
- start
env:
- name: BIGCHAINDB_DATABASE_HOST
valueFrom:
configMapKeyRef:
name: vars
key: mdb-instance-name
- name: BIGCHAINDB_DATABASE_PORT
valueFrom:
configMapKeyRef:
name: vars
key: mongodb-backend-port
- name: BIGCHAINDB_DATABASE_BACKEND
value: "localmongodb"
- name: BIGCHAINDB_DATABASE_NAME
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-database-name
- name: BIGCHAINDB_SERVER_BIND
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-server-bind
- name: BIGCHAINDB_WSSERVER_HOST
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-ws-interface
- name: BIGCHAINDB_WSSERVER_ADVERTISED_HOST
valueFrom:
configMapKeyRef:
name: vars
key: node-fqdn
- name: BIGCHAINDB_WSSERVER_PORT
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-ws-port
- name: BIGCHAINDB_WSSERVER_ADVERTISED_PORT
valueFrom:
configMapKeyRef:
name: vars
key: node-frontend-port
- name: BIGCHAINDB_WSSERVER_ADVERTISED_SCHEME
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-wsserver-advertised-scheme
- name: BIGCHAINDB_BACKLOG_REASSIGN_DELAY
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-backlog-reassign-delay
- name: BIGCHAINDB_DATABASE_MAXTRIES
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-database-maxtries
- name: BIGCHAINDB_DATABASE_CONNECTION_TIMEOUT
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-database-connection-timeout
- name: BIGCHAINDB_LOG_LEVEL_CONSOLE
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-log-level
- name: BIGCHAINDB_SERVER_WORKERS
value: "1"
- name: BIGCHAINDB_SERVER_THREADS
value: "1"
- name: BIGCHAINDB_DATABASE_SSL
value: "true"
- name: BIGCHAINDB_DATABASE_CA_CERT
value: /etc/bigchaindb/ca/ca.pem
- name: BIGCHAINDB_DATABASE_CRLFILE
value: /etc/bigchaindb/ca/crl.pem
- name: BIGCHAINDB_DATABASE_CERTFILE
value: /etc/bigchaindb/ssl/bdb-instance.pem
- name: BIGCHAINDB_DATABASE_KEYFILE
value: /etc/bigchaindb/ssl/bdb-instance.key
- name: BIGCHAINDB_DATABASE_LOGIN
valueFrom:
configMapKeyRef:
name: bdb-config
key: bdb-user
- name: BIGCHAINDB_TENDERMINT_HOST
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-instance-name
- name: TENDERMINT_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-rpc-port
command:
- bash
- "-c"
- |
bigchaindb -l DEBUG start
ports:
- containerPort: 9984
protocol: TCP
name: bdb-port
- containerPort: 9985
protocol: TCP
name: bdb-ws-port
- containerPort: 46658
protocol: TCP
name: tm-abci-port
volumeMounts:
- name: bdb-certs
mountPath: /etc/bigchaindb/ssl/
readOnly: true
- name: ca-auth
mountPath: /etc/bigchaindb/ca/
readOnly: true
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
httpGet:
path: /
port: bdb-port
initialDelaySeconds: 15
periodSeconds: 15
failureThreshold: 3
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: bdb-port
initialDelaySeconds: 15
timeoutSeconds: 10
restartPolicy: Always
volumes:
- name: bdb-certs
secret:
secretName: bdb-certs
defaultMode: 0400
- name: ca-auth
secret:
secretName: ca-auth
defaultMode: 0400

View File

@ -1,9 +1,9 @@
apiVersion: v1
kind: Service
metadata:
# Name of tendermint instance you are trying to connect to
# name: "tm-instance-0"
name: "<remote-tendermint-host>"
# Name of BigchainDB instance you are trying to connect to
# name: "bdb-instance-0"
name: "<remote-bdb-host>"
namespace: default
spec:
ports:
@ -15,6 +15,6 @@ spec:
name: pubkey
type: ExternalName
# FQDN of remote cluster/NGINX instance
#externalName: "nginx-instance-for-tm-instance-0.westeurope.cloudapp.azure.com"
#externalName: "nginx-instance-for-bdb-instance-0.westeurope.cloudapp.azure.com"
externalName: "<dns-name-remote-nginx>"

View File

@ -29,4 +29,3 @@ spec:
resources:
requests:
storage: 1Gi

View File

@ -8,7 +8,7 @@ metadata:
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
location: <Storage account location>
# If you have created a different storage account e.g. for Premium Storage
storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks(Only used for Tectonic deployment)
@ -24,7 +24,7 @@ metadata:
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
location: <Storage account location>
# If you have created a different storage account e.g. for Premium Storage
storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks(Only used for Tectonic deployment)

View File

@ -0,0 +1,307 @@
#################################################################################
# This YAML file describes a StatefulSet with a service for running and exposing #
# a Tendermint instance. It depends on the tendermint-config-db-claim #
# and tendermint-db-claim k8s pvc. #
#################################################################################
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: bdb-instance-0-ss
namespace: default
spec:
serviceName: bdb-instance-0
replicas: 1
template:
metadata:
name: bdb-instance-0-ss
labels:
app: bdb-instance-0-ss
spec:
restartPolicy: Always
volumes:
- name: bdb-data
persistentVolumeClaim:
claimName: tendermint-db-claim
- name: bdb-config-data
persistentVolumeClaim:
claimName: tendermint-config-db-claim
- name: bdb-certs
secret:
secretName: bdb-certs
defaultMode: 0400
- name: ca-auth
secret:
secretName: ca-auth
defaultMode: 0400
containers:
# Treating bigchaindb + nginx + tendermint as a single Pod because they should not
# exist without each other
# Nginx container for hosting the public key of this node
- name: nginx
imagePullPolicy: Always
image: bigchaindb/nginx_pub_key_access:2.0.0-alpha3
env:
- name: TM_PUB_KEY_ACCESS_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-pub-key-access
ports:
- containerPort: 9986
name: bdb-pk-access
volumeMounts:
- name: bdb-config-data
mountPath: /usr/share/nginx
readOnly: true
# Tendermint container
- name: tendermint
imagePullPolicy: Always
image: bigchaindb/tendermint:2.0.0-alpha3
env:
- name: TM_SEEDS
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-seeds
- name: TM_VALIDATOR_POWER
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-validator-power
- name: TM_VALIDATORS
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-validators
- name: TM_PUB_KEY_ACCESS_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-pub-key-access
- name: TM_GENESIS_TIME
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-genesis-time
- name: TM_CHAIN_ID
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-chain-id
- name: TM_P2P_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-p2p-port
- name: TM_INSTANCE_NAME
valueFrom:
configMapKeyRef:
name: vars
key: bdb-instance-name
- name: TMHOME
value: /tendermint
- name: TM_PROXY_APP
valueFrom:
configMapKeyRef:
name: vars
key: bdb-instance-name
- name: TM_ABCI_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-abci-port
- name: TM_RPC_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-rpc-port
resources:
limits:
cpu: 200m
memory: 5G
volumeMounts:
- name: bdb-data
mountPath: /tendermint
- name: bdb-config-data
mountPath: /tendermint_node_data
ports:
- containerPort: 46656
name: p2p
- containerPort: 46657
name: rpc
livenessProbe:
exec:
command:
- /bin/bash
- "-c"
- |
curl -s --fail --max-time 10 "http://${TM_INSTANCE_NAME}:${TM_RPC_PORT}/abci_info" > /dev/null
ERR=$?
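# curl exit code 28 means the RPC request timed out; a JSON-RPC error code of
# -32603 (internal error) means Tendermint answered but cannot reach the ABCI
# app (BigchainDB), so both cases below must fail the probe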
if [ "$ERR" == 28 ]; then
exit 1
elif [[ $(curl --max-time 10 "http://${TM_INSTANCE_NAME}:${TM_RPC_PORT}/abci_info" | jq -r ".error.code") == -32603 ]]; then
exit 1
elif [ "$ERR" != 0 ]; then
exit 1
else
exit 0
fi
initialDelaySeconds: 60
periodSeconds: 15
failureThreshold: 3
timeoutSeconds: 15
# BigchainDB container
- name: bigchaindb
image: bigchaindb/bigchaindb:2.0.0-alpha3
imagePullPolicy: Always
args:
- start
env:
- name: BIGCHAINDB_DATABASE_HOST
valueFrom:
configMapKeyRef:
name: vars
key: mdb-instance-name
- name: BIGCHAINDB_DATABASE_PORT
valueFrom:
configMapKeyRef:
name: vars
key: mongodb-backend-port
- name: BIGCHAINDB_DATABASE_BACKEND
value: "localmongodb"
- name: BIGCHAINDB_DATABASE_NAME
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-database-name
- name: BIGCHAINDB_SERVER_BIND
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-server-bind
- name: BIGCHAINDB_WSSERVER_HOST
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-ws-interface
- name: BIGCHAINDB_WSSERVER_ADVERTISED_HOST
valueFrom:
configMapKeyRef:
name: vars
key: node-fqdn
- name: BIGCHAINDB_WSSERVER_PORT
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-ws-port
- name: BIGCHAINDB_WSSERVER_ADVERTISED_PORT
valueFrom:
configMapKeyRef:
name: vars
key: node-frontend-port
- name: BIGCHAINDB_WSSERVER_ADVERTISED_SCHEME
valueFrom:
configMapKeyRef:
name: vars
key: bigchaindb-wsserver-advertised-scheme
- name: BIGCHAINDB_BACKLOG_REASSIGN_DELAY
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-backlog-reassign-delay
- name: BIGCHAINDB_DATABASE_MAXTRIES
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-database-maxtries
- name: BIGCHAINDB_DATABASE_CONNECTION_TIMEOUT
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-database-connection-timeout
- name: BIGCHAINDB_LOG_LEVEL_CONSOLE
valueFrom:
configMapKeyRef:
name: bdb-config
key: bigchaindb-log-level
- name: BIGCHAINDB_DATABASE_SSL
value: "true"
- name: BIGCHAINDB_DATABASE_CA_CERT
value: /etc/bigchaindb/ca/ca.pem
- name: BIGCHAINDB_DATABASE_CRLFILE
value: /etc/bigchaindb/ca/crl.pem
- name: BIGCHAINDB_DATABASE_CERTFILE
value: /etc/bigchaindb/ssl/bdb-instance.pem
- name: BIGCHAINDB_DATABASE_KEYFILE
value: /etc/bigchaindb/ssl/bdb-instance.key
- name: BIGCHAINDB_DATABASE_LOGIN
valueFrom:
configMapKeyRef:
name: bdb-config
key: bdb-user
- name: BIGCHAINDB_TENDERMINT_HOST
valueFrom:
configMapKeyRef:
name: vars
key: bdb-instance-name
- name: BIGCHAINDB_TENDERMINT_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: bdb-rpc-port
command:
- bash
- "-c"
- |
curl -s --fail "http://${BIGCHAINDB_TENDERMINT_HOST}:9986/pub_key.json" > /dev/null
ERR=$?
while [ "$ERR" != 0 ]; do
sleep 30
curl -s --fail "http://${BIGCHAINDB_TENDERMINT_HOST}:9986/pub_key.json" > /dev/null
ERR=$?
echo "Waiting for Tendermint instance."
done
bigchaindb -l DEBUG start
ports:
- containerPort: 9984
protocol: TCP
name: bdb-port
- containerPort: 9985
protocol: TCP
name: bdb-ws-port
- containerPort: 46658
protocol: TCP
name: bdb-abci-port
volumeMounts:
- name: bdb-certs
mountPath: /etc/bigchaindb/ssl/
readOnly: true
- name: ca-auth
mountPath: /etc/bigchaindb/ca/
readOnly: true
resources:
limits:
cpu: 200m
memory: 2G
livenessProbe:
exec:
command:
- /bin/bash
- "-c"
- |
curl -s --fail --max-time 10 "http://${BIGCHAINDB_TENDERMINT_HOST}:${BIGCHAINDB_TENDERMINT_PORT}/abci_info" > /dev/null
ERR=$?
if [ "$ERR" == 28 ]; then
exit 1
elif [[ $(curl --max-time 10 "http://${BIGCHAINDB_TENDERMINT_HOST}:${BIGCHAINDB_TENDERMINT_PORT}/abci_info" | jq -r ".error.code") == -32603 ]]; then
exit 1
elif [ "$ERR" != 0 ]; then
exit 1
else
exit 0
fi
initialDelaySeconds: 60
periodSeconds: 10
failureThreshold: 3
timeoutSeconds: 15
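A quick way to exercise the combined Pod once this StatefulSet is applied (commands adapted from the testing section above; the curls assume a shell in a container inside the cluster):

$ kubectl apply -f bigchaindb/bigchaindb-ss.yaml
$ kubectl get pods -w
$ curl -X GET http://bdb-instance-0:9984                # BigchainDB HTTP API
$ curl -X GET http://bdb-instance-0:9986/pub_key.json   # nginx public-key server
$ curl -X GET http://bdb-instance-0:46657/abci_info     # Tendermint RPC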

View File

@ -7,7 +7,7 @@ metadata:
name: bdb-instance-0
spec:
selector:
app: bdb-instance-0-dep
app: bdb-instance-0-ss
ports:
- port: 9984
targetPort: 9984
@ -21,5 +21,17 @@ spec:
targetPort: 46658
name: tm-abci-port
protocol: TCP
- port: 46656
targetPort: 46656
name: tm-p2p-port
protocol: TCP
- port: 46657
targetPort: 46657
name: tm-rpc-port
protocol: TCP
- port: 9986
targetPort: 9986
name: pub-key-access
protocol: TCP
type: ClusterIP
clusterIP: None

View File

@ -0,0 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/nginx_pub_key_access:2.0.0-alpha3 .
docker push bigchaindb/nginx_pub_key_access:2.0.0-alpha3

View File

@ -1,6 +1,8 @@
FROM tendermint/tendermint:0.12
FROM tendermint/tendermint:0.19.0
LABEL maintainer "dev@bigchaindb.com"
WORKDIR /
USER root
RUN apk --update add bash
COPY genesis.json.template /etc/tendermint/genesis.json
COPY tendermint_entrypoint.bash /
VOLUME /tendermint /tendermint_node_data

View File

@ -0,0 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/tendermint:2.0.0-alpha3 .
docker push bigchaindb/tendermint:2.0.0-alpha3

View File

@ -49,20 +49,25 @@ else
fi
# copy template
cp /etc/tendermint/genesis.json /tendermint/genesis.json
mkdir -p /tendermint/config
cp /etc/tendermint/genesis.json /tendermint/config/genesis.json
TM_GENESIS_FILE=/tendermint/genesis.json
TM_GENESIS_FILE=/tendermint/config/genesis.json
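# Tendermint 0.19 moved the config files (genesis.json, priv_validator.json,
# node_key.json) under $TMHOME/config/, hence the new paths here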
TM_PUB_KEY_DIR=/tendermint_node_data
# configure the nginx.conf file with env variables
sed -i "s|TM_GENESIS_TIME|\"${tm_genesis_time}\"|g" ${TM_GENESIS_FILE}
sed -i "s|TM_CHAIN_ID|\"${tm_chain_id}\"|g" ${TM_GENESIS_FILE}
if [ ! -f /tendermint/priv_validator.json ]; then
tendermint gen_validator > /tendermint/priv_validator.json
if [ ! -f /tendermint/config/priv_validator.json ]; then
tendermint gen_validator > /tendermint/config/priv_validator.json
# pub_key.json will be served by the nginx container
cat /tendermint/priv_validator.json
cat /tendermint/priv_validator.json | jq ".pub_key" > "$TM_PUB_KEY_DIR"/pub_key.json
cat /tendermint/config/priv_validator.json
cat /tendermint/config/priv_validator.json | jq ".pub_key" > "$TM_PUB_KEY_DIR"/pub_key.json
fi
if [ ! -f /tendermint/config/node_key.json ]; then
tendermint gen_node_key > "$TM_PUB_KEY_DIR"/address
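# gen_node_key writes config/node_key.json and prints the node ID; capturing
# it in the nginx-served 'address' file lets peers assemble node_id@host:port
# seed entries (see the seeds loop below)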
fi
# fill genesis file with validators
@ -90,20 +95,36 @@ for i in "${!VALS_ARR[@]}"; do
sleep 30
curl -s --fail "http://${VALS_ARR[$i]}:$tm_pub_key_access_port/pub_key.json" > /dev/null
ERR=$?
echo "Cannot connect to Tendermint instance: ${VALS_ARR[$i]}"
echo "Cannot get public key for Tendermint instance: ${VALS_ARR[$i]}"
done
set -e
# add validator to genesis file along with its pub_key
curl -s "http://${VALS_ARR[$i]}:$tm_pub_key_access_port/pub_key.json" | jq ". as \$k | {pub_key: \$k, power: ${VAL_POWERS_ARR[$i]}, name: \"${VALS_ARR[$i]}\"}" > pub_validator.json
cat /tendermint/genesis.json | jq ".validators |= .+ [$(cat pub_validator.json)]" > tmpgenesis && mv tmpgenesis /tendermint/genesis.json
cat /tendermint/config/genesis.json | jq ".validators |= .+ [$(cat pub_validator.json)]" > tmpgenesis && mv tmpgenesis /tendermint/config/genesis.json
rm pub_validator.json
done
done
# construct seeds
IFS=',' read -ra SEEDS_ARR <<< "$tm_seeds"
seeds=()
for s in "${SEEDS_ARR[@]}"; do
seeds+=("$s:$tm_p2p_port")
echo "http://$s:$tm_pub_key_access_port/address"
curl -s --fail "http://$s:$tm_pub_key_access_port/address" > /dev/null
ERR=$?
while [ "$ERR" != 0 ]; do
RETRIES=$((RETRIES+1))
if [ $RETRIES -eq 10 ]; then
echo "${CANNOT_INITIATLIZE_INSTANCE}"
exit 1
fi
# 300 second (30s * 10 retries) timeout before the container dies if it cannot find initial peers
sleep 30
curl -s --fail "http://$s:$tm_pub_key_access_port/address" > /dev/null
ERR=$?
echo "Cannot get address for Tendermint instance: ${s}"
done
seed_addr=$(curl -s "http://$s:$tm_pub_key_access_port/address")
seeds+=("$seed_addr@$s:$tm_p2p_port")
done
seeds=$(IFS=','; echo "${seeds[*]}")
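The upgraded Tendermint expects seed entries in ``<node-id>@<host>:<port>`` form, which is why each seed's node ID is fetched from its ``/address`` endpoint before being joined into the list. Illustratively (node IDs made up):

# echo "$seeds"
# a1b2c3d4e5f6@bdb-instance-1:46656,f6e5d4c3b2a1@bdb-instance-2:46656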

View File

@ -129,46 +129,39 @@ metadata:
name: tendermint-config
namespace: default
data:
# tm-seeds is the list of all the peers in the network.
tm-seeds: "<',' separated list of all tendermint nodes in the network>"
# bdb-seeds is the list of all the peers in the network.
bdb-seeds: "<',' separated list of all tendermint nodes in the network>"
# tm-validators is the list of all validators in the network.
tm-validators: "<',' separated list of all validators in the network>"
# bdb-validators is the list of all validators in the network.
bdb-validators: "<',' separated list of all validators in the network>"
# tm-validator-power is the validators voting power, make sure the order and
# bdb-validator-power is the validators voting power, make sure the order and
# the number of nodes in bdb-validator-power and bdb-validators is the same.
tm-validator-power: "<',' separated list of validator power of each node in the network>"
bdb-validator-power: "<',' separated list of validator power of each node in the network>"
# tm-genesis-time is the official time of blockchain start.
# bdb-genesis-time is the official time of blockchain start.
# example: 0001-01-01T00:00:00Z
tm-genesis-time: "<timestamp of blockchain start>"
bdb-genesis-time: "<timestamp of blockchain start>"
# tm-chain-id is the ID of the blockchain. Must be unique for every blockchain.
# bdb-chain-id is the ID of the blockchain. Must be unique for every blockchain.
# example: test-chain-KPI1Ud
tm-chain-id: "<ID of the blockchain>"
bdb-chain-id: "<ID of the blockchain>"
# tendermint-instance-name is the name of the Tendermint instance
# in the cluster
tm-instance-name: "<name of tendermint instance>"
# ngx-tm-instance-name is the FQDN of the tendermint instance in this cluster
ngx-tm-instance-name: "<name of tendermint instance>.default.svc.cluster.local"
# tm-abci-port is used by Tendermint Core for ABCI traffic. BigchainDB nodes
# bdb-abci-port is used by Tendermint Core for ABCI traffic. BigchainDB nodes
# use that internally.
tm-abci-port: "46658"
bdb-abci-port: "46658"
# tm-p2p-port is used by Tendermint Core to communicate with
# bdb-p2p-port is used by Tendermint Core to communicate with
# other peers in the network. This port is accessible publicly.
tm-p2p-port: "46656"
bdb-p2p-port: "46656"
# tm-rpc-port is used by Tendermint Core to rpc. BigchainDB nodes
# bdb-rpc-port is used by Tendermint Core for RPC. BigchainDB nodes
# use this port internally.
tm-rpc-port: "46657"
bdb-rpc-port: "46657"
# tm-pub-key-access is the port number used to host/publish the
# bdb-pub-key-access is the port number used to host/publish the
# public key of the tendermint node in this cluster.
tm-pub-key-access: "9986"
bdb-pub-key-access: "9986"
---
apiVersion: v1

View File

@ -1,4 +1,4 @@
#!/bin/bash
docker build -t bigchaindb/localmongodb:2.0.0-alpha .
docker push bigchaindb/localmongodb:2.0.0-alpha
docker build -t bigchaindb/localmongodb:2.0.0-alpha3 .
docker push bigchaindb/localmongodb:2.0.0-alpha3

View File

@ -8,7 +8,7 @@ metadata:
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
location: <Storage account location>
# If you have created a different storage account e.g. for Premium Storage
storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks(Only used for Tectonic deployment)
@ -24,7 +24,7 @@ metadata:
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Premium_LRS #[Premium_LRS, Standard_LRS]
location: westeurope
location: <Storage account location>
# If you have created a different storage account e.g. for Premium Storage
storageAccount: <Storage account name>
# Use Managed Disk(s) with VMs using Managed Disks(Only used for Tectonic deployment)

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/nginx_http:2.0.0-alpha .
docker build -t bigchaindb/nginx_http:2.0.0-alpha3 .
docker push bigchaindb/nginx_http:2.0.0-alpha
docker push bigchaindb/nginx_http:2.0.0-alpha3

View File

@ -146,17 +146,17 @@ stream {
# DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off;
# The following map block enables lazy-binding to the backend at runtime,
# The following map blocks enable lazy-binding to the backend at runtime,
# rather than binding as soon as NGINX starts.
map $remote_addr $tm_backend {
default TM_BACKEND_HOST;
map $remote_addr $bdb_backend {
default BIGCHAINDB_BACKEND_HOST;
}
# Server to forward connection to nginx instance hosting
# tendermint node public key.
server {
listen TM_PUB_KEY_ACCESS_PORT;
proxy_pass $tm_backend:TM_PUB_KEY_ACCESS_PORT;
proxy_pass $bdb_backend:TM_PUB_KEY_ACCESS_PORT;
}
# Server to forward p2p connections to Tendermint instance.
@ -164,7 +164,7 @@ stream {
listen TM_P2P_PORT so_keepalive=3m:1m:5;
preread_timeout 60s;
tcp_nodelay on;
proxy_pass $tm_backend:TM_P2P_PORT;
proxy_pass $bdb_backend:TM_P2P_PORT;
}
}

View File

@ -21,6 +21,10 @@ bdb_backend_host=`printenv BIGCHAINDB_BACKEND_HOST`
bdb_api_port=`printenv BIGCHAINDB_API_PORT`
bdb_ws_port=`printenv BIGCHAINDB_WS_PORT`
# Tendermint vars
tm_pub_key_access_port=`printenv TM_PUB_KEY_ACCESS_PORT`
tm_p2p_port=`printenv TM_P2P_PORT`
# sanity check
if [[ -z "${node_frontend_port:?NODE_FRONTEND_PORT not specified. Exiting!}" || \
@ -33,7 +37,6 @@ if [[ -z "${node_frontend_port:?NODE_FRONTEND_PORT not specified. Exiting!}" ||
-z "${dns_server:?DNS_SERVER not specified. Exiting!}" || \
-z "${health_check_port:?HEALTH_CHECK_PORT not specified.}" || \
-z "${tm_pub_key_access_port:?TM_PUB_KEY_ACCESS_PORT not specified. Exiting!}" || \
-z "${tm_backend_host:?TM_BACKEND_HOST not specified. Exiting!}" || \
-z "${tm_p2p_port:?TM_P2P_PORT not specified. Exiting!}" ]]; then
exit 1
else
@ -47,7 +50,6 @@ else
echo BIGCHAINDB_API_PORT="$bdb_api_port"
echo BIGCHAINDB_WS_PORT="$bdb_ws_port"
echo TM_PUB_KEY_ACCESS_PORT="$tm_pub_key_access_port"
echo TM_BACKEND_HOST="$tm_backend_host"
echo TM_P2P_PORT="$tm_p2p_port"
fi
@ -64,7 +66,6 @@ sed -i "s|BIGCHAINDB_WS_PORT|${bdb_ws_port}|g" ${NGINX_CONF_FILE}
sed -i "s|DNS_SERVER|${dns_server}|g" ${NGINX_CONF_FILE}
sed -i "s|HEALTH_CHECK_PORT|${health_check_port}|g" ${NGINX_CONF_FILE}
sed -i "s|TM_PUB_KEY_ACCESS_PORT|${tm_pub_key_access_port}|g" ${NGINX_CONF_FILE}
sed -i "s|TM_BACKEND_HOST|${tm_backend_host}|g" ${NGINX_CONF_FILE}
sed -i "s|TM_P2P_PORT|${tm_p2p_port}|g" ${NGINX_CONF_FILE}
# start nginx

View File

@ -59,29 +59,24 @@ spec:
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-pub-key-access
- name: TM_BACKEND_HOST
valueFrom:
configMapKeyRef:
name: tendermint-config
key: ngx-tm-instance-name
key: bdb-pub-key-access
- name: TM_P2P_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-p2p-port
key: bdb-p2p-port
ports:
- containerPort: "<node-health-check-port from ConfigMap>"
- containerPort: 8888
protocol: TCP
name: ngx-health
- containerPort: "<node-frontend-port from ConfigMap>"
- containerPort: 80
protocol: TCP
- containerPort: "<tm-pub-key-access from ConfigMap>"
- containerPort: 9986
protocol: TCP
name: tm-pub-key
- containerPort: "<tm-p2p-port from ConfigMap>"
name: bdb-pub-key
- containerPort: 46656
protocol: TCP
name: tm-p2p-port
name: bdb-p2p-port
livenessProbe:
httpGet:
path: /health

View File

@ -1,5 +1,5 @@
#!/bin/bash
docker build -t bigchaindb/nginx_https:2.0.0-alpha .
docker build -t bigchaindb/nginx_https:2.0.0-alpha3 .
docker push bigchaindb/nginx_https:2.0.0-alpha
docker push bigchaindb/nginx_https:2.0.0-alpha3

View File

@ -177,17 +177,17 @@ stream {
# DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off;
# The following map block enables lazy-binding to the backend at runtime,
# The following map blocks enable lazy-binding to the backend at runtime,
# rather than binding as soon as NGINX starts.
map $remote_addr $tm_backend {
default TM_BACKEND_HOST;
map $remote_addr $bdb_backend {
default BIGCHAINDB_BACKEND_HOST;
}
# Server to forward connection to nginx instance hosting
# tendermint node public key.
server {
listen TM_PUB_KEY_ACCESS_PORT;
proxy_pass $tm_backend:TM_PUB_KEY_ACCESS_PORT;
proxy_pass $bdb_backend:TM_PUB_KEY_ACCESS_PORT;
}
# Server to forward p2p connections to Tendermint instance.
@ -195,7 +195,7 @@ stream {
listen TM_P2P_PORT so_keepalive=3m:1m:5;
preread_timeout 60s;
tcp_nodelay on;
proxy_pass $tm_backend:TM_P2P_PORT;
proxy_pass $bdb_backend:TM_P2P_PORT;
}
}

View File

@ -174,17 +174,17 @@ stream {
# DNS resolver to use for all the backend names specified in this configuration.
resolver DNS_SERVER valid=30s ipv6=off;
# The following map block enables lazy-binding to the backend at runtime,
# The following map blocks enable lazy-binding to the backend at runtime,
# rather than binding as soon as NGINX starts.
map $remote_addr $tm_backend {
default TM_BACKEND_HOST;
map $remote_addr $bdb_backend {
default BIGCHAINDB_BACKEND_HOST;
}
# Server to forward connection to nginx instance hosting
# tendermint node public key.
server {
listen TM_PUB_KEY_ACCESS_PORT;
proxy_pass $tm_backend:TM_PUB_KEY_ACCESS_PORT;
proxy_pass $bdb_backend:TM_PUB_KEY_ACCESS_PORT;
}
# Server to forward p2p connections to Tendermint instance.
@ -192,7 +192,7 @@ stream {
listen TM_P2P_PORT so_keepalive=3m:1m:5;
preread_timeout 60s;
tcp_nodelay on;
proxy_pass $tm_backend:TM_P2P_PORT;
proxy_pass $bdb_backend:TM_P2P_PORT;
}
}

View File

@ -31,7 +31,6 @@ bdb_ws_port=`printenv BIGCHAINDB_WS_PORT`
# Tendermint vars
tm_pub_key_access_port=`printenv TM_PUB_KEY_ACCESS_PORT`
tm_backend_host=`printenv TM_BACKEND_HOST`
tm_p2p_port=`printenv TM_P2P_PORT`
@ -48,7 +47,6 @@ if [[ -z "${node_frontend_port:?NODE_FRONTEND_PORT not specified. Exiting!}" ||
-z "${health_check_port:?HEALTH_CHECK_PORT not specified. Exiting!}" || \
-z "${node_fqdn:?NODE_FQDN not specified. Exiting!}" || \
-z "${tm_pub_key_access_port:?TM_PUB_KEY_ACCESS_PORT not specified. Exiting!}" || \
-z "${tm_backend_host:?TM_BACKEND_HOST not specified. Exiting!}" || \
-z "${tm_p2p_port:?TM_P2P_PORT not specified. Exiting!}" ]]; then
echo "Missing required environment variables. Exiting!"
exit 1
@ -65,7 +63,6 @@ else
echo BIGCHAINDB_API_PORT="$bdb_api_port"
echo BIGCHAINDB_WS_PORT="$bdb_ws_port"
echo TM_PUB_KEY_ACCESS_PORT="$tm_pub_key_access_port"
echo TM_BACKEND_HOST="$tm_backend_host"
echo TM_P2P_PORT="$tm_p2p_port"
fi
@ -93,7 +90,6 @@ sed -i "s|BIGCHAINDB_WS_PORT|${bdb_ws_port}|g" ${NGINX_CONF_FILE}
sed -i "s|DNS_SERVER|${dns_server}|g" ${NGINX_CONF_FILE}
sed -i "s|HEALTH_CHECK_PORT|${health_check_port}|g" ${NGINX_CONF_FILE}
sed -i "s|TM_PUB_KEY_ACCESS_PORT|${tm_pub_key_access_port}|g" ${NGINX_CONF_FILE}
sed -i "s|TM_BACKEND_HOST|${tm_backend_host}|g" ${NGINX_CONF_FILE}
sed -i "s|TM_P2P_PORT|${tm_p2p_port}|g" ${NGINX_CONF_FILE}
# start nginx

View File

@ -12,7 +12,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: bigchaindb/nginx_https:2.0.0-alpha
image: bigchaindb/nginx_https:2.0.0-alpha3
imagePullPolicy: Always
env:
- name: NODE_FRONTEND_PORT
@ -74,17 +74,12 @@ spec:
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-pub-key-access
- name: TM_BACKEND_HOST
valueFrom:
configMapKeyRef:
name: tendermint-config
key: ngx-tm-instance-name
key: bdb-pub-key-access
- name: TM_P2P_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-p2p-port
key: bdb-p2p-port
- name: AUTHORIZATION_MODE
valueFrom:
configMapKeyRef:
@ -107,10 +102,10 @@ spec:
name: ngx-port
- containerPort: 9986
protocol: TCP
name: tm-pub-key
name: bdb-pub-key
- containerPort: 46656
protocol: TCP
name: tm-p2p-port
name: bdb-p2p-port
livenessProbe:
httpGet:
path: /health

View File

@ -100,8 +100,8 @@ function convert_b64(){
}
function configure_common(){
sudo apt-get update -y
sudo apt-get install openssl -y
apt-get update -y
apt-get install openssl -y
wget https://github.com/OpenVPN/easy-rsa/archive/3.0.1.tar.gz -P $1
tar xzvf $1/3.0.1.tar.gz -C $1/
rm $1/3.0.1.tar.gz
@ -134,8 +134,16 @@ function generate_secretes_no_threescale(){
root_crl_pem=`cat $1/crl.pem.b64`
secrete_token=`echo $2 | base64 -w 0`
https_cert_key=`cat $3 | base64 -w 0`
https_cert_chain_pem=`cat $4 | base64 -w 0`
if [ -f $3 ]; then
https_cert_key=`cat $3 | base64 -w 0`
else
https_cert_key=""
fi
if [ -f $4 ]; then
https_cert_chain_pem=`cat $4 | base64 -w 0`
else
https_cert_chain_pem=""
fi
mdb_admin_password=`echo $5 | base64 -w 0`
@ -215,18 +223,17 @@ EOF
function generate_config_map(){
mdb_instance_name="$MDB_CN-$INDEX"
bdb_instance_name="$BDB_CN-$INDEX"
ngx_instance_name="ngx-instance-$INDEX"
bdb_user=`cat "${1}"/"$BDB_CN"-"${INDEX}".user`
mdb_admin_username="${2}"
node_fqdn="${3}"
tm_seeds="${4}"
tm_validators="${5}"
tm_validators_power="${6}"
tm_genesis_time="${7}"
tm_chain_id="${8}"
tm_instance_name="${9}"
bdb_seeds="${4}"
bdb_validators="${5}"
bdb_validators_power="${6}"
bdb_genesis_time="${7}"
bdb_chain_id="${8}"
bdb_instance_name="${9}"
dns_resolver_k8s="${10}"
cat > config-map.yaml << EOF
@ -354,46 +361,39 @@ metadata:
name: tendermint-config
namespace: default
data:
# tm-seeds is the list of all the peers in the network.
tm-seeds: "${tm_seeds}"
# bdb-seeds is the list of all the peers in the network.
bdb-seeds: "${bdb_seeds}"
# tm-validators is the list of all validators in the network.
tm-validators: "${tm_validators}"
# bdb-validators is the list of all validators in the network.
bdb-validators: "${bdb_validators}"
# tm-validator-power is the validators voting power, make sure the order and
# the number of nodes in tm-validator-power and tm-validators is the same.
tm-validator-power: "${tm_validators_power}"
# bdb-validator-power is the validators voting power, make sure the order and
# the number of nodes in bdb-validator-power and bdb-validators is the same.
bdb-validator-power: "${bdb_validators_power}"
# tm-genesis-time is the official time of blockchain start.
# bdb-genesis-time is the official time of blockchain start.
# example: 0001-01-01T00:00:00Z
tm-genesis-time: "${tm_genesis_time}"
bdb-genesis-time: "${bdb_genesis_time}"
# tm-chain-id is the ID of the blockchain. Must be unique for every blockchain.
# bdb-chain-id is the ID of the blockchain. Must be unique for every blockchain.
# example: test-chain-KPI1Ud
tm-chain-id: "${tm_chain_id}"
bdb-chain-id: "${bdb_chain_id}"
# tendermint-instance-name is the name of the Tendermint instance
# in the cluster
tm-instance-name: "${tm_instance_name}"
# ngx-tm-instance-name is the FQDN of the tendermint instance in this cluster
ngx-tm-instance-name: "${tm_instance_name}.default.svc.cluster.local"
# tm-abci-port is used by Tendermint Core for ABCI traffic. BigchainDB nodes
# bdb-abci-port is used by Tendermint Core for ABCI traffic. BigchainDB nodes
# use that internally.
tm-abci-port: "46658"
bdb-abci-port: "46658"
# tm-p2p-port is used by Tendermint Core to communicate with
# bdb-p2p-port is used by Tendermint Core to communicate with
# other peers in the network. This port is accessible publicly.
tm-p2p-port: "46656"
bdb-p2p-port: "46656"
# tm-rpc-port is used by Tendermint Core to rpc. BigchainDB nodes
# bdb-rpc-port is used by Tendermint Core for RPC. BigchainDB nodes
# use this port internally.
tm-rpc-port: "46657"
bdb-rpc-port: "46657"
# tm-pub-key-access is the port number used to host/publish the
# bdb-pub-key-access is the port number used to host/publish the
# public key of the tendermint node in this cluster.
tm-pub-key-access: "9986"
bdb-pub-key-access: "9986"
---
apiVersion: v1

View File

@ -87,4 +87,4 @@ convert_b64 $BASE_K8S_DIR $BASE_CA_DIR/$BASE_EASY_RSA_PATH $BASE_CLIENT_CERT_DIR
get_users $BASE_USERS_DIR $BASE_CA_DIR/$BASE_EASY_RSA_PATH
generate_secretes_no_threescale $BASE_K8S_DIR $SECRET_TOKEN $HTTPS_CERT_KEY_FILE_NAME $HTTPS_CERT_CHAIN_FILE_NAME $MDB_ADMIN_PASSWORD
generate_config_map $BASE_USERS_DIR $MDB_ADMIN_USER $NODE_FQDN $TM_SEEDS $TM_VALIDATORS $TM_VALIDATOR_POWERS $TM_GENESIS_TIME $TM_CHAIN_ID $TM_INSTANCE_NAME $NODE_DNS_SERVER
generate_config_map $BASE_USERS_DIR $MDB_ADMIN_USER $NODE_FQDN $BDB_SEEDS $BDB_VALIDATORS $BDB_VALIDATOR_POWERS $BDB_GENESIS_TIME $BDB_CHAIN_ID $BDB_INSTANCE_NAME $NODE_DNS_SERVER

View File

@ -1,44 +1,43 @@
# DNS name of the bigchaindb node
NODE_FQDN="test-node.bigchaindb.com"
NODE_FQDN="test.bigchaindb.com"
# Secret token used for authorization of
# POST requests to the bigchaindb node
SECRET_TOKEN="test-secret"
# Absolute path for the SSL certificate key
HTTPS_CERT_KEY_FILE_NAME="</path/to/https.key>"
HTTPS_CERT_KEY_FILE_NAME="/path/to/https.key"
# Absolute path for the SSL certificate chain
HTTPS_CERT_CHAIN_FILE_NAME="</path/to/https.pem>"
HTTPS_CERT_CHAIN_FILE_NAME="/path/to/https.crt"
# MongoDB Admin user credentials
MDB_ADMIN_USER='adminUser'
MDB_ADMIN_PASSWORD='superstrongpassword'
# Tendermint instance name of the bigchaindb
# node. This name should be unique
TM_INSTANCE_NAME='tm-instance-0'
# BigchainDB instance name. This name should be unique
BDB_INSTANCE_NAME='bdb-instance-0'
# Comma separated list of initial peers in the
# network.
TM_SEEDS='tm-instance-0,tm-instance-1,tm-instance-2'
BDB_SEEDS='bdb-instance-0,bdb-instance-1,bdb-instance-2,bdb-instance-3'
# Comma separated list of validators in the
# network
TM_VALIDATORS='tm-instance-0,tm-instance-1,tm-instance-2'
BDB_VALIDATORS='bdb-instance-0,bdb-instance-1,bdb-instance-2,bdb-instance-3'
# Comma separated list of voting
# power of all validators. Make sure
# order and number of powers correspond
# to TM_VALIDATORS
TM_VALIDATOR_POWERS='10,10,10'
# to BDB_VALIDATORS
BDB_VALIDATOR_POWERS='10,10,10,10'
# Official time of blockchain start
TM_GENESIS_TIME='0001-01-01T00:00:00Z'
BDB_GENESIS_TIME='0001-01-01T00:00:00Z'
# Blockchain ID must be unique for
# every blockchain
TM_CHAIN_ID='test-chain-rwcPML'
BDB_CHAIN_ID='test-chain-rwcPML'
# IP address of the resolver (DNS server).
# i.e. IP of `kube-dns`, can be retrieved using:

View File

@ -1,5 +0,0 @@
#!/bin/bash
docker build -t bigchaindb/nginx_pub_key_access:2.0.0-alpha .
docker push bigchaindb/nginx_pub_key_access:2.0.0-alpha

View File

@ -1,120 +0,0 @@
#################################################################################
# This YAML file desribes a StatefulSet with a service for running and exposing #
# a Tendermint instance. It depends on the tendermint-config-db-claim #
# and tendermint-db-claim k8s pvc. #
#################################################################################
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: tm-instance-0-ss
namespace: default
spec:
serviceName: tm-instance-0
replicas: 1
template:
metadata:
name: tm-instance-0-ss
labels:
app: tm-instance-0-ss
spec:
restartPolicy: Always
volumes:
- name: tm-data
persistentVolumeClaim:
claimName: tendermint-db-claim
- name: tm-config-data
persistentVolumeClaim:
claimName: tendermint-config-db-claim
containers:
# Treating nginx + tendermint as a POD because they should not
# exist without each other
# Nginx container for hosting public key of this ndoe
- name: nginx
imagePullPolicy: Always
image: bigchaindb/nginx_pub_key_access:2.0.0-alpha
env:
- name: TM_PUB_KEY_ACCESS_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-pub-key-access
ports:
- containerPort: 9986
name: tm-pk-access
volumeMounts:
- name: tm-config-data
mountPath: /usr/share/nginx
readOnly: true
#Tendermint container
- name: tendermint
imagePullPolicy: Always
image: bigchaindb/tendermint:2.0.0-alpha
env:
- name: TM_SEEDS
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-seeds
- name: TM_VALIDATOR_POWER
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-validator-power
- name: TM_VALIDATORS
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-validators
- name: TM_PUB_KEY_ACCESS_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-pub-key-access
- name: TM_GENESIS_TIME
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-genesis-time
- name: TM_CHAIN_ID
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-chain-id
- name: TM_P2P_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-p2p-port
- name: TM_INSTANCE_NAME
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-instance-name
- name: TMHOME
value: /tendermint
- name: TM_PROXY_APP
valueFrom:
configMapKeyRef:
name: vars
key: bdb-instance-name
- name: TM_ABCI_PORT
valueFrom:
configMapKeyRef:
name: tendermint-config
key: tm-abci-port
# Resource constraint on the pod, can be changed
resources:
limits:
cpu: 200m
memory: 5G
volumeMounts:
- name: tm-data
mountPath: /tendermint
- name: tm-config-data
mountPath: /tendermint_node_data
ports:
- containerPort: 46656
name: p2p
- containerPort: 46657
name: rpc

View File

@ -1,24 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: tm-instance-1
namespace: default
labels:
name: tm-instance-1
spec:
selector:
app: tm-instance-1-ss
ports:
- port: 46656
targetPort: 46656
name: p2p
protocol: TCP
- port: 46657
targetPort: 46657
name: rpc
protocol: TCP
- port: 9986
targetPort: 9986
name: pub-key-access
protocol: TCP
clusterIP: None

View File

@ -1,5 +0,0 @@
#!/bin/bash
docker build -t bigchaindb/tendermint:2.0.0-alpha .
docker push bigchaindb/tendermint:2.0.0-alpha