31 restructure documentation (#138)

* removed korean documentation

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed CN and KOR readme

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* changed to the press theme

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* first changes

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed H3 vs H1 issues

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added missing png

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added missing file

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed warnings

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* moved documents

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed obsolete files

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed obsolete folder

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed obs. file

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added some final changes

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed obs. reference

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
This commit is contained in:
Jürgen Eckel
2022-06-09 15:00:11 +02:00
committed by GitHub
parent fa2c8a5cc5
commit 4ffd8ca9df
117 changed files with 314 additions and 1139 deletions

View File

@@ -0,0 +1,18 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Networks & Federations
######################
There are several ways to set up a network. You can use the Kubernetes deployment template in this section, or use the Ansible solution in the Contributing section. Alternatively, you can set up a single node on your machine and connect it to an existing network.
.. include:: networks.md
:parser: myst_parser.sphinx_
.. include:: network-setup.md
:parser: myst_parser.sphinx_
.. include:: k8s-deployment-template/index.rst

View File

@@ -0,0 +1,228 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Architecture of a Planetmint Node Running in a Kubernetes Cluster
=================================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
If you deploy a Planetmint node into a Kubernetes cluster
as described in these docs, it will include:
* NGINX, OpenResty, Planetmint, MongoDB and Tendermint
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
* NGINX, OpenResty, Planetmint and MongoDB Monitoring Agent
`Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
* MongoDB and Tendermint `Kubernetes StatefulSets <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* Third party services like `3scale <https://3scale.net>`_,
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
`Azure Operations Management Suite
<https://docs.microsoft.com/en-us/azure/operations-management-suite/>`_.
.. _planetmint-node:
Planetmint Node Diagram
-----------------------
.. aafig::
:aspect: 60
:scale: 100
:background: #rgb
:proportional:
+ +
+--------------------------------------------------------------------------------------------------------------------------------------+
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| "Planetmint API" | | "Tendermint P2P" |
| | | "Communication/" |
| | | "Public Key Exchange" |
| | | |
| | | |
| v v |
| |
| +------------------+ |
| |"NGINX Service" | |
| +-------+----------+ |
| | |
| v |
| |
| +------------------+ |
| | "NGINX" | |
| | "Deployment" | |
| | | |
| +-------+----------+ |
| | |
| | |
| | |
| v |
| |
| "443" +----------+ "26656/9986" |
| | "Rate" | |
| +---------------------------+"Limiting"+-----------------------+ |
| | | "Logic" | | |
| | +----+-----+ | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | "27017" | | |
| v | v |
| +-------------+ | +------------+ |
| |"HTTPS" | | +------------------> |"Tendermint"| |
| |"Termination"| | | "9986" |"Service" | "26656" |
| | | | | +-------+ | <----+ |
| +-----+-------+ | | | +------------+ | |
| | | | | | |
| | | | v v |
| | | | +------------+ +------------+ |
| | | | |"NGINX" | |"Tendermint"| |
| | | | |"Deployment"| |"Stateful" | |
| | | | |"Pub-Key-Ex"| |"Set" | |
| ^ | | +------------+ +------------+ |
| +-----+-------+ | | |
| "POST" |"Analyze" | "GET" | | |
| |"Request" | | | |
| +-----------+ +--------+ | | |
| | +-------------+ | | | |
| | | | | "Bi+directional, communication between" |
| | | | | "PlanetmintAPP) and Tendermint" |
| | | | | "BFT consensus Engine" |
| | | | | |
| v v | | |
| | | |
| +-------------+ +--------------+ +----+-------------------> +--------------+ |
| | "OpenResty" | | "Planetmint" | | | "MongoDB" | |
| | "Service" | | "Service" | | | "Service" | |
| | | +----->| | | +-------> | | |
| +------+------+ | +------+-------+ | | +------+-------+ |
| | | | | | | |
| | | | | | | |
| v | v | | v |
| +-------------+ | +-------------+ | | +----------+ |
| | | | | | <------------+ | |"MongoDB" | |
| |"OpenResty" | | | "Planetmint"| | |"Stateful"| |
| |"Deployment" | | | "Deployment"| | |"Set" | |
| | | | | | | +-----+----+ |
| | | | | +---------------------------+ | |
| | | | | | | |
| +-----+-------+ | +-------------+ | |
| | | | |
| | | | |
| v | | |
| +-----------+ | v |
| | "Auth" | | +------------+ |
| | "Logic" |----------+ |"MongoDB" | |
| | | |"Monitoring"| |
| | | |"Agent" | |
| +---+-------+ +-----+------+ |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
+---------------+---------------------------------------------------------------------------------------+------------------------------+
| |
| |
| |
v v
+------------------------------------+ +------------------------------------+
| | | |
| | | |
| | | |
| "3Scale" | | "MongoDB Cloud" |
| | | |
| | | |
| | | |
+------------------------------------+ +------------------------------------+
.. note::
The arrows in the diagram represent the client-server communication. For
example, A-->B implies that A initiates the connection to B.
It does not represent the flow of data; the communication channel is always
fully duplex.
NGINX: Entrypoint and Gateway
-----------------------------
We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:
#. Rate Limiting: We configure NGINX to allow only a certain number of requests
(configurable) which prevents DoS attacks.
#. HTTPS Termination: The HTTPS connection does not carry through all the way
to Planetmint and terminates at NGINX for now.
#. Request Routing: For HTTPS connections on port 443 (or the configured Planetmint public api port),
the connection is proxied to:
#. OpenResty Service if it is a POST request.
#. Planetmint Service if it is a GET request.
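As a quick illustration of this routing (a sketch only; the FQDN below is a
placeholder for your node's DNS name and ``/api/v1/`` is the Planetmint HTTP API root),
a GET and a POST sent to the same public port end up at different backends:

.. code:: bash

   # GET -> proxied to the Planetmint Service
   curl https://bdb-test-node-0.example.com:443/api/v1/

   # POST -> proxied to the OpenResty Service (3scale authorization applies)
   curl -X POST https://bdb-test-node-0.example.com:443/api/v1/transactions \
        -H "Content-Type: application/json" -d '{}'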
We use an NGINX TCP proxy on port 27017 (configurable) at the cloud
entrypoint for:
#. Rate Limiting: We configure NGINX to allow only a certain number of requests
(configurable) which prevents DoS attacks.
#. Request Routing: For connections on port 27017 (or the configured MongoDB
public api port), the connection is proxied to the MongoDB Service.
OpenResty: API Management, Authentication and Authorization
-----------------------------------------------------------
We use `OpenResty <https://openresty.org/>`_ to perform authorization checks
with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.
OpenResty is NGINX plus a bunch of other
`components <https://openresty.org/en/components.html>`_. We primarily depend
on the LuaJIT compiler to execute the functions to authenticate the ``app_id``
and ``app_key`` with the 3scale backend.
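For illustration, a request that passes these checks carries the two headers,
roughly like this (the FQDN, endpoint and credential values are placeholders;
real values come from your 3scale account):

.. code:: bash

   curl -X POST https://bdb-test-node-0.example.com/api/v1/transactions \
        -H "app_id: <your-3scale-app-id>" \
        -H "app_key: <your-3scale-app-key>" \
        -H "Content-Type: application/json" \
        -d @signed_transaction.json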
MongoDB: Standalone
-------------------
We use MongoDB as the backend database for Planetmint.
We achieve security by avoiding DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.
Tendermint: BFT consensus engine
--------------------------------
We use Tendermint as the backend consensus engine for BFT replication of Planetmint.
In a multi-node deployment, Tendermint nodes/peers communicate with each other via
the public ports exposed by the NGINX gateway.
We use port **9986** (configurable) to allow Tendermint nodes to access the public keys
of the peers and port **26656** (configurable) for the rest of the communications between
the peers.
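A quick way to check that both ports are reachable from outside the cluster
(a sketch; replace the FQDN with your node's DNS name, and note that the exact
path served on port 9986 may differ in your deployment):

.. code:: bash

   # Tendermint P2P port
   nc -zv bdb-test-node-0.example.com 26656

   # public key exchange port
   curl http://bdb-test-node-0.example.com:9986/pub_key.json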

View File

@@ -0,0 +1,101 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _how-to-set-up-a-self-signed-certificate-authority:
How to Set Up a Self-Signed Certificate Authority
=================================================
This page enumerates the steps *we* use to set up a self-signed certificate authority (CA).
This is something that only needs to be done once per Planetmint network,
by the organization managing the network, i.e. the CA is for the whole network.
We use Easy-RSA.
Step 1: Install & Configure Easy-RSA
------------------------------------
First create a directory for the CA and cd into it:
.. code:: bash
mkdir bdb-node-ca
cd bdb-node-ca
Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.
Step 2: Create a Self-Signed CA
-------------------------------
You can create a self-signed CA
by going to the ``bdb-node-ca/easy-rsa-3.0.1/easyrsa3`` directory and using:
.. code:: bash
./easyrsa init-pki
./easyrsa build-ca
You will also be asked to enter a PEM pass phrase (for encrypting the ``ca.key`` file).
Make sure to securely store that PEM pass phrase.
If you lose it, you won't be able to add or remove entities from your PKI infrastructure in the future.
You will be prompted to enter the Distinguished Name (DN) information for this CA.
For each field, you can accept the default value [in brackets] by pressing Enter.
.. warning::
Don't accept the default value of OU (``IT``). Instead, enter the value ``ROOT-CA``.
While ``Easy-RSA CA`` *is* a valid and acceptable Common Name,
you should probably enter a name based on the name of the managing organization,
e.g. ``Omega Ledger CA``.
Tip: You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``
Step 3: Create an Intermediate CA
---------------------------------
TODO
Step 4: Generate a Certificate Revocation List
----------------------------------------------
You can generate a Certificate Revocation List (CRL) using:
.. code:: bash
./easyrsa gen-crl
You will need to run this command every time you revoke a certificate.
The generated ``crl.pem`` needs to be uploaded to your infrastructure to
prevent the revoked certificate from being used again.
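For example, assuming you previously issued a certificate with the Common Name
``bdb-instance-0`` and now want to withdraw it, the sequence would be:

.. code:: bash

   ./easyrsa revoke bdb-instance-0
   ./easyrsa gen-crl
   # upload the refreshed pki/crl.pem to your infrastructure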
Step 5: Secure the CA
---------------------
The security of your infrastructure depends on the security of this CA.
- Ensure that you restrict access to the CA and enable only legitimate and
required people to sign certificates and generate CRLs.
- Restrict access to the machine where the CA is hosted.
- Many certificate providers keep the CA offline and use a rotating
intermediate CA to sign and revoke certificates, to mitigate the risk of the
CA getting compromised.
- In case you want to destroy the machine where you created the CA
(for example, if this was set up on a cloud provider instance),
you can backup the entire ``easyrsa`` directory
to secure storage. You can always restore it to a trusted instance again
during the times when you want to sign or revoke certificates.
Remember to backup the directory after every update.
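A minimal backup sketch (the archive name and destination are only examples):

.. code:: bash

   tar czvf bdb-node-ca-backup.tar.gz bdb-node-ca/
   # copy the archive to secure storage, e.g. an encrypted volume or a vault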

View File

@@ -0,0 +1,111 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _how-to-generate-a-client-certificate-for-mongodb:
How to Generate a Client Certificate for MongoDB
================================================
This page enumerates the steps *we* use to generate a client certificate to be
used by clients who want to connect to a TLS-secured MongoDB database.
We use Easy-RSA.
Step 1: Install and Configure Easy-RSA
--------------------------------------
First create a directory for the client certificate and cd into it:
.. code:: bash
mkdir client-cert
cd client-cert
Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.
Step 2: Create the Client Private Key and CSR
---------------------------------------------
You can create the client private key and certificate signing request (CSR)
by going into the directory ``client-cert/easy-rsa-3.0.1/easyrsa3``
and using:
.. code:: bash
./easyrsa init-pki
./easyrsa gen-req bdb-instance-0 nopass
You should change the Common Name (e.g. ``bdb-instance-0``)
to a value that reflects what the
client certificate is being used for, e.g. ``mdb-mon-instance-3`` or ``mdb-bak-instance-4``. (The final integer is specific to your Planetmint node in the Planetmint network.)
You will be prompted to enter the Distinguished Name (DN) information for this certificate. For each field, you can accept the default value [in brackets] by pressing Enter.
.. warning::
Don't accept the default value of OU (``IT``). Instead, enter the value
``Planetmint-Instance``, ``MongoDB-Mon-Instance`` or ``MongoDB-Backup-Instance``
as appropriate.
Aside: The ``nopass`` option means "do not encrypt the private key (default is encrypted)". You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``.
.. note::
For more information about requirements for MongoDB client certificates, please consult the `official MongoDB
documentation <https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/>`_.
Step 3: Get the Client Certificate Signed
-----------------------------------------
The CSR file created in the previous step
should be located in ``pki/reqs/bdb-instance-0.req``
(or whatever Common Name you used in the ``gen-req`` command above).
You need to send it to the organization managing the Planetmint network
so that they can use their CA
to sign the request.
(The managing organization should already have a self-signed CA.)
If you are the admin of the managing organization's self-signed CA,
then you can import the CSR and use Easy-RSA to sign it.
Go to your ``bdb-node-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:
.. code:: bash
./easyrsa import-req /path/to/bdb-instance-0.req bdb-instance-0
./easyrsa sign-req client bdb-instance-0
Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.
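Before sending the files back, you can optionally check that the issued
certificate verifies against the CA certificate:

.. code:: bash

   openssl verify -CAfile pki/ca.crt pki/issued/bdb-instance-0.crt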
Step 4: Generate the Consolidated Client PEM File
-------------------------------------------------
.. note::
This step can be skipped for the Planetmint client certificate, as Planetmint
uses the PyMongo driver, which accepts separate certificate and key files.
MongoDB, MongoDB Backup Agent and MongoDB Monitoring Agent require a single,
consolidated file containing both the public and private keys.
.. code:: bash
cat /path/to/bdb-instance-0.crt /path/to/bdb-instance-0.key > bdb-instance-0.pem
OR
cat /path/to/mdb-mon-instance-0.crt /path/to/mdb-mon-instance-0.key > mdb-mon-instance-0.pem
OR
cat /path/to/mdb-bak-instance-0.crt /path/to/mdb-bak-instance-0.key > mdb-bak-instance-0.pem
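To sanity-check a consolidated file, you can confirm that it contains exactly
one certificate and one private key, for example:

.. code:: bash

   # both counts below should be 1
   grep -c "BEGIN CERTIFICATE" bdb-instance-0.pem
   grep -c "BEGIN.*PRIVATE KEY" bdb-instance-0.pem
   # prints the certificate subject (the Common Name you chose)
   openssl x509 -in bdb-instance-0.pem -noout -subject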

View File

@@ -0,0 +1,68 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _configure-mongodb-cloud-manager-for-monitoring:
Configure MongoDB Cloud Manager for Monitoring
==============================================
This document details the steps required to configure MongoDB Cloud Manager to
enable monitoring of data in a MongoDB Replica Set.
Configure MongoDB Cloud Manager for Monitoring Step by Step
-----------------------------------------------------------
* Once the Monitoring Agent is up and running, open
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_.
* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
Manager.
* Select the group from the dropdown box on the page.
* Go to Settings and add a ``Preferred Hostnames`` entry as
a regexp based on the ``mdb-instance-name`` of the nodes in your cluster.
It may take up to 5 minutes for this setting to take effect.
You can refresh the browser window to verify whether the changes have
been saved.
For example, for the nodes in a cluster that are named ``mdb-instance-0``,
``mdb-instance-1`` and so on, a regex like ``^mdb-instance-[0-9]{1,2}$``
is recommended.
* Next, click the ``Deployment`` tab, and then the ``Manage Existing``
button.
* On the ``Import your deployment for monitoring`` page, enter the hostname
to be the same as the one set for ``mdb-instance-name`` in the global
ConfigMap for a node.
For example, if the ``mdb-instance-name`` is set to ``mdb-instance-0``,
enter ``mdb-instance-0`` as the value in this field.
* Enter the port number as ``27017``, with no authentication.
* If you have authentication enabled, select the option to enable
authentication and specify the authentication mechanism as per your
deployment. The default Planetmint Kubernetes deployment template currently
supports ``X.509 Client Certificate`` as the authentication mechanism.
* If you have TLS enabled, select the option to enable TLS/SSL for MongoDB
connections, and click ``Continue``. This should already be selected for
you in case you selected ``X.509 Client Certificate`` above.
* Wait a minute or two for the deployment to be found and then
click the ``Continue`` button again.
* Verify that you see your process on the Cloud Manager UI.
It should look something like this:
.. image:: ../../_static/mongodb_cloud_manager_1.png
* Click ``Continue``.
* Verify on the UI that data is being sent by the monitoring agent to the
Cloud Manager. It may take up to 5 minutes for data to appear on the UI.

View File

@@ -0,0 +1,98 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _how-to-install-and-configure-easyrsa:
How to Install & Configure Easy-RSA
===================================
We use
`Easy-RSA version 3
<https://community.openvpn.net/openvpn/wiki/EasyRSA3-OpenVPN-Howto>`_, a
wrapper over complex ``openssl`` commands.
`Easy-RSA is available on GitHub <https://github.com/OpenVPN/easy-rsa/releases>`_ and licensed under GPLv2.
Step 1: Install Easy-RSA Dependencies
-------------------------------------
The only dependency for Easy-RSA v3 is ``openssl``,
which is available from the ``openssl`` package on Ubuntu and other
Debian-based operating systems, i.e. you can install it using:
.. code:: bash
sudo apt-get update
sudo apt-get install openssl
Step 2: Install Easy-RSA
------------------------
Make sure you're in the directory where you want Easy-RSA to live,
then download it and extract it within that directory:
.. code:: bash
wget https://github.com/OpenVPN/easy-rsa/archive/3.0.1.tar.gz
tar xzvf 3.0.1.tar.gz
rm 3.0.1.tar.gz
There should now be a directory named ``easy-rsa-3.0.1``
in your current directory.
Step 3: Customize the Easy-RSA Configuration
--------------------------------------------
We now create a config file named ``vars``
by copying the existing ``vars.example`` file
and then editing it.
You should change the
country, province, city, org and email
to the correct values for your organisation.
(Note: The country, province, city, org and email are part of
the `Distinguished Name <https://en.wikipedia.org/wiki/X.509#Certificates>`_ (DN).)
The comments in the file explain what each of the variables mean.
.. code:: bash
cd easy-rsa-3.0.1/easyrsa3
cp vars.example vars
echo 'set_var EASYRSA_DN "org"' >> vars
echo 'set_var EASYRSA_KEY_SIZE 4096' >> vars
echo 'set_var EASYRSA_REQ_COUNTRY "DE"' >> vars
echo 'set_var EASYRSA_REQ_PROVINCE "Berlin"' >> vars
echo 'set_var EASYRSA_REQ_CITY "Berlin"' >> vars
echo 'set_var EASYRSA_REQ_ORG "Planetmint GmbH"' >> vars
echo 'set_var EASYRSA_REQ_OU "IT"' >> vars
echo 'set_var EASYRSA_REQ_EMAIL "contact@ipdb.global"' >> vars
Note: Later, when building a CA or generating a certificate signing request, you will be prompted to enter a value for the OU (or to accept the default). You should change the default OU from ``IT`` to one of the following, as appropriate:
``ROOT-CA``,
``MongoDB-Instance``, ``Planetmint-Instance``, ``MongoDB-Mon-Instance`` or
``MongoDB-Backup-Instance``.
To understand why, see `the MongoDB Manual <https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/>`_.
There are reminders to do this in the relevant docs.
Step 4: Maybe Edit x509-types/server
------------------------------------
.. warning::
Only do this step if you are setting up a self-signed CA.
Edit the file ``x509-types/server`` and change
``extendedKeyUsage = serverAuth`` to
``extendedKeyUsage = serverAuth,clientAuth``.
See `the MongoDB documentation about x.509 authentication <https://docs.mongodb.com/manual/core/security-x.509/>`_ to understand why.
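If you prefer a one-liner over opening an editor, the same change can be made
with ``sed`` (assuming GNU sed, as on Ubuntu; run it from the ``easyrsa3`` directory):

.. code:: bash

   sed -i 's/^extendedKeyUsage = serverAuth$/extendedKeyUsage = serverAuth,clientAuth/' x509-types/server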

View File

@@ -0,0 +1,48 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _kubernetes-deployment-template:
Kubernetes Deployment Template
==============================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This section outlines a way to deploy a Planetmint node (or Planetmint network)
on Microsoft Azure using Kubernetes.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.
.. toctree::
:maxdepth: 1
workflow
ca-installation
server-tls-certificate
client-tls-certificate
revoke-tls-certificate
template-kubernetes-azure
node-on-kubernetes
node-config-map-and-secrets
log-analytics
cloud-manager
easy-rsa
upgrade-on-kubernetes
planetmint-network-on-kubernetes
tectonic-azure
troubleshoot
architecture

View File

@@ -0,0 +1,343 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Log Analytics on Azure
======================
This page describes how we use Microsoft Operations Management Suite (OMS)
to collect all logs from a Kubernetes cluster,
to search those logs,
and to set up email alerts based on log messages.
The :ref:`oms-k8s-references` section (below) contains links
to more detailed documentation.
There are two steps:
1. Setup: Create a log analytics OMS workspace
and a Containers solution under that workspace.
2. Deploy OMS agents to your Kubernetes cluster.
Step 1: Setup
-------------
Step 1 can be done the web browser way or the command-line way.
The Web Browser Way
~~~~~~~~~~~~~~~~~~~
To create a new log analytics OMS workspace:
1. Go to the Azure Portal in your web browser.
2. Click on **More services >** in the lower left corner of the Azure Portal.
3. Type "log analytics" or similar.
4. Select **Log Analytics** from the list of options.
5. Click on **+ Add** to add a new log analytics OMS workspace.
6. Give answers to the questions. You can call the OMS workspace anything,
but use the same resource group and location as your Kubernetes cluster.
The free option will suffice, but of course you can also use a paid one.
To add a "Containers solution" to that new workspace:
1. In Azure Portal, in the Log Analytics section, click the name of the new workspace
2. Click **OMS Workspace**.
3. Click **OMS Portal**. It should launch the OMS Portal in a new tab.
4. Click the **Solutions Gallery** tile.
5. Click the **Containers** tile.
6. Click **Add**.
The Command-Line Way
~~~~~~~~~~~~~~~~~~~~
We'll assume your Kubernetes cluster has a resource
group named:
* ``resource_group``
and the workspace we'll create will be named:
* ``work_space``
If you feel creative, you may replace these names with more interesting ones.
.. code-block:: bash
$ az group deployment create --debug \
--resource-group resource_group \
--name "Microsoft.LogAnalyticsOMS" \
--template-file log_analytics_oms.json \
--parameters @log_analytics_oms.parameters.json
An example of a simple template file (``--template-file``):
.. code-block:: json
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sku": {
"type": "String"
},
"workspaceName": {
"type": "String"
},
"solutionType": {
"type": "String"
}
},
"resources": [
{
"apiVersion": "2015-03-20",
"type": "Microsoft.OperationalInsights/workspaces",
"name": "[parameters('workspaceName')]",
"location": "[resourceGroup().location]",
"properties": {
"sku": {
"name": "[parameters('sku')]"
}
},
"resources": [
{
"apiVersion": "2015-11-01-preview",
"location": "[resourceGroup().location]",
"name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
"type": "Microsoft.OperationsManagement/solutions",
"id": "[Concat(resourceGroup().id, '/providers/Microsoft.OperationsManagement/solutions/', parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
"dependsOn": [
"[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
],
"properties": {
"workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
},
"plan": {
"publisher": "Microsoft",
"product": "[Concat('OMSGallery/', parameters('solutionType'))]",
"name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
"promotionCode": ""
}
}
]
}
]
}
An example of the associated parameter file (``--parameters``):
.. code-block:: json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sku": {
"value": "Free"
},
"workspaceName": {
"value": "work_space"
},
"solutionType": {
"value": "Containers"
}
}
}
Step 2: Deploy the OMS Agents
-----------------------------
To deploy an OMS agent, two important pieces of information are needed:
1. workspace id
2. workspace key
You can obtain the workspace id using:
.. code-block:: bash
$ az resource show \
--resource-group resource_group \
--resource-type Microsoft.OperationalInsights/workspaces \
--name work_space \
| grep customerId
"customerId": "12345678-1234-1234-1234-123456789012",
Until we figure out a way to obtain the *workspace key* via the command line,
you can get it via the OMS Portal.
To get to the OMS Portal, go to the Azure Portal and click on:
Resource Groups > (Your Kubernetes cluster's resource group) > Log analytics (OMS) > (Name of the only item listed) > OMS Workspace > OMS Portal
(Let us know if you find a faster way.)
Then see `Microsoft's instructions to obtain your workspace ID and key
<https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-oms#obtain-your-workspace-id-and-key>`_ (via the OMS Portal).
Once you have the workspace id and key, you can include them in the following
YAML file (:download:`oms-daemonset.yaml
<../../../../../../k8s/logging-and-monitoring/oms-daemonset.yaml>`):
.. code-block:: yaml
# oms-daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: omsagent
spec:
template:
metadata:
labels:
app: omsagent
spec:
containers:
- env:
- name: WSID
value: <workspace_id>
- name: KEY
value: <workspace_key>
image: microsoft/oms
name: omsagent
ports:
- containerPort: 25225
protocol: TCP
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/run/docker.sock
name: docker-sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
To deploy the OMS agents (one per Kubernetes node, i.e. one per computer),
simply run the following command:
.. code-block:: bash
$ kubectl create -f oms-daemonset.yaml
Search the OMS Logs
-------------------
OMS should now be getting, storing and indexing all the logs
from all the containers in your Kubernetes cluster.
You can search the OMS logs from the Azure Portal
or the OMS Portal, but at the time of writing,
there was more functionality in the OMS Portal
(e.g. the ability to create an Alert based on a search).
There are instructions to get to the OMS Portal above.
Once you're in the OMS Portal, click on **Log Search**
and enter a query.
Here are some example queries:
All logging messages containing the strings "critical" or "error" (not case-sensitive):
``Type=ContainerLog (critical OR error)``
.. note::
You can filter the results even more by clicking on things in the left sidebar.
For OMS Log Search syntax help, see the
`Log Analytics search reference <https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-search-reference>`_.
All logging messages containing the string "error" but not "404":
``Type=ContainerLog error NOT(404)``
All logging messages containing the string "critical" but not "CriticalAddonsOnly":
``Type=ContainerLog critical NOT(CriticalAddonsOnly)``
All logging messages from containers running the Docker image planetmint/nginx_3scale:1.3, containing the string "GET" but not the strings "Go-http-client" or "runscope" (where those exclusions filter out tests by Kubernetes and Runscope):
``Type=ContainerLog Image="planetmint/nginx_3scale:1.3" GET NOT("Go-http-client") NOT(runscope)``
.. note::
We wrote a small Python 3 script to analyze the logs found by the above NGINX search.
It's in ``k8s/logging-and-monitoring/analyze.py``. The docstring at the top
of the script explains how to use it.
Create an Email Alert
---------------------
Once you're satisfied with an OMS Log Search query string,
click the **🔔 Alert** icon in the top menu,
fill in the form,
and click **Save** when you're done.
Some Useful Management Tasks
----------------------------
List workspaces:
.. code-block:: bash
$ az resource list \
--resource-group resource_group \
--resource-type Microsoft.OperationalInsights/workspaces
List solutions:
.. code-block:: bash
$ az resource list \
--resource-group resource_group \
--resource-type Microsoft.OperationsManagement/solutions
Delete the containers solution:
.. code-block:: bash
$ az group deployment delete --debug \
--resource-group resource_group \
--name Microsoft.ContainersOMS
.. code-block:: bash
$ az resource delete \
--resource-group resource_group \
--resource-type Microsoft.OperationsManagement/solutions \
--name "Containers(work_space)"
Delete the workspace:
.. code-block:: bash
$ az group deployment delete --debug \
--resource-group resource_group \
--name Microsoft.LogAnalyticsOMS
.. code-block:: bash
$ az resource delete \
--resource-group resource_group \
--resource-type Microsoft.OperationalInsights/workspaces \
--name work_space
.. _oms-k8s-references:
References
----------
* `Monitor an Azure Container Service cluster with Microsoft Operations Management Suite (OMS) <https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-oms>`_
* `Manage Log Analytics using Azure Resource Manager templates <https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-template-workspace-configuration>`_
* `azure commands for deployments <https://docs.microsoft.com/en-us/cli/azure/group/deployment>`_
(``az group deployment``)
* `Understand the structure and syntax of Azure Resource Manager templates <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates>`_
* `Kubernetes DaemonSet`_
.. _Azure Resource Manager templates: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
.. _Kubernetes DaemonSet: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

View File

@@ -0,0 +1,124 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _how-to-configure-a-planetmint-node:
How to Configure a Planetmint Node
==================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page outlines the steps to set a bunch of configuration settings
in your Planetmint node.
They are pushed to the Kubernetes cluster in two files,
named ``config-map.yaml`` (a set of ConfigMaps)
and ``secret.yaml`` (a set of Secrets).
They are stored in the Kubernetes cluster's key-value store (etcd).
Make sure you did the first four operations listed in the section titled
:ref:`things-each-node-operator-must-do`.
Edit vars
---------
This file is located at ``k8s/scripts/vars``; open it and edit
the configuration parameters.
That file already contains many comments to help you
understand each data value, but we make some additional
remarks on some of the values below.
vars.NODE_FQDN
~~~~~~~~~~~~~~~
FQDN for your Planetmint node. This is the domain name
used to query and access your Planetmint node. More information can be
found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.
vars.SECRET_TOKEN
~~~~~~~~~~~~~~~~~
This parameter is specific to your Planetmint node and is used for
authentication and authorization of requests to your Planetmint node.
More information can be found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.
vars.HTTPS_CERT_KEY_FILE_NAME
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Absolute path of the HTTPS certificate key of your domain.
More information can be found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.
vars.HTTPS_CERT_CHAIN_FILE_NAME
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Absolute path of the HTTPS certificate chain of your domain.
More information can be found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.
vars.MDB_ADMIN_USER and vars.MDB_ADMIN_PASSWORD
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MongoDB admin user credentials, username and password.
This user is created on the *admin* database with the authorization to create other users.
vars.BDB_PERSISTENT_PEERS, BDB_VALIDATORS, BDB_VALIDATORS_POWERS, BDB_GENESIS_TIME and BDB_CHAIN_ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These parameters are shared across the Planetmint network. More information about the generation
of these parameters can be found at :ref:`generate-the-blockchain-id-and-genesis-time`.
vars.NODE_DNS_SERVER
~~~~~~~~~~~~~~~~~~~~
IP of the Kubernetes DNS service (kube-dns). It can be retrieved using the
CLI (kubectl) or the Kubernetes dashboard. This parameter is used by the NGINX gateway instance
to resolve the hostnames of all the services running in the Kubernetes cluster.
.. code::
# retrieval via commandline.
$ kubectl get services --namespace=kube-system -l k8s-app=kube-dns
.. _generate-config:
Generate configuration
~~~~~~~~~~~~~~~~~~~~~~
After populating the ``k8s/scripts/vars`` file, we need to generate
all the configuration required for the Planetmint node. For that purpose,
we execute the ``k8s/scripts/generate_configs.sh`` script.
.. code::
$ bash generate_configs.sh
.. Note::
During execution the script will prompt the user for some inputs.
After successful execution, this routine will generate ``config-map.yaml`` and
``secret.yaml`` under ``k8s/scripts``.
.. _deploy-config-map-and-secret:
Deploy Your config-map.yaml and secret.yaml
-------------------------------------------
You can deploy your edited ``config-map.yaml`` and ``secret.yaml``
files to your Kubernetes cluster using the commands:
.. code:: bash
$ kubectl apply -f config-map.yaml
$ kubectl apply -f secret.yaml
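To confirm that both objects were created in the cluster, you can list them
afterwards (the object names depend on what ``generate_configs.sh`` wrote into
the two files):

.. code:: bash

   $ kubectl get configmaps
   $ kubectl get secrets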

View File

@@ -0,0 +1,769 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _kubernetes-template-deploy-a-single-planetmint-node:
Kubernetes Template: Deploy a Single Planetmint Node
====================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page describes how to deploy a Planetmint node
using `Kubernetes <https://kubernetes.io/>`_.
It assumes you already have a running Kubernetes cluster.
Below, we refer to many files by their directory and filename,
such as ``configuration/config-map.yaml``. Those files are files in the
`planetmint/planetmint repository on GitHub <https://github.com/planetmint/planetmint/>`_
in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of Planetmint that your Planetmint
cluster is using.
Step 1: Install and Configure kubectl
-------------------------------------
kubectl is the Kubernetes CLI.
If you don't already have it installed,
then see the `Kubernetes docs to install it
<https://kubernetes.io/docs/user-guide/prereqs/>`_.
The default location of the kubectl configuration file is ``~/.kube/config``.
If you don't have that file, then you need to get it.
**Azure.** If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per :doc:`our template
<../k8s-deployment-template/template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:
.. code:: bash
$ az acs kubernetes get-credentials \
--resource-group <name of resource group containing the cluster> \
--name <ACS cluster name>
If it asks for a password (to unlock the SSH key)
and you enter the correct password,
but you get an error message,
then try adding ``--ssh-key-file ~/.ssh/<name>``
to the above command (i.e. the path to the private key).
.. note::
**About kubectl contexts.** You might manage several
Kubernetes clusters. To make it easy to switch from one to another,
kubectl has a notion of "contexts," e.g. the context for cluster 1 or
the context for cluster 2. To find out the current context, do:
.. code:: bash
$ kubectl config view
and then look for the ``current-context`` in the output.
The output also lists all clusters, contexts and users.
(You might have only one of each.)
You can switch to a different context using:
.. code:: bash
$ kubectl config use-context <new-context-name>
You can also switch to a different context for just one command
by inserting ``--context <context-name>`` into any kubectl command.
For example:
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 get pods
will get a list of the pods in the Kubernetes cluster associated
with the context named ``k8s-bdb-test-cluster-0``.
Step 2: Connect to Your Kubernetes Cluster's Web UI (Optional)
---------------------------------------------------------------
You can connect to your cluster's
`Kubernetes Dashboard <https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/>`_
(also called the Web UI) using:
.. code:: bash
$ kubectl proxy -p 8001
or
$ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey
or, if you prefer to be explicit about the context (explained above):
.. code:: bash
$ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001
The output should be something like ``Starting to serve on 127.0.0.1:8001``.
That means you can visit the dashboard in your web browser at
`http://127.0.0.1:8001/ui <http://127.0.0.1:8001/ui>`_.
.. note::
**Known Issue:** If you have trouble accessing the UI, i.e. opening
`http://127.0.0.1:8001/ui <http://127.0.0.1:8001/ui>`_
in your browser returns a blank page and redirects to
`http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
<http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy>`_,
you can access the UI by adding a **/** at the end of the redirected URL, i.e.
`http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
<http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/>`_
Step 3: Configure Your Planetmint Node
--------------------------------------
See the page titled :ref:`how-to-configure-a-planetmint-node`.
.. _start-the-nginx-service:
Step 4: Start the NGINX Service
-------------------------------
* This will give us a public IP for the cluster.
* Once you complete this step, you might need to wait up to 10 mins for the
public IP to be assigned.
* You have the option to use vanilla NGINX without HTTPS support or an
NGINX with HTTPS support.
* Start the Kubernetes Service:
.. code:: bash
$ kubectl apply -f nginx-https/nginx-https-svc.yaml
OR
$ kubectl apply -f nginx-http/nginx-http-svc.yaml
.. _assign-dns-name-to-nginx-public-ip:
Step 5: Assign DNS Name to the NGINX Public IP
----------------------------------------------
* This step is required only if you are planning to set up multiple
`Planetmint nodes
<https://docs.planetmint.io/en/latest/terminology.html>`_ or are using
HTTPS certificates tied to a domain.
* The following command can help you find out if the NGINX service started
above has been assigned a public IP or external IP address:
.. code:: bash
$ kubectl get svc -w
* Once a public IP is assigned, you can map it to
a DNS name.
We usually assign ``bdb-test-node-0``, ``bdb-test-node-1`` and
so on in our documentation.
Let's assume that we assign the unique name of ``bdb-test-node-0`` here.
**Set up DNS mapping in Azure.**
Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the NGINX instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the ``Public IP`` resource that is attached to your service (it should
have the Azure DNS prefix name along with a long random string, without the
``master-ip`` string), select ``Configuration``, add the DNS assigned above
(for example, ``bdb-test-node-0``), click ``Save``, and wait for the
changes to be applied.
To verify the DNS setting is operational, you can run ``nslookup <DNS
name added in Azure configuration>`` from your local Linux shell.
This will ensure that when you scale to different geographical zones, other Tendermint
nodes in the network can reach this instance.
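For example, if you chose the DNS label ``bdb-test-node-0`` in the ``westeurope``
location, the Azure-generated FQDN would typically look like the placeholder below
and should resolve to the NGINX public IP:

.. code:: bash

   $ nslookup bdb-test-node-0.westeurope.cloudapp.azure.com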
.. _start-the-mongodb-kubernetes-service:
Step 6: Start the MongoDB Kubernetes Service
--------------------------------------------
* Start the Kubernetes Service:
.. code:: bash
$ kubectl apply -f mongodb/mongo-svc.yaml
.. _start-the-planetmint-kubernetes-service:
Step 7: Start the Planetmint Kubernetes Service
-----------------------------------------------
* Start the Kubernetes Service:
.. code:: bash
$ kubectl apply -f planetmint/planetmint-svc.yaml
.. _start-the-openresty-kubernetes-service:
Step 8 (Optional): Start the OpenResty Kubernetes Service
------------------------------------------------------------
* Start the Kubernetes Service:
.. code:: bash
$ kubectl apply -f nginx-openresty/nginx-openresty-svc.yaml
.. _start-the-nginx-deployment:
Step 9: Start the NGINX Kubernetes Deployment
----------------------------------------------
* NGINX is used as a proxy to the Planetmint, Tendermint and MongoDB instances in
the node. It proxies HTTP/HTTPS requests on the ``node-frontend-port``
to the corresponding OpenResty (if 3scale is enabled) or Planetmint backend, and TCP connections
on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
to MongoDB and Tendermint, respectively.
* This configuration is located in the file
``nginx-https/nginx-https-dep.yaml`` or ``nginx-http/nginx-http-dep.yaml``.
* Start the Kubernetes Deployment:
.. code:: bash
$ kubectl apply -f nginx-https/nginx-https-dep.yaml
OR
$ kubectl apply -f nginx-http/nginx-http-dep.yaml
.. _create-kubernetes-storage-class-mdb:
Step 10: Create Kubernetes Storage Classes for MongoDB
------------------------------------------------------
MongoDB needs somewhere to store its data persistently,
outside the container where MongoDB is running.
Our MongoDB Docker container
(based on the official MongoDB Docker container)
exports two volume mounts with correct
permissions from inside the container:
* The directory where the MongoDB instance stores its data: ``/data/db``.
There's more explanation in the MongoDB docs about `storage.dbpath <https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPath>`_.
* The directory where the MongoDB instance stores the metadata for a sharded
cluster: ``/data/configdb/``.
There's more explanation in the MongoDB docs about `sharding.configDB <https://docs.mongodb.com/manual/reference/configuration-options/#sharding.configDB>`_.
Explaining how Kubernetes handles persistent volumes,
and the associated terminology,
is beyond the scope of this documentation;
see `the Kubernetes docs about persistent volumes
<https://kubernetes.io/docs/user-guide/persistent-volumes>`_.
The first thing to do is create the Kubernetes storage classes.
**Set up Storage Classes in Azure.**
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <../k8s-deployment-template/template-kubernetes-azure>`),
then the `az acs create` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
Both should have the same "storage account SKU": ``Standard_LRS``.
Standard storage is lower-cost and lower-performance.
It uses hard disk drives (HDD).
LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
We recommend using Premium storage with our Kubernetes deployment template.
Create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
.. note::
Please refer to `Azure documentation <https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage>`_
for the list of VMs that are supported by Premium Storage.
The Kubernetes template for configuration of the MongoDB Storage Class is located in the
file ``mongodb/mongo-sc.yaml``.
You may have to update the ``parameters.location`` field in the file to
specify the location you are using in Azure.
If you want to use a custom storage account with the Storage Class, you
can also update `parameters.storageAccount` and provide the Azure storage
account name.
Create the required storage classes using:
.. code:: bash
$ kubectl apply -f mongodb/mongo-sc.yaml
You can check if it worked using ``kubectl get storageclasses``.
.. _create-kubernetes-persistent-volume-claim-mdb:
Step 11: Create Kubernetes Persistent Volume Claims for MongoDB
---------------------------------------------------------------
Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
``mongo-configdb-claim``.
This configuration is located in the file ``mongodb/mongo-pvc.yaml``.
Note how there's no explicit mention of Azure, AWS or whatever.
``ReadWriteOnce`` (RWO) means the volume can be mounted as
read-write by a single Kubernetes node.
(``ReadWriteOnce`` is the *only* access mode supported
by AzureDisk.)
``storage: 20Gi`` means the volume has a size of 20
`gibibytes <https://en.wikipedia.org/wiki/Gibibyte>`_.
You may want to update the ``spec.resources.requests.storage`` field in both
the files to specify a different disk size.
Create the required Persistent Volume Claims using:
.. code:: bash
$ kubectl apply -f mongodb/mongo-pvc.yaml
You can check its status using: ``kubectl get pvc -w``
Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.
.. note::
The default Reclaim Policy for dynamically created persistent volumes is ``Delete``,
which means the PV and its associated Azure storage resource will be automatically
deleted when the PVC or PV is deleted. To prevent this from happening, do
the following steps to change the default reclaim policy of dynamically created PVs
from ``Delete`` to ``Retain``:
* Run the following command to list existing PVs
.. code:: bash
$ kubectl get pv
* Run the following command to update a PV's reclaim policy to ``Retain``
.. code:: bash
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
For notes on recreating a persistent volume from a released Azure disk resource, consult
:doc:`the page about cluster troubleshooting <../k8s-deployment-template/troubleshoot>`.
.. _start-kubernetes-stateful-set-mongodb:
Step 12: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------
* Create the MongoDB StatefulSet using:
.. code:: bash
$ kubectl apply -f mongodb/mongo-ss.yaml
* It might take up to 10 minutes for the disks, specified in the Persistent
Volume Claims above, to be created and attached to the pod.
The UI might show that the pod has errored with the message
"timeout expired waiting for volumes to attach/mount". Use the CLI below
to check the status of the pod in this case, instead of the UI.
This happens due to a bug in Azure ACS.
.. code:: bash
$ kubectl get pods -w
.. _configure-users-and-access-control-mongodb:
Step 13: Configure Users and Access Control for MongoDB
-------------------------------------------------------
* In this step, you will create a user on MongoDB with authorization
to create more users and assign roles to it. We will also create
MongoDB client users for Planetmint and the MongoDB Monitoring Agent (optional).
.. code:: bash
$ kubectl apply -f mongodb/configure_mdb.sh
.. _create-kubernetes-storage-class:
Step 14: Create Kubernetes Storage Classes for Planetmint
----------------------------------------------------------
Planetmint needs somewhere to store Tendermint data persistently, Tendermint uses
LevelDB as the persistent storage layer.
The Kubernetes template for configuration of Storage Class is located in the
file ``planetmint/planetmint-sc.yaml``.
Details about how to create an Azure Storage account and how Kubernetes Storage Class works
are already covered in this document: :ref:`create-kubernetes-storage-class-mdb`.
Create the required storage classes using:
.. code:: bash
$ kubectl apply -f planetmint/planetmint-sc.yaml
You can check if it worked using ``kubectl get storageclasses``.
.. _create-kubernetes-persistent-volume-claim:
Step 15: Create Kubernetes Persistent Volume Claims for Planetmint
------------------------------------------------------------------
Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
``tendermint-config-db-claim``.
This configuration is located in the file ``planetmint/planetmint-pvc.yaml``.
Details about Kubernetes Persistent Volumes, Persistent Volume Claims
and how they work with Azure are already covered in this
document: :ref:`create-kubernetes-persistent-volume-claim-mdb`.
Create the required Persistent Volume Claims using:
.. code:: bash
$ kubectl apply -f planetmint/planetmint-pvc.yaml
You can check its status using:
.. code::
kubectl get pvc -w
.. _start-kubernetes-stateful-set-bdb:
Step 16: Start a Kubernetes StatefulSet for Planetmint
------------------------------------------------------
* This configuration is located in the file ``planetmint/planetmint-ss.yaml``.
* Set the ``spec.serviceName`` to the value set in ``bdb-instance-name`` in
the ConfigMap.
For example, if the value set in the ``bdb-instance-name``
is ``bdb-instance-0``, set the field to ``tm-instance-0``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``bdb-instance-name`` in the ConfigMap, followed by
``-ss``.
For example, if the value set in the
``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the value
``bdb-instance-0-ss``.
* As we gain more experience running Tendermint in testing and production, we
will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
* Create the Planetmint StatefulSet using:
.. code:: bash
$ kubectl apply -f planetmint/planetmint-ss.yaml
.. code:: bash
$ kubectl get pods -w
.. _start-kubernetes-deployment-for-mdb-mon-agent:
Step 17 (Optional): Start a Kubernetes Deployment for MongoDB Monitoring Agent
----------------------------------------------------------------------------------
* This configuration is located in the file
``mongodb-monitoring-agent/mongo-mon-dep.yaml``.
* Set ``metadata.name``, ``spec.template.metadata.name`` and
``spec.template.metadata.labels.app`` to the value set in
``mdb-mon-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
value ``mdb-mon-instance-0-dep``.
* The configuration uses the following values set in the Secret:
- ``mdb-mon-certs``
- ``ca-auth``
- ``cloud-manager-credentials``
* Start the Kubernetes Deployment using:
.. code:: bash
$ kubectl apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
.. _start-kubernetes-deployment-openresty:
Step 18 (Optional): Start a Kubernetes Deployment for OpenResty
------------------------------------------------------------------
* This configuration is located in the file
``nginx-openresty/nginx-openresty-dep.yaml``.
* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
value set in ``openresty-instance-name`` in the ConfigMap, followed by
``-dep``.
For example, if the value set in the
``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
the value ``openresty-instance-0-dep``.
* Set the port to be exposed from the pod in the
``spec.containers[0].ports`` section. We currently expose the port at
which OpenResty is listening for requests, ``openresty-backend-port`` in
the above ConfigMap.
* The configuration uses the following values set in the Secret:
- ``threescale-credentials``
* The configuration uses the following values set in the ConfigMap:
- ``node-dns-server-ip``
- ``openresty-backend-port``
- ``ngx-bdb-instance-name``
- ``planetmint-api-port``
* Create the OpenResty Deployment using:
.. code:: bash
$ kubectl apply -f nginx-openresty/nginx-openresty-dep.yaml
* You can check its status using the command ``kubectl get deployments -w``
Step 19 (Optional): Configure the MongoDB Cloud Manager
-----------------------------------------------------------
Refer to the
:doc:`documentation <../k8s-deployment-template/cloud-manager>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.
Step 20(Optional): Only for multi site deployments(Geographically dispersed)
----------------------------------------------------------------------------
We need to make sure that the clusters can talk to each other, specifically so that the
Tendermint peers can communicate. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
Assume we have a Planetmint instance ``bdb-instance-1`` residing in the Azure data center location ``westeurope`` and we
want to connect it to ``bdb-instance-2``, ``bdb-instance-3``, and ``bdb-instance-4`` located in the Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you have already explicitly set up networking for
``bdb-instance-1`` to communicate with ``bdb-instance-2/3/4`` and
vice versa, we will have to add a Kubernetes Service in each cluster to set up the
Tendermint P2P network.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``bdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.
* This configuration is located in the file ``planetmint/planetmint-ext-conn-svc.yaml``.
* Set ``metadata.name`` to the host name of the Planetmint instance you are trying to connect to.
For instance, if you are configuring this service on the cluster with ``bdb-instance-1``, then the ``metadata.name`` will
be ``bdb-instance-2`` and vice versa.
* Set ``spec.ports[0].port`` to the ``tm-p2p-port`` from the ConfigMap for the other cluster.
* Set ``spec.ports[1].port`` to the ``tm-rpc-port`` from the ConfigMap for the other cluster.
* Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to.
For more information about the FQDN please refer to: :ref:`Assign DNS name to NGINX Public
IP <assign-dns-name-to-nginx-public-ip>`.
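Putting those fields together, a rough sketch of ``planetmint/planetmint-ext-conn-svc.yaml`` might look like the following; the instance name, port numbers and FQDN are placeholders that you must replace with your own values:
.. code:: yaml
apiVersion: v1
kind: Service
metadata:
  name: bdb-instance-2          # the remote instance you want to reach
  namespace: default
spec:
  type: ExternalName
  externalName: bdb-cluster-2.eastus.cloudapp.azure.com   # FQDN of the remote NGINX gateway
  ports:
  - port: 26656                 # tm-p2p-port of the other cluster
    name: tm-p2p-port
  - port: 26657                 # tm-rpc-port of the other cluster
    name: tm-rpc-port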
.. note::
This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
we need to communicate with.
If you are not the system administrator of the cluster, you have to get in
touch with the system administrator(s) of the other ``n-1`` clusters and
share with them your instance name (``tendermint-instance-name`` in the ConfigMap)
and the FQDN of the NGINX instance acting as Gateway (set in: :ref:`Assign DNS name to NGINX
Public IP <assign-dns-name-to-nginx-public-ip>`).
.. _verify-and-test-bdb:
Step 21: Verify the Planetmint Node Setup
-----------------------------------------
Step 21.1: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To test the setup of your Planetmint node, you could use a Docker container
that provides utilities like ``nslookup``, ``curl`` and ``dig``.
For example, you could use a container based on our
`planetmint/toolbox <https://hub.docker.com/r/planetmint/toolbox/>`_ image.
(The corresponding
`Dockerfile <https://github.com/planetmint/planetmint/blob/master/k8s/toolbox/Dockerfile>`_
is in the ``planetmint/planetmint`` repository on GitHub.)
You can use it as below to get started immediately:
.. code:: bash
$ kubectl \
run -it toolbox \
--image planetmint/toolbox \
--image-pull-policy=Always \
--restart=Never --rm
It will drop you to the shell prompt.
To test the MongoDB instance:
.. code:: bash
$ nslookup mdb-instance-0
$ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://mdb-instance-0:27017
The ``nslookup`` command should output the configured IP address of the service
(in the cluster).
The ``dig`` command should return the configured port numbers.
The ``curl`` command tests the availability of the service.
To test the Planetmint instance:
.. code:: bash
$ nslookup bdb-instance-0
$ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://bdb-instance-0:9984
$ curl -X GET http://bdb-instance-0:9986/pub_key.json
$ curl -X GET http://bdb-instance-0:26657/abci_info
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
To test the OpenResty instance:
.. code:: bash
$ nslookup openresty-instance-0
$ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV
To verify that the OpenResty instance forwards the requests properly, send a ``POST``
transaction to OpenResty at port ``80`` and check the response from the backend
Planetmint instance.
To test the vanilla NGINX instance:
.. code:: bash
$ nslookup ngx-http-instance-0
$ dig +noall +answer _public-node-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
$ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions
$ curl -X GET http://ngx-http-instance-0:27017
The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
To test the NGINX instance with HTTPS and 3scale integration:
.. code:: bash
$ nslookup ngx-instance-0
$ dig +noall +answer _public-secure-node-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-insecure-node-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ wsc -er wss://<node-fqdn>/api/v1/streams/valid_transactions
$ curl -X GET http://<node-fqdn>:27017
The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``
Step 21.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Check the MongoDB monitoring agent on the MongoDB Cloud Manager
portal to verify that it is working fine.
If you are using NGINX with HTTP support, accessing the URL
``http://<DNS/IP of your exposed Planetmint service endpoint>:node-frontend-port``
in your browser should result in a JSON response that shows the Planetmint
server version, among other things.
If you are using NGINX with HTTPS support, use ``https`` instead of
``http`` above.
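For example, a quick check with ``curl`` (the FQDN and port below are placeholders for your node's actual values) should return that JSON document:
.. code:: bash
$ curl -s http://<DNS/IP of your exposed Planetmint service endpoint>:<node-frontend-port>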
Use the Python Driver to send some transactions to the Planetmint node and
verify that your node or cluster works as expected.
Next, you can set up log analytics and monitoring, by following our templates:
* :doc:`../k8s-deployment-template/log-analytics`.
@@ -0,0 +1,542 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _kubernetes-template-deploy-planetmint-network:
Kubernetes Template: Deploying a Planetmint network
===================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page describes how to deploy a static Planetmint + Tendermint network.
If you want to deploy a single Planetmint node, either as part of a Planetmint cluster
or on its own,
then see :doc:`the page about that <node-on-kubernetes>`.
We can use this guide to deploy a Planetmint network in the following scenarios:
* Single Azure Kubernetes Site.
* Multiple Azure Kubernetes Sites (Geographically dispersed).
Terminology Used
----------------
``Planetmint node`` is a set of Kubernetes components that join together to
form a Planetmint single node. Please refer to the :doc:`architecture diagram <architecture>`
for more details.
``Planetmint network`` will refer to a collection of nodes working together
to form a network.
Below, we refer to multiple files by their directory and filename,
such as ``planetmint/planetmint-ext-conn-svc.yaml``. Those files are located in the
`planetmint/planetmint repository on GitHub
<https://github.com/planetmint/planetmint/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of Planetmint that your Planetmint
cluster is using.
.. note::
This deployment strategy is currently used for testing purposes only,
operated by a single stakeholder or tightly coupled stakeholders.
.. note::
Currently, we only support a static set of participants in the network.
Once a Planetmint network has been started with a certain number of validators
and a genesis file, users cannot add new validator nodes dynamically.
You can track the progress of this functionality in our
`GitHub repository <https://github.com/planetmint/planetmint/milestones>`_.
.. _pre-reqs-bdb-network:
Prerequisites
-------------
The deployment methodology is similar to the one covered in :doc:`node-on-kubernetes`, but
we need to tweak some configurations depending on your choice of deployment.
The operator needs to follow some consistent naming convention for all the components
covered :ref:`here <things-each-node-operator-must-do>`.
Let's assume we are deploying a 4-node cluster; your naming convention could look like this:
.. code::
{
"MongoDB": [
"mdb-instance-1",
"mdb-instance-2",
"mdb-instance-3",
"mdb-instance-4"
],
"Planetmint": [
"bdb-instance-1",
"bdb-instance-2",
"bdb-instance-3",
"bdb-instance-4"
],
"NGINX": [
"ngx-instance-1",
"ngx-instance-2",
"ngx-instance-3",
"ngx-instance-4"
],
"OpenResty": [
"openresty-instance-1",
"openresty-instance-2",
"openresty-instance-3",
"openresty-instance-4"
],
"MongoDB_Monitoring_Agent": [
"mdb-mon-instance-1",
"mdb-mon-instance-2",
"mdb-mon-instance-3",
"mdb-mon-instance-4"
]
}
.. note::
Blockchain Genesis ID and Time will be shared across all nodes.
Edit config-map.yaml and secret.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Make N (where N is the number of nodes) copies of ``configuration/config-map.yaml`` and ``configuration/secret.yaml``.
.. code:: text
# For config-map.yaml
config-map-node-1.yaml
config-map-node-2.yaml
config-map-node-3.yaml
config-map-node-4.yaml
# For secret.yaml
secret-node-1.yaml
secret-node-2.yaml
secret-node-3.yaml
secret-node-4.yaml
Edit the data values as described in :doc:`this document <node-config-map-and-secrets>`, based
on the naming convention described :ref:`above <pre-reqs-bdb-network>`.
**Only for single site deployments**: Since all the configuration files use the
same ConfigMap and Secret keys i.e.
``metadata.name -> vars, bdb-config and tendermint-config`` and
``metadata.name -> cloud-manager-credentials, mdb-certs, mdb-mon-certs, bdb-certs,``
``https-certs, three-scale-credentials, ca-auth`` respectively, each file
would overwrite the configuration of the previously deployed one.
We want each node to have its own unique configuration.
One way to achieve this is to edit the ConfigMap and Secret keys following the
:ref:`naming convention above <pre-reqs-bdb-network>`.
.. code:: text
# For config-map-node-1.yaml
metadata.name: vars -> vars-node-1
metadata.name: bdb-config -> bdb-config-node-1
metadata.name: tendermint-config -> tendermint-config-node-1
# For secret-node-1.yaml
metadata.name: cloud-manager-credentials -> cloud-manager-credentials-node-1
metadata.name: mdb-certs -> mdb-certs-node-1
metadata.name: mdb-mon-certs -> mdb-mon-certs-node-1
metadata.name: bdb-certs -> bdb-certs-node-1
metadata.name: https-certs -> https-certs-node-1
metadata.name: threescale-credentials -> threescale-credentials-node-1
metadata.name: ca-auth -> ca-auth-node-1
# Repeat for the remaining files.
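If you have many files to edit, a small shell snippet could automate the renaming. This is only an illustrative sketch: it assumes GNU ``sed`` and that each ``name:`` value appears exactly as in the stock files.
.. code:: bash
# Illustrative sketch: suffix the ConfigMap names in the copy for node 1.
sed -i \
    -e 's/name: vars$/name: vars-node-1/' \
    -e 's/name: bdb-config$/name: bdb-config-node-1/' \
    -e 's/name: tendermint-config$/name: tendermint-config-node-1/' \
    config-map-node-1.yaml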
Deploy all your configuration maps and secrets.
.. code:: bash
kubectl apply -f configuration/config-map-node-1.yaml
kubectl apply -f configuration/config-map-node-2.yaml
kubectl apply -f configuration/config-map-node-3.yaml
kubectl apply -f configuration/config-map-node-4.yaml
kubectl apply -f configuration/secret-node-1.yaml
kubectl apply -f configuration/secret-node-2.yaml
kubectl apply -f configuration/secret-node-3.yaml
kubectl apply -f configuration/secret-node-4.yaml
.. note::
Similar to what we did with config-map.yaml and secret.yaml, i.e. indexing them
per node, we have to do the same for each Kubernetes component,
i.e. Services, StorageClasses, PersistentVolumeClaims, StatefulSets, Deployments etc.
.. code:: text
# For Services
*-node-1-svc.yaml
*-node-2-svc.yaml
*-node-3-svc.yaml
*-node-4-svc.yaml
# For StorageClasses
*-node-1-sc.yaml
*-node-2-sc.yaml
*-node-3-sc.yaml
*-node-4-sc.yaml
# For PersistentVolumeClaims
*-node-1-pvc.yaml
*-node-2-pvc.yaml
*-node-3-pvc.yaml
*-node-4-pvc.yaml
# For StatefulSets
*-node-1-ss.yaml
*-node-2-ss.yaml
*-node-3-ss.yaml
*-node-4-ss.yaml
# For Deployments
*-node-1-dep.yaml
*-node-2-dep.yaml
*-node-3-dep.yaml
*-node-4-dep.yaml
.. _single-site-network:
Single Site: Single Azure Kubernetes Cluster
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For the deployment of a Planetmint network under a single cluster, we need to replicate
the :doc:`deployment steps for each node <node-on-kubernetes>` N times, N being
the number of participants in the network.
In our Kubernetes deployment template for a single Planetmint node, we covered the basic configuration
settings :ref:`here <how-to-configure-a-planetmint-node>`.
Since we index the ConfigMap and Secret keys for the single site deployment, we need to update
all the Kubernetes components to reflect the corresponding changes, i.e. for each Kubernetes Service,
StatefulSet, PersistentVolumeClaim, Deployment, and StorageClass, we need to update the respective
``*.yaml`` file's ``configMapKeyRef.name`` or ``secretName``.
Example
"""""""
Assume we are deploying the MongoDB StatefulSet for node 1. We need to edit
``mongo-node-1-ss.yaml`` and update the corresponding ``configMapKeyRef.name`` or ``secretName`` values.
.. code:: text
########################################################################
# This YAML file describes a StatefulSet with a service for running and #
# exposing a MongoDB instance. #
# It depends on the configdb and db k8s pvc. #
########################################################################
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mdb-instance-0-ss
namespace: default
spec:
serviceName: mdb-instance-0
replicas: 1
template:
metadata:
name: mdb-instance-0-ss
labels:
app: mdb-instance-0-ss
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongodb
image: planetmint/mongodb:3.2
imagePullPolicy: IfNotPresent
env:
- name: MONGODB_FQDN
valueFrom:
configMapKeyRef:
name: vars-node-1 # Changed from ``vars``
key: mdb-instance-name
- name: MONGODB_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MONGODB_PORT
valueFrom:
configMapKeyRef:
name: vars-node-1 # Changed from ``vars``
key: mongodb-backend-port
- name: STORAGE_ENGINE_CACHE_SIZE
valueFrom:
configMapKeyRef:
name: vars-node-1 # Changed from ``vars``
key: storage-engine-cache-size
args:
- --mongodb-port
- $(MONGODB_PORT)
- --mongodb-key-file-path
- /etc/mongod/ssl/mdb-instance.pem
- --mongodb-ca-file-path
- /etc/mongod/ca/ca.pem
- --mongodb-crl-file-path
- /etc/mongod/ca/crl.pem
- --mongodb-fqdn
- $(MONGODB_FQDN)
- --mongodb-ip
- $(MONGODB_POD_IP)
- --storage-engine-cache-size
- $(STORAGE_ENGINE_CACHE_SIZE)
securityContext:
capabilities:
add:
- FOWNER
ports:
- containerPort: "<mongodb-backend-port from ConfigMap>"
protocol: TCP
name: mdb-api-port
volumeMounts:
- name: mdb-db
mountPath: /data/db
- name: mdb-configdb
mountPath: /data/configdb
- name: mdb-certs
mountPath: /etc/mongod/ssl/
readOnly: true
- name: ca-auth
mountPath: /etc/mongod/ca/
readOnly: true
resources:
limits:
cpu: 200m
memory: 5G
livenessProbe:
tcpSocket:
port: mdb-api-port
initialDelaySeconds: 15
successThreshold: 1
failureThreshold: 3
periodSeconds: 15
timeoutSeconds: 10
restartPolicy: Always
volumes:
- name: mdb-db
persistentVolumeClaim:
claimName: mongo-db-claim-1 # Changed from ``mongo-db-claim``
- name: mdb-configdb
persistentVolumeClaim:
claimName: mongo-configdb-claim-1 # Changed from ``mongo-configdb-claim``
- name: mdb-certs
secret:
secretName: mdb-certs-node-1 # Changed from ``mdb-certs``
defaultMode: 0400
- name: ca-auth
secret:
secretName: ca-auth-node-1 # Changed from ``ca-auth``
defaultMode: 0400
The above example is meant to be repeated for all the Kubernetes components of a Planetmint node.
* ``nginx-http/nginx-http-node-X-svc.yaml`` or ``nginx-https/nginx-https-node-X-svc.yaml``
* ``nginx-http/nginx-http-node-X-dep.yaml`` or ``nginx-https/nginx-https-node-X-dep.yaml``
* ``mongodb/mongodb-node-X-svc.yaml``
* ``mongodb/mongodb-node-X-sc.yaml``
* ``mongodb/mongodb-node-X-pvc.yaml``
* ``mongodb/mongodb-node-X-ss.yaml``
* ``planetmint/planetmint-node-X-svc.yaml``
* ``planetmint/planetmint-node-X-sc.yaml``
* ``planetmint/planetmint-node-X-pvc.yaml``
* ``planetmint/planetmint-node-X-ss.yaml``
* ``nginx-openresty/nginx-openresty-node-X-svc.yaml``
* ``nginx-openresty/nginx-openresty-node-X-dep.yaml``
Multi Site: Multiple Azure Kubernetes Clusters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For the multi site deployment of a Planetmint network with geographically dispersed
nodes, we need to replicate the :doc:`deployment steps for each node <node-on-kubernetes>` N number of times,
N being the number of participants in the network.
The operator needs to follow a consistent naming convention, which has :ref:`already
been discussed in this document <pre-reqs-bdb-network>`.
.. note::
Assuming we are using independent Kubernetes clusters, the ConfigMap and Secret Keys
do not need to be updated unlike :ref:`single-site-network`, and we also do not
need to update corresponding ConfigMap/Secret imports in the Kubernetes components.
Deploy Kubernetes Services
--------------------------
Deploy the following services for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:
* :ref:`Start the NGINX Service <start-the-nginx-service>`.
* :ref:`Assign DNS Name to the NGINX Public IP <assign-dns-name-to-nginx-public-ip>`
* :ref:`Start the MongoDB Kubernetes Service <start-the-mongodb-kubernetes-service>`.
* :ref:`Start the Planetmint Kubernetes Service <start-the-planetmint-kubernetes-service>`.
* :ref:`Start the OpenResty Kubernetes Service <start-the-openresty-kubernetes-service>`.
Only for multi site deployments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We need to make sure that the clusters can talk to each other, specifically so that the
Planetmint peers can communicate. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
Assume we have a Planetmint instance ``planetmint-instance-1`` residing in the Azure data center location ``westeurope`` and we
want to connect it to ``planetmint-instance-2``, ``planetmint-instance-3``, and ``planetmint-instance-4`` located in the Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you have already explicitly set up networking for
``planetmint-instance-1`` to communicate with ``planetmint-instance-2/3/4`` and
vice versa, we will have to add a Kubernetes Service in each cluster to set up the
Planetmint P2P network.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``planetmint-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.
* This configuration is located in the file ``planetmint/planetmint-ext-conn-svc.yaml``.
* Set ``metadata.name`` to the host name of the Planetmint instance you are trying to connect to.
For instance, if you are configuring this service on the cluster with ``planetmint-instance-1``, then the ``metadata.name`` will
be ``planetmint-instance-2`` and vice versa.
* Set ``spec.ports[0].port`` to the ``tm-p2p-port`` from the ConfigMap for the other cluster.
* Set ``spec.ports[1].port`` to the ``tm-rpc-port`` from the ConfigMap for the other cluster.
* Set ``spec.externalName`` to the FQDN mapped to NGINX Public IP of the cluster you are trying to connect to.
For more information about the FQDN please refer to: :ref:`Assign DNS name to NGINX Public
IP <assign-dns-name-to-nginx-public-ip>`.
.. note::
This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
we need to communicate with.
If you are not the system administrator of the cluster, you have to get in
touch with the system administrator(s) of the other ``n-1`` clusters and
share with them your instance name (``planetmint-instance-name`` in the ConfigMap)
and the FQDN of the NGINX instance acting as Gateway (set in: :ref:`Assign DNS name to NGINX
Public IP <assign-dns-name-to-nginx-public-ip>`).
Start NGINX Kubernetes deployments
----------------------------------
Start the NGINX deployment that serves as a Gateway for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`Start the NGINX Kubernetes Deployment <start-the-nginx-deployment>`.
Deploy Kubernetes StorageClasses for MongoDB and Planetmint
------------------------------------------------------------
Deploy the following StorageClasses for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:
* :ref:`Create Kubernetes Storage Classes for MongoDB <create-kubernetes-storage-class-mdb>`.
* :ref:`Create Kubernetes Storage Classes for Planetmint <create-kubernetes-storage-class>`.
Deploy Kubernetes PersistentVolumeClaims for MongoDB and Planetmint
--------------------------------------------------------------------
Deploy the following services for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:
* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB <create-kubernetes-persistent-volume-claim-mdb>`.
* :ref:`Create Kubernetes Persistent Volume Claims for Planetmint <create-kubernetes-persistent-volume-claim>`
Deploy MongoDB Kubernetes StatefulSet
--------------------------------------
Deploy the MongoDB StatefulSet (standalone MongoDB) for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>` and referring to the following section:
* :ref:`Start a Kubernetes StatefulSet for MongoDB <start-kubernetes-stateful-set-mongodb>`.
Configure Users and Access Control for MongoDB
----------------------------------------------
Configure users and access control for each MongoDB instance
in the network by referring to the following section:
* :ref:`Configure Users and Access Control for MongoDB <configure-users-and-access-control-mongodb>`.
Start Kubernetes StatefulSet for Planetmint
-------------------------------------------
Start the Planetmint Kubernetes StatefulSet for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`Start a Kubernetes StatefulSet for Planetmint <start-kubernetes-stateful-set-bdb>`.
Start Kubernetes Deployment for MongoDB Monitoring Agent
---------------------------------------------------------
Start the MongoDB monitoring agent Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <start-kubernetes-deployment-for-mdb-mon-agent>`.
Start Kubernetes Deployment for OpenResty
------------------------------------------
Start the OpenResty Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:
* :ref:`Start a Kubernetes Deployment for OpenResty <start-kubernetes-deployment-openresty>`.
Verify and Test
---------------
Verify and test your setup by referring to the following instructions:
* :ref:`Verify the Planetmint Node Setup <verify-and-test-bdb>`.
@@ -0,0 +1,49 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
How to Revoke an SSL/TLS Certificate
====================================
This page enumerates the steps *we* take to revoke a self-signed SSL/TLS
certificate in a Planetmint network.
It can only be done by someone with access to the self-signed CA
associated with the network's managing organization.
Step 1: Revoke a Certificate
----------------------------
Since we used Easy-RSA version 3 to
:ref:`set up the CA <how-to-set-up-a-self-signed-certificate-authority>`,
we use it to revoke certificates too.
Go to the following directory (associated with the self-signed CA):
``.../bdb-node-ca/easy-rsa-3.0.1/easyrsa3``.
You need to know the file name that was used when the certificate was imported with
``./easyrsa import-req`` earlier. Run the following command to revoke a
certificate:
.. code:: bash
./easyrsa revoke <filename>
This will update the CA database with the revocation details.
The next step is to use the updated database to issue an up-to-date
certificate revocation list (CRL).
Step 2: Generate a New CRL
--------------------------
Generate a new CRL for your infrastructure using:
.. code:: bash
./easyrsa gen-crl
The generated ``crl.pem`` file needs to be uploaded to your infrastructure to
prevent the revoked certificate from being used again.
In particular, the generated ``crl.pem`` file should be sent to all Planetmint node operators in your Planetmint network, so that they can update it in their MongoDB instance and their Planetmint Server instance.
@@ -0,0 +1,102 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _how-to-generate-a-server-certificate-for-mongodb:
How to Generate a Server Certificate for MongoDB
================================================
This page enumerates the steps *we* use to generate a
server certificate for a MongoDB instance.
A server certificate is also referred to as a "member certificate"
in the MongoDB documentation.
We use Easy-RSA.
Step 1: Install & Configure EasyRSA
------------------------------------
First create a directory for the server certificate (member cert) and cd into it:
.. code:: bash
mkdir member-cert
cd member-cert
Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.
Step 2: Create the Server Private Key and CSR
---------------------------------------------
You can create the server private key and certificate signing request (CSR)
by going into the directory ``member-cert/easy-rsa-3.0.1/easyrsa3``
and using something like:
.. note::
Please make sure you are fulfilling the requirements for `MongoDB server/member certificates
<https://docs.mongodb.com/manual/tutorial/configure-x509-member-authentication>`_.
.. code:: bash
./easyrsa init-pki
./easyrsa --req-cn=mdb-instance-0 --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 gen-req mdb-instance-0 nopass
You should replace the Common Name (``mdb-instance-0`` above) with the correct name for *your* MongoDB instance in the network, e.g. ``mdb-instance-5`` or ``mdb-instance-12``. (This name is decided by the organization managing the network.)
You will be prompted to enter the Distinguished Name (DN) information for this certificate.
For each field, you can accept the default value [in brackets] by pressing Enter.
.. warning::
Don't accept the default value of OU (``IT``). Instead, enter the value ``MongoDB-Instance``.
Aside: You need to provide the ``DNS:localhost`` SAN during certificate generation
for using the ``localhost exception`` in the MongoDB instance.
All certificates can have this attribute without compromising security as the
``localhost exception`` works only the first time.
Step 3: Get the Server Certificate Signed
-----------------------------------------
The CSR file created in the last step
should be located in ``pki/reqs/mdb-instance-0.req``
(where the integer ``0`` may be different for you).
You need to send it to the organization managing the Planetmint network
so that they can use their CA
to sign the request.
(The managing organization should already have a self-signed CA.)
If you are the admin of the managing organization's self-signed CA,
then you can import the CSR and use Easy-RSA to sign it.
Go to your ``bdb-node-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:
.. code:: bash
./easyrsa import-req /path/to/mdb-instance-0.req mdb-instance-0
./easyrsa --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 sign-req server mdb-instance-0
Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/mdb-instance-0.crt`` and ``pki/ca.crt``.
Step 4: Generate the Consolidated Server PEM File
-------------------------------------------------
MongoDB requires a single, consolidated file containing both the public and
private keys.
.. code:: bash
cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem
@@ -0,0 +1,149 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Walkthrough: Deploy a Kubernetes Cluster on Azure using Tectonic by CoreOS
==========================================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
A Planetmint node can be run inside a `Kubernetes <https://kubernetes.io/>`_
cluster.
This page describes one way to deploy a Kubernetes cluster on Azure using Tectonic.
Tectonic helps in easier cluster management of Kubernetes clusters.
If you would rather use Azure Container Service to manage Kubernetes Clusters,
please read :doc:`our guide for that <template-kubernetes-azure>`.
Step 1: Prerequisites for Deploying Tectonic Cluster
----------------------------------------------------
Get an Azure account. Refer to
:ref:`this step in our docs <get-a-pay-as-you-go-azure-subscription>`.
Create an SSH Key pair for the new Tectonic cluster. Refer to
:ref:`this step in our docs <create-an-ssh-key-pair>`.
Step 2: Get a Tectonic Subscription
-----------------------------------
CoreOS offers Tectonic for free for up to 10 nodes.
Sign up for an account `here <https://coreos.com/tectonic>`__ if you do not
have one already and get a license for 10 nodes.
Login to your account, go to Overview > Your Account and save the
``CoreOS License`` and the ``Pull Secret`` to your local machine.
Step 3: Deploy the cluster on Azure
-----------------------------------
The latest instructions for deployment can be found
`here <https://coreos.com/tectonic/docs/latest/tutorials/azure/install.html>`__.
The following points suggest some customizations for a Planetmint deployment
when following the steps above:
#. Set the ``CLUSTER`` variable to the name of the cluster. Also note that the
cluster will be deployed in a resource group named
``tectonic-cluster-CLUSTER``.
#. Set the ``tectonic_base_domain`` to ``""`` if you want to use Azure managed
DNS. You will be assigned a ``cloudapp.azure.com`` sub-domain by default and
you can skip the ``Configuring Azure DNS`` section from the Tectonic installation
guide.
#. Set the ``tectonic_cl_channel`` to ``"stable"`` unless you want to
experiment or test with the latest release.
#. Set the ``tectonic_cluster_name`` to the ``CLUSTER`` variable defined in
the step above.
#. Set the ``tectonic_license_path`` and ``tectonic_pull_secret_path`` to the
location where you have stored the ``tectonic-license.txt`` and the
``config.json`` files downloaded in the previous step.
#. Set the ``tectonic_etcd_count`` to ``"3"``, so that you have a multi-node
etcd cluster that can tolerate a single node failure.
#. Set the ``tectonic_etcd_tls_enabled`` to ``"true"`` as this will enable TLS
connectivity between the etcd nodes and their clients.
#. Set the ``tectonic_master_count`` to ``"3"`` so that you cane tolerate a
single master failure.
#. Set the ``tectonic_worker_count`` to ``"2"``.
#. Set the ``tectonic_azure_location`` to ``"westeurope"`` if you want to host
the cluster in Azure's ``westeurope`` datacenter.
#. Set the ``tectonic_azure_ssh_key`` to the path of the public key created in
the previous step.
#. We recommend setting up or using a CA (Certificate Authority) to generate the Tectonic
Console's server certificate(s) and adding it to the trusted authorities on the client
(i.e. the browser) accessing the Tectonic Console. If you already have a CA (self-signed or otherwise),
set the ``tectonic_ca_cert`` and ``tectonic_ca_key`` configurations to the contents
of the PEM-encoded certificate and key files, respectively. For more information about how to set
up a self-signed CA, please refer to
:doc:`How to Set up self-signed CA <ca-installation>`.
#. Note that the ``tectonic_azure_client_secret`` is the same as the
``ARM_CLIENT_SECRET``.
#. Note that the URL for the Tectonic console using these settings will be the
cluster name set in the configuration file, the datacenter name and
``cloudapp.azure.com``. For example, if you named your cluster as
``test-cluster`` and specified the datacenter as ``westeurope``, the Tectonic
console will be available at ``test-cluster.westeurope.cloudapp.azure.com``.
#. Note that if you do not specify ``tectonic_ca_cert``, a CA certificate will
be generated automatically and you will encounter an untrusted certificate
message on your client (browser) when accessing the Tectonic Console.
Step 4: Configure kubectl
-------------------------
#. Refer to `this tutorial
<https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html>`__
for instructions on how to download the kubectl configuration files for
your cluster.
#. Set the ``KUBECONFIG`` environment variable to make ``kubectl`` use the new
config file along with the existing configuration.
.. code:: bash
$ export KUBECONFIG=$HOME/.kube/config:/path/to/config/kubectl-config
# OR to only use the new configuration, try
$ export KUBECONFIG=/path/to/config/kubectl-config
Next, you can follow one of our following deployment templates:
* :doc:`node-on-kubernetes`.
Tectonic References
-------------------
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/install.html
#. https://coreos.com/tectonic/docs/latest/troubleshooting/installer-terraform.html
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html
@@ -0,0 +1,271 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Template: Deploy a Kubernetes Cluster on Azure
==============================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
A Planetmint node can be run inside a `Kubernetes <https://kubernetes.io/>`_
cluster.
This page describes one way to deploy a Kubernetes cluster on Azure.
.. _get-a-pay-as-you-go-azure-subscription:
Step 1: Get a Pay-As-You-Go Azure Subscription
----------------------------------------------
Microsoft Azure has a Free Trial subscription (at the time of writing),
but it's too limited to run an advanced Planetmint node.
Sign up for a Pay-As-You-Go Azure subscription
via `the Azure website <https://azure.microsoft.com>`_.
You may find that you have to sign up for a Free Trial subscription first.
That's okay: you can have many subscriptions.
.. _create-an-ssh-key-pair:
Step 2: Create an SSH Key Pair
------------------------------
You'll want an SSH key pair so you'll be able to SSH
to the virtual machines that you'll deploy in the next step.
(If you already have an SSH key pair, you *could* reuse it,
but it's probably a good idea to make a new SSH key pair
for your Kubernetes VMs and nothing else.)
See the
:doc:`page about how to generate a key pair for SSH
<../../appendices/generate-key-pair-for-ssh>`.
Step 3: Deploy an Azure Container Service (ACS)
-----------------------------------------------
It's *possible* to deploy an Azure Container Service (ACS)
from the `Azure Portal <https://portal.azure.com>`_
(i.e. online in your web browser)
but it's actually easier to do it using the Azure
Command-Line Interface (CLI).
Microsoft has `instructions to install the Azure CLI 2.0
on most common operating systems
<https://docs.microsoft.com/en-us/cli/azure/install-az-cli2>`_.
Do that.
If you already *have* the Azure CLI installed, you may want to update it.
.. warning::
``az component update`` isn't supported if you installed the CLI using some of Microsoft's provided installation instructions. See `the Microsoft docs for update instructions <https://docs.microsoft.com/en-us/cli/azure/install-az-cli2>`_.
Next, login to your account using:
.. code:: bash
$ az login
It will tell you to open a web page and to copy a code to that page.
If the login is a success, you will see some information
about all your subscriptions, including the one that is currently
enabled (``"state": "Enabled"``). If the wrong one is enabled,
you can switch to the right one using:
.. code:: bash
$ az account set --subscription <subscription name or ID>
Next, you will have to pick the Azure data center location
where you'd like to deploy your cluster.
You can get a list of all available locations using:
.. code:: bash
$ az account list-locations
Next, create an Azure "resource group" to contain all the
resources (virtual machines, subnets, etc.) associated
with your soon-to-be-deployed cluster. You can name it
whatever you like but avoid fancy characters because they may
confuse some software.
.. code:: bash
$ az group create --name <resource group name> --location <location name>
Example location names are ``koreacentral`` and ``westeurope``.
Finally, you can deploy an ACS using something like:
.. code:: bash
$ az acs create --name <a made-up cluster name> \
--resource-group <name of resource group created earlier> \
--master-count 3 \
--agent-count 3 \
--admin-username ubuntu \
--agent-vm-size Standard_L4s \
--dns-prefix <make up a name> \
--ssh-key-value ~/.ssh/<name>.pub \
--orchestrator-type kubernetes \
--debug --output json
.. Note::
The `Azure documentation <https://docs.microsoft.com/en-us/cli/azure/acs?view=azure-cli-latest#az_acs_create>`_
has a list of all ``az acs create`` options.
You might prefer a smaller agent VM size, for example.
You can also get a list of the options using:
.. code:: bash
$ az acs create --help
It takes a few minutes for all the resources to deploy.
You can watch the progress in the `Azure Portal
<https://portal.azure.com>`_:
go to **Resource groups** (with the blue cube icon)
and click on the one you created
to see all the resources in it.
Trouble with the Service Principal? Then Read This!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the ``az acs create`` command fails with an error message including the text,
"The Service Principal in ServicePrincipalProfile could not be validated",
then we found you can prevent that by creating a Service Principal ahead of time
and telling ``az acs create`` to use that one. (It's supposed to create one
automatically, but sometimes that fails.)
Create a new resource group, even if you created one before. They're free anyway:
.. code:: bash
$ az login
$ az group create --name <new resource group name> \
--location <Azure region like westeurope>
Note the ``id`` in the output. It looks like
``"/subscriptions/369284be-0104-421a-8488-1aeac0caecbb/resourceGroups/examplerg"``.
It can be copied into the next command.
Create a Service Principal using:
.. code:: bash
$ az ad sp create-for-rbac --role="Contributor" \
--scopes=<id value copied from above, including the double quotes on the ends>
Note the ``appId`` and ``password``.
Put those in a new ``az acs create`` command like above, with two new options added:
.. code:: bash
$ az acs create ... \
--service-principal <appId> \
--client-secret <password>
.. _ssh-to-your-new-kubernetes-cluster-nodes:
Optional: SSH to Your New Kubernetes Cluster Nodes
--------------------------------------------------
You can SSH to one of the just-deployed Kubernetes "master" nodes
(virtual machines) using:
.. code:: bash
$ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>
where you can get the IP address or FQDN
of a master node from the Azure Portal. For example:
.. code:: bash
$ ssh -i ~/.ssh/mykey123 ubuntu@mydnsprefix.westeurope.cloudapp.azure.com
.. note::
All the master nodes are accessible behind the *same* public IP address and
FQDN. You connect to one of the masters randomly based on the load balancing
policy.
The "agent" nodes shouldn't get public IP addresses or externally accessible
FQDNs, so you can't SSH to them *directly*,
but you can first SSH to the master
and then SSH to an agent from there using their hostname.
To do that, you could
copy your SSH key pair to the master (a bad idea),
or use SSH agent forwarding (better).
To do the latter, do the following on the machine you used
to SSH to the master:
.. code:: bash
$ echo -e "Host <FQDN of the cluster from Azure Portal>\n ForwardAgent yes" >> ~/.ssh/config
To verify that SSH agent forwarding works properly,
SSH to one of the master nodes and do:
.. code:: bash
$ echo "$SSH_AUTH_SOCK"
If you get an empty response,
then SSH agent forwarding hasn't been set up correctly.
If you get a non-empty response,
then SSH agent forwarding should work fine
and you can SSH to one of the agent nodes (from a master)
using:
.. code:: bash
$ ssh ubuntu@k8s-agent-4AC80E97-0
where ``k8s-agent-4AC80E97-0`` is the name
of a Kubernetes agent node in your Kubernetes cluster.
You will have to replace it by the name
of an agent node in your cluster.
Optional: Delete the Kubernetes Cluster
---------------------------------------
.. code:: bash
$ az acs delete \
--name <ACS cluster name> \
--resource-group <name of resource group containing the cluster>
Optional: Delete the Resource Group
-----------------------------------
CAUTION: You might end up deleting resources other than the ACS cluster.
.. code:: bash
$ az group delete \
--name <name of resource group containing the cluster>
Next, you can :doc:`run a Planetmint node/cluster(BFT) <node-on-kubernetes>`
on your new Kubernetes cluster.
@@ -0,0 +1,147 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _cluster-troubleshooting:
Cluster Troubleshooting
=======================
This page describes some basic issues we have faced while deploying and
operating the cluster.
1. MongoDB Restarts
-------------------
We define the following in the ``mongo-ss.yaml`` file:
.. code:: bash
resources:
limits:
cpu: 200m
memory: 5G
When the MongoDB cache occupies more than 5GB of memory, the container is
terminated by the ``kubelet``.
This can usually be verified by logging in to the worker node running the MongoDB
container and looking at the syslog (the ``journalctl`` command should usually
work).
This issue is resolved in
`PR #1757 <https://github.com/planetmint/planetmint/pull/1757>`_.
2. 502 Bad Gateway Error on Runscope Tests
------------------------------------------
It means that NGINX could not find the appropriate backend to forward the
requests to. This typically happens when:
#. MongoDB goes down (as described above) and Planetmint, after trying for
``PLANETMINT_DATABASE_MAXTRIES`` times, gives up. The Kubernetes Planetmint
Deployment then restarts the Planetmint pod.
#. Planetmint crashes for some reason. We have seen this happen when updating
Planetmint from one version to the next. This usually means that the older
connections to the service get disconnected; retrying the request one more
time forwards the connection to the new instance and succeeds.
3. Service Unreachable
----------------------
Communication between Kubernetes Services and Deployments fails in
v1.6.6 and earlier due to a trivial key lookup error for non-existent services
in the ``kubelet``.
This error can be reproduced by restarting any public facing (that is, services
using the cloud load balancer) Kubernetes services, and watching the
``kube-proxy`` failure in its logs.
The solution to this problem is to restart ``kube-proxy`` on the affected
worker/agent node. Login to the worker node and run:
.. code:: bash
docker stop `docker ps | grep k8s_kube-proxy | cut -d" " -f1`
docker logs -f `docker ps | grep k8s_kube-proxy | cut -d" " -f1`
`This issue <https://github.com/kubernetes/kubernetes/issues/48705>`_ is
`fixed in Kubernetes v1.7 <https://github.com/kubernetes/kubernetes/commit/41c4e965c353187889f9b86c3e541b775656dc18>`_.
4. Single Disk Attached to Multiple Mountpoints in a Container
--------------------------------------------------------------
This issue is currently being faced in one of the clusters and is being debugged by
the support team at Microsoft.
The issue was first seen on August 29, 2017 on the Test Network and has been
logged in the `Azure/acs-engine repo on GitHub <https://github.com/Azure/acs-engine/issues/1364>`_.
This is apparently fixed in Kubernetes v1.7.2, which includes a new disk driver,
but it has yet to be tested by us.
5. MongoDB Monitoring Agent throws a dial error while connecting to MongoDB
---------------------------------------------------------------------------
You might see something similar to this in the MongoDB Monitoring Agent logs:
.. code:: bash
Failure dialing host without auth. Err: `no reachable servers`
at monitoring-agent/components/dialing.go:278
at monitoring-agent/components/dialing.go:116
at monitoring-agent/components/dialing.go:213
at src/runtime/asm_amd64.s:2086
The first thing to check is whether the networking is set up correctly. You can use
utilities like ``nslookup``, ``dig`` and ``curl`` to check this (maybe using the ``toolbox`` container).
If everything looks fine, it might be a problem with the ``Preferred
Hostnames`` setting in MongoDB Cloud Manager. If you do need to change the
regular expression, ensure that it is correct and saved properly (maybe try
refreshing the MongoDB Cloud Manager web page to see if the setting sticks).
Once you update the regular expression, you will need to remove the deployment
and add it again for the Monitoring Agent to discover and connect to the
MongoDB instance correctly.
More information about this configuration is provided in
:doc:`this document <cloud-manager>`.
6. Create a Persistent Volume from existing Azure disk storage Resource
---------------------------------------------------------------------------
When deleting a Kubernetes cluster, all dynamically created PVs are deleted, along with the
underlying Azure storage disks, so those cannot be reused in a new cluster. The following workflow
preserves the Azure storage disks while deleting the Kubernetes cluster, so the same disks can be reused on a new
cluster for MongoDB persistent storage without losing any data.
The template to create two PVs for MongoDB Stateful Set (One for MongoDB data store and
the other for MongoDB config store) is located at ``mongodb/mongo-pv.yaml``.
You need to configure ``diskName`` and ``diskURI`` in the ``mongodb/mongo-pv.yaml`` file. You can get
these values by logging in to the Azure portal, going to ``Resource Groups`` and clicking on the
relevant resource group. From the list of resources, click on the storage account resource and
open the container (usually named ``vhds``) that contains the storage disk blobs available
for PVs. Click on the storage disk file that you wish to use for your PV; the
``NAME`` and ``URL`` parameters shown there are the values to use for ``diskName`` and ``diskURI``
in your template, respectively. Then run the following command to create the PVs:
.. code:: bash
$ kubectl --context <context-name> apply -f mongodb/mongo-pv.yaml
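For orientation, the ``azureDisk`` part of one PersistentVolume in ``mongodb/mongo-pv.yaml`` might look roughly like the sketch below; the PV name, capacity, disk name and URI are placeholders, and the real template may differ:
.. code:: yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-db-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    diskName: mongo-db-disk.vhd                # the NAME of the blob in the vhds container
    diskURI: https://<storage-account>.blob.core.windows.net/vhds/mongo-db-disk.vhd   # the URL of the blob
    cachingMode: None
    fsType: ext4
    readOnly: false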
.. note::
Please make sure the storage disks you are using are not already being used by any
other PVs. To check the existing PVs in your cluster, run the following command
to get PVs and Storage disk file mapping.
.. code:: bash
$ kubectl --context <context-name> get pv --output yaml
@@ -0,0 +1,122 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Kubernetes Template: Upgrade all Software in a Planetmint Node
==============================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page outlines how to upgrade all the software associated
with a Planetmint node running on Kubernetes,
including host operating systems, Docker, Kubernetes,
and Planetmint-related software.
Upgrade Host OS, Docker and Kubernetes
--------------------------------------
Some Kubernetes installation & management systems
can do full or partial upgrades of host OSes, Docker,
or Kubernetes, e.g.
`Tectonic <https://coreos.com/tectonic/>`_,
`Rancher <https://docs.rancher.com/rancher/v1.5/en/>`_,
and
`Kubo <https://pivotal.io/kubo>`_.
Consult the documentation for your system.
**Azure Container Service (ACS).**
On Dec. 15, 2016, a Microsoft employee
`wrote <https://github.com/colemickens/azure-kubernetes-status/issues/15#issuecomment-267453251>`_:
"In the coming months we [the Azure Kubernetes team] will be building managed updates in the ACS service."
At the time of writing, managed updates were not yet available,
but you should check the latest
`ACS documentation <https://docs.microsoft.com/en-us/azure/container-service/>`_
to see what's available now.
Also at the time of writing, ACS only supported Ubuntu
as the host (master and agent) operating system.
You can upgrade Ubuntu and Docker on Azure
by SSHing into each of the hosts,
as documented on
:ref:`another page <ssh-to-your-new-kubernetes-cluster-nodes>`.
In general, you can SSH to each host in your Kubernetes Cluster
to update the OS and Docker.
.. note::
Once you are in an SSH session with a host,
the ``docker info`` command is a handy way to determine the
host OS (including version) and the Docker version.
When you want to upgrade the software on a Kubernetes node,
you should "drain" the node first,
i.e. tell Kubernetes to gracefully terminate all pods
on the node and mark it as unschedulable
(so no new pods get put on the node during its downtime).
.. code::
kubectl drain $NODENAME
There are `more details in the Kubernetes docs <https://kubernetes.io/docs/concepts/cluster-administration/cluster-management/#maintenance-on-a-node>`_,
including instructions to make the node schedulable again.
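For instance, once the maintenance on the node is done, it can typically be made schedulable again with:
.. code::
kubectl uncordon $NODENAME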
To manually upgrade the host OS,
see the docs for that OS.
To manually upgrade Docker, see
`the Docker docs <https://docs.docker.com/>`_.
To manually upgrade all Kubernetes software in your Kubernetes cluster, see
`the Kubernetes docs <https://kubernetes.io/docs/admin/cluster-management/>`_.
Upgrade Planetmint-Related Software
-----------------------------------
We use Kubernetes "Deployments" for NGINX, Planetmint,
and most other Planetmint-related software.
The only exception is MongoDB; we use a Kubernetes
StatefulSet for that.
The nice thing about Kubernetes Deployments
is that Kubernetes can manage most of the upgrade process.
A typical upgrade workflow for a single Deployment would be:
.. code::
$ KUBE_EDITOR=nano kubectl edit deployment/<name of Deployment>
The ``kubectl edit`` command
opens the specified editor (nano in the above example),
allowing you to edit the specified Deployment *in the Kubernetes cluster*.
You can change the version tag on the Docker image, for example.
Don't forget to save your edits before exiting the editor.
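Alternatively, for a simple image-tag bump you could use ``kubectl set image`` instead of editing the Deployment interactively; the Deployment and container names below are placeholders:
.. code::
$ kubectl set image deployment/<name of Deployment> <container name>=planetmint/planetmint:<new tag>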
The Kubernetes docs have more information about
`Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_ (including updating them).
The upgrade story for the MongoDB StatefulSet is *different*.
(This is because MongoDB has persistent state,
which is stored in some storage associated with a PersistentVolumeClaim.)
At the time of writing, StatefulSets were still in beta,
and they did not support automated image upgrade (Docker image tag upgrade).
We expect that to change.
Rather than trying to keep these docs up-to-date,
we advise you to check out the current
`Kubernetes docs about updating containers in StatefulSets
<https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-containers>`_.
@@ -0,0 +1,162 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _kubernetes-template-overview:
Overview
========
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a Planetmint node
if that's the only thing the Kubernetes cluster will be running.
Instead, see our `Node Setup <../../node_setup>`_.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page summarizes some steps to go through
to set up a Planetmint network.
You can modify them to suit your needs.
.. _generate-the-blockchain-id-and-genesis-time:
Generate All Shared Planetmint Setup Parameters
-----------------------------------------------
There are some shared Planetmint setup parameters that every node operator
in the consortium shares
because they are properties of the Tendermint network.
They look like this:
.. code::
# Tendermint data
BDB_PERSISTENT_PEERS='bdb-instance-1,bdb-instance-2,bdb-instance-3,bdb-instance-4'
BDB_VALIDATORS='bdb-instance-1,bdb-instance-2,bdb-instance-3,bdb-instance-4'
BDB_VALIDATOR_POWERS='10,10,10,10'
BDB_GENESIS_TIME='0001-01-01T00:00:00Z'
BDB_CHAIN_ID='test-chain-rwcPML'
Those parameters only have to be generated once, by one member of the consortium.
That person will then share the results (Tendermint setup parameters)
with all the node operators.
The above example parameters are for a network of 4 initial (seed) nodes.
Note how ``BDB_PERSISTENT_PEERS``, ``BDB_VALIDATORS`` and ``BDB_VALIDATOR_POWERS`` are lists
with 4 items each.
**If your consortium has a different number of initial nodes,
then those lists should have that number of items.**
Use ``10`` for all the power values.
To generate a ``BDB_GENESIS_TIME`` and a ``BDB_CHAIN_ID``,
you can do this:
.. code::
$ mkdir $(pwd)/tmdata
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:v0.34.15 init
$ cat $(pwd)/tmdata/genesis.json
You should see something that looks like:
.. code:: json
{"genesis_time": "0001-01-01T00:00:00Z",
"chain_id": "test-chain-bGX7PM",
"validators": [
{"pub_key":
{"type": "ed25519",
"data": "4669C4B966EB8B99E45E40982B2716A9D3FA53B54C68088DAB2689935D7AF1A9"},
"power": 10,
"name": ""}
],
"app_hash": ""
}
The value with ``"genesis_time"`` is ``BDB_GENESIS_TIME`` and
the value with ``"chain_id"`` is ``BDB_CHAIN_ID``.
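If you have ``jq`` installed, one (optional) way to pull out just those two values is:

.. code::

   $ jq -r '.genesis_time, .chain_id' $(pwd)/tmdata/genesis.json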
Now you have all the Planetmint setup parameters and can share them
with all of the node operators. (They will put them in their ``vars`` file.
We'll say more about that file below.)
.. _things-each-node-operator-must-do:
Things Each Node Operator Must Do
---------------------------------
1. Make up an `FQDN <https://en.wikipedia.org/wiki/Fully_qualified_domain_name>`_
for your Planetmint node (e.g. ``mynode.mycorp.com``).
This is where external users will access the Planetmint HTTP API, for example.
Make sure you've registered the associated domain name (e.g. ``mycorp.com``).
Get an SSL certificate for your Planetmint node's FQDN.
Also get the root CA certificate and all intermediate certificates.
They should all be provided by your SSL certificate provider.
Put all those certificates together in one certificate chain file in the following order:
- Domain certificate (i.e. the one you ordered for your FQDN)
- All intermediate certificates
- Root CA certificate
DigiCert has `a web page explaining certificate chains <https://www.digicert.com/ssl-support/pem-ssl-creation.htm>`_.
You will put the path to that certificate chain file in the ``vars`` file,
when you configure your node later.
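For example, assuming your SSL certificate provider gave you the three kinds of files
named below (the file names are just placeholders), you could build the chain file like this:

.. code::

   $ cat mynode.mycorp.com.crt intermediate.crt root-ca.crt > cert_chain.pem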
2a. If your Planetmint node will use 3scale for API authentication, monitoring and billing,
you will need all relevant 3scale settings and credentials.
2b. If your Planetmint node will not use 3scale, then write authorization will be granted
to all POST requests with a secret token in the HTTP headers.
(All GET requests are allowed to pass.)
You can make up that ``SECRET_TOKEN`` now.
For example, ``superSECRET_token4-POST*requests``.
You will put it in the ``vars`` file later.
Every Planetmint node in a Planetmint network can have a different secret token.
To make an HTTP POST request to your Planetmint node,
you must include an HTTP header named ``X-Secret-Access-Token``
and set it equal to your secret token, e.g.
``X-Secret-Access-Token: superSECRET_token4-POST*requests``
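For example, a POST request made with curl might look like the following sketch.
(The endpoint path and the payload file are illustrative assumptions;
only the header name and the example token come from above.)

.. code::

   $ curl -X POST https://mynode.mycorp.com/api/v1/transactions \
        -H "X-Secret-Access-Token: superSECRET_token4-POST*requests" \
        -H "Content-Type: application/json" \
        -d @signed_tx.json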
3. Deploy a Kubernetes cluster for your Planetmint node. We have some instructions for how to
:doc:`Deploy a Kubernetes cluster on Azure <../k8s-deployment-template/template-kubernetes-azure>`.
.. warning::
In theory, you can deploy your Planetmint node to any Kubernetes cluster, but there can be differences
between different Kubernetes clusters, especially if they are running different versions of Kubernetes.
We tested this Kubernetes Deployment Template on Azure ACS in February 2018 and at that time
ACS was deploying a **Kubernetes 1.7.7** cluster. If you can force your cluster to have that version of Kubernetes,
then you'll increase the likelihood that everything will work.
4. Deploy your Planetmint node inside your new Kubernetes cluster.
You will fill in the ``vars`` file,
then run a script which reads that file to generate some Kubernetes config files,
send those config files to your Kubernetes cluster,
and then deploy everything you need for a Planetmint node.
⟶ Proceed to :ref:`deploy your Planetmint node <kubernetes-template-deploy-a-single-planetmint-node>`.
.. raw:: html
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>

View File

@@ -0,0 +1,208 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# How to Set Up a Planetmint Network
You can set up a new network, or connect to an existing one, once you have a single node running.
Up to this point, everything could be done by a node operator acting alone.
Now the node operators, also called **Members**, must share some information
with each other so they can form a network.
There is one special Member who helps coordinate everyone: the **Coordinator**.
## Member: Share hostname, pub_key.value and node_id
Each Planetmint node is identified by its:
* `hostname`, i.e. the node's DNS subdomain, such as `bnode.example.com`, or its IP address, such as `46.145.17.32`
* Tendermint `pub_key.value`
* Tendermint `node_id`
The Tendermint `pub_key.value` is stored
in the file `$HOME/.tendermint/config/priv_validator.json`.
That file should look like:
```json
{
"address": "E22D4340E5A92E4A9AD7C62DA62888929B3921E9",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": "P+aweH73Hii8RyCmNWbwPsa9o4inq3I+0fSfprVkZa0="
},
"last_height": "0",
"last_round": "0",
"last_step": 0,
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "AHBiZXdZhkVZoPUAiMzClxhl0VvUp7Xl3YT6GvCc93A/5rB4fvceKLxHIKY1ZvA+xr2jiKercj7R9J+mtWRlrQ=="
}
}
```
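If you have `jq` installed, one way to extract just the `pub_key.value` from that file is:

```
jq -r .pub_key.value $HOME/.tendermint/config/priv_validator.json
```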
To get your Tendermint `node_id`, run the command:
```
tendermint show_node_id
```
An example `node_id` is `9b989cd5ac65fec52652a457aed6f5fd200edc22`.
**Share your `hostname`, `pub_key.value` and `node_id` with all other Members.**
## Coordinator: Create & Share the genesis.json File
At this point the Coordinator should have received the data
from all the Members, and should combine them in the file
`$HOME/.tendermint/config/genesis.json`:
```json
{
"genesis_time":"0001-01-01T00:00:00Z",
"chain_id":"test-chain-la6HSr",
"consensus_params":{
"block_size_params":{
"max_bytes":"22020096",
"max_txs":"10000",
"max_gas":"-1"
},
"tx_size_params":{
"max_bytes":"10240",
"max_gas":"-1"
},
"block_gossip_params":{
"block_part_size_bytes":"65536"
},
"evidence_params":{
"max_age":"100000"
}
},
"validators":[
{
"pub_key":{
"type":"tendermint/PubKeyEd25519",
"value":"<Member 1 public key>"
},
"power":10,
"name":"<Member 1 name>"
},
{
"pub_key":{
"type":"tendermint/PubKeyEd25519",
"value":"<Member 2 public key>"
},
"power":10,
"name":"<Member 2 name>"
},
{
"...":{
}
},
{
"pub_key":{
"type":"tendermint/PubKeyEd25519",
"value":"<Member N public key>"
},
"power":10,
"name":"<Member N name>"
}
],
"app_hash":""
}
```
**Note:** The above `consensus_params` in the `genesis.json`
are default values.
The new `genesis.json` file contains the data that describes the Network.
The key `name` is the Member's moniker; it can be any valid string,
but put something human-readable like `"Alice's Node Shop"`.
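Before sharing the file, the Coordinator may want to do a quick sanity check on it. A minimal sketch, assuming `jq` is installed:

```
jq -r '.validators[] | "\(.name): \(.power)"' $HOME/.tendermint/config/genesis.json
```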
At this point, the Coordinator must share the new `genesis.json` file with all Members.
## Member: Connect to the Other Members
At this point the Member should have received the `genesis.json` file.
The Member must copy the `genesis.json` file
into their local `$HOME/.tendermint/config` directory.
Every Member now shares the same `chain_id` and `genesis_time` (used to identify the Network),
and the same list of `validators`.
Each Member must edit their `$HOME/.tendermint/config/config.toml` file
and make the following changes:
```
moniker = "Name of our node"
create_empty_blocks = false
log_level = "main:info,state:info,*:error"
persistent_peers = "<Member 1 node id>@<Member 1 hostname>:26656,\
<Member 2 node id>@<Member 2 hostname>:26656,\
<Member N node id>@<Member N hostname>:26656,"
send_rate = 102400000
recv_rate = 102400000
recheck = false
```
Note: The list of `persistent_peers` doesn't have to include all nodes
in the network.
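Once your node is up and running (see the Monit section below), you can check how many peers it is connected to via the local Tendermint RPC. A small sketch, assuming the default RPC port (26657) and that `jq` is installed:

```
curl -s http://localhost:26657/net_info | jq '.result.n_peers'
```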
## Member: Start MongoDB
If you installed MongoDB using `sudo apt install mongodb`, then MongoDB should already be running in the background. You can check using `systemctl status mongodb`.
If MongoDB isn't running, then you can start it using the command `mongod`, but that will run it in the foreground. If you want to run it in the background (so it will continue running after you logout), you can use `mongod --fork --logpath /var/log/mongodb.log`. (You might have to create the `/var/log` directory if it doesn't already exist.)
If you installed MongoDB using `sudo apt install mongodb`, then a MongoDB startup script should already be installed (so MongoDB will start automatically when the machine is restarted). Otherwise, you should install a startup script for MongoDB.
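In summary, the commands mentioned above are:

```
systemctl status mongodb                       # check whether MongoDB is already running
mongod --fork --logpath /var/log/mongodb.log   # otherwise, start it in the background
```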
## Member: Start Planetmint and Tendermint Using Monit
This section describes how to manage the Planetmint and Tendermint processes using [Monit][monit], a small open-source utility for managing and monitoring Unix processes. Planetmint and Tendermint are managed together, because if Planetmint is stopped (or crashes) and is restarted, *Tendermint won't try reconnecting to it*. (That's not a bug. It's just how Tendermint works.)
Install Monit:
```
sudo apt install monit
```
If you installed the `planetmint` Python package as above, you should have the `planetmint-monit-config` script in your `PATH` now. Run the script to build a configuration file for Monit:
```
planetmint-monit-config
```
Run Monit as a daemon, instructing it to wake up every second to check on processes:
```
monit -d 1
```
Monit will run the Planetmint and Tendermint processes and restart them when they crash. If the root `planetmint_` process crashes, Monit will also restart the Tendermint process.
You can check the status by running `monit status` or `monit summary`.
By default, it will collect program logs into the `~/.planetmint-monit/logs` folder.
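For example:

```
monit summary                 # one line of status per managed process
ls ~/.planetmint-monit/logs   # list the collected program logs
```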
To learn more about Monit, use `monit -h` (help) or read [the Monit documentation][monit-manual].
Check `planetmint-monit-config -h` if you want to arrange a different folder for logs or some of the Monit internal artifacts.
If you want to start and manage the Planetmint and Tendermint processes yourself, then look inside the file [planetmint/pkg/scripts/planetmint-monit-config](https://github.com/planetmint/planetmint/blob/master/pkg/scripts/planetmint-monit-config) to see how *it* starts Planetmint and Tendermint.
## How Others Can Access Your Node
If you followed the above instructions, then your node should be publicly accessible at the Planetmint Root URL `https://hostname` or `http://hostname:9984`. That is, anyone can interact with your node using the [Planetmint HTTP API](../connecting/api/http-client-server-api) exposed at that address. The most common way to do that is to use one of the [Planetmint Drivers](../connecting/drivers).
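A quick way to confirm that the HTTP API is reachable is to request the API root (replace `hostname` with your node's actual hostname; the exact response depends on your Planetmint version):

```
curl -s http://hostname:9984/
```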
[bdb:software]: https://github.com/planetmint/planetmint/
[bdb:pypi]: https://pypi.org/project/Planetmint/#history
[tendermint:releases]: https://github.com/tendermint/tendermint/releases
[monit]: https://www.mmonit.com/monit
[monit-manual]: https://mmonit.com/monit/documentation/monit.html

View File

@@ -0,0 +1,44 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Planetmint Networks
A **Planetmint network** is a set of connected **Planetmint nodes**, managed by a **Planetmint consortium** (i.e. an organization). Those terms are defined in the [Planetmint Terminology page](https://docs.planetmint.io/en/latest/terminology.html).
## Consortium Structure & Governance
The consortium might be a company, a foundation, a cooperative, or [some other form of organization](https://en.wikipedia.org/wiki/Organizational_structure).
It must make many decisions, e.g. How will new members be added? Who can read the stored data? What kind of data will be stored?
A governance process is required to make those decisions, and therefore one of the first steps for any new consortium is to specify its governance process (if one doesn't already exist).
This documentation doesn't explain how to create a consortium, nor does it outline the possible governance processes.
It's worth noting that the decentralization of a Planetmint network depends,
to some extent, on the decentralization of the associated consortium. See the pages about [decentralization](https://docs.planetmint.io/en/latest/decentralized.html) and [node diversity](https://docs.planetmint.io/en/latest/diversity.html).
## DNS Records and SSL Certificates
We now describe how *we* set up the external (public-facing) DNS records for a Planetmint network. Your consortium may opt to do it differently.
There were several goals:
* Allow external users/clients to connect directly to any Planetmint node in the network (over the internet), if they want.
* Each Planetmint node operator should get an SSL certificate for their Planetmint node, so that their Planetmint node can serve the [Planetmint HTTP API](../connecting/api/http-client-server-api) via HTTPS. (The same certificate might also be used to serve the [WebSocket API](../connecting/api/websocket-event-stream-api).)
* There should be no sharing of SSL certificates among Planetmint node operators.
* Optional: Allow clients to connect to a "random" Planetmint node in the network at one particular domain (or subdomain).
### Node Operator Responsibilities
1. Register a domain (or use one that you already have) for your Planetmint node. You can use a subdomain if you like. For example, you might opt to use `abc-org73.net`, `api.dynabob8.io` or `figmentdb3.ninja`.
2. Get an SSL certificate for your domain or subdomain, and properly install it in your node (e.g. in your NGINX instance).
3. Create a DNS A Record mapping your domain or subdomain to the public IP address of your node (i.e. the one that serves the Planetmint HTTP API).
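Once the A record has propagated, a quick check might look like this (the hostname is just one of the examples from above; `host` or `nslookup` would work too):

```
dig +short api.dynabob8.io A
```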
### Consortium Responsibilities
Optional: The consortium managing the Planetmint network could register a domain name and set up CNAME records mapping that domain name (or one of its subdomains) to each of the nodes in the network. For example, if the consortium registered `bdbnetwork.io`, they could set up CNAME records like the following:
* CNAME record mapping `api.bdbnetwork.io` to `abc-org73.net`
* CNAME record mapping `api.bdbnetwork.io` to `api.dynabob8.io`
* CNAME record mapping `api.bdbnetwork.io` to `figmentdb3.ninja`