mirror of https://github.com/planetmint/planetmint.git
synced 2025-03-30 15:08:31 +00:00

migrated parts of the docs

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

This commit is contained in:
parent 9db55c555a
commit dd27341a08
@ -1 +1,25 @@
# test
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Planetmint

Meet Planetmint. The metadata blockchain.

It has some database characteristics and some blockchain `properties <properties.html>`_,
including decentralization, immutability and native support for assets.

At a high level, one can communicate with a Planetmint network (set of nodes) using the Planetmint HTTP API, or a wrapper for that API, such as the Planetmint Python Driver. Each Planetmint node runs Planetmint Server and various other software. The `terminology page <terminology.html>`_ explains some of those terms in more detail.

.. toctree::
   :maxdepth: 4

   Introduction <introduction/index>
   Using Planetmint <basic-usage>
   Node Setup <node-setup/index>
   Networks & Federations <network-setup/index>
   Connecting to Planetmint <connecting/index>
   References <references/index>
@ -1,6 +1,48 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Summary

## Use headings to create page groups like this one

* [Introduction](introduction/README.md)
  * [About Planetmint](introduction/about-planetmint.md)
  * [Quickstart](introduction/quickstart.md)
  * [Properties of Planetmint](introduction/properties.md)
* [Node Setup](node-setup/README.md)
  * [Deploy a Node](node-setup/deploy-a-machine.md)
  * [AWS Setup](node-setup/aws-setup.md)
  * [Running in one Container](node-setup/all-in-one-planetmint.md)
  * [Setting up via Ansible](node-setup/planetmint-node-ansible.md)
  * [Setting up Planetmint, Tendermint & Tarantool](node-setup/set-up-node-software.md)
  * [Setup NGINX](node-setup/set-up-nginx.md)
  * [Configuration Settings](node-setup/configuration.md)
  * [Production Nodes](node-setup/production-node/README.md)
    * [Requirements](node-setup/production-node/node-requirements.md)
    * [Assumptions](node-setup/production-node/node-assumptions.md)
    * [Components & Layers](node-setup/production-node/node-components.md)
    * [Security & Privacy](node-setup/production-node/node-security-and-privacy.md)
    * [Using a Reverse Proxy](node-setup/production-node/reverse-proxy-nodes.md)
* [Networks & Federations](network-setup/README.md)
  * [Networks](network-setup/networks.md)
  * [Network Setup](network-setup/network-setup.md)
* [Using Planetmint](using-planetmint/README.md)
* [Connecting to Planetmint](connecting/index)
* [References](references/index)

## A second-page group

* [Yet another page](another-page.md)
6
docs/new/introduction/README.md
Normal file
@ -0,0 +1,6 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
131
docs/new/introduction/about-planetmint copy.rst
Normal file
@ -0,0 +1,131 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

What is Planetmint
==================

Basic Facts
-----------

1. One can store arbitrary data (including encrypted data) in a Planetmint network, within limits: there’s a maximum transaction size. Every transaction has a ``metadata`` section which can store almost any Unicode string (up to some maximum length). Similarly, every CREATE transaction has an ``asset.data`` section which can store almost any Unicode string.
2. The data stored in certain Planetmint transaction fields must not be encrypted, e.g. public keys and amounts. Planetmint doesn’t offer private transactions akin to those in Zcoin.
3. Once data has been stored in a Planetmint network, it’s best to assume it can’t be changed or deleted.
4. Every node in a Planetmint network has a full copy of all the stored data.
5. Every node in a Planetmint network can read all the stored data.
6. Everyone with full access to a Planetmint node (e.g. the sysadmin of a node) can read all the data stored on that node.
7. Everyone given access to a node via the Planetmint HTTP API can find and read all the data stored by Planetmint. The list of people with access might be quite short.
8. If the connection between an external user and a Planetmint node isn’t encrypted (using HTTPS, for example), then a wiretapper can read all HTTP requests and responses in transit.
9. If someone gets access to plaintext (regardless of where they got it), then they can (in principle) share it with the whole world. One can make it difficult for them to do that, e.g. if it is a lot of data and they only get access inside a secure room where they are searched as they leave the room.
Planetmint for Asset Registrations & Transfers
----------------------------------------------

Planetmint can store data of any kind, but it's designed to be particularly good for storing asset registrations and transfers:

* The fundamental thing that one sends to a Planetmint network, to be checked and stored (if valid), is a *transaction*, and there are two kinds: CREATE transactions and TRANSFER transactions.
* A CREATE transaction can be used to register any kind of asset (divisible or indivisible), along with arbitrary metadata.
* An asset can have zero, one, or several owners.
* The owners of an asset can specify (crypto-)conditions which must be satisfied by anyone wishing to transfer the asset to new owners. For example, a condition might be that at least 3 of the 5 current owners must cryptographically sign a TRANSFER transaction.
* Planetmint verifies that the conditions have been satisfied as part of checking the validity of TRANSFER transactions. (Moreover, anyone can check that they were satisfied.)
* Planetmint prevents double-spending of an asset.
* Validated transactions are immutable.

.. note::

   We used the word "owners" somewhat loosely above. A more accurate word might be fulfillers, signers, controllers, or transfer-enablers. See the section titled **A Note about Owners** in the relevant `Planetmint Transactions Spec <https://github.com/planetmint/BEPs/tree/master/tx-specs/>`_.

Production-Ready?
-----------------

Depending on your use case, Planetmint may or may not be production-ready. You should ask your service provider.
If you want to go live (into production) with Planetmint, please consult with your service provider.

Note: Planetmint has an open source license with a "no warranty" section that is typical of open source licenses. This is standard in the software industry. For example, the Linux kernel is used in production by billions of machines even though its license includes a "no warranty" section. Warranties are usually provided above the level of the software license, by service providers.
Storing Private Data Off-Chain
------------------------------

A system could store data off-chain, e.g. in a third-party database, document store, or content management system (CMS) and it could use Planetmint to:

- Keep track of who has read permissions (or other permissions) in a third-party system. An example of how this could be done is described below.
- Keep a permanent record of all requests made to the third-party system.
- Store hashes of documents-stored-elsewhere, so that a change in any document can be detected. (A short sketch of this idea follows this list.)
- Record all handshake-establishing requests and responses between two off-chain parties (e.g. a Diffie-Hellman key exchange), so as to prove that they established an encrypted tunnel (without giving readers access to that tunnel). There are more details about this idea in `the Privacy Protocols repository <https://github.com/planetmint/privacy-protocols>`_.
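For example, the document-hashing idea needs nothing beyond a hash function. A minimal sketch in plain Python (the filename is hypothetical):

.. code:: python

   import hashlib

   # Hash the off-chain document and store the digest on-chain,
   # e.g. in asset.data or metadata.
   with open('contract.pdf', 'rb') as f:
       digest = hashlib.sha256(f.read()).hexdigest()

   # Later, re-hash the document and compare with the stored digest;
   # any change to the document yields a different digest.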
A simple way to record who has read permission on a particular document would be for the third-party system (“DocPile”) to store a CREATE transaction in a Planetmint network for every document+user pair, to indicate that that user has read permissions for that document. The transaction could be signed by DocPile (or maybe by a document owner, as a variation). The asset data field would contain 1) the unique ID of the user and 2) the unique ID of the document. The one output on the CREATE transaction would only be transferable/spendable by DocPile (or, again, a document owner).

To revoke the read permission, DocPile could create a TRANSFER transaction, to spend the one output on the original CREATE transaction, with a metadata field to say that the user in question no longer has read permission on that document.

This can be carried on indefinitely, i.e. another TRANSFER transaction could be created by DocPile to indicate that the user now has read permissions again.

DocPile can figure out if a given user has read permissions on a given document by reading the last transaction in the CREATE → TRANSFER → TRANSFER → etc. chain for that user+document pair.

There are other ways to accomplish the same thing. The above is just one example.

You might have noticed that the above example didn’t treat the “read permission” as an asset owned (controlled) by a user because if the permission asset is given to (transferred to or created by) the user then it cannot be controlled any further (by DocPile) until the user transfers it back to DocPile. Moreover, the user could transfer the asset to someone else, which might be problematic.
Storing Private Data On-Chain, Encrypted
----------------------------------------

There are many ways to store private data on-chain, encrypted. Every use case has its own objectives and constraints, and the best solution depends on the use case. `The IPDB consulting team <mailto:contact@ipdb.global>`_ can help you design the best solution for your use case.

Below we describe some example system setups, using various crypto primitives, to give a sense of what’s possible.

Please note:

- Ed25519 keypairs are designed for signing and verifying cryptographic signatures, `not for encrypting and decrypting messages <https://crypto.stackexchange.com/questions/27866/why-curve25519-for-encryption-but-ed25519-for-signatures>`_. For encryption, you should use keypairs designed for encryption, such as X25519.
- If someone (or some group) publishes how to decrypt some encrypted data on-chain, then anyone with access to that encrypted data will be able to get the plaintext. The data can’t be deleted.
- Encrypted data can’t be indexed or searched by MongoDB. (It can index and search the ciphertext, but that’s not very useful.) One might use homomorphic encryption to index and search encrypted data, but MongoDB doesn’t have any plans to support that any time soon. If indexing or keyword search is needed, then some fields of the ``asset.data`` or ``metadata`` objects can be left as plain text and the sensitive information can be stored in an encrypted child-object.
System Example 1
~~~~~~~~~~~~~~~~

Encrypt the data with a symmetric key and store the ciphertext on-chain (in ``metadata`` or ``asset.data``). To communicate the key to a third party, use their public key to encrypt the symmetric key and send them that. They can decrypt the symmetric key with their private key, and then use that symmetric key to decrypt the on-chain ciphertext.

The reason for using a symmetric key along with public/private keypairs is so the ciphertext only has to be stored once.

System Example 2
~~~~~~~~~~~~~~~~

This example uses `proxy re-encryption <https://en.wikipedia.org/wiki/Proxy_re-encryption>`_:

#. MegaCorp encrypts some data using its own public key, then stores that encrypted data (ciphertext 1) in a Planetmint network.
#. MegaCorp wants to let others read that encrypted data, but without ever sharing their private key and without having to re-encrypt themselves for every new recipient. Instead, they find a “proxy” named Moxie, to provide proxy re-encryption services.
#. Zorban contacts MegaCorp and asks for permission to read the data.
#. MegaCorp asks Zorban for his public key.
#. MegaCorp generates a “re-encryption key” and sends it to their proxy, Moxie.
#. Moxie (the proxy) uses the re-encryption key to encrypt ciphertext 1, creating ciphertext 2.
#. Moxie sends ciphertext 2 to Zorban (or to MegaCorp who forwards it to Zorban).
#. Zorban uses his private key to decrypt ciphertext 2, getting the original un-encrypted data.

Note:

- The proxy only ever sees ciphertext. They never see any un-encrypted data.
- Zorban never got the ability to decrypt ciphertext 1, i.e. the on-chain data.
- There are variations on the above flow.
System Example 3
~~~~~~~~~~~~~~~~

This example uses `erasure coding <https://en.wikipedia.org/wiki/Erasure_code>`_:

#. Erasure-code the data into n pieces.
#. Encrypt each of the n pieces with a different encryption key.
#. Store the n encrypted pieces on-chain, e.g. in n separate transactions.
#. Share each of the n decryption keys with a different party.

If any k of the n key-holders (k < n) get and decrypt their pieces, they can reconstruct the original plaintext. Fewer than k pieces would not be enough.

System Example 4
~~~~~~~~~~~~~~~~

This setup could be used in an enterprise blockchain scenario where a special node should be able to see parts of the data, but the others should not.

- The special node generates an X25519 keypair (or similar asymmetric *encryption* keypair).
- A Planetmint end user finds out the X25519 public key (encryption key) of the special node.
- The end user creates a valid Planetmint transaction, with either the asset.data or the metadata (or both) encrypted using the above-mentioned public key.
- This is only done for transactions where the contents of asset.data or metadata don't matter for validation, so all node operators can validate the transaction.
- The special node is able to decrypt the encrypted data, but the other node operators can't, nor can any other end user.
125
docs/new/introduction/about-planetmint.md
Normal file
@ -0,0 +1,125 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# About Planetmint

### Basic Facts

1. One can store arbitrary data (including encrypted data) in a Planetmint network, within limits: there’s a maximum transaction size. Every transaction has a `metadata` section which can store almost any Unicode string (up to some maximum length). Similarly, every CREATE transaction has an `asset.data` section which can store almost any Unicode string.
2. The data stored in certain Planetmint transaction fields must not be encrypted, e.g. public keys and amounts. Planetmint doesn’t offer private transactions akin to those in Zcoin.
3. Once data has been stored in a Planetmint network, it’s best to assume it can’t be changed or deleted.
4. Every node in a Planetmint network has a full copy of all the stored data.
5. Every node in a Planetmint network can read all the stored data.
6. Everyone with full access to a Planetmint node (e.g. the sysadmin of a node) can read all the data stored on that node.
7. Everyone given access to a node via the Planetmint HTTP API can find and read all the data stored by Planetmint. The list of people with access might be quite short.
8. If the connection between an external user and a Planetmint node isn’t encrypted (using HTTPS, for example), then a wiretapper can read all HTTP requests and responses in transit.
9. If someone gets access to plaintext (regardless of where they got it), then they can (in principle) share it with the whole world. One can make it difficult for them to do that, e.g. if it is a lot of data and they only get access inside a secure room where they are searched as they leave the room.
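To make fact 1 concrete, here is a rough sketch of where those two sections live, using the Python driver from the Quickstart. The `metadata` keyword and the field values are assumptions (the keyword is carried over from the driver’s BigchainDB lineage):

```
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair

plntmnt = Planetmint('https://test.ipdb.io')
alice = generate_keypair()

tx = plntmnt.transactions.prepare(
    operation='CREATE',
    signers=alice.public_key,
    # asset.data: set once, in the CREATE transaction
    asset={'data': {'serial_number': 'SN-0001'}},
    # metadata: can be attached to every transaction
    metadata={'note': 'almost any Unicode string'})
```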
### Planetmint for Asset Registrations & Transfers

Planetmint can store data of any kind, but it’s designed to be particularly good for storing asset registrations and transfers:

* The fundamental thing that one sends to a Planetmint network, to be checked and stored (if valid), is a _transaction_, and there are two kinds: CREATE transactions and TRANSFER transactions.
* A CREATE transaction can be used to register any kind of asset (divisible or indivisible), along with arbitrary metadata.
* An asset can have zero, one, or several owners.
* The owners of an asset can specify (crypto-)conditions which must be satisfied by anyone wishing to transfer the asset to new owners. For example, a condition might be that at least 3 of the 5 current owners must cryptographically sign a TRANSFER transaction.
* Planetmint verifies that the conditions have been satisfied as part of checking the validity of TRANSFER transactions. (Moreover, anyone can check that they were satisfied.)
* Planetmint prevents double-spending of an asset.
* Validated transactions are immutable.

{% hint style="info" %}
**Note**

We used the word “owners” somewhat loosely above. A more accurate word might be fulfillers, signers, controllers, or transfer-enablers. See the section titled **A Note about Owners** in the relevant [Planetmint Transactions Spec](https://github.com/Planetmint/PRPs/tree/master/tx-specs/).
{% endhint %}

### Production-Ready?

Depending on your use case, Planetmint may or may not be production-ready. You should ask your service provider. If you want to go live (into production) with Planetmint, please consult with your service provider.

Note: Planetmint has an open source license with a “no warranty” section that is typical of open source licenses. This is standard in the software industry. For example, the Linux kernel is used in production by billions of machines even though its license includes a “no warranty” section. Warranties are usually provided above the level of the software license, by service providers.
### Storing Private Data Off-Chain

A system could store data off-chain, e.g. in a third-party database, document store, or content management system (CMS) and it could use Planetmint to:

* Keep track of who has read permissions (or other permissions) in a third-party system. An example of how this could be done is described below.
* Keep a permanent record of all requests made to the third-party system.
* Store hashes of documents-stored-elsewhere, so that a change in any document can be detected.
* Record all handshake-establishing requests and responses between two off-chain parties (e.g. a Diffie-Hellman key exchange), so as to prove that they established an encrypted tunnel (without giving readers access to that tunnel). There are more details about this idea in [the Privacy Protocols repository](https://github.com/Planetmint/privacy-protocols).
A simple way to record who has read permission on a particular document would be for the third-party system (“DocPile”) to store a CREATE transaction in a Planetmint network for every document+user pair, to indicate that that user has read permissions for that document. The transaction could be signed by DocPile (or maybe by a document owner, as a variation). The asset data field would contain 1) the unique ID of the user and 2) the unique ID of the document. The one output on the CREATE transaction would only be transferable/spendable by DocPile (or, again, a document owner).

To revoke the read permission, DocPile could create a TRANSFER transaction, to spend the one output on the original CREATE transaction, with a metadata field to say that the user in question no longer has read permission on that document.

This can be carried on indefinitely, i.e. another TRANSFER transaction could be created by DocPile to indicate that the user now has read permissions again.

DocPile can figure out if a given user has read permissions on a given document by reading the last transaction in the CREATE → TRANSFER → TRANSFER → etc. chain for that user+document pair.
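A minimal sketch of this grant/revoke flow, using the Python driver from the Quickstart. All names, IDs and the DocPile keypair are illustrative, and the TRANSFER-input wiring follows the BigchainDB driver convention, which the Planetmint driver is assumed to inherit:

```
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair

plntmnt = Planetmint('https://test.ipdb.io')
docpile = generate_keypair()

# Grant: one CREATE transaction per (user, document) pair.
grant = plntmnt.transactions.prepare(
    operation='CREATE',
    signers=docpile.public_key,
    asset={'data': {'user_id': 'user-42', 'doc_id': 'doc-7'}},
    metadata={'permission': 'read'})
signed_grant = plntmnt.transactions.fulfill(grant, private_keys=docpile.private_key)
plntmnt.transactions.send_commit(signed_grant)

# Revoke: spend the CREATE transaction's one output back to DocPile,
# with metadata saying the permission is gone.
output = signed_grant['outputs'][0]
revoke = plntmnt.transactions.prepare(
    operation='TRANSFER',
    asset={'id': signed_grant['id']},
    inputs={
        'fulfillment': output['condition']['details'],
        'fulfills': {'output_index': 0, 'transaction_id': signed_grant['id']},
        'owners_before': output['public_keys'],
    },
    recipients=docpile.public_key,
    metadata={'permission': 'none'})
signed_revoke = plntmnt.transactions.fulfill(revoke, private_keys=docpile.private_key)
plntmnt.transactions.send_commit(signed_revoke)
```

The latest transaction in the chain then carries the user’s current permission state.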
There are other ways to accomplish the same thing. The above is just one example.

You might have noticed that the above example didn’t treat the “read permission” as an asset owned (controlled) by a user because if the permission asset is given to (transferred to or created by) the user then it cannot be controlled any further (by DocPile) until the user transfers it back to DocPile. Moreover, the user could transfer the asset to someone else, which might be problematic.

### Storing Private Data On-Chain, Encrypted

There are many ways to store private data on-chain, encrypted. Every use case has its own objectives and constraints, and the best solution depends on the use case. [The IPDB consulting team](mailto:contact%40ipdb.global) can help you design the best solution for your use case.

Below we describe some example system setups, using various crypto primitives, to give a sense of what’s possible.

Please note:

* Ed25519 keypairs are designed for signing and verifying cryptographic signatures, [not for encrypting and decrypting messages](https://crypto.stackexchange.com/questions/27866/why-curve25519-for-encryption-but-ed25519-for-signatures). For encryption, you should use keypairs designed for encryption, such as X25519.
* If someone (or some group) publishes how to decrypt some encrypted data on-chain, then anyone with access to that encrypted data will be able to get the plaintext. The data can’t be deleted.
* Encrypted data can’t be indexed or searched by MongoDB. (It can index and search the ciphertext, but that’s not very useful.) One might use homomorphic encryption to index and search encrypted data, but MongoDB doesn’t have any plans to support that any time soon. If indexing or keyword search is needed, then some fields of the `asset.data` or `metadata` objects can be left as plain text and the sensitive information can be stored in an encrypted child-object.

#### System Example 1

Encrypt the data with a symmetric key and store the ciphertext on-chain (in `metadata` or `asset.data`). To communicate the key to a third party, use their public key to encrypt the symmetric key and send them that. They can decrypt the symmetric key with their private key, and then use that symmetric key to decrypt the on-chain ciphertext.

The reason for using a symmetric key along with public/private keypairs is so the ciphertext only has to be stored once.
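A minimal sketch of that flow, using the PyNaCl library (an assumption; any library offering an authenticated symmetric cipher plus X25519 key wrapping works the same way):

```
import nacl.utils
from nacl.secret import SecretBox
from nacl.public import PrivateKey, SealedBox

# 1. Encrypt the data with a fresh symmetric key;
#    the ciphertext is what goes on-chain (metadata or asset.data).
sym_key = nacl.utils.random(SecretBox.KEY_SIZE)
ciphertext = SecretBox(sym_key).encrypt(b'private payload')

# 2. Encrypt (wrap) the symmetric key to the recipient's X25519 public key.
recipient_key = PrivateKey.generate()  # the third party's keypair
wrapped_key = SealedBox(recipient_key.public_key).encrypt(sym_key)

# 3. The recipient unwraps the key and decrypts the on-chain ciphertext.
sym_key_again = SealedBox(recipient_key).decrypt(wrapped_key)
assert SecretBox(sym_key_again).decrypt(ciphertext) == b'private payload'
```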
#### System Example 2

This example uses [proxy re-encryption](https://en.wikipedia.org/wiki/Proxy_re-encryption):

1. MegaCorp encrypts some data using its own public key, then stores that encrypted data (ciphertext 1) in a Planetmint network.
2. MegaCorp wants to let others read that encrypted data, but without ever sharing their private key and without having to re-encrypt themselves for every new recipient. Instead, they find a “proxy” named Moxie, to provide proxy re-encryption services.
3. Zorban contacts MegaCorp and asks for permission to read the data.
4. MegaCorp asks Zorban for his public key.
5. MegaCorp generates a “re-encryption key” and sends it to their proxy, Moxie.
6. Moxie (the proxy) uses the re-encryption key to encrypt ciphertext 1, creating ciphertext 2.
7. Moxie sends ciphertext 2 to Zorban (or to MegaCorp who forwards it to Zorban).
8. Zorban uses his private key to decrypt ciphertext 2, getting the original un-encrypted data.

{% hint style="info" %}
**Note**

* The proxy only ever sees ciphertext. They never see any un-encrypted data.
* Zorban never got the ability to decrypt ciphertext 1, i.e. the on-chain data.
* There are variations on the above flow.
{% endhint %}

#### System Example 3

This example uses [erasure coding](https://en.wikipedia.org/wiki/Erasure_code):

1. Erasure-code the data into n pieces.
2. Encrypt each of the n pieces with a different encryption key.
3. Store the n encrypted pieces on-chain, e.g. in n separate transactions.
4. Share each of the n decryption keys with a different party.

If any k of the n key-holders (k < n) get and decrypt their pieces, they can reconstruct the original plaintext. Fewer than k pieces would not be enough.
#### System Example 4

This setup could be used in an enterprise blockchain scenario where a special node should be able to see parts of the data, but the others should not.

* The special node generates an X25519 keypair (or similar asymmetric _encryption_ keypair).
* A Planetmint end user finds out the X25519 public key (encryption key) of the special node.
* The end user creates a valid Planetmint transaction, with either the asset.data or the metadata (or both) encrypted using the above-mentioned public key.
* This is only done for transactions where the contents of asset.data or metadata don’t matter for validation, so all node operators can validate the transaction.
* The special node is able to decrypt the encrypted data, but the other node operators can’t, nor can any other end user.
60
docs/new/introduction/properties.md
Normal file
@ -0,0 +1,60 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Properties of Planetmint

## Decentralization

Decentralization means that no one owns or controls everything, and there is no single point of failure.

Ideally, each node in a Planetmint network is owned and controlled by a different person or organization. Even if the network lives within one organization, it's still preferable to have each node controlled by a different person or subdivision.

We use the phrase "Planetmint consortium" (or just "consortium") to refer to the set of people and/or organizations who run the nodes of a Planetmint network. A consortium requires some form of governance to make decisions such as membership and policies. The exact details of the governance process are determined by each consortium, but it can be very decentralized.

A consortium can increase its decentralization (and its resilience) by increasing its jurisdictional diversity, geographic diversity, and other kinds of diversity.

There’s no node that has a long-term special position in the Planetmint network. All nodes run the same software and perform the same duties.

If someone has (or gets) admin access to a node, they can mess with that node (e.g. change or delete data stored on that node), but those changes should remain isolated to that node. The Planetmint network can only be compromised if more than one third of the nodes get compromised. See the [Tendermint documentation](https://tendermint.com/docs/introduction/introduction.html) for more details.

It’s worth noting that not even the admin or superuser of a node can transfer assets. The only way to create a valid transfer transaction is to fulfill the current crypto-conditions on the asset, and the admin/superuser can’t do that because the admin user doesn’t have the necessary information (e.g. private keys).

## Byzantine Fault Tolerance

[Tendermint](https://www.tendermint.com/) is used for consensus and transaction replication, and it is [Byzantine Fault Tolerant (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance).

## Node Diversity

Steps should be taken to make it difficult for any one actor or event to control or damage “enough” of the nodes. (Because Planetmint Server uses Tendermint, "enough" is ⅓.) There are many kinds of diversity to consider, listed below. It may be quite difficult to have high diversity of all kinds.

1. **Jurisdictional diversity.** The nodes should be controlled by entities within multiple legal jurisdictions, so that it becomes difficult to use legal means to compel enough of them to do something.
1. **Geographic diversity.** The servers should be physically located at multiple geographic locations, so that it becomes difficult for a natural disaster (such as a flood or earthquake) to damage enough of them to cause problems.
1. **Hosting diversity.** The servers should be hosted by multiple hosting providers (e.g. Amazon Web Services, Microsoft Azure, Digital Ocean, Rackspace), so that it becomes difficult for one hosting provider to influence enough of the nodes.
1. **Diversity in general.** In general, membership diversity (of all kinds) confers many advantages on a consortium. For example, it provides the consortium with a source of various ideas for addressing challenges.

Note: If all the nodes are running the same code, i.e. the same implementation of Planetmint, then a bug in that code could be used to compromise all of the nodes. Ideally, there would be several different, well-maintained implementations of Planetmint Server (e.g. one in Python, one in Go, etc.), so that a consortium could also have a diversity of server implementations. Similar remarks can be made about the operating system.
## Immutability

The blockchain community often describes blockchains as “immutable.” If we interpret that word literally, it means that blockchain data is unchangeable or permanent, which is absurd. The data _can_ be changed. For example, a plague might drive humanity extinct; the data would then get corrupted over time due to water damage, thermal noise, and the general increase of entropy.

It’s true that blockchain data is more difficult to change (or delete) than usual. It's more than just "tamper-resistant" (which implies intent); blockchain data also resists random changes that can happen without any intent, such as data corruption on a hard drive. Therefore, in the context of blockchains, we interpret the word “immutable” to mean *practically* immutable, for all intents and purposes. (Linguists would say that the word “immutable” is a _term of art_ in the blockchain community.)

Blockchain data can be made immutable in several ways:

1. **No APIs for changing or deleting data.** Blockchain software usually doesn't expose any APIs for changing or deleting the data stored in the blockchain. Planetmint has no such APIs. This doesn't prevent changes or deletions from happening in _other_ ways; it's just one line of defense.
1. **Replication.** All data is replicated (copied) to several different places. The higher the replication factor, the more difficult it becomes to change or delete all replicas.
1. **Internal watchdogs.** All nodes monitor all changes and, if some disallowed change happens, appropriate action can be taken.
1. **External watchdogs.** A consortium may opt to have trusted third parties to monitor and audit their data, looking for irregularities. For a consortium with publicly-readable data, the public can act as an auditor.
1. **Economic incentives.** Some blockchain systems make it very expensive to change old stored data. Examples include proof-of-work and proof-of-stake systems. Planetmint doesn't use explicit incentives like those.
1. **Error-correcting storage.** Data can be stored using techniques such as error-correction codes, to make some kinds of changes easier to undo.
1. **Cryptographic signatures** are often used as a way to check if messages (e.g. transactions) have been tampered with en route, and as a way to verify who signed the messages. In Planetmint, each transaction must be signed by one or more parties. (A generic sketch follows this list.)
1. **Full or partial backups** may be recorded from time to time, possibly on magnetic tape storage, other blockchains, printouts, etc.
1. **Strong security.** Node owners can adopt and enforce strong security policies.
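To make the signature point concrete, here is a generic sign-then-verify sketch. It uses the PyNaCl library as a stand-in (an assumption; Planetmint's actual signing goes through its transaction model), since transactions are signed with Ed25519 keypairs:

```
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()        # private part stays with the signer
signed = signing_key.sign(b'transaction payload')

# Anyone holding the public key can verify;
# any tampering with the payload raises BadSignatureError.
try:
    signing_key.verify_key.verify(signed)
except BadSignatureError:
    print('payload was tampered with')
```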
88
docs/new/introduction/quickstart.md
Normal file
@ -0,0 +1,88 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Quickstart

Planetmint is a metadata blockchain. This introduction gives an overview of how to attest data to Planetmint. First, simple transaction creation and sending is shown. Thereafter, an introduction to setting up a single node or a cluster is given.

## The IPDB Testnet - sending transactions

The IPDB foundation hosts a testnet server that is reset every night at 4am UTC.

The following sequence shows a simple asset notarization / attestation on that testnet.
Create a file named `notarize.py`:

```
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair

plntmnt = Planetmint('https://test.ipdb.io')
alice = generate_keypair()
tx = plntmnt.transactions.prepare(
    operation='CREATE',
    signers=alice.public_key,
    asset={'data': {'message': 'Blockchain all the things!'}})
signed_tx = plntmnt.transactions.fulfill(
    tx,
    private_keys=alice.private_key)
plntmnt.transactions.send_commit(signed_tx)
```
Install the dependencies and execute it:

```
$ pip install planetmint-driver
$ python notarize.py
```
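If the commit succeeds, the transaction can be fetched back by its id. A quick check that could be appended to `notarize.py`, assuming the Planetmint driver keeps the `transactions.retrieve` call of its BigchainDB ancestry:

```
tx_id = signed_tx['id']
fetched = plntmnt.transactions.retrieve(tx_id)
print('on testnet:', fetched['id'] == tx_id)
```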
# Install Planetmint

## Local Node

Planetmint is a Tendermint application with an attached database.
A basic installation installs the database, Tendermint and thereafter Planetmint.

Planetmint currently supports the Tarantool and MongoDB databases. The installation is as follows:

```
# Tarantool
$ curl -L https://tarantool.io/release/2/installer.sh | bash
$ sudo apt-get -y install tarantool
```

*Caveat:* Tarantool versions before [2.4.2](https://www.tarantool.io/en/doc/latest/release/2.4.2/) automatically enable and start a demonstration instance that listens on port `3301` by default. Refer to the [Tarantool documentation](https://www.tarantool.io/en/doc/latest/getting_started/getting_started_db/#creating-db-locally) for more information.

```
# MongoDB
$ sudo apt install mongodb
```

Tendermint can be installed and started as follows:

```
$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz
$ tar zxf tendermint_0.34.15_linux_amd64.tar.gz
$ ./tendermint init
$ ./tendermint node --proxy_app=tcp://localhost:26658
```
Planetmint is installed and started as described below:

```
$ pip install planetmint
$ planetmint configure
$ planetmint start
```

## Cluster of Nodes

Setting up a cluster of nodes comes down to setting up a cluster of Tendermint nodes, as documented in the [Tendermint quick-start guide](https://docs.tendermint.com/v0.35/introduction/quick-start.html#cluster-of-nodes). In addition, the database and Planetmint need to be installed on each server as described above.
## Setup Instructions for Various Cases

- Quickstart (see above)
- [Set up a local Planetmint node for development, experimenting and testing](../node-setup/index)
- [Set up and run a Planetmint network](../network-setup/index)

## Develop an App

To develop an app that talks to a Planetmint network, you'll want a test network to test it against. You have a few options:

1. The IPDB Test Network (or "Testnet") is a free-to-use, publicly-available test network that you can test against. It is available at [IPDB testnet](https://test.ipdb.io/).
1. You could also run a Planetmint node on your local machine. One way is to use this node setup guide with a one-node "network", using the all-in-one Docker solution or manual installation and configuration of the components. Another way is to use one of the deployment methods listed in the [network setup guide](../network-setup/index) or in [the docs about contributing to Planetmint](../references/contributing/index).
10
docs/new/network-setup/README.md
Normal file
@ -0,0 +1,10 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Networks & Federations

There are several ways to set up a network. You can use the Kubernetes deployment template in this section, or use the Ansible solution in the Contributing section. You can also set up a single node on your machine and connect it to an existing network.
26
docs/new/network-setup/k8s-deployment-template/README.md
Normal file
@ -0,0 +1,26 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Kubernetes Deployment Template

{% hint style="info" %}
**Note**

A highly-available Kubernetes cluster requires at least five virtual machines (three for the master and two for your app's containers). Therefore we don't recommend using Kubernetes to run a Planetmint node if that's the only thing the Kubernetes cluster will be running. Instead, see our [Node Setup](../../node_setup). If your organization already _has_ a big Kubernetes cluster running many containers, and your organization has people who know Kubernetes, then this Kubernetes deployment template might be helpful.
{% endhint %}

This section outlines a way to deploy a Planetmint node (or Planetmint network) on Microsoft Azure using Kubernetes. You may choose to use it as a template or reference for your own deployment, but _we make no claim that it is suitable for your purposes_. Feel free to change things to suit your needs or preferences.
228
docs/new/network-setup/k8s-deployment-template/architecture.rst
Normal file
@ -0,0 +1,228 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

Architecture of a Planetmint Node Running in a Kubernetes Cluster
=================================================================

.. note::

   A highly-available Kubernetes cluster requires at least five virtual machines
   (three for the master and two for your app's containers).
   Therefore we don't recommend using Kubernetes to run a Planetmint node
   if that's the only thing the Kubernetes cluster will be running.
   Instead, see our `Node Setup <../../node_setup>`_.
   If your organization already *has* a big Kubernetes cluster running many containers,
   and your organization has people who know Kubernetes,
   then this Kubernetes deployment template might be helpful.

If you deploy a Planetmint node into a Kubernetes cluster
as described in these docs, it will include:

* NGINX, OpenResty, Planetmint, MongoDB and Tendermint
  `Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
* NGINX, OpenResty, Planetmint and MongoDB Monitoring Agent
  `Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
* MongoDB and Tendermint `Kubernetes StatefulSets <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* Third party services like `3scale <https://3scale.net>`_,
  `MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
  `Azure Operations Management Suite
  <https://docs.microsoft.com/en-us/azure/operations-management-suite/>`_.

.. _planetmint-node:

Planetmint Node Diagram
-----------------------
.. The ASCII-art (aafig) node diagram lost its alignment during the docs
   migration and is omitted here. In outline, it showed: Planetmint API
   traffic (port 443) and Tendermint P2P / public-key-exchange traffic
   (ports 26656/9986) entering through the NGINX Service and Deployment
   (rate limiting, HTTPS termination), POST requests routed to the
   OpenResty Service/Deployment (auth logic, backed by 3scale) and GET
   requests to the Planetmint Service/Deployment, the MongoDB
   Service/StatefulSet (port 27017) with a MongoDB Monitoring Agent
   reporting to MongoDB Cloud Manager, and a Tendermint
   Service/StatefulSet communicating bidirectionally with the Planetmint
   Deployment (its BFT consensus engine).
.. note::
   The arrows in the diagram represent the client-server communication. For
   example, A-->B implies that A initiates the connection to B.
   It does not represent the flow of data; the communication channel is always
   full duplex.


NGINX: Entrypoint and Gateway
-----------------------------

We use NGINX as an HTTP proxy on port 443 (configurable) at the cloud
entrypoint for:

#. Rate Limiting: We configure NGINX to allow only a certain number of requests
   (configurable), which prevents DoS attacks.

#. HTTPS Termination: The HTTPS connection does not carry through all the way
   to Planetmint and terminates at NGINX for now.

#. Request Routing: For HTTPS connections on port 443 (or the configured Planetmint public API port),
   the connection is proxied to:

   #. OpenResty Service if it is a POST request.
   #. Planetmint Service if it is a GET request.


We use an NGINX TCP proxy on port 27017 (configurable) at the cloud
entrypoint for:

#. Rate Limiting: We configure NGINX to allow only a certain number of requests
   (configurable), which prevents DoS attacks.

#. Request Routing: For connections on port 27017 (or the configured MongoDB
   public API port), the connection is proxied to the MongoDB Service.


OpenResty: API Management, Authentication and Authorization
-----------------------------------------------------------

We use `OpenResty <https://openresty.org/>`_ to perform authorization checks
with 3scale using the ``app_id`` and ``app_key`` headers in the HTTP request.

OpenResty is NGINX plus a bunch of other
`components <https://openresty.org/en/components.html>`_. We primarily depend
on the LuaJIT compiler to execute the functions to authenticate the ``app_id``
and ``app_key`` with the 3scale backend.
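For illustration, a client request that would pass through this check might look like the following sketch (Python ``requests``; the endpoint path and credential values are hypothetical):

.. code:: python

   import requests

   signed_tx = {}  # a fulfilled transaction dict, as produced in the Quickstart

   # POST through the NGINX/OpenResty gateway; OpenResty validates
   # app_id/app_key against 3scale before the request reaches Planetmint.
   response = requests.post(
       'https://node.example.com/api/v1/transactions',
       json=signed_tx,
       headers={'app_id': 'YOUR_APP_ID', 'app_key': 'YOUR_APP_KEY'})
   response.raise_for_status()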
MongoDB: Standalone
-------------------

We use MongoDB as the backend database for Planetmint.

We achieve security by avoiding DoS attacks at the NGINX proxy layer and by
ensuring that MongoDB has TLS enabled for all its connections.


Tendermint: BFT consensus engine
--------------------------------

We use Tendermint as the backend consensus engine for BFT replication of Planetmint.
In a multi-node deployment, Tendermint nodes/peers communicate with each other via
the public ports exposed by the NGINX gateway.

We use port **9986** (configurable) to allow Tendermint nodes to access the public keys
of the peers and port **26656** (configurable) for the rest of the communications between
the peers.
@ -0,0 +1,101 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _how-to-set-up-a-self-signed-certificate-authority:

How to Set Up a Self-Signed Certificate Authority
=================================================

This page enumerates the steps *we* use to set up a self-signed certificate authority (CA).
This is something that only needs to be done once per Planetmint network,
by the organization managing the network, i.e. the CA is for the whole network.
We use Easy-RSA.


Step 1: Install & Configure Easy-RSA
------------------------------------

First create a directory for the CA and cd into it:

.. code:: bash

   mkdir bdb-node-ca
   cd bdb-node-ca

Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.


Step 2: Create a Self-Signed CA
-------------------------------

You can create a self-signed CA
by going to the ``bdb-node-ca/easy-rsa-3.0.1/easyrsa3`` directory and using:

.. code:: bash

   ./easyrsa init-pki
   ./easyrsa build-ca

You will also be asked to enter a PEM pass phrase (for encrypting the ``ca.key`` file).
Make sure to securely store that PEM pass phrase.
If you lose it, you won't be able to add or remove entities from your PKI infrastructure in the future.

You will be prompted to enter the Distinguished Name (DN) information for this CA.
For each field, you can accept the default value [in brackets] by pressing Enter.

.. warning::

   Don't accept the default value of OU (``IT``). Instead, enter the value ``ROOT-CA``.

While ``Easy-RSA CA`` *is* a valid and acceptable Common Name,
you should probably enter a name based on the name of the managing organization,
e.g. ``Omega Ledger CA``.

Tip: You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``.
Step 3: Create an Intermediate CA
---------------------------------

TODO

Step 4: Generate a Certificate Revocation List
----------------------------------------------

You can generate a Certificate Revocation List (CRL) using:

.. code:: bash

   ./easyrsa gen-crl

You will need to run this command every time you revoke a certificate.
The generated ``crl.pem`` needs to be uploaded to your infrastructure to
prevent the revoked certificate from being used again.


Step 5: Secure the CA
---------------------

The security of your infrastructure depends on the security of this CA.

- Ensure that you restrict access to the CA and enable only legitimate and
  required people to sign certificates and generate CRLs.
- Restrict access to the machine where the CA is hosted.
- Many certificate providers keep the CA offline and use a rotating
  intermediate CA to sign and revoke certificates, to mitigate the risk of the
  CA getting compromised.
- In case you want to destroy the machine where you created the CA
  (for example, if this was set up on a cloud provider instance),
  you can back up the entire ``easyrsa`` directory
  to secure storage. You can always restore it to a trusted instance again
  during the times when you want to sign or revoke certificates.
  Remember to back up the directory after every update.
@ -0,0 +1,111 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _how-to-generate-a-client-certificate-for-mongodb:

How to Generate a Client Certificate for MongoDB
================================================

This page enumerates the steps *we* use to generate a client certificate to be
used by clients who want to connect to a TLS-secured MongoDB database.
We use Easy-RSA.


Step 1: Install and Configure Easy-RSA
--------------------------------------

First create a directory for the client certificate and cd into it:

.. code:: bash

   mkdir client-cert
   cd client-cert

Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.


Step 2: Create the Client Private Key and CSR
---------------------------------------------

You can create the client private key and certificate signing request (CSR)
by going into the directory ``client-cert/easy-rsa-3.0.1/easyrsa3``
and using:

.. code:: bash

   ./easyrsa init-pki
   ./easyrsa gen-req bdb-instance-0 nopass

You should change the Common Name (e.g. ``bdb-instance-0``)
to a value that reflects what the
client certificate is being used for, e.g. ``mdb-mon-instance-3`` or ``mdb-bak-instance-4``. (The final integer is specific to your Planetmint node in the Planetmint network.)

You will be prompted to enter the Distinguished Name (DN) information for this certificate. For each field, you can accept the default value [in brackets] by pressing Enter.

.. warning::

   Don't accept the default value of OU (``IT``). Instead, enter the value
   ``Planetmint-Instance``, ``MongoDB-Mon-Instance`` or ``MongoDB-Backup-Instance``
   as appropriate.

Aside: The ``nopass`` option means "do not encrypt the private key (default is encrypted)". You can get help with the ``easyrsa`` command (and its subcommands)
by using the subcommand ``./easyrsa help``.
.. note::
|
||||
For more information about requirements for MongoDB client certificates, please consult the `official MongoDB
|
||||
documentation <https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/>`_.
|
||||
|
||||
|
||||
Step 3: Get the Client Certificate Signed
|
||||
-----------------------------------------
|
||||
|
||||
The CSR file created in the previous step
|
||||
should be located in ``pki/reqs/bdb-instance-0.req``
|
||||
(or whatever Common Name you used in the ``gen-req`` command above).
|
||||
You need to send it to the organization managing the Planetmint network
|
||||
so that they can use their CA
|
||||
to sign the request.
|
||||
(The managing organization should already have a self-signed CA.)
|
||||
|
||||
If you are the admin of the managing organization's self-signed CA,
|
||||
then you can import the CSR and use Easy-RSA to sign it.
|
||||
Go to your ``bdb-node-ca/easy-rsa-3.0.1/easyrsa3/``
|
||||
directory and do something like:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
./easyrsa import-req /path/to/bdb-instance-0.req bdb-instance-0
|
||||
|
||||
./easyrsa sign-req client bdb-instance-0
|
||||
|
||||
Once you have signed it, you can send the signed certificate
|
||||
and the CA certificate back to the requestor.
|
||||
The files are ``pki/issued/bdb-instance-0.crt`` and ``pki/ca.crt``.
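
Before sending the files back, it's worth a quick sanity check that the issued
certificate really chains to your CA. A minimal sketch using plain ``openssl``
(not an Easy-RSA subcommand):

.. code:: bash

    # Confirm the signed client certificate verifies against the CA certificate.
    openssl verify -CAfile pki/ca.crt pki/issued/bdb-instance-0.crt

    # Optionally inspect the subject (Common Name, OU) and validity dates.
    openssl x509 -in pki/issued/bdb-instance-0.crt -noout -subject -dates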

Step 4: Generate the Consolidated Client PEM File
-------------------------------------------------

.. note::
    This step can be skipped for the Planetmint client certificate, as Planetmint
    uses the PyMongo driver, which accepts separate certificate and key files.

MongoDB, MongoDB Backup Agent and MongoDB Monitoring Agent require a single,
consolidated file containing both the public and private keys.

.. code:: bash

    cat /path/to/bdb-instance-0.crt /path/to/bdb-instance-0.key > bdb-instance-0.pem

    OR

    cat /path/to/mdb-mon-instance-0.crt /path/to/mdb-mon-instance-0.key > mdb-mon-instance-0.pem

    OR

    cat /path/to/mdb-bak-instance-0.crt /path/to/mdb-bak-instance-0.key > mdb-bak-instance-0.pem
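
To confirm that a consolidated file really contains both parts, you can read
each of them back with ``openssl``; a quick check, assuming an RSA key
(Easy-RSA's default) and the first filename from the example above:

.. code:: bash

    # The PEM file should contain a readable certificate...
    openssl x509 -in bdb-instance-0.pem -noout -subject

    # ...and a valid RSA private key.
    openssl rsa -in bdb-instance-0.pem -check -noout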
@ -0,0 +1,68 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _configure-mongodb-cloud-manager-for-monitoring:

Configure MongoDB Cloud Manager for Monitoring
==============================================

This document details the steps required to configure MongoDB Cloud Manager to
enable monitoring of data in a MongoDB Replica Set.


Configure MongoDB Cloud Manager for Monitoring Step by Step
-----------------------------------------------------------

* Once the Monitoring Agent is up and running, open
  `MongoDB Cloud Manager <https://cloud.mongodb.com>`_.

* Click ``Login`` under ``MongoDB Cloud Manager`` and log in to the Cloud
  Manager.

* Select the group from the dropdown box on the page.

* Go to Settings and add a ``Preferred Hostnames`` entry as
  a regex based on the ``mdb-instance-name`` of the nodes in your cluster.
  It may take up to 5 minutes for this setting to take effect.
  You may refresh the browser window and verify whether the changes have
  been saved or not.

  For example, for the nodes in a cluster that are named ``mdb-instance-0``,
  ``mdb-instance-1`` and so on, a regex like ``^mdb-instance-[0-9]{1,2}$``
  is recommended.

* Next, click the ``Deployment`` tab, and then the ``Manage Existing``
  button.

* On the ``Import your deployment for monitoring`` page, enter the hostname
  to be the same as the one set for ``mdb-instance-name`` in the global
  ConfigMap for a node.
  For example, if the ``mdb-instance-name`` is set to ``mdb-instance-0``,
  enter ``mdb-instance-0`` as the value in this field.

* Enter the port number as ``27017``, with no authentication.

* If you have authentication enabled, select the option to enable
  authentication and specify the authentication mechanism as per your
  deployment. The default Planetmint Kubernetes deployment template currently
  supports ``X.509 Client Certificate`` as the authentication mechanism.

* If you have TLS enabled, select the option to enable TLS/SSL for MongoDB
  connections, and click ``Continue``. This should already be selected for
  you in case you selected ``X.509 Client Certificate`` above.

* Wait a minute or two for the deployment to be found and then
  click the ``Continue`` button again.

* Verify that you see your process on the Cloud Manager UI.
  It should look something like this:

  .. image:: ../../_static/mongodb_cloud_manager_1.png

* Click ``Continue``.

* Verify on the UI that data is being sent by the monitoring agent to the
  Cloud Manager. It may take up to 5 minutes for data to appear on the UI.
98
docs/new/network-setup/k8s-deployment-template/easy-rsa.rst
Normal file
@ -0,0 +1,98 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _how-to-install-and-configure-easyrsa:

How to Install & Configure Easy-RSA
===================================

We use
`Easy-RSA version 3
<https://community.openvpn.net/openvpn/wiki/EasyRSA3-OpenVPN-Howto>`_, a
wrapper over complex ``openssl`` commands.
`Easy-RSA is available on GitHub <https://github.com/OpenVPN/easy-rsa/releases>`_ and licensed under GPLv2.


Step 1: Install Easy-RSA Dependencies
-------------------------------------

The only dependency for Easy-RSA v3 is ``openssl``,
which is available from the ``openssl`` package on Ubuntu and other
Debian-based operating systems, i.e. you can install it using:

.. code:: bash

    sudo apt-get update

    sudo apt-get install openssl


Step 2: Install Easy-RSA
------------------------

Make sure you're in the directory where you want Easy-RSA to live,
then download it and extract it within that directory:

.. code:: bash

    wget https://github.com/OpenVPN/easy-rsa/archive/3.0.1.tar.gz

    tar xzvf 3.0.1.tar.gz

    rm 3.0.1.tar.gz

There should now be a directory named ``easy-rsa-3.0.1``
in your current directory.


Step 3: Customize the Easy-RSA Configuration
--------------------------------------------

We now create a config file named ``vars``
by copying the existing ``vars.example`` file
and then editing it.
You should change the
country, province, city, org and email
to the correct values for your organisation.
(Note: The country, province, city, org and email are part of
the `Distinguished Name <https://en.wikipedia.org/wiki/X.509#Certificates>`_ (DN).)
The comments in the file explain what each of the variables means.

.. code:: bash

    cd easy-rsa-3.0.1/easyrsa3

    cp vars.example vars

    echo 'set_var EASYRSA_DN "org"' >> vars
    echo 'set_var EASYRSA_KEY_SIZE 4096' >> vars

    echo 'set_var EASYRSA_REQ_COUNTRY "DE"' >> vars
    echo 'set_var EASYRSA_REQ_PROVINCE "Berlin"' >> vars
    echo 'set_var EASYRSA_REQ_CITY "Berlin"' >> vars
    echo 'set_var EASYRSA_REQ_ORG "Planetmint GmbH"' >> vars
    echo 'set_var EASYRSA_REQ_OU "IT"' >> vars
    echo 'set_var EASYRSA_REQ_EMAIL "contact@ipdb.global"' >> vars

Note: Later, when building a CA or generating a certificate signing request, you will be prompted to enter a value for the OU (or to accept the default). You should change the default OU from ``IT`` to one of the following, as appropriate:
``ROOT-CA``,
``MongoDB-Instance``, ``Planetmint-Instance``, ``MongoDB-Mon-Instance`` or
``MongoDB-Backup-Instance``.
To understand why, see `the MongoDB Manual <https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/>`_.
There are reminders to do this in the relevant docs.


Step 4: Maybe Edit x509-types/server
------------------------------------

.. warning::

    Only do this step if you are setting up a self-signed CA.

Edit the file ``x509-types/server`` and change
``extendedKeyUsage = serverAuth`` to
``extendedKeyUsage = serverAuth,clientAuth``.
See `the MongoDB documentation about x.509 authentication <https://docs.mongodb.com/manual/core/security-x.509/>`_ to understand why.
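
If you prefer not to edit the file by hand, a one-line ``sed`` sketch that
makes the same change (run from the ``easyrsa3`` directory; back up the file
first if unsure):

.. code:: bash

    # Allow server certificates to be used for client authentication too.
    sed -i 's/extendedKeyUsage = serverAuth/extendedKeyUsage = serverAuth,clientAuth/' x509-types/server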
48
docs/new/network-setup/k8s-deployment-template/index.rst
Normal file
@ -0,0 +1,48 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _kubernetes-deployment-template:

Kubernetes Deployment Template
==============================

.. note::

    A highly-available Kubernetes cluster requires at least five virtual machines
    (three for the master and two for your app's containers).
    Therefore we don't recommend using Kubernetes to run a Planetmint node
    if that's the only thing the Kubernetes cluster will be running.
    Instead, see our `Node Setup <../../node_setup>`_.
    If your organization already *has* a big Kubernetes cluster running many containers,
    and your organization has people who know Kubernetes,
    then this Kubernetes deployment template might be helpful.

This section outlines a way to deploy a Planetmint node (or Planetmint network)
on Microsoft Azure using Kubernetes.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.


.. toctree::
    :maxdepth: 1

    workflow
    ca-installation
    server-tls-certificate
    client-tls-certificate
    revoke-tls-certificate
    template-kubernetes-azure
    node-on-kubernetes
    node-config-map-and-secrets
    log-analytics
    cloud-manager
    easy-rsa
    upgrade-on-kubernetes
    planetmint-network-on-kubernetes
    tectonic-azure
    troubleshoot
    architecture
343
docs/new/network-setup/k8s-deployment-template/log-analytics.rst
Normal file
@ -0,0 +1,343 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

Log Analytics on Azure
======================

This page describes how we use Microsoft Operations Management Suite (OMS)
to collect all logs from a Kubernetes cluster,
to search those logs,
and to set up email alerts based on log messages.
The :ref:`oms-k8s-references` section (below) contains links
to more detailed documentation.

There are two steps:

1. Setup: Create a log analytics OMS workspace
   and a Containers solution under that workspace.
2. Deploy OMS agents to your Kubernetes cluster.


Step 1: Setup
-------------

Step 1 can be done the web browser way or the command-line way.


The Web Browser Way
~~~~~~~~~~~~~~~~~~~

To create a new log analytics OMS workspace:

1. Go to the Azure Portal in your web browser.
2. Click on **More services >** in the lower left corner of the Azure Portal.
3. Type "log analytics" or similar.
4. Select **Log Analytics** from the list of options.
5. Click on **+ Add** to add a new log analytics OMS workspace.
6. Give answers to the questions. You can call the OMS workspace anything,
   but use the same resource group and location as your Kubernetes cluster.
   The free option will suffice, but of course you can also use a paid one.

To add a "Containers solution" to that new workspace:

1. In Azure Portal, in the Log Analytics section, click the name of the new workspace.
2. Click **OMS Workspace**.
3. Click **OMS Portal**. It should launch the OMS Portal in a new tab.
4. Click the **Solutions Gallery** tile.
5. Click the **Containers** tile.
6. Click **Add**.


The Command-Line Way
~~~~~~~~~~~~~~~~~~~~

We'll assume your Kubernetes cluster has a resource
group named:

* ``resource_group``

and the workspace we'll create will be named:

* ``work_space``

If you feel creative, you may replace these names with more interesting ones.

.. code-block:: bash

    $ az group deployment create --debug \
        --resource-group resource_group \
        --name "Microsoft.LogAnalyticsOMS" \
        --template-file log_analytics_oms.json \
        --parameters @log_analytics_oms.parameters.json

An example of a simple template file (``--template-file``):

.. code-block:: json

    {
        "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "sku": {
                "type": "String"
            },
            "workspaceName": {
                "type": "String"
            },
            "solutionType": {
                "type": "String"
            }
        },
        "resources": [
            {
                "apiVersion": "2015-03-20",
                "type": "Microsoft.OperationalInsights/workspaces",
                "name": "[parameters('workspaceName')]",
                "location": "[resourceGroup().location]",
                "properties": {
                    "sku": {
                        "name": "[parameters('sku')]"
                    }
                },
                "resources": [
                    {
                        "apiVersion": "2015-11-01-preview",
                        "location": "[resourceGroup().location]",
                        "name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
                        "type": "Microsoft.OperationsManagement/solutions",
                        "id": "[Concat(resourceGroup().id, '/providers/Microsoft.OperationsManagement/solutions/', parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
                        "dependsOn": [
                            "[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
                        ],
                        "properties": {
                            "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
                        },
                        "plan": {
                            "publisher": "Microsoft",
                            "product": "[Concat('OMSGallery/', parameters('solutionType'))]",
                            "name": "[Concat(parameters('solutionType'), '(', parameters('workspaceName'), ')')]",
                            "promotionCode": ""
                        }
                    }
                ]
            }
        ]
    }

An example of the associated parameter file (``--parameters``):

.. code-block:: json

    {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "sku": {
                "value": "Free"
            },
            "workspaceName": {
                "value": "work_space"
            },
            "solutionType": {
                "value": "Containers"
            }
        }
    }


Step 2: Deploy the OMS Agents
-----------------------------

To deploy an OMS agent, two important pieces of information are needed:

1. workspace id
2. workspace key

You can obtain the workspace id using:

.. code-block:: bash

    $ az resource show \
        --resource-group resource_group \
        --resource-type Microsoft.OperationalInsights/workspaces \
        --name work_space \
        | grep customerId
    "customerId": "12345678-1234-1234-1234-123456789012",

Until we figure out a way to obtain the *workspace key* via the command line,
you can get it via the OMS Portal.
To get to the OMS Portal, go to the Azure Portal and click on:

Resource Groups > (Your Kubernetes cluster's resource group) > Log analytics (OMS) > (Name of the only item listed) > OMS Workspace > OMS Portal

(Let us know if you find a faster way.)
Then see `Microsoft's instructions to obtain your workspace ID and key
<https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-oms#obtain-your-workspace-id-and-key>`_ (via the OMS Portal).

Once you have the workspace id and key, you can include them in the following
YAML file (:download:`oms-daemonset.yaml
<../../../../../../k8s/logging-and-monitoring/oms-daemonset.yaml>`):

.. code-block:: yaml

    # oms-daemonset.yaml
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: omsagent
    spec:
      template:
        metadata:
          labels:
            app: omsagent
        spec:
          containers:
          - env:
            - name: WSID
              value: <workspace_id>
            - name: KEY
              value: <workspace_key>
            image: microsoft/oms
            name: omsagent
            ports:
            - containerPort: 25225
              protocol: TCP
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-sock
          volumes:
          - name: docker-sock
            hostPath:
              path: /var/run/docker.sock
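
Rather than editing the file by hand, you can substitute the two placeholders
with ``sed``; a minimal sketch, where the id and key values shown are
hypothetical:

.. code-block:: bash

    # Replace the placeholders in the DaemonSet manifest with real values.
    $ sed -i 's|<workspace_id>|12345678-1234-1234-1234-123456789012|' oms-daemonset.yaml
    $ sed -i 's|<workspace_key>|your-actual-workspace-key|' oms-daemonset.yaml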

To deploy the OMS agents (one per Kubernetes node, i.e. one per computer),
simply run the following command:

.. code-block:: bash

    $ kubectl create -f oms-daemonset.yaml


Search the OMS Logs
-------------------

OMS should now be getting, storing and indexing all the logs
from all the containers in your Kubernetes cluster.
You can search the OMS logs from the Azure Portal
or the OMS Portal, but at the time of writing,
there was more functionality in the OMS Portal
(e.g. the ability to create an Alert based on a search).

There are instructions to get to the OMS Portal above.
Once you're in the OMS Portal, click on **Log Search**
and enter a query.
Here are some example queries:

All logging messages containing the strings "critical" or "error" (not case-sensitive):

``Type=ContainerLog (critical OR error)``

.. note::

    You can filter the results even more by clicking on things in the left sidebar.
    For OMS Log Search syntax help, see the
    `Log Analytics search reference <https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-search-reference>`_.

All logging messages containing the string "error" but not "404":

``Type=ContainerLog error NOT(404)``

All logging messages containing the string "critical" but not "CriticalAddonsOnly":

``Type=ContainerLog critical NOT(CriticalAddonsOnly)``

All logging messages from containers running the Docker image planetmint/nginx_3scale:1.3, containing the string "GET" but not the strings "Go-http-client" or "runscope" (where those exclusions filter out tests by Kubernetes and Runscope):

``Type=ContainerLog Image="planetmint/nginx_3scale:1.3" GET NOT("Go-http-client") NOT(runscope)``

.. note::

    We wrote a small Python 3 script to analyze the logs found by the above NGINX search.
    It's in ``k8s/logging-and-monitoring/analyze.py``. The docstring at the top
    of the script explains how to use it.


Create an Email Alert
---------------------

Once you're satisfied with an OMS Log Search query string,
click the **🔔 Alert** icon in the top menu,
fill in the form,
and click **Save** when you're done.


Some Useful Management Tasks
----------------------------

List workspaces:

.. code-block:: bash

    $ az resource list \
        --resource-group resource_group \
        --resource-type Microsoft.OperationalInsights/workspaces

List solutions:

.. code-block:: bash

    $ az resource list \
        --resource-group resource_group \
        --resource-type Microsoft.OperationsManagement/solutions

Delete the containers solution:

.. code-block:: bash

    $ az group deployment delete --debug \
        --resource-group resource_group \
        --name Microsoft.ContainersOMS

.. code-block:: bash

    $ az resource delete \
        --resource-group resource_group \
        --resource-type Microsoft.OperationsManagement/solutions \
        --name "Containers(work_space)"

Delete the workspace:

.. code-block:: bash

    $ az group deployment delete --debug \
        --resource-group resource_group \
        --name Microsoft.LogAnalyticsOMS

.. code-block:: bash

    $ az resource delete \
        --resource-group resource_group \
        --resource-type Microsoft.OperationalInsights/workspaces \
        --name work_space


.. _oms-k8s-references:

References
----------

* `Monitor an Azure Container Service cluster with Microsoft Operations Management Suite (OMS) <https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-oms>`_
* `Manage Log Analytics using Azure Resource Manager templates <https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-template-workspace-configuration>`_
* `azure commands for deployments <https://docs.microsoft.com/en-us/cli/azure/group/deployment>`_
  (``az group deployment``)
* `Understand the structure and syntax of Azure Resource Manager templates <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates>`_
* `Kubernetes DaemonSet`_


.. _Azure Resource Manager templates: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
.. _Kubernetes DaemonSet: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
@ -0,0 +1,124 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _how-to-configure-a-planetmint-node:

How to Configure a Planetmint Node
==================================

.. note::

    A highly-available Kubernetes cluster requires at least five virtual machines
    (three for the master and two for your app's containers).
    Therefore we don't recommend using Kubernetes to run a Planetmint node
    if that's the only thing the Kubernetes cluster will be running.
    Instead, see our `Node Setup <../../node_setup>`_.
    If your organization already *has* a big Kubernetes cluster running many containers,
    and your organization has people who know Kubernetes,
    then this Kubernetes deployment template might be helpful.

This page outlines the steps to set a bunch of configuration settings
in your Planetmint node.
They are pushed to the Kubernetes cluster in two files,
named ``config-map.yaml`` (a set of ConfigMaps)
and ``secret.yaml`` (a set of Secrets).
They are stored in the Kubernetes cluster's key-value store (etcd).

Make sure you did the first four operations listed in the section titled
:ref:`things-each-node-operator-must-do`.


Edit vars
---------

The file is located at ``k8s/scripts/vars``. Edit it to set
the configuration parameters.
That file already contains many comments to help you
understand each data value, but we make some additional
remarks on some of the values below.


vars.NODE_FQDN
~~~~~~~~~~~~~~

FQDN for your Planetmint node. This is the domain name
used to query and access your Planetmint node. More information can be
found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.


vars.SECRET_TOKEN
~~~~~~~~~~~~~~~~~

This parameter is specific to your Planetmint node and is used for
authentication and authorization of requests to your Planetmint node.
More information can be found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.


vars.HTTPS_CERT_KEY_FILE_NAME
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Absolute path of the HTTPS certificate key of your domain.
More information can be found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.


vars.HTTPS_CERT_CHAIN_FILE_NAME
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Absolute path of the HTTPS certificate chain of your domain.
More information can be found in our :ref:`Kubernetes template overview guide <kubernetes-template-overview>`.


vars.MDB_ADMIN_USER and vars.MDB_ADMIN_PASSWORD
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MongoDB admin user credentials, username and password.
This user is created on the *admin* database with the authorization to create other users.


vars.BDB_PERSISTENT_PEERS, BDB_VALIDATORS, BDB_VALIDATORS_POWERS, BDB_GENESIS_TIME and BDB_CHAIN_ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These parameters are shared across the Planetmint network. More information about the generation
of these parameters can be found at :ref:`generate-the-blockchain-id-and-genesis-time`.


vars.NODE_DNS_SERVER
~~~~~~~~~~~~~~~~~~~~

IP of the Kubernetes DNS service (kube-dns); it can be retrieved using the
CLI (kubectl) or the Kubernetes dashboard. This parameter is used by the NGINX gateway instance
to resolve the hostnames of all the services running in the Kubernetes cluster.

.. code::

    # retrieval via the command line
    $ kubectl get services --namespace=kube-system -l k8s-app=kube-dns
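
If you only want the IP address itself, a one-liner sketch, assuming the
service carries the usual ``kube-dns`` name and label:

.. code::

    # Print just the ClusterIP of the kube-dns service.
    $ kubectl get services --namespace=kube-system -l k8s-app=kube-dns \
        -o jsonpath='{.items[0].spec.clusterIP}'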

.. _generate-config:

Generate configuration
~~~~~~~~~~~~~~~~~~~~~~

After populating the ``k8s/scripts/vars`` file, we need to generate
all the configuration required for the Planetmint node. For that purpose
we need to execute the ``k8s/scripts/generate_configs.sh`` script.

.. code::

    $ bash generate_configs.sh

.. note::
    During execution the script will prompt the user for some inputs.

After successful execution, this routine will generate ``config-map.yaml`` and
``secret.yaml`` under ``k8s/scripts``.
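
A quick way to confirm that both files were actually written (run from the
``k8s/scripts`` directory):

.. code::

    $ ls -l config-map.yaml secret.yaml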

.. _deploy-config-map-and-secret:

Deploy Your config-map.yaml and secret.yaml
-------------------------------------------

You can deploy your edited ``config-map.yaml`` and ``secret.yaml``
files to your Kubernetes cluster using the commands:

.. code:: bash

    $ kubectl apply -f config-map.yaml

    $ kubectl apply -f secret.yaml
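
Afterwards, you can verify that the objects landed in the cluster's key-value
store:

.. code:: bash

    $ kubectl get configmaps

    $ kubectl get secrets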
@ -0,0 +1,769 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _kubernetes-template-deploy-a-single-planetmint-node:

Kubernetes Template: Deploy a Single Planetmint Node
====================================================

.. note::

    A highly-available Kubernetes cluster requires at least five virtual machines
    (three for the master and two for your app's containers).
    Therefore we don't recommend using Kubernetes to run a Planetmint node
    if that's the only thing the Kubernetes cluster will be running.
    Instead, see our `Node Setup <../../node_setup>`_.
    If your organization already *has* a big Kubernetes cluster running many containers,
    and your organization has people who know Kubernetes,
    then this Kubernetes deployment template might be helpful.

This page describes how to deploy a Planetmint node
using `Kubernetes <https://kubernetes.io/>`_.
It assumes you already have a running Kubernetes cluster.

Below, we refer to many files by their directory and filename,
such as ``configuration/config-map.yaml``. Those files are files in the
`planetmint/planetmint repository on GitHub <https://github.com/planetmint/planetmint/>`_
in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of Planetmint that your Planetmint
cluster is using.


Step 1: Install and Configure kubectl
-------------------------------------

kubectl is the Kubernetes CLI.
If you don't already have it installed,
then see the `Kubernetes docs to install it
<https://kubernetes.io/docs/user-guide/prereqs/>`_.

The default location of the kubectl configuration file is ``~/.kube/config``.
If you don't have that file, then you need to get it.

**Azure.** If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per :doc:`our template
<../k8s-deployment-template/template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:

.. code:: bash

    $ az acs kubernetes get-credentials \
        --resource-group <name of resource group containing the cluster> \
        --name <ACS cluster name>

If it asks for a password (to unlock the SSH key)
and you enter the correct password,
but you get an error message,
then try adding ``--ssh-key-file ~/.ssh/<name>``
to the above command (i.e. the path to the private key).

.. note::

    **About kubectl contexts.** You might manage several
    Kubernetes clusters. To make it easy to switch from one to another,
    kubectl has a notion of "contexts," e.g. the context for cluster 1 or
    the context for cluster 2. To find out the current context, do:

    .. code:: bash

        $ kubectl config view

    and then look for the ``current-context`` in the output.
    The output also lists all clusters, contexts and users.
    (You might have only one of each.)
    You can switch to a different context using:

    .. code:: bash

        $ kubectl config use-context <new-context-name>

    You can also switch to a different context for just one command
    by inserting ``--context <context-name>`` into any kubectl command.
    For example:

    .. code:: bash

        $ kubectl --context k8s-bdb-test-cluster-0 get pods

    will get a list of the pods in the Kubernetes cluster associated
    with the context named ``k8s-bdb-test-cluster-0``.

Step 2: Connect to Your Kubernetes Cluster's Web UI (Optional)
--------------------------------------------------------------

You can connect to your cluster's
`Kubernetes Dashboard <https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/>`_
(also called the Web UI) using:

.. code:: bash

    $ kubectl proxy -p 8001

or

.. code:: bash

    $ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey

or, if you prefer to be explicit about the context (explained above):

.. code:: bash

    $ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001

The output should be something like ``Starting to serve on 127.0.0.1:8001``.
That means you can visit the dashboard in your web browser at
`http://127.0.0.1:8001/ui <http://127.0.0.1:8001/ui>`_.

.. note::

    **Known Issue:** If accessing the UI, i.e. opening
    `http://127.0.0.1:8001/ui <http://127.0.0.1:8001/ui>`_
    in your browser, returns a blank page and is redirected to
    `http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
    <http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy>`_,
    you can access the UI by adding a **/** at the end of the redirected URL, i.e.
    `http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/
    <http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/>`_


Step 3: Configure Your Planetmint Node
--------------------------------------

See the page titled :ref:`how-to-configure-a-planetmint-node`.


.. _start-the-nginx-service:

Step 4: Start the NGINX Service
-------------------------------

* This will give us a public IP for the cluster.

* Once you complete this step, you might need to wait up to 10 mins for the
  public IP to be assigned.

* You have the option to use vanilla NGINX without HTTPS support or an
  NGINX with HTTPS support.

* Start the Kubernetes Service:

  .. code:: bash

      $ kubectl apply -f nginx-https/nginx-https-svc.yaml

      OR

      $ kubectl apply -f nginx-http/nginx-http-svc.yaml


.. _assign-dns-name-to-nginx-public-ip:

Step 5: Assign DNS Name to the NGINX Public IP
----------------------------------------------

* This step is required only if you are planning to set up multiple
  `Planetmint nodes
  <https://docs.planetmint.io/en/latest/terminology.html>`_ or are using
  HTTPS certificates tied to a domain.

* The following command can help you find out if the NGINX service started
  above has been assigned a public IP or external IP address:

  .. code:: bash

      $ kubectl get svc -w

* Once a public IP is assigned, you can map it to
  a DNS name.
  We usually assign ``bdb-test-node-0``, ``bdb-test-node-1`` and
  so on in our documentation.
  Let's assume that we assign the unique name of ``bdb-test-node-0`` here.


**Set up DNS mapping in Azure.**
Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the NGINX instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the ``Public IP`` resource that is attached to your service (it should
have the Azure DNS prefix name along with a long random string, without the
``master-ip`` string), select ``Configuration``, add the DNS assigned above
(for example, ``bdb-test-node-0``), click ``Save``, and wait for the
changes to be applied.

To verify the DNS setting is operational, you can run ``nslookup <DNS
name added in Azure configuration>`` from your local Linux shell.
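
For example, if the DNS name is ``bdb-test-node-0`` and the cluster lives in
the ``westeurope`` region, the lookup (with a hypothetical FQDN, following
Azure's ``<dns-name>.<region>.cloudapp.azure.com`` pattern) would be:

.. code:: bash

    $ nslookup bdb-test-node-0.westeurope.cloudapp.azure.com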

This will ensure that when you scale to different geographical zones, other Tendermint
nodes in the network can reach this instance.


.. _start-the-mongodb-kubernetes-service:

Step 6: Start the MongoDB Kubernetes Service
--------------------------------------------

* Start the Kubernetes Service:

  .. code:: bash

      $ kubectl apply -f mongodb/mongo-svc.yaml


.. _start-the-planetmint-kubernetes-service:

Step 7: Start the Planetmint Kubernetes Service
-----------------------------------------------

* Start the Kubernetes Service:

  .. code:: bash

      $ kubectl apply -f planetmint/planetmint-svc.yaml


.. _start-the-openresty-kubernetes-service:

Step 8 (Optional): Start the OpenResty Kubernetes Service
---------------------------------------------------------

* Start the Kubernetes Service:

  .. code:: bash

      $ kubectl apply -f nginx-openresty/nginx-openresty-svc.yaml


.. _start-the-nginx-deployment:

Step 9: Start the NGINX Kubernetes Deployment
---------------------------------------------

* NGINX is used as a proxy to the Planetmint, Tendermint and MongoDB instances in
  the node. It proxies HTTP/HTTPS requests on the ``node-frontend-port``
  to the corresponding OpenResty (if 3scale is enabled) or Planetmint backend, and TCP connections
  on ``mongodb-frontend-port``, ``tm-p2p-port`` and ``tm-pub-key-access``
  to MongoDB and Tendermint respectively.

* This configuration is located in the file
  ``nginx-https/nginx-https-dep.yaml`` or ``nginx-http/nginx-http-dep.yaml``.

* Start the Kubernetes Deployment:

  .. code:: bash

      $ kubectl apply -f nginx-https/nginx-https-dep.yaml

      OR

      $ kubectl apply -f nginx-http/nginx-http-dep.yaml

.. _create-kubernetes-storage-class-mdb:

Step 10: Create Kubernetes Storage Classes for MongoDB
------------------------------------------------------

MongoDB needs somewhere to store its data persistently,
outside the container where MongoDB is running.
Our MongoDB Docker container
(based on the official MongoDB Docker container)
exports two volume mounts with correct
permissions from inside the container:

* The directory where the MongoDB instance stores its data: ``/data/db``.
  There's more explanation in the MongoDB docs about `storage.dbpath <https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPath>`_.

* The directory where the MongoDB instance stores the metadata for a sharded
  cluster: ``/data/configdb/``.
  There's more explanation in the MongoDB docs about `sharding.configDB <https://docs.mongodb.com/manual/reference/configuration-options/#sharding.configDB>`_.

Explaining how Kubernetes handles persistent volumes,
and the associated terminology,
is beyond the scope of this documentation;
see `the Kubernetes docs about persistent volumes
<https://kubernetes.io/docs/user-guide/persistent-volumes>`_.

The first thing to do is create the Kubernetes storage classes.

**Set up Storage Classes in Azure.**
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <../k8s-deployment-template/template-kubernetes-azure>`),
then the ``az acs create`` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
Both should have the same "storage account SKU": ``Standard_LRS``.
Standard storage is lower-cost and lower-performance.
It uses hard disk drives (HDD).
LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).

We recommend using Premium storage with our Kubernetes deployment template.
Create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.

.. note::
    Please refer to `Azure documentation <https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage>`_
    for the list of VMs that are supported by Premium Storage.

The Kubernetes template for configuration of the MongoDB Storage Class is located in the
file ``mongodb/mongo-sc.yaml``.

You may have to update the ``parameters.location`` field in the file to
specify the location you are using in Azure.

If you want to use a custom storage account with the Storage Class, you
can also update ``parameters.storageAccount`` and provide the Azure storage
account name.

Create the required storage classes using:

.. code:: bash

    $ kubectl apply -f mongodb/mongo-sc.yaml

You can check if it worked using ``kubectl get storageclasses``.


.. _create-kubernetes-persistent-volume-claim-mdb:

Step 11: Create Kubernetes Persistent Volume Claims for MongoDB
---------------------------------------------------------------

Next, you will create two PersistentVolumeClaim objects ``mongo-db-claim`` and
``mongo-configdb-claim``.

This configuration is located in the file ``mongodb/mongo-pvc.yaml``.

Note how there's no explicit mention of Azure, AWS or whatever.
``ReadWriteOnce`` (RWO) means the volume can be mounted as
read-write by a single Kubernetes node.
(``ReadWriteOnce`` is the *only* access mode supported
by AzureDisk.)
``storage: 20Gi`` means the volume has a size of 20
`gibibytes <https://en.wikipedia.org/wiki/Gibibyte>`_.

You may want to update the ``spec.resources.requests.storage`` field in both
the files to specify a different disk size.

Create the required Persistent Volume Claims using:

.. code:: bash

    $ kubectl apply -f mongodb/mongo-pvc.yaml

You can check its status using: ``kubectl get pvc -w``

Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.

.. note::
    The default Reclaim Policy for dynamically created persistent volumes is ``Delete``,
    which means the PV and its associated Azure storage resource will be automatically
    deleted on deletion of the PVC or PV. In order to prevent this from happening, do
    the following steps to change the default reclaim policy of dynamically created PVs
    from ``Delete`` to ``Retain``:

    * Run the following command to list existing PVs:

      .. code:: bash

          $ kubectl get pv

    * Run the following command to update a PV's reclaim policy to ``Retain``:

      .. code:: bash

          $ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

    For notes on recreating a persistent volume from a released Azure disk resource, consult
    :doc:`the page about cluster troubleshooting <../k8s-deployment-template/troubleshoot>`.

.. _start-kubernetes-stateful-set-mongodb:

Step 12: Start a Kubernetes StatefulSet for MongoDB
---------------------------------------------------

* Create the MongoDB StatefulSet using:

  .. code:: bash

      $ kubectl apply -f mongodb/mongo-ss.yaml

* It might take up to 10 minutes for the disks, specified in the Persistent
  Volume Claims above, to be created and attached to the pod.
  The UI might show that the pod has errored with the message
  "timeout expired waiting for volumes to attach/mount". Use the CLI below
  to check the status of the pod in this case, instead of the UI.
  This happens due to a bug in Azure ACS.

  .. code:: bash

      $ kubectl get pods -w


.. _configure-users-and-access-control-mongodb:

Step 13: Configure Users and Access Control for MongoDB
-------------------------------------------------------

* In this step, you will create a user on MongoDB with authorization
  to create more users and assign roles to it. We will also create
  MongoDB client users for Planetmint and the MongoDB Monitoring Agent (optional).

  .. code:: bash

      $ bash mongodb/configure_mdb.sh


.. _create-kubernetes-storage-class:

Step 14: Create Kubernetes Storage Classes for Planetmint
---------------------------------------------------------

Planetmint needs somewhere to store Tendermint data persistently; Tendermint uses
LevelDB as the persistent storage layer.

The Kubernetes template for configuration of the Storage Class is located in the
file ``planetmint/planetmint-sc.yaml``.

Details about how to create an Azure Storage account and how the Kubernetes Storage Class works
are already covered in this document: :ref:`create-kubernetes-storage-class-mdb`.

Create the required storage classes using:

.. code:: bash

    $ kubectl apply -f planetmint/planetmint-sc.yaml

You can check if it worked using ``kubectl get storageclasses``.

.. _create-kubernetes-persistent-volume-claim:

Step 15: Create Kubernetes Persistent Volume Claims for Planetmint
------------------------------------------------------------------

Next, you will create two PersistentVolumeClaim objects ``tendermint-db-claim`` and
``tendermint-config-db-claim``.

This configuration is located in the file ``planetmint/planetmint-pvc.yaml``.

Details about Kubernetes Persistent Volumes, Persistent Volume Claims
and how they work with Azure are already covered in this
document: :ref:`create-kubernetes-persistent-volume-claim-mdb`.

Create the required Persistent Volume Claims using:

.. code:: bash

    $ kubectl apply -f planetmint/planetmint-pvc.yaml

You can check its status using:

.. code::

    kubectl get pvc -w


.. _start-kubernetes-stateful-set-bdb:

Step 16: Start a Kubernetes StatefulSet for Planetmint
------------------------------------------------------

* This configuration is located in the file ``planetmint/planetmint-ss.yaml``.

* Set the ``spec.serviceName`` to the value set in ``bdb-instance-name`` in
  the ConfigMap.
  For example, if the value set in the ``bdb-instance-name``
  is ``bdb-instance-0``, set the field to ``bdb-instance-0``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
  ``spec.template.metadata.labels.app`` to the value set in
  ``bdb-instance-name`` in the ConfigMap, followed by
  ``-ss``.
  For example, if the value set in the
  ``bdb-instance-name`` is ``bdb-instance-0``, set the fields to the value
  ``bdb-instance-0-ss``.

* As we gain more experience running Tendermint in testing and production, we
  will tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.

* Create the Planetmint StatefulSet using:

  .. code:: bash

      $ kubectl apply -f planetmint/planetmint-ss.yaml

  .. code:: bash

      $ kubectl get pods -w

.. _start-kubernetes-deployment-for-mdb-mon-agent:

Step 17 (Optional): Start a Kubernetes Deployment for MongoDB Monitoring Agent
-------------------------------------------------------------------------------

* This configuration is located in the file
  ``mongodb-monitoring-agent/mongo-mon-dep.yaml``.

* Set ``metadata.name``, ``spec.template.metadata.name`` and
  ``spec.template.metadata.labels.app`` to the value set in
  ``mdb-mon-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in the
  ``mdb-mon-instance-name`` is ``mdb-mon-instance-0``, set the fields to the
  value ``mdb-mon-instance-0-dep``.

* The configuration uses the following values set in the Secret:

  - ``mdb-mon-certs``
  - ``ca-auth``
  - ``cloud-manager-credentials``

* Start the Kubernetes Deployment using:

  .. code:: bash

      $ kubectl apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml


.. _start-kubernetes-deployment-openresty:

Step 18 (Optional): Start a Kubernetes Deployment for OpenResty
---------------------------------------------------------------

* This configuration is located in the file
  ``nginx-openresty/nginx-openresty-dep.yaml``.

* Set ``metadata.name`` and ``spec.template.metadata.labels.app`` to the
  value set in ``openresty-instance-name`` in the ConfigMap, followed by
  ``-dep``.
  For example, if the value set in the
  ``openresty-instance-name`` is ``openresty-instance-0``, set the fields to
  the value ``openresty-instance-0-dep``.

* Set the port to be exposed from the pod in the
  ``spec.containers[0].ports`` section. We currently expose the port at
  which OpenResty is listening for requests, ``openresty-backend-port`` in
  the above ConfigMap.

* The configuration uses the following values set in the Secret:

  - ``threescale-credentials``

* The configuration uses the following values set in the ConfigMap:

  - ``node-dns-server-ip``
  - ``openresty-backend-port``
  - ``ngx-bdb-instance-name``
  - ``planetmint-api-port``

* Create the OpenResty Deployment using:

  .. code:: bash

      $ kubectl apply -f nginx-openresty/nginx-openresty-dep.yaml

* You can check its status using the command ``kubectl get deployments -w``


Step 19 (Optional): Configure the MongoDB Cloud Manager
-------------------------------------------------------

Refer to the
:doc:`documentation <../k8s-deployment-template/cloud-manager>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.


Step 20 (Optional): Only for Multi-Site Deployments (Geographically Dispersed)
-------------------------------------------------------------------------------

We need to make sure that clusters are able
to talk to each other, i.e. specifically the communication between the
Tendermint peers. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.

Assume we have a Planetmint instance ``bdb-instance-1`` residing in the Azure data center location ``westeurope`` and we
want to connect to ``bdb-instance-2``, ``bdb-instance-3``, and ``bdb-instance-4`` located in the Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you have already explicitly set up networking for
``bdb-instance-1`` to communicate with ``bdb-instance-2/3/4`` and
vice versa, we will have to add a Kubernetes Service in each cluster to accomplish this goal and so set up a
Tendermint P2P network.
It is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``bdb-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName`` (see the sketch after this list).

* This configuration is located in the file ``planetmint/planetmint-ext-conn-svc.yaml``.

* Set ``metadata.name`` to the host name of the Planetmint instance you are trying to connect to.
  For instance, if you are configuring this service on the cluster with ``bdb-instance-1``, then the ``metadata.name`` will
  be ``bdb-instance-2``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``tm-p2p-port`` from the ConfigMap for the other cluster.

* Set ``spec.ports.port[1]`` to the ``tm-rpc-port`` from the ConfigMap for the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying to connect to.
  For more information about the FQDN please refer to: :ref:`Assign DNS name to NGINX Public
  IP <assign-dns-name-to-nginx-public-ip>`.
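
A minimal sketch of such an ``ExternalName`` Service, written as a heredoc so
it can be pasted into a shell. The FQDN and the Tendermint default ports
(26656 for P2P, 26657 for RPC) are illustrative; use the values from the other
cluster's ConfigMap:

.. code:: bash

    # Hypothetical example: on the cluster hosting bdb-instance-1, make the
    # name bdb-instance-2 resolve to the other cluster's public NGINX FQDN.
    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: bdb-instance-2
    spec:
      type: ExternalName
      externalName: bdb-test-node-2.eastus.cloudapp.azure.com
      ports:
      - name: tm-p2p-port
        port: 26656
      - name: tm-rpc-port
        port: 26657
    EOF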
|
||||
|
||||
.. note::
|
||||
This operation needs to be replicated ``n-1`` times per node for a ``n`` node cluster, with the respective FQDNs
|
||||
we need to communicate with.
|
||||
|
||||
If you are not the system administrator of the cluster, you have to get in
|
||||
touch with the system administrator/s of the other ``n-1`` clusters and
|
||||
share with them your instance name (``tendermint-instance-name`` in the ConfigMap)
|
||||
and the FQDN of the NGINX instance acting as Gateway(set in: :ref:`Assign DNS name to NGINX
|
||||
Public IP <assign-dns-name-to-nginx-public-ip>`).
|
||||
|
||||
|
||||
.. _verify-and-test-bdb:
|
||||
|
||||
Step 21: Verify the Planetmint Node Setup
|
||||
-----------------------------------------
|
||||
|
||||
Step 21.1: Testing Internally
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To test the setup of your Planetmint node, you could use a Docker container
|
||||
that provides utilities like ``nslookup``, ``curl`` and ``dig``.
|
||||
For example, you could use a container based on our
|
||||
`planetmint/toolbox <https://hub.docker.com/r/planetmint/toolbox/>`_ image.
|
||||
(The corresponding
|
||||
`Dockerfile <https://github.com/planetmint/planetmint/blob/master/k8s/toolbox/Dockerfile>`_
|
||||
is in the ``planetmint/planetmint`` repository on GitHub.)
|
||||
You can use it as below to get started immediately:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ kubectl \
|
||||
run -it toolbox \
|
||||
--image planetmint/toolbox \
|
||||
--image-pull-policy=Always \
|
||||
--restart=Never --rm
|
||||
|
||||
It will drop you to the shell prompt.
|
||||
|
||||
To test the MongoDB instance:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
$ nslookup mdb-instance-0
|
||||
|
||||
$ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
|
||||
|
||||
$ curl -X GET http://mdb-instance-0:27017
|
||||
|
||||
The ``nslookup`` command should output the configured IP address of the service
|
||||
(in the cluster).
|
||||
The ``dig`` command should return the configured port numbers.
|
||||
The ``curl`` command tests the availability of the service.
|
||||
|
||||
To test the Planetmint instance:

.. code:: bash

   $ nslookup bdb-instance-0

   $ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV

   $ curl -X GET http://bdb-instance-0:9984

   $ curl -X GET http://bdb-instance-0:9986/pub_key.json

   $ curl -X GET http://bdb-instance-0:26657/abci_info

   $ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions


To test the OpenResty instance:

.. code:: bash

   $ nslookup openresty-instance-0

   $ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV

To verify that the OpenResty instance forwards requests properly, send a ``POST``
transaction to OpenResty at port ``80`` and check the response from the backend
Planetmint instance.
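For example, a minimal sketch of such a check, assuming you have already prepared and signed a transaction and saved it as ``tx.json`` (the filename and the ``mode=commit`` query parameter are illustrative, not prescribed by this guide):

.. code:: bash

   # POST a prepared, signed transaction to OpenResty on port 80
   # and inspect the response returned by the backend Planetmint instance.
   $ curl -X POST "http://openresty-instance-0:80/api/v1/transactions?mode=commit" \
       -H "Content-Type: application/json" \
       -d @tx.json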
To test the vanilla NGINX instance:

.. code:: bash

   $ nslookup ngx-http-instance-0

   $ dig +noall +answer _public-node-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV

   $ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions

   $ curl -X GET http://ngx-http-instance-0:27017

The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``


To test the NGINX instance with HTTPS and 3scale integration:

.. code:: bash

   $ nslookup ngx-instance-0

   $ dig +noall +answer _public-secure-node-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

   $ dig +noall +answer _public-insecure-node-port._tcp.ngx-instance-0.default.svc.cluster.local SRV

   $ wsc -er wss://<node-fqdn>/api/v1/streams/valid_transactions

   $ curl -X GET http://<node-fqdn>:27017

The above curl command should result in the response
``It looks like you are trying to access MongoDB over HTTP on the native driver port.``


Step 21.2: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Check the MongoDB monitoring agent on the MongoDB Cloud Manager
portal to verify that it is working fine.

If you are using NGINX with HTTP support, accessing the URL
``http://<DNS/IP of your exposed Planetmint service endpoint>:node-frontend-port``
in your browser should result in a JSON response that shows the Planetmint
server version, among other things.
If you are using NGINX with HTTPS support, use ``https`` instead of
``http`` above.
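For example, from your local machine (the DNS name and port below are placeholders you must substitute):

.. code:: bash

   # Fetch the Planetmint root endpoint through the exposed NGINX frontend
   $ curl -X GET http://<DNS/IP of your exposed Planetmint service endpoint>:<node-frontend-port>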
Use the Python Driver to send some transactions to the Planetmint node and
verify that your node or cluster works as expected.

Next, you can set up log analytics and monitoring by following our templates:

* :doc:`../k8s-deployment-template/log-analytics`.

@ -0,0 +1,542 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _kubernetes-template-deploy-planetmint-network:

Kubernetes Template: Deploying a Planetmint network
===================================================

.. note::

   A highly-available Kubernetes cluster requires at least five virtual machines
   (three for the master and two for your app's containers).
   Therefore we don't recommend using Kubernetes to run a Planetmint node
   if that's the only thing the Kubernetes cluster will be running.
   Instead, see our `Node Setup <../../node_setup>`_.
   If your organization already *has* a big Kubernetes cluster running many containers,
   and your organization has people who know Kubernetes,
   then this Kubernetes deployment template might be helpful.

This page describes how to deploy a static Planetmint + Tendermint network.

If you want to deploy a stand-alone Planetmint node in a Planetmint cluster,
or a stand-alone Planetmint node,
then see :doc:`the page about that <node-on-kubernetes>`.

We can use this guide to deploy a Planetmint network in the following scenarios:

* Single Azure Kubernetes Site.
* Multiple Azure Kubernetes Sites (geographically dispersed).


Terminology Used
----------------

``Planetmint node`` is a set of Kubernetes components that join together to
form a single Planetmint node. Please refer to the :doc:`architecture diagram <architecture>`
for more details.

``Planetmint network`` will refer to a collection of nodes working together
to form a network.

Below, we refer to multiple files by their directory and filename,
such as ``planetmint/planetmint-ext-conn-svc.yaml``. Those files are located in the
`planetmint/planetmint repository on GitHub
<https://github.com/planetmint/planetmint/>`_ in the ``k8s/`` directory.
Make sure you're getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of Planetmint that your Planetmint
cluster is using.

.. note::

   This deployment strategy is currently used for testing purposes only,
   and is operated by a single stakeholder or tightly coupled stakeholders.

.. note::

   Currently, we only support a static set of participants in the network.
   Once a Planetmint network is started with a certain number of validators
   and a genesis file, users cannot add new validator nodes dynamically.
   You can track the progress of this functionality in our
   `GitHub repository <https://github.com/planetmint/planetmint/milestones>`_.
.. _pre-reqs-bdb-network:

Prerequisites
-------------

The deployment methodology is similar to the one covered in :doc:`node-on-kubernetes`, but
we need to tweak some configurations depending on your choice of deployment.

The operator needs to follow a consistent naming convention for all the components
covered :ref:`here <things-each-node-operator-must-do>`.

Let's assume we are deploying a 4-node cluster. Your naming convention could look like this:
.. code::

   {
     "MongoDB": [
       "mdb-instance-1",
       "mdb-instance-2",
       "mdb-instance-3",
       "mdb-instance-4"
     ],
     "Planetmint": [
       "bdb-instance-1",
       "bdb-instance-2",
       "bdb-instance-3",
       "bdb-instance-4"
     ],
     "NGINX": [
       "ngx-instance-1",
       "ngx-instance-2",
       "ngx-instance-3",
       "ngx-instance-4"
     ],
     "OpenResty": [
       "openresty-instance-1",
       "openresty-instance-2",
       "openresty-instance-3",
       "openresty-instance-4"
     ],
     "MongoDB_Monitoring_Agent": [
       "mdb-mon-instance-1",
       "mdb-mon-instance-2",
       "mdb-mon-instance-3",
       "mdb-mon-instance-4"
     ]
   }

.. note::

   Blockchain Genesis ID and Time will be shared across all nodes.
Edit config.yaml and secret.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Make N (number of nodes) copies of ``configuration/config-map.yaml`` and ``configuration/secret.yaml``.

.. code:: text

   # For config-map.yaml
   config-map-node-1.yaml
   config-map-node-2.yaml
   config-map-node-3.yaml
   config-map-node-4.yaml

   # For secret.yaml
   secret-node-1.yaml
   secret-node-2.yaml
   secret-node-3.yaml
   secret-node-4.yaml
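One quick way to create these copies is a small shell loop; a sketch, assuming a 4-node network and the file layout above:

.. code:: bash

   # Create per-node copies of the ConfigMap and Secret templates
   for i in 1 2 3 4; do
     cp configuration/config-map.yaml "configuration/config-map-node-$i.yaml"
     cp configuration/secret.yaml "configuration/secret-node-$i.yaml"
   done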
Edit the data values as described in :doc:`this document <node-config-map-and-secrets>`, based
on the naming convention described :ref:`above <pre-reqs-bdb-network>`.

**Only for single site deployments**: All the configuration files use the
same ConfigMap and Secret keys, i.e.
``metadata.name -> vars, bdb-config and tendermint-config`` and
``metadata.name -> cloud-manager-credentials, mdb-certs, mdb-mon-certs, bdb-certs,``
``https-certs, three-scale-credentials, ca-auth`` respectively, so each file
would overwrite the configuration of the previously deployed one.
We want each node to have its own unique configuration.
One way to achieve this is to edit the ConfigMap and Secret keys using the
:ref:`naming convention above <pre-reqs-bdb-network>`.
.. code:: text

   # For config-map-node-1.yaml
   metadata.name: vars -> vars-node-1
   metadata.name: bdb-config -> bdb-config-node-1
   metadata.name: tendermint-config -> tendermint-config-node-1

   # For secret-node-1.yaml
   metadata.name: cloud-manager-credentials -> cloud-manager-credentials-node-1
   metadata.name: mdb-certs -> mdb-certs-node-1
   metadata.name: mdb-mon-certs -> mdb-mon-certs-node-1
   metadata.name: bdb-certs -> bdb-certs-node-1
   metadata.name: https-certs -> https-certs-node-1
   metadata.name: threescale-credentials -> threescale-credentials-node-1
   metadata.name: ca-auth -> ca-auth-node-1

   # Repeat for the remaining files.

Deploy all your ConfigMaps and Secrets:

.. code:: bash

   kubectl apply -f configuration/config-map-node-1.yaml
   kubectl apply -f configuration/config-map-node-2.yaml
   kubectl apply -f configuration/config-map-node-3.yaml
   kubectl apply -f configuration/config-map-node-4.yaml
   kubectl apply -f configuration/secret-node-1.yaml
   kubectl apply -f configuration/secret-node-2.yaml
   kubectl apply -f configuration/secret-node-3.yaml
   kubectl apply -f configuration/secret-node-4.yaml
.. note::

   Similar to what we did with ``config-map.yaml`` and ``secret.yaml``, i.e. indexing them
   per node, we have to do the same for each Kubernetes component,
   i.e. Services, StorageClasses, PersistentVolumeClaims, StatefulSets, Deployments, etc.

.. code:: text

   # For Services
   *-node-1-svc.yaml
   *-node-2-svc.yaml
   *-node-3-svc.yaml
   *-node-4-svc.yaml

   # For StorageClasses
   *-node-1-sc.yaml
   *-node-2-sc.yaml
   *-node-3-sc.yaml
   *-node-4-sc.yaml

   # For PersistentVolumeClaims
   *-node-1-pvc.yaml
   *-node-2-pvc.yaml
   *-node-3-pvc.yaml
   *-node-4-pvc.yaml

   # For StatefulSets
   *-node-1-ss.yaml
   *-node-2-ss.yaml
   *-node-3-ss.yaml
   *-node-4-ss.yaml

   # For Deployments
   *-node-1-dep.yaml
   *-node-2-dep.yaml
   *-node-3-dep.yaml
   *-node-4-dep.yaml
.. _single-site-network:

Single Site: Single Azure Kubernetes Cluster
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the deployment of a Planetmint network under a single cluster, we need to replicate
the :doc:`deployment steps for each node <node-on-kubernetes>` N times, N being
the number of participants in the network.

In our Kubernetes deployment template for a single Planetmint node, we covered the basic configuration
settings :ref:`here <how-to-configure-a-planetmint-node>`.

Since we index the ConfigMap and Secret keys for the single site deployment, we need to update
all the Kubernetes components to reflect the corresponding changes, i.e. for each Kubernetes Service,
StatefulSet, PersistentVolumeClaim, Deployment, and StorageClass, we need to update the respective
``*.yaml`` file and update the ``configMapKeyRef.name`` or ``secret.secretName``.

Example
"""""""

Assume we are deploying the MongoDB StatefulSet for Node 1. We need to edit
``mongodb-node-1-ss.yaml`` and update the corresponding ``configMapKeyRef.name`` and
``secret.secretName`` values.
.. code:: yaml

   ########################################################################
   # This YAML file describes a StatefulSet with a service for running   #
   # and exposing a MongoDB instance.                                     #
   # It depends on the configdb and db k8s pvc.                           #
   ########################################################################

   apiVersion: apps/v1beta1
   kind: StatefulSet
   metadata:
     name: mdb-instance-1-ss
     namespace: default
   spec:
     serviceName: mdb-instance-1
     replicas: 1
     template:
       metadata:
         name: mdb-instance-1-ss
         labels:
           app: mdb-instance-1-ss
       spec:
         terminationGracePeriodSeconds: 10
         containers:
         - name: mongodb
           image: planetmint/mongodb:3.2
           imagePullPolicy: IfNotPresent
           env:
           - name: MONGODB_FQDN
             valueFrom:
               configMapKeyRef:
                 name: vars-node-1 # Changed from ``vars``
                 key: mdb-instance-name
           - name: MONGODB_POD_IP
             valueFrom:
               fieldRef:
                 fieldPath: status.podIP
           - name: MONGODB_PORT
             valueFrom:
               configMapKeyRef:
                 name: vars-node-1 # Changed from ``vars``
                 key: mongodb-backend-port
           - name: STORAGE_ENGINE_CACHE_SIZE
             valueFrom:
               configMapKeyRef:
                 name: vars-node-1 # Changed from ``vars``
                 key: storage-engine-cache-size
           args:
           - --mongodb-port
           - $(MONGODB_PORT)
           - --mongodb-key-file-path
           - /etc/mongod/ssl/mdb-instance.pem
           - --mongodb-ca-file-path
           - /etc/mongod/ca/ca.pem
           - --mongodb-crl-file-path
           - /etc/mongod/ca/crl.pem
           - --mongodb-fqdn
           - $(MONGODB_FQDN)
           - --mongodb-ip
           - $(MONGODB_POD_IP)
           - --storage-engine-cache-size
           - $(STORAGE_ENGINE_CACHE_SIZE)
           securityContext:
             capabilities:
               add:
               - FOWNER
           ports:
           - containerPort: "<mongodb-backend-port from ConfigMap>"
             protocol: TCP
             name: mdb-api-port
           volumeMounts:
           - name: mdb-db
             mountPath: /data/db
           - name: mdb-configdb
             mountPath: /data/configdb
           - name: mdb-certs
             mountPath: /etc/mongod/ssl/
             readOnly: true
           - name: ca-auth
             mountPath: /etc/mongod/ca/
             readOnly: true
           resources:
             limits:
               cpu: 200m
               memory: 5G
           livenessProbe:
             tcpSocket:
               port: mdb-api-port
             initialDelaySeconds: 15
             successThreshold: 1
             failureThreshold: 3
             periodSeconds: 15
             timeoutSeconds: 10
         restartPolicy: Always
         volumes:
         - name: mdb-db
           persistentVolumeClaim:
             claimName: mongo-db-claim-node-1 # Changed from ``mongo-db-claim``
         - name: mdb-configdb
           persistentVolumeClaim:
             claimName: mongo-configdb-claim-node-1 # Changed from ``mongo-configdb-claim``
         - name: mdb-certs
           secret:
             secretName: mdb-certs-node-1 # Changed from ``mdb-certs``
             defaultMode: 0400
         - name: ca-auth
           secret:
             secretName: ca-auth-node-1 # Changed from ``ca-auth``
             defaultMode: 0400
The above example is meant to be repeated for all the Kubernetes components of a Planetmint node.

* ``nginx-http/nginx-http-node-X-svc.yaml`` or ``nginx-https/nginx-https-node-X-svc.yaml``

* ``nginx-http/nginx-http-node-X-dep.yaml`` or ``nginx-https/nginx-https-node-X-dep.yaml``

* ``mongodb/mongodb-node-X-svc.yaml``

* ``mongodb/mongodb-node-X-sc.yaml``

* ``mongodb/mongodb-node-X-pvc.yaml``

* ``mongodb/mongodb-node-X-ss.yaml``

* ``planetmint/planetmint-node-X-svc.yaml``

* ``planetmint/planetmint-node-X-sc.yaml``

* ``planetmint/planetmint-node-X-pvc.yaml``

* ``planetmint/planetmint-node-X-ss.yaml``

* ``nginx-openresty/nginx-openresty-node-X-svc.yaml``

* ``nginx-openresty/nginx-openresty-node-X-dep.yaml``
Multi Site: Multiple Azure Kubernetes Clusters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For the multi site deployment of a Planetmint network with geographically dispersed
nodes, we need to replicate the :doc:`deployment steps for each node <node-on-kubernetes>` N times,
N being the number of participants in the network.

The operator needs to follow a consistent naming convention, as :ref:`already
discussed in this document <pre-reqs-bdb-network>`.

.. note::

   Assuming we are using independent Kubernetes clusters, the ConfigMap and Secret keys
   do not need to be updated, unlike :ref:`single-site-network`, and we also do not
   need to update the corresponding ConfigMap/Secret imports in the Kubernetes components.
Deploy Kubernetes Services
--------------------------

Deploy the following services for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:

* :ref:`Start the NGINX Service <start-the-nginx-service>`.

* :ref:`Assign DNS Name to the NGINX Public IP <assign-dns-name-to-nginx-public-ip>`.

* :ref:`Start the MongoDB Kubernetes Service <start-the-mongodb-kubernetes-service>`.

* :ref:`Start the Planetmint Kubernetes Service <start-the-planetmint-kubernetes-service>`.

* :ref:`Start the OpenResty Kubernetes Service <start-the-openresty-kubernetes-service>`.
Only for multi site deployments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We need to make sure that the clusters are able
to talk to each other, specifically so that the
Planetmint peers can communicate. Set up networking between the clusters using
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.

Suppose we have a Planetmint instance ``planetmint-instance-1`` residing in the Azure data center location ``westeurope`` and we
want to connect to ``planetmint-instance-2``, ``planetmint-instance-3``, and ``planetmint-instance-4``, located in the Azure data centers
``eastus``, ``centralus`` and ``westus``, respectively. Unless you have already explicitly set up networking for
``planetmint-instance-1`` to communicate with ``planetmint-instance-2/3/4`` and
vice versa, we will have to add a Kubernetes Service in each cluster to set up the
Planetmint P2P network.
This is similar to ensuring that there is a ``CNAME`` record in the DNS
infrastructure to resolve ``planetmint-instance-X`` to the host where it is actually available.
We can do this in Kubernetes using a Kubernetes Service of ``type``
``ExternalName``.

* This configuration is located in the file ``planetmint/planetmint-ext-conn-svc.yaml``.

* Set ``metadata.name`` to the host name of the Planetmint instance you are trying to connect to.
  For instance, if you are configuring this service on the cluster with ``planetmint-instance-1``, then the ``metadata.name`` will
  be ``planetmint-instance-2``, and vice versa.

* Set ``spec.ports.port[0]`` to the ``tm-p2p-port`` from the ConfigMap of the other cluster.

* Set ``spec.ports.port[1]`` to the ``tm-rpc-port`` from the ConfigMap of the other cluster.

* Set ``spec.externalName`` to the FQDN mapped to the NGINX Public IP of the cluster you are trying to connect to.
  For more information about the FQDN please refer to: :ref:`Assign DNS name to NGINX Public
  IP <assign-dns-name-to-nginx-public-ip>`.
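Putting those settings together, a hypothetical sketch of ``planetmint/planetmint-ext-conn-svc.yaml``, as configured on the cluster hosting ``planetmint-instance-1``, might look like this (the port numbers and the FQDN are placeholders; take the real values from the other cluster's ConfigMap and DNS setup):

.. code:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: planetmint-instance-2   # the remote instance we want to reach
     namespace: default
   spec:
     type: ExternalName
     # FQDN mapped to the NGINX Public IP of the remote cluster (placeholder)
     externalName: planetmint-instance-2.eastus.cloudapp.azure.com
     ports:
     - port: 26656                  # tm-p2p-port from the remote ConfigMap
       name: tm-p2p-port
     - port: 26657                  # tm-rpc-port from the remote ConfigMap
       name: tm-rpc-port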
.. note::
   This operation needs to be replicated ``n-1`` times per node for an ``n``-node cluster, with the respective FQDNs
   we need to communicate with.

   If you are not the system administrator of the cluster, you have to get in
   touch with the system administrator/s of the other ``n-1`` clusters and
   share with them your instance name (``planetmint-instance-name`` in the ConfigMap)
   and the FQDN of the NGINX instance acting as Gateway (set in :ref:`Assign DNS name to NGINX
   Public IP <assign-dns-name-to-nginx-public-ip>`).
Start NGINX Kubernetes deployments
----------------------------------

Start the NGINX deployment that serves as a Gateway for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start the NGINX Kubernetes Deployment <start-the-nginx-deployment>`.


Deploy Kubernetes StorageClasses for MongoDB and Planetmint
-----------------------------------------------------------

Deploy the following StorageClasses for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:

* :ref:`Create Kubernetes Storage Classes for MongoDB <create-kubernetes-storage-class-mdb>`.

* :ref:`Create Kubernetes Storage Classes for Planetmint <create-kubernetes-storage-class>`.


Deploy Kubernetes PersistentVolumeClaims for MongoDB and Planetmint
-------------------------------------------------------------------

Deploy the following PersistentVolumeClaims for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>`:

* :ref:`Create Kubernetes Persistent Volume Claims for MongoDB <create-kubernetes-persistent-volume-claim-mdb>`.

* :ref:`Create Kubernetes Persistent Volume Claims for Planetmint <create-kubernetes-persistent-volume-claim>`.


Deploy MongoDB Kubernetes StatefulSet
-------------------------------------

Deploy the MongoDB StatefulSet (standalone MongoDB) for each node by following the naming convention
described :ref:`above <pre-reqs-bdb-network>` and referring to the following section:

* :ref:`Start a Kubernetes StatefulSet for MongoDB <start-kubernetes-stateful-set-mongodb>`.


Configure Users and Access Control for MongoDB
----------------------------------------------

Configure users and access control for each MongoDB instance
in the network by referring to the following section:

* :ref:`Configure Users and Access Control for MongoDB <configure-users-and-access-control-mongodb>`.


Start Kubernetes StatefulSet for Planetmint
-------------------------------------------

Start the Planetmint Kubernetes StatefulSet for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start a Kubernetes Deployment for Planetmint <start-kubernetes-stateful-set-bdb>`.


Start Kubernetes Deployment for MongoDB Monitoring Agent
--------------------------------------------------------

Start the MongoDB monitoring agent Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start a Kubernetes Deployment for MongoDB Monitoring Agent <start-kubernetes-deployment-for-mdb-mon-agent>`.


Start Kubernetes Deployment for OpenResty
-----------------------------------------

Start the OpenResty Kubernetes deployment for each node by following the
naming convention described :ref:`above <pre-reqs-bdb-network>` and referring to the following instructions:

* :ref:`Start a Kubernetes Deployment for OpenResty <start-kubernetes-deployment-openresty>`.


Verify and Test
---------------

Verify and test your setup by referring to the following instructions:

* :ref:`Verify the Planetmint Node Setup <verify-and-test-bdb>`.

@ -0,0 +1,49 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

How to Revoke an SSL/TLS Certificate
====================================

This page enumerates the steps *we* take to revoke a self-signed SSL/TLS
certificate in a Planetmint network.
It can only be done by someone with access to the self-signed CA
associated with the network's managing organization.

Step 1: Revoke a Certificate
----------------------------

Since we used Easy-RSA version 3 to
:ref:`set up the CA <how-to-set-up-a-self-signed-certificate-authority>`,
we use it to revoke certificates too.

Go to the following directory (associated with the self-signed CA):
``.../bdb-node-ca/easy-rsa-3.0.1/easyrsa3``.
You need to know the filename that was used earlier when the certificate was
imported with ``./easyrsa import-req``. Run the following command to revoke a
certificate:
.. code:: bash

   ./easyrsa revoke <filename>
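For example, if the certificate was originally imported under the (hypothetical) name ``mdb-instance-0``:

.. code:: bash

   ./easyrsa revoke mdb-instance-0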
This will update the CA database with the revocation details.
The next step is to use the updated database to issue an up-to-date
certificate revocation list (CRL).

Step 2: Generate a New CRL
--------------------------

Generate a new CRL for your infrastructure using:

.. code:: bash

   ./easyrsa gen-crl

The generated ``crl.pem`` file needs to be uploaded to your infrastructure to
prevent the revoked certificate from being used again.

In particular, the generated ``crl.pem`` file should be sent to all Planetmint node operators in your Planetmint network, so that they can update it in their MongoDB instance and their Planetmint Server instance.

@ -0,0 +1,102 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _how-to-generate-a-server-certificate-for-mongodb:

How to Generate a Server Certificate for MongoDB
================================================

This page enumerates the steps *we* use to generate a
server certificate for a MongoDB instance.
A server certificate is also referred to as a "member certificate"
in the MongoDB documentation.
We use Easy-RSA.


Step 1: Install & Configure Easy-RSA
------------------------------------

First create a directory for the server certificate (member cert) and cd into it:

.. code:: bash

   mkdir member-cert

   cd member-cert

Then :ref:`install and configure Easy-RSA in that directory <how-to-install-and-configure-easyrsa>`.
Step 2: Create the Server Private Key and CSR
---------------------------------------------

You can create the server private key and certificate signing request (CSR)
by going into the directory ``member-cert/easy-rsa-3.0.1/easyrsa3``
and using something like:

.. note::

   Please make sure you fulfill the requirements for `MongoDB server/member certificates
   <https://docs.mongodb.com/manual/tutorial/configure-x509-member-authentication>`_.

.. code:: bash

   ./easyrsa init-pki

   ./easyrsa --req-cn=mdb-instance-0 --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 gen-req mdb-instance-0 nopass

You should replace the Common Name (``mdb-instance-0`` above) with the correct name for *your* MongoDB instance in the network, e.g. ``mdb-instance-5`` or ``mdb-instance-12``. (This name is decided by the organization managing the network.)

You will be prompted to enter the Distinguished Name (DN) information for this certificate.
For each field, you can accept the default value [in brackets] by pressing Enter.

.. warning::

   Don't accept the default value of OU (``IT``). Instead, enter the value ``MongoDB-Instance``.

Aside: You need to provide the ``DNS:localhost`` SAN during certificate generation
in order to use the ``localhost exception`` in the MongoDB instance.
All certificates can have this attribute without compromising security, as the
``localhost exception`` works only the first time.
Step 3: Get the Server Certificate Signed
-----------------------------------------

The CSR file created in the last step
should be located in ``pki/reqs/mdb-instance-0.req``
(where the integer ``0`` may be different for you).
You need to send it to the organization managing the Planetmint network
so that they can use their CA
to sign the request.
(The managing organization should already have a self-signed CA.)

If you are the admin of the managing organization's self-signed CA,
then you can import the CSR and use Easy-RSA to sign it.
Go to your ``bdb-node-ca/easy-rsa-3.0.1/easyrsa3/``
directory and do something like:

.. code:: bash

   ./easyrsa import-req /path/to/mdb-instance-0.req mdb-instance-0

   ./easyrsa --subject-alt-name=DNS:localhost,DNS:mdb-instance-0 sign-req server mdb-instance-0

Once you have signed it, you can send the signed certificate
and the CA certificate back to the requestor.
The files are ``pki/issued/mdb-instance-0.crt`` and ``pki/ca.crt``.


Step 4: Generate the Consolidated Server PEM File
-------------------------------------------------

MongoDB requires a single, consolidated file containing both the public and
private keys:

.. code:: bash

   cat /path/to/mdb-instance-0.crt /path/to/mdb-instance-0.key > mdb-instance-0.pem

@ -0,0 +1,149 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

Walkthrough: Deploy a Kubernetes Cluster on Azure using Tectonic by CoreOS
==========================================================================

.. note::

   A highly-available Kubernetes cluster requires at least five virtual machines
   (three for the master and two for your app's containers).
   Therefore we don't recommend using Kubernetes to run a Planetmint node
   if that's the only thing the Kubernetes cluster will be running.
   Instead, see our `Node Setup <../../node_setup>`_.
   If your organization already *has* a big Kubernetes cluster running many containers,
   and your organization has people who know Kubernetes,
   then this Kubernetes deployment template might be helpful.

A Planetmint node can be run inside a `Kubernetes <https://kubernetes.io/>`_
cluster.
This page describes one way to deploy a Kubernetes cluster on Azure using Tectonic.
Tectonic makes it easier to manage Kubernetes clusters.

If you would rather use Azure Container Service to manage Kubernetes clusters,
please read :doc:`our guide for that <template-kubernetes-azure>`.


Step 1: Prerequisites for Deploying a Tectonic Cluster
------------------------------------------------------

Get an Azure account. Refer to
:ref:`this step in our docs <get-a-pay-as-you-go-azure-subscription>`.

Create an SSH key pair for the new Tectonic cluster. Refer to
:ref:`this step in our docs <create-an-ssh-key-pair>`.


Step 2: Get a Tectonic Subscription
-----------------------------------

CoreOS offers Tectonic for free for up to 10 nodes.

Sign up for an account `here <https://coreos.com/tectonic>`__ if you do not
have one already, and get a license for 10 nodes.

Login to your account, go to Overview > Your Account, and save the
``CoreOS License`` and the ``Pull Secret`` to your local machine.
Step 3: Deploy the cluster on Azure
-----------------------------------

The latest instructions for deployment can be found
`here <https://coreos.com/tectonic/docs/latest/tutorials/azure/install.html>`__.

The following points suggest some customizations for a Planetmint deployment
when following the steps above; a sketch of the resulting variables file is shown after this list:

#. Set the ``CLUSTER`` variable to the name of the cluster. Also note that the
   cluster will be deployed in a resource group named
   ``tectonic-cluster-CLUSTER``.

#. Set the ``tectonic_base_domain`` to ``""`` if you want to use Azure managed
   DNS. You will be assigned a ``cloudapp.azure.com`` sub-domain by default and
   you can skip the ``Configuring Azure DNS`` section of the Tectonic installation
   guide.

#. Set the ``tectonic_cl_channel`` to ``"stable"`` unless you want to
   experiment or test with the latest release.

#. Set the ``tectonic_cluster_name`` to the ``CLUSTER`` variable defined in
   the step above.

#. Set the ``tectonic_license_path`` and ``tectonic_pull_secret_path`` to the
   locations where you have stored the ``tectonic-license.txt`` and the
   ``config.json`` files downloaded in the previous step.

#. Set the ``tectonic_etcd_count`` to ``"3"``, so that you have a multi-node
   etcd cluster that can tolerate a single node failure.

#. Set the ``tectonic_etcd_tls_enabled`` to ``"true"``, as this will enable TLS
   connectivity between the etcd nodes and their clients.

#. Set the ``tectonic_master_count`` to ``"3"`` so that you can tolerate a
   single master failure.

#. Set the ``tectonic_worker_count`` to ``"2"``.

#. Set the ``tectonic_azure_location`` to ``"westeurope"`` if you want to host
   the cluster in Azure's ``westeurope`` datacenter.

#. Set the ``tectonic_azure_ssh_key`` to the path of the public key created in
   the previous step.

#. We recommend setting up or using a CA (Certificate Authority) to generate the Tectonic
   Console's server certificate(s), and adding it to the trusted authorities on the client
   (browser) accessing the Tectonic Console. If you already have a CA (self-signed or otherwise),
   set the ``tectonic_ca_cert`` and ``tectonic_ca_key`` configurations to the contents
   of the PEM-encoded certificate and key files, respectively. For more information about
   how to set up a self-signed CA, please refer to
   :doc:`How to Set Up a Self-Signed CA <ca-installation>`.
#. Note that the ``tectonic_azure_client_secret`` is the same as the
   ``ARM_CLIENT_SECRET``.

#. Note that the URL for the Tectonic Console using these settings will be composed of the
   cluster name set in the configuration file, the datacenter name, and
   ``cloudapp.azure.com``. For example, if you named your cluster
   ``test-cluster`` and specified the datacenter ``westeurope``, the Tectonic
   Console will be available at ``test-cluster.westeurope.cloudapp.azure.com``.

#. Note that if you do not specify ``tectonic_ca_cert``, a CA certificate will
   be generated automatically and you will encounter an untrusted certificate
   message in your client (browser) when accessing the Tectonic Console.
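Putting the suggestions above together, a hypothetical excerpt of the Terraform variables file used by the Tectonic installer might look as follows (every path and name below is a placeholder to adapt to your own setup):

.. code:: text

   # Hypothetical excerpt of the Tectonic installer's Terraform variables
   tectonic_cluster_name     = "test-cluster"
   tectonic_base_domain      = ""
   tectonic_cl_channel       = "stable"
   tectonic_license_path     = "/path/to/tectonic-license.txt"
   tectonic_pull_secret_path = "/path/to/config.json"
   tectonic_etcd_count       = "3"
   tectonic_etcd_tls_enabled = "true"
   tectonic_master_count     = "3"
   tectonic_worker_count     = "2"
   tectonic_azure_location   = "westeurope"
   tectonic_azure_ssh_key    = "/path/to/tectonic-ssh-key.pub"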
Step 4: Configure kubectl
-------------------------

#. Refer to `this tutorial
   <https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html>`__
   for instructions on how to download the kubectl configuration files for
   your cluster.

#. Set the ``KUBECONFIG`` environment variable to make ``kubectl`` use the new
   config file along with the existing configuration.

.. code:: bash

   $ export KUBECONFIG=$HOME/.kube/config:/path/to/config/kubectl-config

   # OR to only use the new configuration, try

   $ export KUBECONFIG=/path/to/config/kubectl-config

Next, you can follow one of our following deployment templates:

* :doc:`node-on-kubernetes`.


Tectonic References
-------------------

#. https://coreos.com/tectonic/docs/latest/tutorials/azure/install.html
#. https://coreos.com/tectonic/docs/latest/troubleshooting/installer-terraform.html
#. https://coreos.com/tectonic/docs/latest/tutorials/azure/first-app.html

@ -0,0 +1,271 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

Template: Deploy a Kubernetes Cluster on Azure
==============================================

.. note::

   A highly-available Kubernetes cluster requires at least five virtual machines
   (three for the master and two for your app's containers).
   Therefore we don't recommend using Kubernetes to run a Planetmint node
   if that's the only thing the Kubernetes cluster will be running.
   Instead, see our `Node Setup <../../node_setup>`_.
   If your organization already *has* a big Kubernetes cluster running many containers,
   and your organization has people who know Kubernetes,
   then this Kubernetes deployment template might be helpful.

A Planetmint node can be run inside a `Kubernetes <https://kubernetes.io/>`_
cluster.
This page describes one way to deploy a Kubernetes cluster on Azure.


.. _get-a-pay-as-you-go-azure-subscription:

Step 1: Get a Pay-As-You-Go Azure Subscription
----------------------------------------------

Microsoft Azure has a Free Trial subscription (at the time of writing),
but it's too limited to run an advanced Planetmint node.
Sign up for a Pay-As-You-Go Azure subscription
via `the Azure website <https://azure.microsoft.com>`_.

You may find that you have to sign up for a Free Trial subscription first.
That's okay: you can have many subscriptions.


.. _create-an-ssh-key-pair:

Step 2: Create an SSH Key Pair
------------------------------

You'll want an SSH key pair so you'll be able to SSH
to the virtual machines that you'll deploy in the next step.
(If you already have an SSH key pair, you *could* reuse it,
but it's probably a good idea to make a new SSH key pair
for your Kubernetes VMs and nothing else.)

See the
:doc:`page about how to generate a key pair for SSH
<../../references/appendices/generate-key-pair-for-ssh>`.
Step 3: Deploy an Azure Container Service (ACS)
-----------------------------------------------

It's *possible* to deploy an Azure Container Service (ACS)
from the `Azure Portal <https://portal.azure.com>`_
(i.e. online in your web browser),
but it's actually easier to do it using the Azure
Command-Line Interface (CLI).

Microsoft has `instructions to install the Azure CLI 2.0
on most common operating systems
<https://docs.microsoft.com/en-us/cli/azure/install-az-cli2>`_.
Do that.

If you already *have* the Azure CLI installed, you may want to update it.

.. warning::

   ``az component update`` isn't supported if you installed the CLI using some of Microsoft's provided installation instructions. See `the Microsoft docs for update instructions <https://docs.microsoft.com/en-us/cli/azure/install-az-cli2>`_.

Next, login to your account using:

.. code:: bash

   $ az login

It will tell you to open a web page and to copy a code to that page.

If the login is a success, you will see some information
about all your subscriptions, including the one that is currently
enabled (``"state": "Enabled"``). If the wrong one is enabled,
you can switch to the right one using:

.. code:: bash

   $ az account set --subscription <subscription name or ID>

Next, you will have to pick the Azure data center location
where you'd like to deploy your cluster.
You can get a list of all available locations using:

.. code:: bash

   $ az account list-locations

Next, create an Azure "resource group" to contain all the
resources (virtual machines, subnets, etc.) associated
with your soon-to-be-deployed cluster. You can name it
whatever you like, but avoid fancy characters because they may
confuse some software.

.. code:: bash

   $ az group create --name <resource group name> --location <location name>

Example location names are ``koreacentral`` and ``westeurope``.
Finally, you can deploy an ACS using something like:

.. code:: bash

   $ az acs create --name <a made-up cluster name> \
     --resource-group <name of resource group created earlier> \
     --master-count 3 \
     --agent-count 3 \
     --admin-username ubuntu \
     --agent-vm-size Standard_L4s \
     --dns-prefix <make up a name> \
     --ssh-key-value ~/.ssh/<name>.pub \
     --orchestrator-type kubernetes \
     --debug --output json

.. Note::
   The `Azure documentation <https://docs.microsoft.com/en-us/cli/azure/acs?view=azure-cli-latest#az_acs_create>`_
   has a list of all ``az acs create`` options.
   You might prefer a smaller agent VM size, for example.
   You can also get a list of the options using:

   .. code:: bash

      $ az acs create --help

It takes a few minutes for all the resources to deploy.
You can watch the progress in the `Azure Portal
<https://portal.azure.com>`_:
go to **Resource groups** (with the blue cube icon)
and click on the one you created
to see all the resources in it.
Trouble with the Service Principal? Then Read This!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the ``az acs create`` command fails with an error message including the text
"The Service Principal in ServicePrincipalProfile could not be validated",
then we found you can prevent that by creating a Service Principal ahead of time
and telling ``az acs create`` to use that one. (It's supposed to create one
automatically, but sometimes that fails.)

Create a new resource group, even if you created one before. They're free anyway:

.. code:: bash

   $ az login
   $ az group create --name <new resource group name> \
     --location <Azure region like westeurope>

Note the ``id`` in the output. It looks like
``"/subscriptions/369284be-0104-421a-8488-1aeac0caecbb/resourceGroups/examplerg"``.
It can be copied into the next command.
Create a Service Principal using:

.. code:: bash

   $ az ad sp create-for-rbac --role="Contributor" \
     --scopes=<id value copied from above, including the double quotes on the ends>

Note the ``appId`` and ``password``.
Put those in a new ``az acs create`` command like the one above, with two new options added:

.. code:: bash

   $ az acs create ... \
     --service-principal <appId> \
     --client-secret <password>
.. _ssh-to-your-new-kubernetes-cluster-nodes:

Optional: SSH to Your New Kubernetes Cluster Nodes
--------------------------------------------------

You can SSH to one of the just-deployed Kubernetes "master" nodes
(virtual machines) using:

.. code:: bash

   $ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>

where you can get the IP address or FQDN
of a master node from the Azure Portal. For example:

.. code:: bash

   $ ssh -i ~/.ssh/mykey123 ubuntu@mydnsprefix.westeurope.cloudapp.azure.com

.. note::

   All the master nodes are accessible behind the *same* public IP address and
   FQDN. You connect to one of the masters randomly, based on the load balancing
   policy.

The "agent" nodes shouldn't get public IP addresses or externally accessible
FQDNs, so you can't SSH to them *directly*,
but you can first SSH to the master
and then SSH to an agent from there using their hostname.
To do that, you could
copy your SSH key pair to the master (a bad idea),
or use SSH agent forwarding (better).
To do the latter, do the following on the machine you used
to SSH to the master:

.. code:: bash

   $ echo -e "Host <FQDN of the cluster from Azure Portal>\n  ForwardAgent yes" >> ~/.ssh/config

To verify that SSH agent forwarding works properly,
SSH to one of the master nodes and do:

.. code:: bash

   $ echo "$SSH_AUTH_SOCK"
If you get an empty response,
then SSH agent forwarding hasn't been set up correctly.
If you get a non-empty response,
then SSH agent forwarding should work fine
and you can SSH to one of the agent nodes (from a master)
using:

.. code:: bash

   $ ssh ubuntu@k8s-agent-4AC80E97-0

where ``k8s-agent-4AC80E97-0`` is the name
of a Kubernetes agent node in your Kubernetes cluster.
You will have to replace it with the name
of an agent node in your cluster.


Optional: Delete the Kubernetes Cluster
---------------------------------------

.. code:: bash

   $ az acs delete \
     --name <ACS cluster name> \
     --resource-group <name of resource group containing the cluster>


Optional: Delete the Resource Group
-----------------------------------

CAUTION: You might end up deleting resources other than the ACS cluster.

.. code:: bash

   $ az group delete \
     --name <name of resource group containing the cluster>


Next, you can :doc:`run a Planetmint node/cluster(BFT) <node-on-kubernetes>`
on your new Kubernetes cluster.

147
docs/new/network-setup/k8s-deployment-template/troubleshoot.rst
Normal file

@ -0,0 +1,147 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _cluster-troubleshooting:

Cluster Troubleshooting
=======================

This page describes some basic issues we have faced while deploying and
operating the cluster.

1. MongoDB Restarts
-------------------

We define the following in the ``mongo-ss.yaml`` file:

.. code:: yaml

   resources:
     limits:
       cpu: 200m
       memory: 5G

When the MongoDB cache occupies more than 5GB of memory, the container is
terminated by the ``kubelet``.
This can usually be verified by logging in to the worker node running the MongoDB
container and looking at the syslog (the ``journalctl`` command should usually
work).
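For example, on a systemd-based host, one way (an illustrative sketch, not the only one) to look for the corresponding out-of-memory kill in the kernel log is:

.. code:: bash

   # On the worker node: search the kernel log for OOM kills;
   # the exact message text varies by OS and kernel version.
   $ journalctl -k | grep -i "out of memory"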
This issue is resolved in
`PR #1757 <https://github.com/planetmint/planetmint/pull/1757>`_.

2. 502 Bad Gateway Error on Runscope Tests
------------------------------------------

It means that NGINX could not find the appropriate backend to forward the
requests to. This typically happens when:

#. MongoDB goes down (as described above) and Planetmint, after trying
   ``PLANETMINT_DATABASE_MAXTRIES`` times, gives up. The Kubernetes Planetmint
   Deployment then restarts the Planetmint pod.

#. Planetmint crashes for some reason. We have seen this happen when updating
   Planetmint from one version to the next. This usually means the older
   connections to the service get disconnected; retrying the request one more
   time forwards the connection to the new instance and succeeds.


3. Service Unreachable
----------------------

Communication between Kubernetes Services and Deployments fails in
v1.6.6 and earlier due to a trivial key lookup error for non-existent services
in the ``kubelet``.
This error can be reproduced by restarting any public-facing (that is, using
the cloud load balancer) Kubernetes Services, and watching the
``kube-proxy`` failure in its logs.
The solution to this problem is to restart ``kube-proxy`` on the affected
worker/agent node. Login to the worker node and run:

.. code:: bash

   docker stop `docker ps | grep k8s_kube-proxy | cut -d" " -f1`

   docker logs -f `docker ps | grep k8s_kube-proxy | cut -d" " -f1`
`This issue <https://github.com/kubernetes/kubernetes/issues/48705>`_ is
`fixed in Kubernetes v1.7 <https://github.com/kubernetes/kubernetes/commit/41c4e965c353187889f9b86c3e541b775656dc18>`_.


4. Single Disk Attached to Multiple Mountpoints in a Container
--------------------------------------------------------------

This issue is currently affecting one of the clusters and is being debugged by
the support team at Microsoft.

The issue was first seen on August 29, 2017 on the Test Network and has been
logged in the `Azure/acs-engine repo on GitHub <https://github.com/Azure/acs-engine/issues/1364>`_.

This is apparently fixed in Kubernetes v1.7.2, which includes a new disk driver,
but we have yet to test it.


5. MongoDB Monitoring Agent throws a dial error while connecting to MongoDB
----------------------------------------------------------------------------

You might see something similar to this in the MongoDB Monitoring Agent logs:

.. code:: bash

   Failure dialing host without auth. Err: `no reachable servers`
       at monitoring-agent/components/dialing.go:278
       at monitoring-agent/components/dialing.go:116
       at monitoring-agent/components/dialing.go:213
       at src/runtime/asm_amd64.s:2086

The first thing to check is whether the networking is set up correctly. You can check
this with utilities like ``nslookup`` and ``dig`` (for example, from the ``toolbox`` container).
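For example (the instance name below is a placeholder; the same checks are described in the node verification steps):

.. code:: bash

   # From a utility container inside the cluster:
   $ nslookup mdb-instance-0
   $ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV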
If everything looks fine, it might be a problem with the ``Preferred
Hostnames`` setting in MongoDB Cloud Manager. If you do need to change the
regular expression, ensure that it is correct and saved properly (maybe try
refreshing the MongoDB Cloud Manager web page to see if the setting sticks).

Once you update the regular expression, you will need to remove the deployment
and add it again for the Monitoring Agent to discover and connect to the
MongoDB instance correctly.

More information about this configuration is provided in
:doc:`this document <cloud-manager>`.

6. Create a Persistent Volume from an existing Azure disk storage resource
---------------------------------------------------------------------------

When deleting a k8s cluster, all dynamically created PVs are deleted, along with the
underlying Azure storage disks, so those cannot be reused in a new cluster. The
following workflow preserves the Azure storage disks while deleting the k8s cluster,
and reuses the same disks on a new cluster for MongoDB persistent storage without
losing any data.

The template to create two PVs for the MongoDB StatefulSet (one for the MongoDB data store and
the other for the MongoDB config store) is located at ``mongodb/mongo-pv.yaml``.

You need to configure ``diskName`` and ``diskURI`` in the ``mongodb/mongo-pv.yaml`` file
(see the sketch at the end of this section). You can get
these values by logging into your Azure portal, going to ``Resource Groups``, and clicking on your
relevant resource group. From the list of resources, click on the storage account resource and
open the container (usually named ``vhds``) that contains the storage disk blobs that are available
for PVs. Click on the storage disk file that you wish to use for your PV, and you will be able to
see the ``NAME`` and ``URL`` parameters, which you can use as the ``diskName`` and ``diskURI`` values in
your template, respectively. Then run the following command to create the PVs:

.. code:: bash

   $ kubectl --context <context-name> apply -f mongodb/mongo-pv.yaml
.. note::

   Please make sure the storage disks you are using are not already being used by any
   other PVs. To check the existing PVs in your cluster, run the following command
   to get the PV to storage disk file mapping.

   .. code:: bash

      $ kubectl --context <context-name> get pv --output yaml
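For reference, a hypothetical excerpt of ``mongodb/mongo-pv.yaml`` showing where ``diskName`` and ``diskURI`` fit; every value below is a placeholder (the capacity, names, and storage account URL must match your own resources):

.. code:: yaml

   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: mongo-db-pv
   spec:
     capacity:
       storage: 50Gi
     accessModes:
       - ReadWriteOnce
     persistentVolumeReclaimPolicy: Retain
     azureDisk:
       diskName: my-mongo-disk.vhd
       diskURI: https://mystorageaccount.blob.core.windows.net/vhds/my-mongo-disk.vhd
       cachingMode: None
       fsType: ext4
       readOnly: false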
@ -0,0 +1,122 @@

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

Kubernetes Template: Upgrade all Software in a Planetmint Node
==============================================================

.. note::

   A highly-available Kubernetes cluster requires at least five virtual machines
   (three for the master and two for your app's containers).
   Therefore we don't recommend using Kubernetes to run a Planetmint node
   if that's the only thing the Kubernetes cluster will be running.
   Instead, see our `Node Setup <../../node_setup>`_.
   If your organization already *has* a big Kubernetes cluster running many containers,
   and your organization has people who know Kubernetes,
   then this Kubernetes deployment template might be helpful.

This page outlines how to upgrade all the software associated
with a Planetmint node running on Kubernetes,
including host operating systems, Docker, Kubernetes,
and Planetmint-related software.


Upgrade Host OS, Docker and Kubernetes
--------------------------------------

Some Kubernetes installation & management systems
can do full or partial upgrades of host OSes, Docker,
or Kubernetes, e.g.
`Tectonic <https://coreos.com/tectonic/>`_,
`Rancher <https://docs.rancher.com/rancher/v1.5/en/>`_,
and
`Kubo <https://pivotal.io/kubo>`_.
Consult the documentation for your system.

**Azure Container Service (ACS).**
On Dec. 15, 2016, a Microsoft employee
`wrote <https://github.com/colemickens/azure-kubernetes-status/issues/15#issuecomment-267453251>`_:
"In the coming months we [the Azure Kubernetes team] will be building managed updates in the ACS service."
At the time of writing, managed updates were not yet available,
but you should check the latest
`ACS documentation <https://docs.microsoft.com/en-us/azure/container-service/>`_
to see what's available now.
Also at the time of writing, ACS only supported Ubuntu
as the host (master and agent) operating system.
You can upgrade Ubuntu and Docker on Azure
by SSHing into each of the hosts,
as documented on
:ref:`another page <ssh-to-your-new-kubernetes-cluster-nodes>`.

In general, you can SSH to each host in your Kubernetes cluster
to update the OS and Docker.
.. note::
|
||||
|
||||
Once you are in an SSH session with a host,
|
||||
the ``docker info`` command is a handy way to detemine the
|
||||
host OS (including version) and the Docker version.
|
||||
|
||||
When you want to upgrade the software on a Kubernetes node,
|
||||
you should "drain" the node first,
|
||||
i.e. tell Kubernetes to gracefully terminate all pods
|
||||
on the node and mark it as unscheduleable
|
||||
(so no new pods get put on the node during its downtime).
|
||||
|
||||
.. code::
|
||||
|
||||
kubectl drain $NODENAME
|
||||
|
||||
There are `more details in the Kubernetes docs <https://kubernetes.io/docs/concepts/cluster-administration/cluster-management/#maintenance-on-a-node>`_,
|
||||
including instructions to make the node scheduleable again.
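
Once the node's software has been upgraded, you can mark it as schedulable again
(a minimal sketch; ``$NODENAME`` is the same placeholder as above):

.. code::

   kubectl uncordon $NODENAME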

To manually upgrade the host OS,
see the docs for that OS.

To manually upgrade Docker, see
`the Docker docs <https://docs.docker.com/>`_.

To manually upgrade all Kubernetes software in your Kubernetes cluster, see
`the Kubernetes docs <https://kubernetes.io/docs/admin/cluster-management/>`_.


Upgrade Planetmint-Related Software
-----------------------------------

We use Kubernetes "Deployments" for NGINX, Planetmint,
and most other Planetmint-related software.
The only exception is MongoDB; we use a Kubernetes
StatefulSet for that.

The nice thing about Kubernetes Deployments
is that Kubernetes can manage most of the upgrade process.
A typical upgrade workflow for a single Deployment would be:

.. code::

   $ KUBE_EDITOR=nano kubectl edit deployment/<name of Deployment>

The ``kubectl edit`` command
opens the specified editor (nano in the above example),
allowing you to edit the specified Deployment *in the Kubernetes cluster*.
You can change the version tag on the Docker image, for example.
Don't forget to save your edits before exiting the editor.
The Kubernetes docs have more information about
`Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_ (including updating them).
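
If you prefer a non-interactive upgrade, you can also change the image tag
directly with ``kubectl set image`` (a sketch; the Deployment name, container
name and image tag are placeholders):

.. code::

   $ kubectl set image deployment/<name of Deployment> <container-name>=<image>:<new-tag>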

The upgrade story for the MongoDB StatefulSet is *different*.
(This is because MongoDB has persistent state,
which is stored in some storage associated with a PersistentVolumeClaim.)
At the time of writing, StatefulSets were still in beta,
and they did not support automated image upgrades (Docker image tag upgrades).
We expect that to change.
Rather than trying to keep these docs up-to-date,
we advise you to check out the current
`Kubernetes docs about updating containers in StatefulSets
<https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-containers>`_.

docs/new/network-setup/k8s-deployment-template/workflow.rst (new file, 162 lines)

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

.. _kubernetes-template-overview:

Overview
========

.. note::

   A highly-available Kubernetes cluster requires at least five virtual machines
   (three for the master and two for your app's containers).
   Therefore we don't recommend using Kubernetes to run a Planetmint node
   if that's the only thing the Kubernetes cluster will be running.
   Instead, see our `Node Setup <../../node_setup>`_.
   If your organization already *has* a big Kubernetes cluster running many containers,
   and your organization has people who know Kubernetes,
   then this Kubernetes deployment template might be helpful.

This page summarizes some steps to go through
to set up a Planetmint network.
You can modify them to suit your needs.

.. _generate-the-blockchain-id-and-genesis-time:

Generate All Shared Planetmint Setup Parameters
-----------------------------------------------

There are some shared Planetmint setup parameters that every node operator
in the consortium shares,
because they are properties of the Tendermint network.
They look like this:

.. code::

   # Tendermint data
   BDB_PERSISTENT_PEERS='bdb-instance-1,bdb-instance-2,bdb-instance-3,bdb-instance-4'
   BDB_VALIDATORS='bdb-instance-1,bdb-instance-2,bdb-instance-3,bdb-instance-4'
   BDB_VALIDATOR_POWERS='10,10,10,10'
   BDB_GENESIS_TIME='0001-01-01T00:00:00Z'
   BDB_CHAIN_ID='test-chain-rwcPML'

Those parameters only have to be generated once, by one member of the consortium.
That person will then share the results (Tendermint setup parameters)
with all the node operators.

The above example parameters are for a network of 4 initial (seed) nodes.
Note how ``BDB_PERSISTENT_PEERS``, ``BDB_VALIDATORS`` and ``BDB_VALIDATOR_POWERS`` are lists
with 4 items each.
**If your consortium has a different number of initial nodes,
then those lists should have that number of items.**
Use ``10`` for all the power values.

To generate a ``BDB_GENESIS_TIME`` and a ``BDB_CHAIN_ID``,
you can do this:

.. code::

   $ mkdir $(pwd)/tmdata
   $ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:v0.34.15 init
   $ cat $(pwd)/tmdata/genesis.json

You should see something that looks like:

.. code:: json

   {"genesis_time": "0001-01-01T00:00:00Z",
    "chain_id": "test-chain-bGX7PM",
    "validators": [
        {"pub_key":
            {"type": "ed25519",
             "data": "4669C4B966EB8B99E45E40982B2716A9D3FA53B54C68088DAB2689935D7AF1A9"},
         "power": 10,
         "name": ""}
    ],
    "app_hash": ""
   }

The value with ``"genesis_time"`` is ``BDB_GENESIS_TIME`` and
the value with ``"chain_id"`` is ``BDB_CHAIN_ID``.
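
If you have ``jq`` installed, you can pull those two values out directly
(an optional convenience, not part of the required steps):

.. code:: bash

   $ jq -r '.genesis_time, .chain_id' $(pwd)/tmdata/genesis.json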
|
||||
|
||||
Now you have all the Planetmint setup parameters and can share them
|
||||
with all of the node operators. (They will put them in their ``vars`` file.
|
||||
We'll say more about that file below.)
|
||||
|
||||
|
||||
.. _things-each-node-operator-must-do:
|
||||
|
||||
Things Each Node Operator Must Do
|
||||
---------------------------------
|
||||
|
||||
1. Make up an `FQDN <https://en.wikipedia.org/wiki/Fully_qualified_domain_name>`_
|
||||
for your Planetmint node (e.g. ``mynode.mycorp.com``).
|
||||
This is where external users will access the Planetmint HTTP API, for example.
|
||||
Make sure you've registered the associated domain name (e.g. ``mycorp.com``).
|
||||
|
||||
Get an SSL certificate for your Planetmint node's FQDN.
|
||||
Also get the root CA certificate and all intermediate certificates.
|
||||
They should all be provided by your SSL certificate provider.
|
||||
Put all those certificates together in one certificate chain file in the following order:
|
||||
|
||||
- Domain certificate (i.e. the one you ordered for your FQDN)
|
||||
- All intermediate certificates
|
||||
- Root CA certificate
|
||||
|
||||
DigiCert has `a web page explaining certificate chains <https://www.digicert.com/ssl-support/pem-ssl-creation.htm>`_.
|
||||
|
||||
You will put the path to that certificate chain file in the ``vars`` file,
|
||||
when you configure your node later.
|
||||
|
||||
2a. If your Planetmint node will use 3scale for API authentication, monitoring and billing,
|
||||
you will need all relevant 3scale settings and credentials.
|
||||
|
||||
2b. If your Planetmint node will not use 3scale, then write authorization will be granted
|
||||
to all POST requests with a secret token in the HTTP headers.
|
||||
(All GET requests are allowed to pass.)
|
||||
You can make up that ``SECRET_TOKEN`` now.
|
||||
For example, ``superSECRET_token4-POST*requests``.
|
||||
You will put it in the ``vars`` file later.
|
||||
Every Planetmint node in a Planetmint network can have a different secret token.
|
||||
To make an HTTP POST request to your Planetmint node,
|
||||
you must include an HTTP header named ``X-Secret-Access-Token``
|
||||
and set it equal to your secret token, e.g.
|
||||
|
||||
``X-Secret-Access-Token: superSECRET_token4-POST*requests``
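
    For example, a write request might look like the following sketch
    (the token is the example value from above, the transaction body is elided,
    and the endpoint path assumes the HTTP API is mounted at ``/api/v1/``):

    .. code:: bash

       $ curl -X POST https://mynode.mycorp.com/api/v1/transactions \
            -H "X-Secret-Access-Token: superSECRET_token4-POST*requests" \
            -H "Content-Type: application/json" \
            -d '{...}'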

3. Deploy a Kubernetes cluster for your Planetmint node. We have some instructions for how to
   :doc:`Deploy a Kubernetes cluster on Azure <../k8s-deployment-template/template-kubernetes-azure>`.

   .. warning::

      In theory, you can deploy your Planetmint node to any Kubernetes cluster, but there can be differences
      between different Kubernetes clusters, especially if they are running different versions of Kubernetes.
      We tested this Kubernetes Deployment Template on Azure ACS in February 2018 and at that time
      ACS was deploying a **Kubernetes 1.7.7** cluster. If you can force your cluster to have that version of Kubernetes,
      then you'll increase the likelihood that everything will work.

4. Deploy your Planetmint node inside your new Kubernetes cluster.
   You will fill in the ``vars`` file,
   then you will run a script which reads that file to generate some Kubernetes config files,
   you will send those config files to your Kubernetes cluster,
   and then you will deploy all the stuff that you need to have a Planetmint node.

⟶ Proceed to :ref:`deploy your Planetmint node <kubernetes-template-deploy-a-single-planetmint-node>`.

docs/new/network-setup/network-setup.md (new file, 207 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# How to Set Up a Planetmint Network

You can set up or connect to a network once you have a single node running.
Until now, everything could be done by a node operator, by themselves.
Now the node operators, also called **Members**, must share some information
with each other, so they can form a network.

There is one special Member who helps coordinate everyone: the **Coordinator**.

## Member: Share hostname, pub_key.value and node_id

Each Planetmint node is identified by its:

* `hostname`, i.e. the node's DNS subdomain, such as `bnode.example.com`, or its IP address, such as `46.145.17.32`
* Tendermint `pub_key.value`
* Tendermint `node_id`

The Tendermint `pub_key.value` is stored
in the file `$HOME/.tendermint/config/priv_validator.json`.
That file should look like:

```json
{
  "address": "E22D4340E5A92E4A9AD7C62DA62888929B3921E9",
  "pub_key": {
    "type": "tendermint/PubKeyEd25519",
    "value": "P+aweH73Hii8RyCmNWbwPsa9o4inq3I+0fSfprVkZa0="
  },
  "last_height": "0",
  "last_round": "0",
  "last_step": 0,
  "priv_key": {
    "type": "tendermint/PrivKeyEd25519",
    "value": "AHBiZXdZhkVZoPUAiMzClxhl0VvUp7Xl3YT6GvCc93A/5rB4fvceKLxHIKY1ZvA+xr2jiKercj7R9J+mtWRlrQ=="
  }
}
```

To get your Tendermint `node_id`, run the command:

```
tendermint show_node_id
```

An example `node_id` is `9b989cd5ac65fec52652a457aed6f5fd200edc22`.

**Share your `hostname`, `pub_key.value` and `node_id` with all other Members.**

## Coordinator: Create & Share the genesis.json File

At this point the Coordinator should have received the data
from all the Members, and should combine them in the file
`$HOME/.tendermint/config/genesis.json`:

```json
{
   "genesis_time":"0001-01-01T00:00:00Z",
   "chain_id":"test-chain-la6HSr",
   "consensus_params":{
      "block_size_params":{
         "max_bytes":"22020096",
         "max_txs":"10000",
         "max_gas":"-1"
      },
      "tx_size_params":{
         "max_bytes":"10240",
         "max_gas":"-1"
      },
      "block_gossip_params":{
         "block_part_size_bytes":"65536"
      },
      "evidence_params":{
         "max_age":"100000"
      }
   },
   "validators":[
      {
         "pub_key":{
            "type":"tendermint/PubKeyEd25519",
            "value":"<Member 1 public key>"
         },
         "power":10,
         "name":"<Member 1 name>"
      },
      {
         "pub_key":{
            "type":"tendermint/PubKeyEd25519",
            "value":"<Member 2 public key>"
         },
         "power":10,
         "name":"<Member 2 name>"
      },
      {
         "...":{
         }
      },
      {
         "pub_key":{
            "type":"tendermint/PubKeyEd25519",
            "value":"<Member N public key>"
         },
         "power":10,
         "name":"<Member N name>"
      }
   ],
   "app_hash":""
}
```

**Note:** The above `consensus_params` in the `genesis.json`
are default values.

The new `genesis.json` file contains the data that describes the Network.
The key `name` is the Member's moniker; it can be any valid string,
but put something human-readable like `"Alice's Node Shop"`.

At this point, the Coordinator must share the new `genesis.json` file with all Members.

## Member: Connect to the Other Members

At this point the Member should have received the `genesis.json` file.

The Member must copy the `genesis.json` file
into their local `$HOME/.tendermint/config` directory.
Every Member now shares the same `chain_id` and `genesis_time` (used to identify the Network),
and the same list of `validators`.

Each Member must edit their `$HOME/.tendermint/config/config.toml` file
and make the following changes:

```
moniker = "Name of our node"
create_empty_blocks = false
log_level = "main:info,state:info,*:error"

persistent_peers = "<Member 1 node id>@<Member 1 hostname>:26656,\
<Member 2 node id>@<Member 2 hostname>:26656,\
<Member N node id>@<Member N hostname>:26656,"

send_rate = 102400000
recv_rate = 102400000

recheck = false
```

Note: The list of `persistent_peers` doesn't have to include all nodes
in the network.

## Member: Start Tarantool

Install Tarantool as described [in the Tarantool installation guide](https://www.tarantool.io/ru/download/os-installation/ubuntu/).

You can start it using the command `tarantool`. To make it accept connections, you have to create a listener, e.g. `box.cfg{listen=3301}`.
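
For example (a minimal sketch; `tarantool>` is the prompt of Tarantool's interactive console):

```
$ tarantool
tarantool> box.cfg{listen = 3301}
```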

## Member: Start Planetmint and Tendermint Using Monit

This section describes how to manage the Planetmint and Tendermint processes using [Monit][monit], a small open-source utility for managing and monitoring Unix processes. Planetmint and Tendermint are managed together, because if Planetmint is stopped (or crashes) and is restarted, *Tendermint won't try reconnecting to it*. (That's not a bug. It's just how Tendermint works.)

Install Monit:

```
sudo apt install monit
```

If you installed the `planetmint` Python package as above, you should have the `planetmint-monit-config` script in your `PATH` now. Run the script to build a configuration file for Monit:

```
planetmint-monit-config
```

Run Monit as a daemon, instructing it to wake up every second to check on processes:

```
monit -d 1
```

Monit will run the Planetmint and Tendermint processes and restart them when they crash. If the root `planetmint` process crashes, Monit will also restart the Tendermint process.

You can check the status by running `monit status` or `monit summary`.

By default, it will collect program logs into the `~/.planetmint-monit/logs` folder.

To learn more about Monit, use `monit -h` (help) or read [the Monit documentation][monit-manual].

Check `planetmint-monit-config -h` if you want to arrange a different folder for the logs or some of Monit's internal artifacts.

If you want to start and manage the Planetmint and Tendermint processes yourself, then look inside the file [planetmint/pkg/scripts/planetmint-monit-config](https://github.com/planetmint/planetmint/blob/master/pkg/scripts/planetmint-monit-config) to see how *it* starts Planetmint and Tendermint.

## How Others Can Access Your Node

If you followed the above instructions, then your node should be publicly accessible with Planetmint Root URL `https://hostname` or `http://hostname:9984`. That is, anyone can interact with your node using the [Planetmint HTTP API](../connecting/http-client-server-api) exposed at that address. The most common way to do that is to use one of the [Planetmint Drivers](../connecting/drivers).
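
For a quick reachability check, you can fetch the Planetmint Root endpoint (a sketch; replace `hostname` with your node's actual hostname):

```
curl http://hostname:9984
```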

[bdb:software]: https://github.com/planetmint/planetmint/
[bdb:pypi]: https://pypi.org/project/Planetmint/#history
[tendermint:releases]: https://github.com/tendermint/tendermint/releases
[monit]: https://www.mmonit.com/monit
[monit-manual]: https://mmonit.com/monit/documentation/monit.html

docs/new/network-setup/networks.md (new file, 44 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Planetmint Networks

A **Planetmint network** is a set of connected **Planetmint nodes**, managed by a **Planetmint consortium** (i.e. an organization). Those terms are defined on the [Planetmint Terminology page](https://docs.planetmint.io/en/latest/terminology.html).

## Consortium Structure & Governance

The consortium might be a company, a foundation, a cooperative, or [some other form of organization](https://en.wikipedia.org/wiki/Organizational_structure).
It must make many decisions, e.g.: How will new members be added? Who can read the stored data? What kind of data will be stored?
A governance process is required to make those decisions, so one of the first steps for any new consortium is to specify its governance process (if one doesn't already exist).
This documentation doesn't explain how to create a consortium, nor does it outline the possible governance processes.

It's worth noting that the decentralization of a Planetmint network depends,
to some extent, on the decentralization of the associated consortium. See the pages about [decentralization](https://docs.planetmint.io/en/latest/decentralized.html) and [node diversity](https://docs.planetmint.io/en/latest/diversity.html).

## DNS Records and SSL Certificates

We now describe how *we* set up the external (public-facing) DNS records for a Planetmint network. Your consortium may opt to do it differently.
There were several goals:

* Allow external users/clients to connect directly to any Planetmint node in the network (over the internet), if they want.
* Each Planetmint node operator should get an SSL certificate for their Planetmint node, so that their Planetmint node can serve the [Planetmint HTTP API](../connecting/http-client-server-api) via HTTPS. (The same certificate might also be used to serve the [WebSocket API](../connecting/websocket-event-stream-api).)
* There should be no sharing of SSL certificates among Planetmint node operators.
* Optional: Allow clients to connect to a "random" Planetmint node in the network at one particular domain (or subdomain).

### Node Operator Responsibilities

1. Register a domain (or use one that you already have) for your Planetmint node. You can use a subdomain if you like. For example, you might opt to use `abc-org73.net`, `api.dynabob8.io` or `figmentdb3.ninja`.
2. Get an SSL certificate for your domain or subdomain, and properly install it in your node (e.g. in your NGINX instance).
3. Create a DNS A Record mapping your domain or subdomain to the public IP address of your node (i.e. the one that serves the Planetmint HTTP API).

### Consortium Responsibilities

Optional: The consortium managing the Planetmint network could register a domain name and set up CNAME records mapping that domain name (or one of its subdomains) to each of the nodes in the network. For example, if the consortium registered `bdbnetwork.io`, they could set up CNAME records like the following:

* CNAME record mapping `api.bdbnetwork.io` to `abc-org73.net`
* CNAME record mapping `api.bdbnetwork.io` to `api.dynabob8.io`
* CNAME record mapping `api.bdbnetwork.io` to `figmentdb3.ninja`

docs/new/node-setup/README.md (new file, 11 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Node setup

You can use the all-in-one Docker solution, or install Tendermint, Tarantool, and Planetmint step by step. For more advanced users and for development, the second option is recommended.

docs/new/node-setup/all-in-one-planetmint.md (new file, 89 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Run Planetmint with all-in-one Docker

For those who like using Docker and wish to experiment with Planetmint in
non-production environments, we currently maintain a Planetmint all-in-one
Docker image and a
`Dockerfile-all-in-one` that can be used to build an image for `planetmint`.

This image contains all the services required for a Planetmint node, i.e.:

- Planetmint Server
- Tarantool
- Tendermint

**Note — NOT for production use:** *This is a single-node, opinionated image that is not well suited for a network deployment.*
*This image is meant to help early adopters deploy quickly; for a more standard approach, please refer to one of our deployment guides:*

- [Planetmint developer setup guides](https://docs.planetmint.io/projects/contributing/en/latest/dev-setup-coding-and-contribution-process/index.html).
- [Planetmint with Kubernetes](http://docs.planetmint.io/projects/server/en/latest/k8s-deployment-template/index.html).

## Prerequisite(s)

- [Docker](https://docs.docker.com/engine/installation/)

## Pull and Run the Image from Docker Hub

With Docker installed, you can proceed as follows.

In a terminal shell, pull the latest version of the Planetmint all-in-one Docker image and run it using:

```text
$ docker pull planetmint/planetmint:all-in-one

$ docker run \
  --detach \
  --name planetmint \
  --publish 9984:9984 \
  --publish 9985:9985 \
  --publish 3303:3303 \
  --publish 26657:26657 \
  --volume $HOME/planetmint_docker/tarantool:/var/lib/tarantool \
  --volume $HOME/planetmint_docker/tendermint:/tendermint \
  planetmint/planetmint:all-in-one
```

Let's analyze that command:

* `docker run` tells Docker to run some image
* `--detach` runs the container in the background
* `--publish 9984:9984` maps the host port `9984` to the container port `9984`
  (the Planetmint API server)
* `9985` is the port of the Planetmint WebSocket server
* `26657` is the port of the Tendermint RPC server
* `3303` is the configured port for Tarantool
* `--volume $HOME/planetmint_docker/tarantool:/var/lib/tarantool` persists the Tarantool data on the host machine;
  you can read more in the [official Docker
  documentation](https://docs.docker.com/engine/tutorials/dockervolumes)
* `--volume $HOME/planetmint_docker/tendermint:/tendermint` persists the Tendermint data.
* `planetmint/planetmint:all-in-one` is the image to use. All the options after the container name are passed on to the entrypoint inside the container.

## Verify

```text
$ docker ps | grep planetmint
```
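
You can also check that the HTTP API is up by querying the root endpoint (a sketch; this assumes the port mapping shown above and that the container has finished starting):

```text
$ curl http://localhost:9984
```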

Send your first transaction using the [Planetmint drivers](../connecting/drivers).


## Building Your Own Image

Assuming you have Docker installed, you would proceed as follows.

In a terminal shell:

```text
git clone git@github.com:planetmint/planetmint.git
cd planetmint/
```

Build the Docker image:

```text
docker build --file Dockerfile-all-in-one --tag <tag/name:latest> .
```

Now you can use your own image to run the Planetmint all-in-one container.

docs/new/node-setup/aws-setup.md (new file, 65 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Basic AWS Setup

Before you can deploy anything on AWS, you must do a few things.

## Get an AWS Account

If you don't already have an AWS account, you can [sign up for one for free at aws.amazon.com](https://aws.amazon.com/).

## Install the AWS Command-Line Interface

To install the AWS Command-Line Interface (CLI), just do:

```text
pip install awscli
```

## Create an AWS Access Key

The next thing you'll need is an AWS access key (access key ID and secret access key). If you don't have one, see [the AWS documentation about access keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys).

You should also pick a default AWS region name (e.g. `eu-central-1`). The AWS documentation has [a list of them](http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region).

Once you've got your AWS access key and you've picked a default AWS region name, go to a terminal session and enter:

```text
aws configure
```

and answer the four questions. For example:

```text
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-central-1
Default output format [None]: [Press Enter]
```

This writes two files: `~/.aws/credentials` and `~/.aws/config`. AWS tools and packages look for those files.

## Generate an RSA Key Pair for SSH

Eventually, you'll have one or more instances (virtual machines) running on AWS and you'll want to SSH to them. To do that, you need a public/private key pair. The public key will be sent to AWS, and you can tell AWS to put it in any instances you provision there. You'll keep the private key on your local workstation.

See the appendix [page about how to generate a key pair for SSH](../references/appendices/generate-key-pair-for-ssh).

## Send the Public Key to AWS

To send the public key to AWS, use the AWS Command-Line Interface:

```text
aws ec2 import-key-pair \
--key-name "<key-name>" \
--public-key-material file://~/.ssh/<key-name>.pub
```

If you're curious why there's a `file://` in front of the path to the public key, see issue [aws/aws-cli#41 on GitHub](https://github.com/aws/aws-cli/issues/41).

If you want to verify that your key pair was imported by AWS, go to [the Amazon EC2 console](https://console.aws.amazon.com/ec2/v2/home), select the region you gave above when you did `aws configure` (e.g. eu-central-1), click on **Key Pairs** in the left sidebar, and check that `<key-name>` is listed.
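
You can also verify it from the command line (a sketch using the same `<key-name>` placeholder):

```text
aws ec2 describe-key-pairs --key-names "<key-name>"
```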

docs/new/node-setup/configuration.md (new file, 354 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Configuration Settings

Every Planetmint Server configuration setting has two names: a config-file name and an environment variable name. For example, one of the settings has the config-file name `database.host` and the environment variable name `PLANETMINT_DATABASE_HOST`. Here are some more examples:

`database.port` ↔ `PLANETMINT_DATABASE_PORT`

`database.keyfile_passphrase` ↔ `PLANETMINT_DATABASE_KEYFILE_PASSPHRASE`

`server.bind` ↔ `PLANETMINT_SERVER_BIND`

The value of each setting is determined according to the following rules:

* If it's set by an environment variable, then use that value
* Otherwise, if it's set in a local config file, then use that value
* Otherwise, use the default value

The local config file is `$HOME/.planetmint` by default (a file which might not even exist), but you can tell Planetmint to use a different file by using the `-c` command-line option, e.g. `planetmint -c path/to/config_file.json start`,
or by using the `PLANETMINT_CONFIG_PATH` environment variable, e.g. `PLANETMINT_CONFIG_PATH=.my_planetmint_config planetmint start`.
Note that the `-c` command-line option will always take precedence if both the `PLANETMINT_CONFIG_PATH` environment variable and the `-c` command-line option are used.
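
For example, the following sketch starts Planetmint with a config file while overriding one setting through the environment (the file path and hostname are placeholders):

```text
PLANETMINT_DATABASE_HOST=my-tarantool-host planetmint -c path/to/config_file.json start
```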

You can read the current default values in the file [planetmint/\_\_init\_\_.py](https://github.com/planetmint/planetmint/blob/master/planetmint/__init__.py). (The link is to the latest version.)


## database.*

The settings with names of the form `database.*` are for the backend database
(currently only Tarantool). They are:

* `database.backend` can only be `tarantool`, currently.
* `database.host` is the hostname (FQDN) of the backend database.
* `database.port` is self-explanatory.
* `database.username` is a user-chosen name for the database user inside Tarantool, e.g. `planetmint`.
* `database.password` is the password that user uses to connect to the Tarantool listener.

There are two ways for Planetmint Server to authenticate itself with Tarantool (or a specific Tarantool service): no authentication, or username/password.

**No Authentication**

If you use all the default Planetmint configuration settings, then no authentication will be used.

**Username/Password Authentication**

To use username/password authentication, a Tarantool instance must already be running somewhere (maybe on another machine), it must already have spaces for use by Planetmint, and it must already have a "readWrite" user with an associated username and password.

**Default values**

```js
"database": {
    "backend": "tarantool",
    "host": "localhost",
    "port": 3301,
    "username": null,
    "password": null
}
```
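
Like the other settings below, these can also be set via environment variables, following the naming rule at the top of this page (a sketch):

```text
export PLANETMINT_DATABASE_HOST=localhost
export PLANETMINT_DATABASE_PORT=3301
```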

## server.*

`server.bind`, `server.loglevel` and `server.workers`
are settings for the [Gunicorn HTTP server](http://gunicorn.org/), which is used to serve the [HTTP client-server API](../connecting/http-client-server-api).

`server.bind` is where to bind the Gunicorn HTTP server socket. It's a string. It can be any valid value for [Gunicorn's bind setting](http://docs.gunicorn.org/en/stable/settings.html#bind). For example:

* If you want to allow IPv4 connections from anyone, on port 9984, use `0.0.0.0:9984`
* If you want to allow IPv6 connections from anyone, on port 9984, use `[::]:9984`

In a production setting, we recommend you use Gunicorn behind a reverse proxy server such as NGINX. If Gunicorn and the reverse proxy are running on the same machine, then you can use `localhost:9984` (the default value), meaning Gunicorn will talk to the reverse proxy on port 9984. The reverse proxy could then be bound to port 80 (for HTTP) or port 443 (for HTTPS), so that external clients would connect using that port. For example:

[External clients]---(port 443)---[NGINX]---(port 9984)---[Gunicorn / Planetmint Server]

If Gunicorn and the reverse proxy are running on different machines, then `server.bind` should be `hostname:9984`, where hostname is the IP address or [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name) of the reverse proxy.

There's [more information about deploying behind a reverse proxy in the Gunicorn documentation](http://docs.gunicorn.org/en/stable/deploy.html). (They call it a proxy.)

`server.loglevel` sets the log level of Gunicorn's Error log outputs. See
[Gunicorn's documentation](http://docs.gunicorn.org/en/latest/settings.html#loglevel)
for more information.

`server.workers` is [the number of worker processes](http://docs.gunicorn.org/en/stable/settings.html#workers) for handling requests. If set to `None`, the value will be (2 × cpu_count + 1). Each worker process has a single thread. The HTTP server will be able to handle `server.workers` requests simultaneously.

**Example using environment variables**

```text
export PLANETMINT_SERVER_BIND=0.0.0.0:9984
export PLANETMINT_SERVER_LOGLEVEL=debug
export PLANETMINT_SERVER_WORKERS=5
```

**Example config file snippet**

```js
"server": {
    "bind": "0.0.0.0:9984",
    "loglevel": "debug",
    "workers": 5
}
```

**Default values (from a config file)**

```js
"server": {
    "bind": "localhost:9984",
    "loglevel": "info",
    "workers": null
}
```

## wsserver.*

### wsserver.scheme, wsserver.host and wsserver.port

These settings are for the
[aiohttp server](https://aiohttp.readthedocs.io/en/stable/index.html),
which is used to serve the
[WebSocket Event Stream API](../connecting/websocket-event-stream-api).
`wsserver.scheme` should be either `"ws"` or `"wss"`
(but setting it to `"wss"` does *not* enable SSL/TLS).
`wsserver.host` is where to bind the aiohttp server socket and
`wsserver.port` is the corresponding port.
If you want to allow connections from anyone, on port 9985,
set `wsserver.host` to 0.0.0.0 and `wsserver.port` to 9985.

**Example using environment variables**

```text
export PLANETMINT_WSSERVER_SCHEME=ws
export PLANETMINT_WSSERVER_HOST=0.0.0.0
export PLANETMINT_WSSERVER_PORT=9985
```

**Example config file snippet**

```js
"wsserver": {
    "scheme": "wss",
    "host": "0.0.0.0",
    "port": 65000
}
```

**Default values (from a config file)**

```js
"wsserver": {
    "scheme": "ws",
    "host": "localhost",
    "port": 9985
}
```

### wsserver.advertised_scheme, wsserver.advertised_host and wsserver.advertised_port

These settings are for advertising the WebSocket URL to external clients in
the root API endpoint. These configurations might be useful if your deployment
is hosted behind a firewall, NAT, etc., where the exposed public IP or domain is
different from where Planetmint is running.

**Example using environment variables**

```text
export PLANETMINT_WSSERVER_ADVERTISED_SCHEME=wss
export PLANETMINT_WSSERVER_ADVERTISED_HOST=myplanetmint.io
export PLANETMINT_WSSERVER_ADVERTISED_PORT=443
```

**Example config file snippet**

```js
"wsserver": {
    "advertised_scheme": "wss",
    "advertised_host": "myplanetmint.io",
    "advertised_port": 443
}
```

**Default values (from a config file)**

```js
"wsserver": {
    "advertised_scheme": "ws",
    "advertised_host": "localhost",
    "advertised_port": 9985
}
```

## log.*

The `log.*` settings are used to configure logging.

**Example**

```js
{
    "log": {
        "file": "/var/log/planetmint.log",
        "error_file": "/var/log/planetmint-errors.log",
        "level_console": "info",
        "level_logfile": "info",
        "datefmt_console": "%Y-%m-%d %H:%M:%S",
        "datefmt_logfile": "%Y-%m-%d %H:%M:%S",
        "fmt_console": "%(asctime)s [%(levelname)s] (%(name)s) %(message)s",
        "fmt_logfile": "%(asctime)s [%(levelname)s] (%(name)s) %(message)s",
        "granular_levels": {}
    }
}
```

**Default values**

```js
{
    "log": {
        "file": "~/planetmint.log",
        "error_file": "~/planetmint-errors.log",
        "level_console": "info",
        "level_logfile": "info",
        "datefmt_console": "%Y-%m-%d %H:%M:%S",
        "datefmt_logfile": "%Y-%m-%d %H:%M:%S",
        "fmt_logfile": "[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)",
        "fmt_console": "[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)",
        "granular_levels": {}
    }
}
```

### log.file

The full path to the file where logs should be written.
The user running `planetmint` must have write access to the
specified path.

**Log rotation:** Log files have a size limit of about 200 MB
and will be rotated up to five times.
For example, if `log.file` is set to `"~/planetmint.log"`, then
logs would always be written to `planetmint.log`. Each time the file
`planetmint.log` reaches 200 MB, it will be closed and renamed
`planetmint.log.1`. If `planetmint.log.1` and `planetmint.log.2` already exist, they
would be renamed `planetmint.log.2` and `planetmint.log.3`. This pattern is
applied up to `planetmint.log.5`, after which `planetmint.log.5` is
overwritten by `planetmint.log.4`, thus ending the rotation cycle of whatever
logs were in `planetmint.log.5`.

### log.error_file

Similar to `log.file` (see above), this is the
full path to the file where error logs should be written.

### log.level_console

The log level used to log to the console. The allowed values are the ones
defined by [Python](https://docs.python.org/3.9/library/logging.html#levels),
but case-insensitive for the sake of convenience:

```text
"critical", "error", "warning", "info", "debug", "notset"
```

### log.level_logfile

The log level used to log to the log file. The allowed values are the ones
defined by [Python](https://docs.python.org/3.9/library/logging.html#levels),
but case-insensitive for the sake of convenience:

```text
"critical", "error", "warning", "info", "debug", "notset"
```

### log.datefmt_console

The format string for the date/time portion of a message, when logged to the
console.

For more information on how to construct the format string, please consult the
table under [Python's documentation of time.strftime(format[, t])](https://docs.python.org/3.9/library/time.html#time.strftime).

### log.datefmt_logfile

The format string for the date/time portion of a message, when logged to a log
file.

For more information on how to construct the format string, please consult the
table under [Python's documentation of time.strftime(format[, t])](https://docs.python.org/3.9/library/time.html#time.strftime).

### log.fmt_console

A string used to format the log messages when logged to the console.

For more information on possible formatting options, please consult Python's
documentation on
[LogRecord attributes](https://docs.python.org/3.9/library/logging.html#logrecord-attributes).

### log.fmt_logfile

A string used to format the log messages when logged to a log file.

For more information on possible formatting options, please consult Python's
documentation on
[LogRecord attributes](https://docs.python.org/3.9/library/logging.html#logrecord-attributes).

### log.granular_levels

Log levels for Planetmint's modules. This can be useful to control the log
level of specific parts of the application. As an example, if you wanted the
logging of the `core.py` module to be more verbose, you would set the
configuration shown in the example below.

**Example**

```js
{
    "log": {
        "granular_levels": {
            "planetmint.core": "debug"
        }
    }
}
```

**Default value**

```js
{}
```

## tendermint.*

The settings with names of the form `tendermint.*` tell Planetmint Server
where it can connect to the node's Tendermint instance.

* `tendermint.host` is the hostname (FQDN)/IP address of the Tendermint instance.
* `tendermint.port` is self-explanatory.

**Example using environment variables**

```text
export PLANETMINT_TENDERMINT_HOST=tendermint
export PLANETMINT_TENDERMINT_PORT=26657
```

**Default values**

```js
"tendermint": {
    "host": "localhost",
    "port": 26657
}
```

docs/new/node-setup/deploy-a-machine.md (new file, 64 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Deploy a Machine for Your Planetmint Node

The first step is to deploy a machine for your Planetmint node.
It might be a virtual machine (VM) or a real machine, for example
an EC2 instance on AWS or a droplet on DigitalOcean.
If you follow this simple deployment template, all your node's
software will run on that one machine.

We don't make any assumptions about _where_ you run the machine.
It might be in Azure, AWS, your data center or a Raspberry Pi.

## IP Addresses

The following instructions assume all the nodes
in the network (including yours) have public IP addresses.
(A Planetmint network _can_ be run inside a private network,
using private IP addresses, but we don't cover that here.)

## Operating System

**Use Ubuntu 18.04 Server or a later version as the operating system.**

Similar instructions will work on other versions of Ubuntu,
and on other recent Debian-like Linux distros,
but you may have to change the names of the packages,
or install more packages.

## Network Security Group

If your machine is in AWS or Azure, for example, _and_
you want users to connect to Planetmint via HTTPS,
then you should configure its network security group
to allow all incoming and outgoing traffic for:

* TCP on port 22 (SSH)
* TCP on port 80 (HTTP)
* TCP on port 443 (HTTPS)
* Any protocol on port 26656 (Tendermint P2P)

If you don't care about HTTPS, then forget about port 443,
and replace port 80 with port 9984 (the default Planetmint HTTP port).

## Update Your System

SSH into your machine and update all its OS-level packages:

```
sudo apt update
sudo apt full-upgrade
```

## DNS Setup

* Register a domain name for your Planetmint node, such as `example.com`
* Pick a subdomain of that domain for your Planetmint node, such as `bnode.example.com`
* Create a DNS "A Record" pointing your chosen subdomain (such as `bnode.example.com`)
  at your machine's IP address. A quick verification sketch follows this list.
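
You can check that the A record has propagated with a quick lookup (a sketch; `bnode.example.com` is the example subdomain above, and `dig` must be installed):

```
dig +short bnode.example.com
```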

docs/new/node-setup/planetmint-node-ansible.md (new file, 7 lines)

# Setting up a network of nodes with the Ansible script

You can find one of the installation methods, based on Ansible, on GitHub:

[Ansible script](https://github.com/planetmint/planetmint-node-ansible)

It allows you to install Planetmint, MongoDB, Tendermint, and Python, and then connect the nodes into a network. It is currently tested on Ubuntu 18.04.

docs/new/node-setup/production-node/README.md (new file, 8 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Production Nodes

docs/new/node-setup/production-node/index.rst (new file, 20 lines)

.. Copyright © 2020 Interplanetary Database Association e.V.,
   Planetmint and IPDB software contributors.
   SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
   Code is Apache-2.0 and docs are CC-BY-4.0

Production Nodes
================

.. include:: node-requirements.md
   :parser: myst_parser.sphinx_
.. include:: node-assumptions.md
   :parser: myst_parser.sphinx_
.. include:: node-components.md
   :parser: myst_parser.sphinx_
.. include:: node-security-and-privacy.md
   :parser: myst_parser.sphinx_
.. include:: reverse-proxy-notes.md
   :parser: myst_parser.sphinx_

docs/new/node-setup/production-node/node-assumptions.md (new file, 25 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Production Node Assumptions

Be sure you know the key Planetmint terminology:

* [Planetmint node, Planetmint network and Planetmint consortium](https://docs.planetmint.io/en/latest/terminology.html)

Note that there are a few kinds of nodes:

- A **dev/test node** is a node created by a developer working on Planetmint Server, e.g. for testing new or changed code. A dev/test node is typically run on the developer's local machine.

- A **bare-bones node** is a node deployed in the cloud, either as part of a testing network or as a starting point before upgrading the node to be production-ready.

- A **production node** is a node that is part of a consortium's Planetmint network. A production node has the most components and requirements.

We make some assumptions about production nodes:

1. Each production node is set up and managed by an experienced professional system administrator or a team of them.
1. Each production node in a network is managed by a different person or team.

docs/new/node-setup/production-node/node-components.md (new file, 28 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Production Node Components

A production Planetmint node must include:

* Planetmint Server
* Tarantool
* Tendermint
* Storage for Tarantool and Tendermint

It could also include several other components, including:

* NGINX or similar, to provide authentication, rate limiting, etc.
* An NTP daemon running on all machines running Planetmint Server or Tarantool, and possibly other machines
* Log aggregation software
* Monitoring software
* Maybe more

The relationship between the main components is illustrated below.

![Components of a node](../../_static/Node-components.png)

docs/new/node-setup/production-node/node-requirements.md (new file, 22 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Production Node Requirements

**This page is about the requirements of Planetmint Server.** You can find the requirements of Tarantool, Tendermint and other [production node components](node-components) in the documentation for that software.

## OS Requirements

Planetmint Server requires Python 3.9+, and Python 3.9+ [will run on any modern OS](https://docs.python.org/3.9/using/index.html), but we recommend using an LTS version of [Ubuntu Server](https://www.ubuntu.com/server) or a similarly server-grade Linux distribution.

_Don't use macOS_ (formerly OS X, formerly Mac OS X), because it's not a server-grade operating system. Also, Planetmint Server uses the Python multiprocessing package, and [some functionality in the multiprocessing package doesn't work on Mac OS X](https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing.Queue.qsize).

## General Considerations

Planetmint Server runs many concurrent processes, so more RAM and more CPU cores are better.

As mentioned on the page about [production node components](node-components), every machine running Planetmint Server should be running an NTP daemon.

docs/new/node-setup/production-node/node-security-and-privacy.md (new file, 18 lines)

<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Production Node Security & Privacy

Here are some references about how to secure an Ubuntu 18.04 server:

- [Ubuntu 18.04 - Ubuntu Server Guide - Security](https://help.ubuntu.com/lts/serverguide/security.html.en)
- [Ubuntu Blog: National Cyber Security Centre publish Ubuntu 18.04 LTS Security Guide](https://blog.ubuntu.com/2018/07/30/national-cyber-security-centre-publish-ubuntu-18-04-lts-security-guide)

Also, here are some recommendations a node operator can follow to enhance the privacy of the data coming to, stored on, and leaving their node:

- Ensure that all data stored on a node is encrypted at rest, e.g. using full disk encryption. This can be provided as a service by the operating system, transparently to Planetmint, Tarantool and Tendermint.
- Ensure that all data is encrypted in transit, i.e. enforce using HTTPS for the HTTP API and the WebSocket API. This can be done using NGINX or similar, as we do with the IPDB Testnet.

docs/new/node-setup/production-node/reverse-proxy-notes.md (new file, 58 lines)

|
||||
<!---
|
||||
Copyright © 2020 Interplanetary Database Association e.V.,
|
||||
Planetmint and IPDB software contributors.
|
||||
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
|
||||
Code is Apache-2.0 and docs are CC-BY-4.0
|
||||
--->
|
||||
|
||||
# Using a Reverse Proxy
|
||||
|
||||
You may want to:
|
||||
|
||||
* rate limit inbound HTTP requests,
|
||||
* authenticate/authorize inbound HTTP requests,
|
||||
* block requests with an HTTP request body that's too large, or
|
||||
* enable HTTPS (TLS) between your users and your node.
|
||||
|
||||
While we could have built all that into Planetmint Server,
|
||||
we didn't, because you can do all that (and more)
|
||||
using a reverse proxy such as NGINX or HAProxy.
|
||||
(You would put it in front of your Planetmint Server,
|
||||
so that all inbound HTTP requests would arrive
|
||||
at the reverse proxy before *maybe* being proxied
|
||||
onwards to your Planetmint Server.)
|
||||
For detailed instructions, see the documentation
|
||||
for your reverse proxy.
|
||||
|
||||
Below, we note how a reverse proxy can be used
|
||||
to do some Planetmint-specific things.
|
||||
|
||||
You may also be interested in
|
||||
[our NGINX configuration file template](https://github.com/planetmint/nginx_3scale/blob/master/nginx.conf.template)
|
||||
(open source, on GitHub).
|
||||
|
||||
|
||||
## Enforcing a Max Transaction Size
|
||||
|
||||
The Planetmint HTTP API has several endpoints,
|
||||
but only one of them, the `POST /transactions` endpoint,
|
||||
expects a non-empty HTTP request body:
|
||||
the transaction being submitted by the user.
|
||||
|
||||
If you want to enforce a maximum-allowed transaction size
|
||||
(discarding any that are larger),
|
||||
then you can do so by configuring a maximum request body size
|
||||
in your reverse proxy.
|
||||
For example, NGINX has the `client_max_body_size`
|
||||
configuration setting. You could set it to 15 kB
|
||||
with the following line in your NGINX config file:
|
||||
|
||||
```text
|
||||
client_max_body_size 15k;
|
||||
```
|
||||
|
||||
For more information, see
|
||||
[the NGINX docs about client_max_body_size](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
|
||||
|
||||
Note: By enforcing a maximum transaction size, you
|
||||
[indirectly enforce a maximum crypto-conditions complexity](https://github.com/planetmint/planetmint/issues/356#issuecomment-288085251).
43
docs/new/node-setup/set-up-nginx.md
Normal file
@ -0,0 +1,43 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Set Up NGINX

If you don't want HTTPS (for communications between the external world and your node), then you can skip all the NGINX steps on this page.

Note: This simple deployment template uses NGINX for more than just HTTPS. For example, it also does basic rate limiting.

## Install NGINX

SSH into your machine and install NGINX:

```
sudo apt update
sudo apt install nginx
```

## Configure & Reload NGINX

Get an SSL certificate for your node's subdomain (such as `bnode.example.com`).

* Copy the SSL private key into `/etc/nginx/ssl/cert.key`
* Create a "PEM file" (text file) by concatenating your SSL certificate with all intermediate certificates (_in that order, with the intermediate certs last_).
* Copy that PEM file into `/etc/nginx/ssl/cert.pem`
* In the [planetmint/planetmint repository on GitHub](https://github.com/planetmint/planetmint), find the file `nginx/nginx.conf` and copy its contents to `/etc/nginx/nginx.conf` on your machine (i.e. replace the existing file there).
* Edit that file (`/etc/nginx/nginx.conf`): replace the two instances of the string `example.testnet2.com` with your chosen subdomain (such as `bnode.example.com`).
* Reload NGINX by doing:

  ```
  sudo service nginx reload
  ```
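
For orientation, the TLS-relevant part of the resulting configuration looks roughly like the sketch below (simplified; the `nginx.conf` in the repository is the authoritative version):

```
server {
    listen 443 ssl;
    server_name bnode.example.com;

    # The certificate chain and private key copied in the steps above.
    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/cert.key;

    location / {
        # Forward decrypted traffic to Planetmint Server on this machine.
        proxy_pass http://localhost:9984;
    }
}
```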
108
docs/new/node-setup/set-up-node-software.md
Normal file
@ -0,0 +1,108 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Set Up Planetmint, Tarantool and Tendermint

We now install and configure the software that must run in every Planetmint node: Planetmint Server, Tarantool and Tendermint.

## Install Planetmint Server

Planetmint Server requires **Python 3.9+**, so make sure your system has it.

Install the required OS-level packages:

```
# For Ubuntu 18.04:
sudo apt install -y python3-pip libssl-dev
# Ubuntu 16.04 and other Linux distros may require different or additional packages
```

Planetmint Server requires [gevent](http://www.gevent.org/), and to install gevent, you must use pip 19 or later (as of 2019, because gevent now uses manylinux2010 wheels). Upgrade pip to the latest version:

```
sudo pip3 install -U pip
```

Now install the latest version of Planetmint Server. You can find the latest version by going to the [Planetmint project release history page on PyPI](https://pypi.org/project/Planetmint/#history). For example, to install version 2.2.2, you would do:

```
# Change 2.2.2 to the latest version as explained above:
sudo pip3 install planetmint==2.2.2
```

Check that you installed the correct version of Planetmint Server using `planetmint --version`.
## Configure Planetmint Server

To configure Planetmint Server, run:

```
planetmint configure
```

The first question is ``API Server bind? (default `localhost:9984`)``.

* If you're using NGINX (e.g. if you want HTTPS), then accept the default value (`localhost:9984`).
* If you're not using NGINX, then enter the value `0.0.0.0:9984`.

You can accept the default value for all other Planetmint config settings.

If you're using NGINX, then you should edit your Planetmint config file (in `$HOME/.planetmint` by default) and set the following values under `"wsserver"`:

```
"advertised_scheme": "wss",
"advertised_host": "bnode.example.com",
"advertised_port": 443
```

where `bnode.example.com` should be replaced by your node's actual subdomain.
## Install (and Start) Tarantool

Install a recent version of Tarantool. The installer script below sets up the 2.8 release series, which Planetmint Server works with:

```
curl -L https://tarantool.io/DDJLJzv/release/2.8/installer.sh | bash

sudo apt-get -y install tarantool
```
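
To verify the installation, you can ask Tarantool for its version:

```
tarantool --version
```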
## Sharding with Tarantool

If the load on a single node becomes too large, Tarantool allows for sharding to scale horizontally. For more information on how to set up sharding with Tarantool, please refer to the [official Tarantool documentation](https://www.tarantool.io/en/doc/latest/reference/reference_rock/vshard/vshard_index/).
## Install Tendermint

The version of Planetmint Server described in these docs only works well with Tendermint 0.31.5 (not a higher version number). Install that:

```
sudo apt install -y unzip
wget https://github.com/tendermint/tendermint/releases/download/v0.31.5/tendermint_v0.31.5_linux_amd64.zip
unzip tendermint_v0.31.5_linux_amd64.zip
rm tendermint_v0.31.5_linux_amd64.zip
sudo mv tendermint /usr/local/bin
```

## Start Configuring Tendermint

You won't be able to finish configuring Tendermint until you have some information from the other nodes in the network, but you can start by doing:

```
tendermint init
```
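
For reference, in the 0.31.x series `tendermint init` generates a validator key pair and a skeleton configuration under `$HOME/.tendermint`, roughly like this:

```
.tendermint/
├── config/
│   ├── config.toml              # node configuration
│   ├── genesis.json             # placeholder genesis; edited later for your network
│   ├── node_key.json            # key for the node's P2P identity
│   └── priv_validator_key.json  # validator signing key -- keep this private
└── data/
    └── priv_validator_state.json
```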
161
docs/new/using-planetmint/README.md
Normal file
@ -0,0 +1,161 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->

# Basic usage

## Transactions in Planetmint

In Planetmint, _transactions_ are used to register, issue, create or transfer things (e.g. assets).

Transactions are the most basic kind of record stored by Planetmint. There are two kinds: CREATE transactions and TRANSFER transactions.

You can view the transaction specifications on GitHub, which describe transaction components and the conditions they have to fulfill in order to be valid:

[Planetmint Transactions Specs](https://github.com/bigchaindb/BEPs/tree/master/13/)
### CREATE Transactions

A CREATE transaction can be used to register, issue, create or otherwise initiate the history of a single thing (or asset) in Planetmint. For example, one might register an identity or a creative work. The things are often called "assets" but they might not be literal assets.

Planetmint supports divisible assets as of Planetmint Server v0.8.0. That means you can create/register an asset with an initial number of "shares." For example, a CREATE transaction could register a truckload of 50 oak trees. Each share of a divisible asset must be interchangeable with each other share; the shares must be fungible.

A CREATE transaction can have one or more outputs. Each output has an associated amount: the number of shares tied to that output. For example, if the asset consists of 50 oak trees, one output might have 35 oak trees for one set of owners, and the other output might have 15 oak trees for another set of owners.

Each output also has an associated condition: the condition that must be met (by a TRANSFER transaction) to transfer/spend the output. Planetmint supports a variety of conditions. For details, see the section titled **Transaction Components: Conditions** in the relevant [Planetmint Transactions Spec](https://github.com/bigchaindb/BEPs/tree/master/13/).



Above we see a diagram of an example Planetmint CREATE transaction. It has one output: Pam owns/controls three shares of the asset and there are no other shares (because there are no other outputs).

Each output also has a list of all the public keys associated with the conditions on that output. Loosely speaking, that list might be interpreted as the list of "owners." A more accurate word might be fulfillers, signers, controllers, or transfer-enablers. See the section titled **A Note about Owners** in the relevant [Planetmint Transactions Spec](https://github.com/bigchaindb/BEPs/tree/master/13/).

A CREATE transaction must be signed by all the owners. (If you're looking for that signature, it's in the one "fulfillment" of the one input, albeit encoded.)
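
To make this concrete, here is a sketch of the diagram's CREATE transaction using the Planetmint Python Driver, assuming it follows the BigchainDB-style driver interface (`prepare`, `fulfill`, `send_commit`); the node URL and asset data are placeholders:

```
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair

pdb = Planetmint('https://bnode.example.com')  # placeholder node URL
pam = generate_keypair()

# Prepare a CREATE transaction with one output: 3 shares, all owned by Pam.
prepared = pdb.transactions.prepare(
    operation='CREATE',
    signers=pam.public_key,
    recipients=[([pam.public_key], 3)],
    asset={'data': {'description': 'three shares of an example asset'}},
)

# Sign (fulfill) it with Pam's private key, then submit it to the network.
fulfilled = pdb.transactions.fulfill(prepared, private_keys=pam.private_key)
pdb.transactions.send_commit(fulfilled)
```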
### TRANSFER Transactions

A TRANSFER transaction can transfer/spend one or more outputs on other transactions (CREATE transactions or other TRANSFER transactions). Those outputs must all be associated with the same asset; a TRANSFER transaction can only transfer shares of one asset at a time.

Each input on a TRANSFER transaction connects to one output on another transaction. Each input must satisfy the condition on the output it's trying to transfer/spend.

A TRANSFER transaction can have one or more outputs, just like a CREATE transaction (described above). The total number of shares coming in on the inputs must equal the total number of shares going out on the outputs.



Above we see a diagram of two example Planetmint transactions, a CREATE transaction and a TRANSFER transaction. The CREATE transaction is the same as in the earlier diagram. The TRANSFER transaction spends Pam's output, so the input on that TRANSFER transaction must contain a valid signature from Pam (i.e. a valid fulfillment). The TRANSFER transaction has two outputs: Jim gets one share, and Pam gets the remaining two shares.

Terminology: The "Pam, 3" output is called a "spent transaction output" and the "Jim, 1" and "Pam, 2" outputs are called "unspent transaction outputs" (UTXOs).
**Example 1:** Suppose a red car is owned and controlled by Joe. Suppose the current transfer condition on the car says that any valid transfer must be signed by Joe. Joe could build a TRANSFER transaction containing an input with Joe's signature (to fulfill the current output condition) plus a new output condition saying that any valid transfer must be signed by Rae.

**Example 2:** Someone might construct a TRANSFER transaction that fulfills the output conditions on four previously-untransferred assets of the same asset type, e.g. paperclips. The amounts might be 20, 10, 45 and 25, say, for a total of 100 paperclips. The TRANSFER transaction would also set up new transfer conditions. For example, maybe a set of 60 paperclips can only be transferred if Gertrude signs, and a separate set of 40 paperclips can only be transferred if both Jack and Kelly sign. Note how the sum of the incoming paperclips must equal the sum of the outgoing paperclips (100).
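
Continuing the sketch from the CREATE section (same assumptions about the driver API), the diagram's TRANSFER of one of Pam's three shares to Jim could look like this:

```
# `create_tx` is the fulfilled CREATE transaction from the previous sketch;
# `pam` is the same keypair used there, and Jim gets a fresh one.
jim = generate_keypair()

output_index = 0
output = create_tx['outputs'][output_index]

# The input spends the "Pam, 3" output of the CREATE transaction.
transfer_input = {
    'fulfillment': output['condition']['details'],
    'fulfills': {
        'output_index': output_index,
        'transaction_id': create_tx['id'],
    },
    'owners_before': output['public_keys'],
}

# Two outputs: 1 share to Jim, 2 shares back to Pam (amounts must balance).
prepared = pdb.transactions.prepare(
    operation='TRANSFER',
    asset={'id': create_tx['id']},
    inputs=transfer_input,
    recipients=[([jim.public_key], 1), ([pam.public_key], 2)],
)

fulfilled = pdb.transactions.fulfill(prepared, private_keys=pam.private_key)
pdb.transactions.send_commit(fulfilled)
```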
### Transaction Validity

When a node is asked to check if a transaction is valid, it checks several things. This was documented in a post on *The BigchainDB Blog* (BigchainDB is the predecessor of Planetmint): ["What is a Valid Transaction in BigchainDB?"](https://blog.bigchaindb.com/what-is-a-valid-transaction-in-planetmint-9a1a075a9598). (Note: that post was about BigchainDB Server v1.0.0.)
## A Note on IPLD marshalling and CIDs

Planetmint utilizes IPLD (InterPlanetary Linked Data) marshalling and CIDs (content identifiers) to store and verify data. Before submitting a transaction to the network, the data is marshalled using [py-ipld](https://github.com/planetmint/py-ipld), and instead of the raw data a CID is stored on chain.

A CID is a self-describing data structure: it contains information about the encoding, the cryptographic algorithm, the digest length and the actual hash value. For example, the CID `bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi` tells us the following:

```
Encoding: base32
Codec: dag-pb (MerkleDAG protobuf)
Hashing-Algorithm: sha2-256
Digest (Hex): C3C4733EC8AFFD06CF9E9FF50FFC6BCD2EC85A6170004BB709669C31DE94391A
```

With this information we can verify that the data we received about an asset actually matches the CID recorded on chain.
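
As an illustration, the example CID can be unpacked programmatically. The snippet below uses the third-party `py-cid` package, which is an assumption here (Planetmint itself does not require it):

```
import cid  # pip install py-cid

c = cid.from_string('bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi')
print(c.version)          # 1
print(c.codec)            # dag-pb
print(c.multihash.hex())  # multihash bytes: algorithm prefix + length + digest
```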
### Example Transactions

There are example Planetmint transactions in [the HTTP API documentation](./connecting/http-client-server-api) and [the Python Driver documentation](./connecting/drivers).
## Contracts & Conditions

Planetmint has been developed with simple logical gateways in mind. The logic is provided by [cryptoconditions](https://docs.planetmint.io/projects/cryptoconditions). The cryptoconditions documentation contains all the details about how conditions are defined and how they can be verified and fulfilled. The integration of these conditions into the transaction schema of Planetmint is sketched below.
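
As a sketch of that integration, a transaction output carrying a simple ED25519 crypto-condition has roughly the shape below (field values abbreviated; see the transaction specs linked above for the authoritative schema):

```
"outputs": [
  {
    "public_keys": ["...Pam's public key..."],
    "condition": {
      "details": {
        "type": "ed25519-sha-256",
        "public_key": "...Pam's public key..."
      },
      "uri": "ni:///sha-256;...?fpt=ed25519-sha-256&cost=131072"
    },
    "amount": "3"
  }
]
```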
## Zenroom Smart Contracts and Policies

[Zenroom](https://zenroom.org/) was integrated into [cryptoconditions](https://docs.planetmint.io/projects/cryptoconditions) to allow for human-readable conditions and fulfillments. At the moment these contracts can only be stateless, which implies that the conditions and fulfillments need to be transacted in the same transaction. However, [PRP-10](https://github.com/planetmint/PRPs/tree/main/10) aims to make stateful contracts possible, which would enable asynchronous and party-independent processing of contracts.

As for network-wide or asset-based policies, [PRP-11](https://github.com/planetmint/PRPs/tree/main/11) specifies how these can be implemented and how they can be used to verify a transaction's state before it is committed to the network.