Mirror of https://github.com/bigchaindb/bigchaindb.git
Single node/cluster bootstrap and node addition workflow in k8s (#1278)
* Combining configs
* Combining the persistent volume claims into a single file.
* Combining the storage classes into a single file.
* Updating documentation
* Multiple changes
* Support for ConfigMap
* Custom MongoDB container for BigchainDB
* Update documentation to run a single node on k8s
* Additional documentation
* Documentation to add a node to an existing BigchainDB cluster
* Commit on rolling upgrades
* Fixing minor documentation mistakes
* Documentation updates as per @ttmc's comments
* Block formatting error
* Change in ConfigMap yaml config
parent a9f0f9de54 · commit ea6ce5c1a1
@@ -5,7 +5,7 @@ There is some specialized terminology associated with BigchainDB. To get started

 ## Node

-A **BigchainDB node** is a machine or set of closely-linked machines running RethinkDB Server, BigchainDB Server, and related software. (A "machine" might be a bare-metal server, a virtual machine or a container.) Each node is controlled by one person or organization.
+A **BigchainDB node** is a machine or set of closely-linked machines running RethinkDB/MongoDB Server, BigchainDB Server, and related software. (A "machine" might be a bare-metal server, a virtual machine or a container.) Each node is controlled by one person or organization.

 ## Cluster

@@ -19,4 +19,4 @@ The people and organizations that run the nodes in a cluster belong to a **federation**

 **What's the Difference Between a Cluster and a Federation?**

 A cluster is just a bunch of connected nodes. A federation is an organization which has a cluster, and where each node in the cluster has a different operator. Confusingly, we sometimes call a federation's cluster its "federation." You can probably tell what we mean from context.
@@ -0,0 +1,168 @@ (new file)

Add a BigchainDB Node in a Kubernetes Cluster
=============================================

**Refer to this document if you want to add a new BigchainDB node to an
existing cluster.**

**If you want to start the first BigchainDB node in a BigchainDB cluster,
refer to** :doc:`this document <node-on-kubernetes>` **instead.**


Terminology Used
----------------

``existing cluster`` will refer to the existing Kubernetes cluster (or any one
of the existing clusters) that already hosts a BigchainDB instance with a
MongoDB backend.

``ctx-1`` will refer to the kubectl context of the existing cluster.

``new cluster`` will refer to the new Kubernetes cluster that will run a new
BigchainDB instance with a MongoDB backend.

``ctx-2`` will refer to the kubectl context of the new cluster.

``new MongoDB instance`` will refer to the MongoDB instance in the new cluster.

``existing MongoDB instance`` will refer to the MongoDB instance in the
existing cluster.

``new BigchainDB instance`` will refer to the BigchainDB instance in the new
cluster.

``existing BigchainDB instance`` will refer to the BigchainDB instance in the
existing cluster.
Step 1: Prerequisites
---------------------

* You will need a public and private key for the new BigchainDB instance you
  will set up.

* The public key should be shared offline with the other existing BigchainDB
  instances. The means to achieve this is beyond the scope of this document.

* You will need the public keys of all the existing BigchainDB instances. The
  means to achieve this is beyond the scope of this document.

* A new Kubernetes cluster set up, with kubectl configured to access it.
  If you are using Kubernetes on Azure Container Service (ACS), please refer
  to our documentation :doc:`here <template-kubernetes-azure>` for the setup.

If you haven't read our guide to set up a
:doc:`node on Kubernetes <node-on-kubernetes>`, now is a good time to jump in
there and then come back here, as these instructions build on it.


NOTE: If you are managing multiple Kubernetes clusters from your local
system, you can run ``kubectl config view`` to list all the contexts that
are available to the local kubectl.
To target a specific cluster, add a ``--context`` flag to the kubectl CLI. For
example:

.. code:: bash

   $ kubectl --context ctx-1 apply -f example.yaml
   $ kubectl --context ctx-2 apply -f example.yaml
   $ kubectl --context ctx-1 proxy --port 8001
   $ kubectl --context ctx-2 proxy --port 8002
Step 2: Prepare the New Kubernetes Cluster
------------------------------------------

Follow the steps in the sections below to set up Storage Classes and
Persistent Volume Claims, and to run MongoDB in the new cluster (a sketch of
the corresponding commands follows this list):

1. :ref:`Add Storage Classes <Step 3: Create Storage Classes>`
2. :ref:`Add Persistent Volume Claims <Step 4: Create Persistent Volume Claims>`
3. :ref:`Create the Config Map <Step 5: Create the Config Map - Optional>`
4. :ref:`Run MongoDB instance <Step 6: Run MongoDB as a StatefulSet>`
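These steps boil down to re-running, against the new cluster's context, the
same commands used in the bootstrap guide. A minimal sketch, assuming the
file names used in that guide:

.. code:: bash

   $ kubectl --context ctx-2 apply -f mongo-sc.yaml
   $ kubectl --context ctx-2 apply -f mongo-pvc.yaml
   $ kubectl --context ctx-2 apply -f mongo-cm.yaml
   $ kubectl --context ctx-2 apply -f mongo-ss.yaml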
Step 3: Add the New MongoDB Instance to the Existing Replica Set
----------------------------------------------------------------

Note that by ``replica set`` we are referring to the MongoDB replica set, and not
to Kubernetes' ``ReplicaSet``.

If you are not the administrator of an existing MongoDB/BigchainDB instance, you
will have to coordinate offline with an existing administrator so that s/he can
add the new MongoDB instance to the replica set. The means to achieve this is
beyond the scope of this document.

Add the new MongoDB instance from an existing instance by accessing the
``mongo`` shell:

.. code:: bash

   $ kubectl --context ctx-1 exec -it mdb-0 -c mongodb -- /bin/bash
   root@mdb-0# mongo --port 27017

We can only add members to a replica set from the ``PRIMARY`` instance.
The ``mongo`` shell prompt should state that this is the primary member in the
replica set.
If not, you can use the ``rs.status()`` command to find out which member is the
primary, and log in to the ``mongo`` shell there.

Run the ``rs.add()`` command with the FQDN and port number of the new MongoDB
instance:

.. code:: bash

   PRIMARY> rs.add("<fqdn>:<port>")
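For example, if the new MongoDB instance were reachable at the (hypothetical)
FQDN ``mdb-instance-1.westeurope.cloudapp.azure.com``, following the naming
scheme used elsewhere in this guide, the command might look like:

.. code:: bash

   PRIMARY> rs.add("mdb-instance-1.westeurope.cloudapp.azure.com:27017")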
Step 4: Verify the Replica Set Membership
-----------------------------------------

You can use the ``rs.conf()`` and the ``rs.status()`` commands available in the
``mongo`` shell to verify the replica set membership.

The new MongoDB instance should be listed in the membership information
displayed.
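For a quick check, one option is to print just the name and state of each
member using the standard ``mongo`` shell JavaScript API (a sketch; the prompt
assumes the replica set is named ``bigchain-rs``):

.. code:: bash

   bigchain-rs:PRIMARY> rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })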
Step 5: Start the New BigchainDB Instance
-----------------------------------------

Get the file ``bigchaindb-dep.yaml`` from GitHub using:

.. code:: bash

   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml

Note that we set ``BIGCHAINDB_DATABASE_HOST`` to ``mdb``, which is the name
of the MongoDB service defined earlier.

Set ``BIGCHAINDB_KEYPAIR_PUBLIC`` to the public key of this instance,
``BIGCHAINDB_KEYPAIR_PRIVATE`` to the private key of this instance, and
``BIGCHAINDB_KEYRING`` to a ``:``-delimited list of all the public keys
in the BigchainDB cluster.
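A minimal sketch of what the edited ``env`` section in ``bigchaindb-dep.yaml``
might look like (the key values here are placeholders, not real keys):

.. code:: yaml

   env:
   - name: BIGCHAINDB_DATABASE_HOST
     value: mdb
   - name: BIGCHAINDB_KEYPAIR_PUBLIC
     value: "<public key of the new instance>"
   - name: BIGCHAINDB_KEYPAIR_PRIVATE
     value: "<private key of the new instance>"
   - name: BIGCHAINDB_KEYRING
     value: "<public key 1>:<public key 2>:<public key 3>"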
Create the required Deployment using:

.. code:: bash

   $ kubectl --context ctx-2 apply -f bigchaindb-dep.yaml

You can check its status using the command ``kubectl get deploy -w``.
Step 6: Restart the Existing BigchainDB Instance(s)
---------------------------------------------------

Add the public key of the new BigchainDB instance to the keyring of all the
existing instances and update the BigchainDB instances using:

.. code:: bash

   $ kubectl --context ctx-1 replace -f bigchaindb-dep.yaml

This will trigger a rolling update in Kubernetes, where a new instance of
BigchainDB will be created and, if the health check on the new instance is
successful, the earlier one will be terminated. This ensures that there is
zero downtime during updates.

You can log in to an existing BigchainDB instance and run the ``bigchaindb
show-config`` command to see the configuration update to the keyring.
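For example, a sketch of such a check (the pod name is a placeholder; use
``kubectl get pods`` to find the actual name):

.. code:: bash

   $ kubectl --context ctx-1 exec -it <bigchaindb pod name> -- bigchaindb show-config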
@@ -15,4 +15,4 @@ If you find the cloud deployment templates for nodes helpful, then you may also

    azure-quickstart-template
    template-kubernetes-azure
    node-on-kubernetes
+   add-node-on-kubernetes
@@ -1,6 +1,12 @@

-Run a BigchainDB Node in a Kubernetes Cluster
-=============================================
+Bootstrap a BigchainDB Node in a Kubernetes Cluster
+===================================================

+**Refer to this document if you are starting your first BigchainDB instance
+in a BigchainDB cluster or starting a stand-alone BigchainDB instance.**

+**If you want to add a new BigchainDB node to an existing cluster, refer to**
+:doc:`this document <add-node-on-kubernetes>`.

 Assuming you already have a `Kubernetes <https://kubernetes.io/>`_
 cluster up and running, this page describes how to run a
 BigchainDB node in it.
@@ -90,24 +96,21 @@ For future reference, the command to create a storage account is
 `az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.

-Get the files ``mongo-data-db-sc.yaml`` and ``mongo-data-configdb-sc.yaml``
-from GitHub using:
+Get the file ``mongo-sc.yaml`` from GitHub using:

 .. code:: bash

-   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-db-sc.yaml
-   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-configdb-sc.yaml
+   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-sc.yaml

 You may want to update the ``parameters.location`` field in both the files to
 specify the location you are using in Azure.

-Create the required StorageClass using
+Create the required storage classes using

 .. code:: bash

-   $ kubectl apply -f mongo-data-db-sc.yaml
-   $ kubectl apply -f mongo-data-configdb-sc.yaml
+   $ kubectl apply -f mongo-sc.yaml

 You can check if it worked using ``kubectl get storageclasses``.
@@ -128,13 +131,11 @@ Step 4: Create Persistent Volume Claims

 Next, we'll create two PersistentVolumeClaim objects ``mongo-db-claim`` and
 ``mongo-configdb-claim``.

-Get the files ``mongo-data-db-sc.yaml`` and ``mongo-data-configdb-sc.yaml``
-from GitHub using:
+Get the file ``mongo-pvc.yaml`` from GitHub using:

 .. code:: bash

-   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-db-pvc.yaml
-   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-configdb-pvc.yaml
+   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-pvc.yaml

 Note how there's no explicit mention of Azure, AWS or whatever.
 ``ReadWriteOnce`` (RWO) means the volume can be mounted as
@@ -147,12 +148,11 @@ by AzureDisk.)

 You may want to update the ``spec.resources.requests.storage`` field in both
 the files to specify a different disk size.

-Create the required PersistentVolumeClaim using:
+Create the required Persistent Volume Claims using:

 .. code:: bash

-   $ kubectl apply -f mongo-data-db-pvc.yaml
-   $ kubectl apply -f mongo-data-configdb-pvc.yaml
+   $ kubectl apply -f mongo-pvc.yaml

 You can check its status using: ``kubectl get pvc -w``
@@ -161,9 +161,81 @@ Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.


Step 5: Create the Config Map - Optional
----------------------------------------

This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_; else you can
skip to the :ref:`next step <Step 6: Run MongoDB as a StatefulSet>`.

MongoDB reads the local ``/etc/hosts`` file while bootstrapping a replica set
to resolve the hostname provided to the ``rs.initiate()`` command. It needs to
verify that the replica set is being initialized on the same instance where
the MongoDB instance is running.

To achieve this, we create a ConfigMap with the FQDN of the MongoDB instance
and populate the ``/etc/hosts`` file with this value so that a replica set can
be created seamlessly.

Get the file ``mongo-cm.yaml`` from GitHub using:

.. code:: bash

   $ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-cm.yaml

You may want to update the ``data.fqdn`` field in the file before creating the
ConfigMap. The ``data.fqdn`` field will be the DNS name of your MongoDB
instance. This will be used by other MongoDB instances when forming a MongoDB
replica set. It should resolve to the MongoDB instance in your cluster when
you are done with the setup. This will help when we are adding more MongoDB
instances to the replica set in the future.
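For reference, this is what the ConfigMap defined in ``mongo-cm.yaml`` (added
in this commit) looks like:

.. code:: yaml

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: mdb-fqdn
     namespace: default
   data:
     fqdn: mdb-instance-0.westeurope.cloudapp.azure.com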
For ACS
^^^^^^^

In Kubernetes on ACS, the name you populate in the ``data.fqdn`` field
will be used to configure a DNS name for the public IP assigned to the
Kubernetes Service that is the frontend for the MongoDB instance.

We suggest using a name that will already be available in Azure.
We use ``mdb-instance-0``, ``mdb-instance-1`` and so on in this document,
which gives us ``mdb-instance-0.<azure location>.cloudapp.azure.com``,
``mdb-instance-1.<azure location>.cloudapp.azure.com``, etc. as the FQDNs.
The ``<azure location>`` is the Azure datacenter location you are using,
which can also be obtained using the ``az account list-locations`` command.
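For example, one way to list just the location names (a sketch using the
standard ``az`` output options):

.. code:: bash

   $ az account list-locations --query "[].name" --output tsv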
You can also try to assign a name to a Public IP in Azure before starting
the process, or use ``nslookup`` with the name you have in mind to check
if it's available for use.

In the rare case that the name in the ``data.fqdn`` field is not available,
we will need to create a ConfigMap with a unique name and restart the
MongoDB instance.

For Kubernetes on bare-metal or other cloud providers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In other environments, you need to provide the name resolution function
by other means (using DNS providers like GoDaddy, CloudFlare or your own
private DNS server). The DNS setup for other environments is currently
beyond the scope of this document.
Create the required ConfigMap using:

.. code:: bash

   $ kubectl apply -f mongo-cm.yaml

You can check its status using: ``kubectl get cm``


Now we are ready to run MongoDB and BigchainDB on our Kubernetes cluster.

-Step 5: Run MongoDB as a StatefulSet
+Step 6: Run MongoDB as a StatefulSet
 ------------------------------------

 Get the file ``mongo-ss.yaml`` from GitHub using:
@@ -188,7 +260,7 @@ To avoid this, we use the Docker feature of ``--cap-add=FOWNER``.
 This bypasses the uid and gid permission checks during writes and allows data
 to be persisted to disk.
 Refer to the
-`Docker doc <https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
+`Docker docs <https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
 for details.

 As we gain more experience running MongoDB in testing and production, we will
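In the StatefulSet, this capability shows up as a ``securityContext`` entry on
the MongoDB container (as seen in the ``mongo-ss.yaml`` changes later in this
commit):

.. code:: yaml

   securityContext:
     capabilities:
       add:
       - FOWNER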
@@ -205,8 +277,91 @@ Create the required StatefulSet using:

 You can check its status using the commands ``kubectl get statefulsets -w``
 and ``kubectl get svc -w``

-Step 6: Run BigchainDB as a Deployment

+You may have to wait up to 10 minutes for the disk to be created and attached
+on the first run. The pod can fail several times with a message saying that
+the timeout for mounting the disk was exceeded.
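If that happens, one way to keep an eye on the pod (a sketch; ``mdb-0`` is the
pod name used in this guide):

.. code:: bash

   $ kubectl get pods -w
   $ kubectl describe pod mdb-0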
Step 7: Initialize a MongoDB Replica Set - Optional
---------------------------------------------------

This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_; else you can
skip to :ref:`step 9 <Step 9: Run BigchainDB as a Deployment>`.


Log in to the running MongoDB instance and access the ``mongo`` shell using:

.. code:: bash

   $ kubectl exec -it mdb-0 -c mongodb -- /bin/bash
   root@mdb-0:/# mongo --port 27017
We initialize the replica set by using the ``rs.initiate()`` command from the
``mongo`` shell. Its syntax is:

.. code:: bash

   rs.initiate({
     _id : "<replica-set-name>",
     members: [ {
       _id : 0,
       host : "<fqdn of this instance>:<port number>"
     } ]
   })
An example command might look like:

.. code:: bash

   > rs.initiate({ _id : "bigchain-rs", members: [ { _id : 0, host : "mdb-instance-0.westeurope.cloudapp.azure.com:27017" } ] })

where ``mdb-instance-0.westeurope.cloudapp.azure.com`` is the value stored in
the ``data.fqdn`` field in the ConfigMap created using ``mongo-cm.yaml``.

You should see the ``mongo`` shell prompt change from ``>``
to ``bigchain-rs:OTHER>``, then to ``bigchain-rs:SECONDARY>``, and finally
to ``bigchain-rs:PRIMARY>``.

You can use the ``rs.conf()`` and the ``rs.status()`` commands to check the
detailed replica set configuration now.
Step 8: Create a DNS Record - Optional
--------------------------------------

This step is required only if you are planning to set up multiple
`BigchainDB nodes
<https://docs.bigchaindb.com/en/latest/terminology.html#node>`_; else you can
skip to the :ref:`next step <Step 9: Run BigchainDB as a Deployment>`.

Since we currently rely on Azure to provide us with a public IP and manage the
DNS entries of MongoDB instances, we detail only the steps required for ACS
here.

Select the current Azure resource group and look for the ``Public IP``
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the MongoDB instance. You may have to ``Refresh`` the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.

Select the ``Public IP`` resource that is attached to your service (it should
have the Kubernetes cluster name along with a random string),
select ``Configuration``, add the DNS name that was added in the
ConfigMap earlier, click ``Save``, and wait for the changes to be applied.

To verify the DNS setting is operational, you can run ``nslookup <dns
name added in ConfigMap>`` from your local Linux shell.
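For example, with the FQDN used earlier in this guide:

.. code:: bash

   $ nslookup mdb-instance-0.westeurope.cloudapp.azure.com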
This will ensure that when you scale the replica set later, other MongoDB
members in the replica set can reach this instance.
Step 9: Run BigchainDB as a Deployment
--------------------------------------

Get the file ``bigchaindb-dep.yaml`` from GitHub using:
@@ -239,23 +394,23 @@ Create the required Deployment using:

 You can check its status using the command ``kubectl get deploy -w``

-Step 7: Verify the BigchainDB Node Setup
-----------------------------------------
+Step 10: Verify the BigchainDB Node Setup
+-----------------------------------------

-Step 7.1: Testing Externally
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Step 10.1: Testing Externally
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Try to access the ``<dns/ip of your exposed service endpoint>:9984`` on your
-browser. You must receive a json output that shows the BigchainDB server
-version among other things.
+Try to access ``<dns/ip of your exposed bigchaindb service endpoint>:9984``
+in your browser. You should receive JSON output that shows the BigchainDB
+server version, among other things.

-Try to access the ``<dns/ip of your exposed service endpoint>:27017`` on your
-browser. You must receive a message from MongoDB stating that it doesn't allow
-HTTP connections to the port anymore.
+Try to access ``<dns/ip of your exposed mongodb service endpoint>:27017``
+in your browser. You should receive a message from MongoDB stating that it
+doesn't allow HTTP connections to the port anymore.

-Step 7.2: Testing Internally
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Step 10.2: Testing Internally
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Run a container that provides utilities like ``nslookup``, ``curl`` and ``dig``
 on the cluster and query the internal DNS and IP endpoints.

@@ -270,7 +425,7 @@ Now we can query for the ``mdb`` and ``bdb`` service details.

 .. code:: bash

    $ nslookup mdb
-   $ dig +noall +answer _mdb_port._tcp.mdb.default.svc.cluster.local SRV
+   $ dig +noall +answer _mdb-port._tcp.mdb.default.svc.cluster.local SRV
    $ curl -X GET http://mdb:27017
    $ curl -X GET http://bdb:9984
k8s/deprecated.to.del/mongo-statefulset.yaml (new file, 57 lines)
@@ -0,0 +1,57 @@
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongodb
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 3
  template:
    metadata:
      labels:
        role: mongodb
        environment: staging
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.4.1
        command:
        - mongod
        - "--replSet"
        - bigchain-rs
        #- "--smallfiles"
        #- "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=staging"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
k8s/mongodb/container/Dockerfile (new file, 12 lines)
@@ -0,0 +1,12 @@
FROM mongo:3.4.2
LABEL maintainer "dev@bigchaindb.com"
WORKDIR /
RUN apt-get update \
    && apt-get -y upgrade \
    && apt-get autoremove \
    && apt-get clean
COPY mongod.conf.template /etc/mongod.conf.template
COPY mongod_entrypoint/mongod_entrypoint /
VOLUME /data/db /data/configdb
EXPOSE 27017
ENTRYPOINT ["/mongod_entrypoint"]
k8s/mongodb/container/Makefile (new file, 51 lines)
@@ -0,0 +1,51 @@
# Targets:
#   all:    Cleans, formats src files, builds the code, builds the docker image
#   clean:  Removes the binary and docker image
#   format: Formats the src files
#   build:  Builds the code
#   docker: Builds the code and docker image
#   push:   Push the docker image to Docker hub

GOCMD=go
GOVET=$(GOCMD) tool vet
GOINSTALL=$(GOCMD) install
GOFMT=gofmt -s -w

DOCKER_IMAGE_NAME?=bigchaindb/mongodb
DOCKER_IMAGE_TAG?=latest

PWD=$(shell pwd)
BINARY_PATH=$(PWD)/mongod_entrypoint/
BINARY_NAME=mongod_entrypoint
MAIN_FILE = $(BINARY_PATH)/mongod_entrypoint.go
SRC_FILES = $(BINARY_PATH)/mongod_entrypoint.go

.PHONY: all

all: clean build docker

clean:
	@echo "removing any pre-built binary";
	-@rm $(BINARY_PATH)/$(BINARY_NAME);
	@echo "remove any pre-built docker image";
	-@docker rmi $(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG);

format:
	$(GOFMT) $(SRC_FILES)

build: format
	$(shell cd $(BINARY_PATH) && \
	export GOPATH="$(BINARY_PATH)" && \
	export GOBIN="$(BINARY_PATH)" && \
	CGO_ENABLED=0 GOOS=linux $(GOINSTALL) -ldflags "-s" -a -installsuffix cgo $(MAIN_FILE))

docker: build
	docker build \
	-t $(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG) .;

vet:
	$(GOVET) .

push:
	docker push \
	$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG);
k8s/mongodb/container/README.md (new file, 88 lines)
@@ -0,0 +1,88 @@
## Custom MongoDB container for BigchainDB Backend

### Need

* MongoDB needs the hostname provided in the rs.initiate() command to be
  resolvable through the hosts file locally.
* In the future, with the introduction of TLS for inter-cluster MongoDB
  communications, we will need a way to specify detailed configuration.
* We also need a way to overwrite certain parameters to suit our use case.


### Step 1: Build the Latest Container

Run `make` from the root of this project.


### Step 2: Run the Container

```
docker run \
  --name=mdb1 \
  --publish=17017:17017 \
  --rm=true \
  bigchaindb/mongodb \
  --replica-set-name <replica set name> \
  --fqdn <fully qualified domain name of this instance> \
  --port <mongod port number for external connections>
```
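For example (hypothetical values; note that `mongod_entrypoint.go` below also
validates an `--ip` flag carrying the container's IPv4 address):

```
docker run \
  --name=mdb1 \
  --publish=27017:27017 \
  --rm=true \
  bigchaindb/mongodb \
  --replica-set-name test-repl-set \
  --fqdn mdb-instance-0.westeurope.cloudapp.azure.com \
  --port 27017 \
  --ip 172.17.0.2
```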
### Step 3: Initialize the Replica Set

Log in to one of the MongoDB containers, say mdb1:

`docker exec -it mdb1 bash`

Start the `mongo` shell:

`mongo --port 27017`


Run the rs.initiate() command:

```
rs.initiate({
  _id : "<replica-set-name>", members: [
  {
    _id : 0,
    host : "<fqdn of this instance>:<port number>"
  } ]
})
```

For example:

```
rs.initiate({ _id : "test-repl-set", members: [ { _id : 0, host :
"mdb-instance-0.westeurope.cloudapp.azure.com:27017" } ] })
```

You should also see the mongo shell prompt change from `>` to
`test-repl-set:OTHER>`, then to `test-repl-set:SECONDARY>`, and finally to
`test-repl-set:PRIMARY>`.
If this instance is not the primary, you can use the `rs.status()` command to
find out which one is.


### Step 4: Add Members to the Replica Set

We can only add members to a replica set from the PRIMARY instance.
Log in to the PRIMARY and open a `mongo` shell.

Run the rs.add() command with the FQDN and port number of the other
containers/instances:

```
rs.add("<fqdn>:<port>")
```

For example:

Add mdb2 to the replica set from mdb1:

```
rs.add("bdb-cluster-1.northeurope.cloudapp.azure.com:27017")
```

Add mdb3 to the replica set from mdb1:

```
rs.add("bdb-cluster-2.northeurope.cloudapp.azure.com:27017")
```
k8s/mongodb/container/mongod.conf.template (new file, 89 lines)
@@ -0,0 +1,89 @@
# mongod.conf

# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  verbosity: 0
  #TODO traceAllExceptions: true
  timeStampFormat: iso8601-utc
  component:
    accessControl:
      verbosity: 0
    command:
      verbosity: 0
    control:
      verbosity: 0
    ftdc:
      verbosity: 0
    geo:
      verbosity: 0
    index:
      verbosity: 0
    network:
      verbosity: 0
    query:
      verbosity: 0
    replication:
      verbosity: 0
    sharding:
      verbosity: 0
    storage:
      verbosity: 0
      journal:
        verbosity: 0
    write:
      verbosity: 0

processManagement:
  fork: false
  pidFilePath: /tmp/mongod.pid

net:
  port: PORT
  bindIp: 0.0.0.0
  maxIncomingConnections: 8192
  wireObjectCheck: false
  unixDomainSocket:
    enabled: false
    pathPrefix: /tmp
    filePermissions: 0700
  http:
    enabled: false
  compression:
    compressors: snappy
  #ssl: TODO

#security: TODO

#setParameter:
  #notablescan: 1 TODO
  #logUserIds: 1 TODO

storage:
  dbPath: /data/db
  indexBuildRetry: true
  journal:
    enabled: true
    commitIntervalMs: 100
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      journalCompressor: snappy
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true # TODO false may affect performance?

operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100

replication:
  replSetName: REPLICA_SET_NAME
  enableMajorityReadConcern: true

#sharding:
k8s/mongodb/container/mongod_entrypoint/mongod_entrypoint.go (new file, 154 lines)
@@ -0,0 +1,154 @@
package main

import (
	"bytes"
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"net"
	"os"
	"regexp"
	"syscall"
)

const (
	mongoConfFilePath         string = "/etc/mongod.conf"
	mongoConfTemplateFilePath string = "/etc/mongod.conf.template"
	hostsFilePath             string = "/etc/hosts"
)

var (
	// Use the same entrypoint as the mongo:3.4.2 image; just supply it with
	// the mongod conf file with custom params
	mongoStartCmd = []string{"/entrypoint.sh", "mongod", "--config",
		mongoConfFilePath}
)

// context struct stores the user input and the constraints for the specified
// input. It also stores the keyword that needs to be replaced in the template
// files.
type context struct {
	cliInput        string
	templateKeyword string
	regex           string
}

// sanity function takes the pre-defined constraints and the user inputs as
// arguments and validates user input based on regex matching
func sanity(input map[string]*context, fqdn, ip string) error {
	var format *regexp.Regexp
	for _, ctx := range input {
		format = regexp.MustCompile(ctx.regex)
		if !format.MatchString(ctx.cliInput) {
			return fmt.Errorf(
				"Invalid value: '%s' for '%s'. Can be '%s'",
				ctx.cliInput,
				ctx.templateKeyword,
				ctx.regex)
		}
	}

	format = regexp.MustCompile(`[a-z0-9-.]+`)
	if !format.MatchString(fqdn) {
		return fmt.Errorf(
			"Invalid value: '%s' for FQDN. Can be '%s'",
			fqdn,
			format)
	}

	if net.ParseIP(ip) == nil {
		return fmt.Errorf(
			"Invalid value: '%s' for IPv4. Can be a.b.c.d",
			ip)
	}

	return nil
}

// createFile function takes the pre-defined keywords, user inputs, the
// template file path and the new file path location as parameters, and
// creates a new file at file path with all the keywords replaced by inputs.
func createFile(input map[string]*context,
	template string, conf string) error {
	// read the template
	contents, err := ioutil.ReadFile(template)
	if err != nil {
		return err
	}
	// replace
	for _, ctx := range input {
		contents = bytes.Replace(contents, []byte(ctx.templateKeyword),
			[]byte(ctx.cliInput), -1)
	}
	// write
	err = ioutil.WriteFile(conf, contents, 0644)
	if err != nil {
		return err
	}
	return nil
}

// updateHostsFile takes the FQDN supplied as input to the container and adds
// an entry to /etc/hosts
func updateHostsFile(ip, fqdn string) error {
	fileHandle, err := os.OpenFile(hostsFilePath, os.O_APPEND|os.O_WRONLY,
		os.ModeAppend)
	if err != nil {
		return err
	}
	defer fileHandle.Close()
	// append
	_, err = fileHandle.WriteString(fmt.Sprintf("\n%s %s\n", ip, fqdn))
	if err != nil {
		return err
	}
	return nil
}

func main() {
	var fqdn, ip string
	input := make(map[string]*context)

	input["replica-set-name"] = &context{}
	input["replica-set-name"].regex = `[a-z]+`
	input["replica-set-name"].templateKeyword = "REPLICA_SET_NAME"
	flag.StringVar(&input["replica-set-name"].cliInput,
		"replica-set-name",
		"",
		"replica set name")

	input["port"] = &context{}
	input["port"].regex = `[0-9]{4,5}`
	input["port"].templateKeyword = "PORT"
	flag.StringVar(&input["port"].cliInput,
		"port",
		"",
		"mongodb port number")

	flag.StringVar(&fqdn, "fqdn", "", "FQDN of the MongoDB instance")
	flag.StringVar(&ip, "ip", "", "IPv4 address of the container")

	flag.Parse()
	err := sanity(input, fqdn, ip)
	if err != nil {
		log.Fatal(err)
	}

	err = createFile(input, mongoConfTemplateFilePath, mongoConfFilePath)
	if err != nil {
		log.Fatal(err)
	}

	err = updateHostsFile(ip, fqdn)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Starting Mongod....")
	err = syscall.Exec(mongoStartCmd[0], mongoStartCmd[0:], os.Environ())
	if err != nil {
		panic(err)
	}
}
k8s/mongodb/mongo-cm.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
######################################################################
# This YAML file describes a ConfigMap with the FQDN of the mongo    #
# instance to be started. MongoDB instance uses the value from this  #
# ConfigMap to bootstrap itself during startup.                      #
######################################################################

apiVersion: v1
kind: ConfigMap
metadata:
  name: mdb-fqdn
  namespace: default
data:
  fqdn: mdb-instance-0.westeurope.cloudapp.azure.com
@@ -1,18 +0,0 @@ (deleted file)
###########################################################
# This YAML file describes a k8s pvc for mongodb configDB #
###########################################################

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-configdb-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-configdb
spec:
  accessModes:
    - ReadWriteOnce
  # FIXME(Uncomment when ACS supports this!)
  # persistentVolumeReclaimPolicy: Retain
  resources:
    requests:
      storage: 20Gi
@@ -1,12 +0,0 @@ (deleted file)
####################################################################
# This YAML file describes a StorageClass for the mongodb configDB #
####################################################################

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: westeurope
@@ -1,18 +0,0 @@ (deleted file)
#########################################################
# This YAML file describes a k8s pvc for mongodb dbPath #
#########################################################

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-db-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-db
spec:
  accessModes:
    - ReadWriteOnce
  # FIXME(Uncomment when ACS supports this!)
  # persistentVolumeReclaimPolicy: Retain
  resources:
    requests:
      storage: 20Gi
@@ -1,12 +0,0 @@ (deleted file)
##################################################################
# This YAML file describes a StorageClass for the mongodb dbPath #
##################################################################

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-db
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: westeurope
k8s/mongodb/mongo-pvc.yaml (new file, 35 lines)
@@ -0,0 +1,35 @@
############################################################
# This YAML section describes a k8s pvc for mongodb dbPath #
############################################################
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-db-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-db
spec:
  accessModes:
    - ReadWriteOnce
  # FIXME(Uncomment when ACS supports this!)
  # persistentVolumeReclaimPolicy: Retain
  resources:
    requests:
      storage: 20Gi
---
##############################################################
# This YAML section describes a k8s pvc for mongodb configDB #
##############################################################
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-configdb-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-configdb
spec:
  accessModes:
    - ReadWriteOnce
  # FIXME(Uncomment when ACS supports this!)
  # persistentVolumeReclaimPolicy: Retain
  resources:
    requests:
      storage: 1Gi
k8s/mongodb/mongo-sc.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@
#####################################################################
# This YAML section describes a StorageClass for the mongodb dbPath #
#####################################################################
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-db
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: westeurope
---
#######################################################################
# This YAML section describes a StorageClass for the mongodb configDB #
#######################################################################
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: westeurope
@@ -37,14 +37,30 @@ spec:
       terminationGracePeriodSeconds: 10
       containers:
       - name: mongodb
-        image: mongo:3.4.1
+        # TODO(FIXME): Do not use latest in production as it is harder to track
+        # versions during updates and rollbacks. Also, once fixed, change the
+        # imagePullPolicy to IfNotPresent for faster bootup
+        image: bigchaindb/mongodb:latest
+        env:
+        - name: MONGODB_FQDN
+          valueFrom:
+            configMapKeyRef:
+              name: mdb-fqdn
+              key: fqdn
+        - name: MONGODB_POD_IP
+          valueFrom:
+            fieldRef:
+              fieldPath: status.podIP
         args:
-        - --replSet=bigchain-rs
+        - --replica-set-name=bigchain-rs
+        - --fqdn=$(MONGODB_FQDN)
+        - --port=27017
+        - --ip=$(MONGODB_POD_IP)
         securityContext:
           capabilities:
             add:
             - FOWNER
-        imagePullPolicy: IfNotPresent
+        imagePullPolicy: Always
         ports:
         - containerPort: 27017
           hostPort: 27017