Running a single node on k8s (#1269)

* Single node as a StatefulSet in k8s
- uses bigchaindb/bigchaindb:0.9.1

* Updating README

* rdb, mdb as stateful services

* [WIP] bdb as a statefulset

* [WIP] bdb w/ rdb and bdb w/ mdb backends
- does not work as of now

* Split mdb & bdb into separate pods + enhancements
*  discovery of the mongodb service by the bdb pod using its DNS name.
*  using separate storage classes to map 2 different volumes exposed by the
mongo docker container; one for /data/db (dbPath) and the other for
 /data/configdb (configDB).
*  using the `persistentVolumeReclaimPolicy: Retain` in k8s pvc. However,
this seems to be unsupported in Azure and the disks still show a reclaim
policy of `delete`.
*  mongodb container runs the `mongod` process as user `mongodb` and group
`mongodb`. The corresponding `uid` and `gid` for the `mongod` process are 999
and 999 respectively. When the container runs on a host with a mounted disk,
the writes fail when there is no user with uid 999. To avoid this, I use the
Docker-provided feature of `--cap-add=FOWNER` in k8s. This bypasses the uid and
gid permission checks during writes and allows them to succeed.
Ref: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities

* Delete redundant k8s files, add cluster deletion steps.


* Documentation: running a single node with distinct mongodb and bigchaindb
pods on k8s

* Updates as per @ttmc's comments
This commit is contained in:
Krish 2017-03-09 16:53:00 +01:00 committed by GitHub
parent f2c951847c
commit 7dbd374838
16 changed files with 793 additions and 83 deletions


@@ -32,18 +32,34 @@ then you can get the ``~/.kube/config`` file using:
--name <ACS cluster name>
Step 3: Create a StorageClass
-----------------------------
Step 3: Create Storage Classes
------------------------------
MongoDB needs somewhere to store its data persistently,
outside the container where MongoDB is running.
The official MongoDB Docker container exports two volume mounts with correct
permissions from inside the container:
* The directory where the mongod instance stores its data - ``/data/db``,
described at `storage.dbpath <https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPath>`_.
* The directory where mongodb instance stores the metadata for a sharded
cluster - ``/data/configdb/``, described at
`sharding.configDB <https://docs.mongodb.com/manual/reference/configuration-options/#sharding.configDB>`_.
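These two directories are backed by two separate volumes in the MongoDB
StatefulSet used later in this guide; the relevant fragment from
``mongo-ss.yaml`` looks roughly like:

.. code:: yaml

    # Each MongoDB data directory gets its own PersistentVolumeClaim.
    volumeMounts:
    - name: mdb-db
      mountPath: /data/db
    - name: mdb-configdb
      mountPath: /data/configdb
    # ... rest of the container spec elided ...
    volumes:
    - name: mdb-db
      persistentVolumeClaim:
        claimName: mongo-db-claim
    - name: mdb-configdb
      persistentVolumeClaim:
        claimName: mongo-configdb-claim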
Explaining how Kubernetes handles persistent volumes,
and the associated terminology,
is beyond the scope of this documentation;
see `the Kubernetes docs about persistent volumes
<https://kubernetes.io/docs/user-guide/persistent-volumes>`_.
The first thing to do is create a Kubernetes StorageClass.
The first thing to do is create the Kubernetes storage classes.
We will accordingly create two storage classes and persistent volume claims in
Kubernetes.
**Azure.** First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
@@ -67,25 +83,26 @@ the PersistentVolumeClaim would get stuck in a "Pending" state.
For future reference, the command to create a storage account is
`az storage account create <https://docs.microsoft.com/en-us/cli/azure/storage/account#create>`_.
Create a Kubernetes Storage Class named ``slow``
by writing a file named ``azureStorageClass.yml`` containing:
.. code:: yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
location: <region where your cluster is located>
and then:
Get the files ``mongo-data-db-sc.yaml`` and ``mongo-data-configdb-sc.yaml``
from GitHub using:
.. code:: bash
$ kubectl apply -f azureStorageClass.yml
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-db-sc.yaml
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-configdb-sc.yaml
You may want to update the ``parameters.location`` field in both the files to
specify the location you are using in Azure.
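For example, the provisioner section of each storage class file looks like
the following; only the ``location`` value should need changing:

.. code:: yaml

    provisioner: kubernetes.io/azure-disk
    parameters:
      skuName: Standard_LRS
      # Change this to the Azure region where your cluster runs
      location: westeurope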
Create the required StorageClass using
.. code:: bash
$ kubectl apply -f mongo-data-db-sc.yaml
$ kubectl apply -f mongo-data-configdb-sc.yaml
You can check if it worked using ``kubectl get storageclasses``.
@@ -99,27 +116,19 @@ Kubernetes just looks for a storageAccount
with the specified skuName and location.
Step 4: Create a PersistentVolumeClaim
--------------------------------------
Step 4: Create Persistent Volume Claims
---------------------------------------
Next, you'll create a PersistentVolumeClaim named ``mongoclaim``.
Create a file named ``mongoclaim.yml``
with the following contents:
Next, we'll create two PersistentVolumeClaim objects ``mongo-db-claim`` and
``mongo-configdb-claim``.
.. code:: yaml
Get the files ``mongo-data-db-pvc.yaml`` and ``mongo-data-configdb-pvc.yaml``
from GitHub using:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongoclaim
annotations:
volume.beta.kubernetes.io/storage-class: slow
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-db-pvc.yaml
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-data-configdb-pvc.yaml
Note how there's no explicit mention of Azure, AWS or whatever.
``ReadWriteOnce`` (RWO) means the volume can be mounted as
@@ -128,67 +137,144 @@ read-write by a single Kubernetes node.
by AzureDisk.)
``storage: 20Gi`` means the volume has a size of 20
`gibibytes <https://en.wikipedia.org/wiki/Gibibyte>`_.
(You can change that if you like.)
Create ``mongoclaim`` in your Kubernetes cluster:
You may want to update the ``spec.resources.requests.storage`` field in both
the files to specify a different disk size.
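For example, to request a 100 GiB disk instead of the default 20 GiB (the
value here is just an illustration), the fragment in each PVC file would
become:

.. code:: yaml

    resources:
      requests:
        # Default is 20Gi; 100Gi is an example value
        storage: 100Gi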
Create the required PersistentVolumeClaim using:
.. code:: bash
$ kubectl apply -f mongoclaim.yml
$ kubectl apply -f mongo-data-db-pvc.yaml
$ kubectl apply -f mongo-data-configdb-pvc.yaml
You can check its status using:
.. code:: bash
You can check its status using: ``kubectl get pvc -w``
$ kubectl get pvc
Initially, the status of ``mongoclaim`` might be "Pending"
Initially, the status of persistent volume claims might be "Pending"
but it should become "Bound" fairly quickly.
.. code:: bash
$ kubectl describe pvc
Name: mongoclaim
Namespace: default
StorageClass: slow
Status: Bound
Volume: pvc-ebed81f1-fdca-11e6-abf0-000d3a27ab21
Labels: <none>
Capacity: 20Gi
Access Modes: RWO
No events.
Now we are ready to run MongoDB and BigchainDB on our Kubernetes cluster.
Step 5: Run MongoDB as a StatefulSet
------------------------------------
Step 5: Deploy MongoDB & BigchainDB
-----------------------------------
Now you can deploy MongoDB and BigchainDB to your Kubernetes cluster.
Currently, the way we do that is we create a StatefulSet with two
containers: BigchainDB and MongoDB. (In the future, we'll put them
in separate pods, and we'll ensure those pods are in different nodes.)
We expose BigchainDB's port 9984 (the HTTP API port)
and MongoDB's port 27017 using a Kubernetes Service.
Get the file ``node-mdb-ss.yaml`` from GitHub using:
Get the file ``mongo-ss.yaml`` from GitHub using:
.. code:: bash
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/node-mdb-ss.yaml
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/mongodb/mongo-ss.yaml
Take a look inside that file to see how it defines the Service
and the StatefulSet.
Note how the MongoDB container uses the ``mongoclaim`` PersistentVolumeClaim
for its ``/data`` directory (mount path).
Create the StatefulSet and Service in your cluster using:
Note how the MongoDB container uses the ``mongo-db-claim`` and the
``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
``/data/configdb`` directories (mount paths). Note also that we use the pod's
``securityContext.capabilities.add`` specification to add the ``FOWNER``
capability to the container.
That is because the MongoDB container runs the ``mongod`` process as user
``mongodb`` (uid ``999``) and group ``mongodb`` (gid ``999``).
When this container runs on a host with a mounted disk, the writes fail when
there is no user with uid ``999``.
To avoid this, we use the Docker feature of ``--cap-add=FOWNER``.
This bypasses the uid and gid permission checks during writes and allows data
to be persisted to disk.
Refer to the
`Docker doc <https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities>`_
for details.
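In the StatefulSet, this corresponds to the following container fragment from
``mongo-ss.yaml``:

.. code:: yaml

    containers:
    - name: mongodb
      image: mongo:3.4.1
      args:
      - --replSet=bigchain-rs
      securityContext:
        capabilities:
          add:
          # Lets mongod (uid/gid 999) write to the mounted disks
          - FOWNER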
As we gain more experience running MongoDB in testing and production, we will
tweak the ``resources.limits.cpu`` and ``resources.limits.memory``.
We will also stop exposing port ``27017`` globally and/or allow only certain
hosts to connect to the MongoDB instance in the future.
Create the required StatefulSet using:
.. code:: bash
$ kubectl apply -f node-mdb-ss.yaml
$ kubectl apply -f mongo-ss.yaml
You can check that they're working using:
You can check its status using the commands ``kubectl get statefulsets -w``
and ``kubectl get svc -w``
Step 6: Run BigchainDB as a Deployment
--------------------------------------
Get the file ``bigchaindb-dep.yaml`` from GitHub using:
.. code:: bash
$ kubectl get services
$ kubectl get statefulsets
$ wget https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/k8s/bigchaindb/bigchaindb-dep.yaml
Note that we set the ``BIGCHAINDB_DATABASE_HOST`` to ``mdb`` which is the name
of the MongoDB service defined earlier.
We also hardcode the ``BIGCHAINDB_KEYPAIR_PUBLIC``,
``BIGCHAINDB_KEYPAIR_PRIVATE`` and ``BIGCHAINDB_KEYRING`` for now.
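For reference, the corresponding environment fragment in
``bigchaindb-dep.yaml`` is:

.. code:: yaml

    env:
    - name: BIGCHAINDB_DATABASE_HOST
      # DNS name of the MongoDB service defined earlier
      value: mdb
    - name: BIGCHAINDB_DATABASE_PORT
      value: "27017"
    - name: BIGCHAINDB_KEYPAIR_PUBLIC
      value: EEWUAhsk94ZUHhVw7qx9oZiXYDAWc9cRz93eMrsTG4kZ
    - name: BIGCHAINDB_KEYPAIR_PRIVATE
      value: 3CjmRhu718gT1Wkba3LfdqX5pfYuBdaMPLd7ENUga5dm
    - name: BIGCHAINDB_KEYRING
      value: ""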
As we gain more experience running BigchainDB in testing and production, we
will tweak the ``resources.limits`` values for CPU and memory, and as richer
monitoring and probing becomes available in BigchainDB, we will tweak the
``livenessProbe`` and ``readinessProbe`` parameters.
We also plan to specify scheduling policies for the BigchainDB deployment so
that we ensure that BigchainDB and MongoDB are running in separate nodes, and
build security around the globally exposed port ``9984``.
Create the required Deployment using:
.. code:: bash
$ kubectl apply -f bigchaindb-dep.yaml
You can check its status using the command ``kubectl get deploy -w``
Step 7: Verify the BigchainDB Node Setup
----------------------------------------
Step 7.1: Testing Externally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Try to access ``<dns/ip of your exposed service endpoint>:9984`` in your
browser. You should receive a JSON response that shows the BigchainDB server
version, among other things.
Try to access ``<dns/ip of your exposed service endpoint>:27017`` in your
browser. You should receive a message from MongoDB stating that it no longer
allows HTTP connections to the port.
Step 7.2: Testing Internally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Run a container that provides utilities like ``nslookup``, ``curl`` and ``dig``
on the cluster and query the internal DNS and IP endpoints.
.. code:: bash
$ kubectl run -it toolbox --image <docker image to run> --restart=Never --rm
This will drop you into a shell prompt.
Now we can query for the ``mdb`` and ``bdb`` service details.
.. code:: bash
$ nslookup mdb
$ dig +noall +answer _mdb_port._tcp.mdb.default.svc.cluster.local SRV
$ curl -X GET http://mdb:27017
$ curl -X GET http://bdb:9984
There is a generic image based on alpine:3.5 with the required utilities
hosted at Docker Hub under ``bigchaindb/toolbox``.
The corresponding Dockerfile is `here
<https://github.com/bigchaindb/bigchaindb/k8s/toolbox/Dockerfile>`_.
You can use it as below to get started immediately:
.. code:: bash
$ kubectl run -it toolbox --image bigchaindb/toolbox --restart=Never --rm


@@ -94,7 +94,9 @@ Finally, you can deploy an ACS using something like:
$ az acs create --name <a made-up cluster name> \
--resource-group <name of resource group created earlier> \
--master-count 3 \
--agent-count 3 \
--admin-username ubuntu \
--agent-vm-size Standard_D2_v2 \
--dns-prefix <make up a name> \
--ssh-key-value ~/.ssh/<name>.pub \
@@ -113,9 +115,6 @@ go to **Resource groups** (with the blue cube icon)
and click on the one you created
to see all the resources in it.
Next, you can :doc:`run a BigchainDB node on your new
Kubernetes cluster <node-on-kubernetes>`.
Optional: SSH to Your New Kubernetes Cluster Nodes
--------------------------------------------------
@@ -125,11 +124,10 @@ You can SSH to one of the just-deployed Kubernetes "master" nodes
.. code:: bash
$ ssh -i ~/.ssh/<name>.pub azureuser@<master-ip-address-or-hostname>
$ ssh -i ~/.ssh/<name>.pub ubuntu@<master-ip-address-or-hostname>
where you can get the IP address or hostname
of a master node from the Azure Portal.
Note how the default username is ``azureuser``.
The "agent" nodes don't get public IP addresses or hostnames,
so you can't SSH to them *directly*,
@@ -141,5 +139,48 @@ the master (a bad idea),
or use something like
`SSH agent forwarding <https://yakking.branchable.com/posts/ssh-A/>`_ (better).
Optional: Set up SSH Forwarding
-------------------------------
On the system you will use to access the cluster, run
.. code:: bash
$ echo -e "Host <FQDN of the cluster from Azure Portal>\n ForwardAgent yes" >> ~/.ssh/config
To verify that SSH forwarding works properly, log in to one of the master
machines and run
.. code:: bash
$ echo "$SSH_AUTH_SOCK"
If you get an empty response, SSH forwarding hasn't been set up correctly.
If you get a non-empty response, SSH forwarding should work fine and you can
try to log in to one of the k8s nodes from the master.
Optional: Delete the Kubernetes Cluster
---------------------------------------
.. code:: bash
$ az acs delete \
--name <ACS cluster name> \
--resource-group <name of resource group containing the cluster>
Optional: Delete the Resource Group
-----------------------------------
CAUTION: You might end up deleting resources other than the ACS cluster.
.. code:: bash
$ az group delete \
--name <name of resource group containing the cluster>
Next, you can :doc:`run a BigchainDB node on your new
Kubernetes cluster <node-on-kubernetes>`.


@@ -0,0 +1,83 @@
###############################################################
# This config file runs bigchaindb:master as a k8s Deployment #
# and it connects to the mongodb backend on a separate pod #
###############################################################
apiVersion: v1
kind: Service
metadata:
name: bdb
namespace: default
labels:
name: bdb
spec:
selector:
app: bdb
ports:
- port: 9984
targetPort: 9984
name: bdb-port
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: bdb
spec:
replicas: 1
template:
metadata:
labels:
app: bdb
spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:master
args:
- start
env:
- name: BIGCHAINDB_DATABASE_HOST
value: mdb
- name: BIGCHAINDB_DATABASE_PORT
# TODO(Krish): remove hardcoded port
value: "27017"
- name: BIGCHAINDB_DATABASE_REPLICASET
value: bigchain-rs
- name: BIGCHAINDB_DATABASE_BACKEND
value: mongodb
- name: BIGCHAINDB_DATABASE_NAME
value: bigchain
- name: BIGCHAINDB_SERVER_BIND
value: 0.0.0.0:9984
- name: BIGCHAINDB_KEYPAIR_PUBLIC
value: EEWUAhsk94ZUHhVw7qx9oZiXYDAWc9cRz93eMrsTG4kZ
- name: BIGCHAINDB_KEYPAIR_PRIVATE
value: 3CjmRhu718gT1Wkba3LfdqX5pfYuBdaMPLd7ENUga5dm
- name: BIGCHAINDB_BACKLOG_REASSIGN_DELAY
value: "120"
- name: BIGCHAINDB_KEYRING
value: ""
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9984
hostPort: 9984
name: bdb-port
protocol: TCP
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
restartPolicy: Always


@@ -0,0 +1,89 @@
###############################################################
# This config file runs bigchaindb:latest and connects to the #
# mongodb backend as a service #
###############################################################
apiVersion: v1
kind: Service
metadata:
name: bdb-mdb-service
namespace: default
labels:
name: bdb-mdb-service
spec:
selector:
app: bdb-mdb
ports:
- port: 9984
targetPort: 9984
name: bdb-api
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: bdb-mdb
spec:
replicas: 1
template:
metadata:
labels:
app: bdb-mdb
spec:
terminationGracePeriodSeconds: 10
containers:
- name: bdb-mdb
image: bigchaindb/bigchaindb:latest
args:
- start
env:
- name: BIGCHAINDB_DATABASE_HOST
value: mdb-service
- name: BIGCHAINDB_DATABASE_PORT
value: "27017"
- name: BIGCHAINDB_DATABASE_REPLICASET
value: bigchain-rs
- name: BIGCHAINDB_DATABASE_BACKEND
value: mongodb
- name: BIGCHAINDB_DATABASE_NAME
value: bigchain
- name: BIGCHAINDB_SERVER_BIND
value: 0.0.0.0:9984
- name: BIGCHAINDB_KEYPAIR_PUBLIC
value: EEWUAhsk94ZUHhVw7qx9oZiXYDAWc9cRz93eMrsTG4kZ
- name: BIGCHAINDB_KEYPAIR_PRIVATE
value: 3CjmRhu718gT1Wkba3LfdqX5pfYuBdaMPLd7ENUga5dm
- name: BIGCHAINDB_BACKLOG_REASSIGN_DELAY
value: "120"
- name: BIGCHAINDB_KEYRING
value: ""
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9984
hostPort: 9984
name: bdb-port
protocol: TCP
volumeMounts:
- name: bigchaindb-data
mountPath: /data
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
restartPolicy: Always
volumes:
- name: bigchaindb-data
hostPath:
path: /disk/bigchaindb-data


@@ -0,0 +1,87 @@
###############################################################
# This config file runs bigchaindb:latest and connects to the #
# rethinkdb backend as a service #
###############################################################
apiVersion: v1
kind: Service
metadata:
name: bdb-rdb-service
namespace: default
labels:
name: bdb-rdb-service
spec:
selector:
app: bdb-rdb
ports:
- port: 9984
targetPort: 9984
name: bdb-rdb-api
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: bdb-rdb
spec:
replicas: 1
template:
metadata:
labels:
app: bdb-rdb
spec:
terminationGracePeriodSeconds: 10
containers:
- name: bdb-rdb
image: bigchaindb/bigchaindb:latest
args:
- start
env:
- name: BIGCHAINDB_DATABASE_HOST
value: rdb-service
- name: BIGCHAINDB_DATABASE_PORT
value: "28015"
- name: BIGCHAINDB_DATABASE_BACKEND
value: rethinkdb
- name: BIGCHAINDB_DATABASE_NAME
value: bigchain
- name: BIGCHAINDB_SERVER_BIND
value: 0.0.0.0:9984
- name: BIGCHAINDB_KEYPAIR_PUBLIC
value: EEWUAhsk94ZUHhVw7qx9oZiXYDAWc9cRz93eMrsTG4kZ
- name: BIGCHAINDB_KEYPAIR_PRIVATE
value: 3CjmRhu718gT1Wkba3LfdqX5pfYuBdaMPLd7ENUga5dm
- name: BIGCHAINDB_BACKLOG_REASSIGN_DELAY
value: "120"
- name: BIGCHAINDB_KEYRING
value: ""
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9984
hostPort: 9984
name: bdb-port
protocol: TCP
volumeMounts:
- name: bigchaindb-data
mountPath: /data
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
restartPolicy: Always
volumes:
- name: bigchaindb-data
hostPath:
path: /disk/bigchaindb-data


@@ -42,8 +42,8 @@ spec:
spec:
terminationGracePeriodSeconds: 10
containers:
- name: bdb-server
image: bigchaindb/bigchaindb:latest
- name: bigchaindb
image: bigchaindb/bigchaindb:master
args:
- start
env:


@@ -0,0 +1,89 @@
#####################################################
# This config file uses bdb v0.9.1 with bundled rdb #
#####################################################
apiVersion: v1
kind: Service
metadata:
name: bdb-service
namespace: default
labels:
name: bdb-service
spec:
selector:
app: bdb
ports:
- port: 9984
targetPort: 9984
name: bdb-http-api
- port: 8080
targetPort: 8080
name: bdb-rethinkdb-api
type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: bdb
namespace: default
spec:
serviceName: bdb
replicas: 1
template:
metadata:
name: bdb
labels:
app: bdb
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "bdb091-configure",
"image": "bigchaindb/bigchaindb:0.9.1",
"command": ["bigchaindb", "-y", "configure", "rethinkdb"],
"volumeMounts": [
{
"name": "bigchaindb-data",
"mountPath": "/data"
}
]
}
]'
spec:
terminationGracePeriodSeconds: 10
containers:
- name: bdb091-server
image: bigchaindb/bigchaindb:0.9.1
args:
- -c
- /data/.bigchaindb
- start
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9984
hostPort: 9984
name: bdb-port
protocol: TCP
volumeMounts:
- name: bigchaindb-data
mountPath: /data
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 9984
initialDelaySeconds: 15
timeoutSeconds: 10
restartPolicy: Always
volumes:
- name: bigchaindb-data
hostPath:
path: /disk/bigchaindb-data


@@ -0,0 +1,75 @@
####################################################
# This config file runs rethinkdb:2.3 as a service #
####################################################
apiVersion: v1
kind: Service
metadata:
name: rdb-service
namespace: default
labels:
name: rdb-service
spec:
selector:
app: rdb
ports:
- port: 8080
targetPort: 8080
name: rethinkdb-http-port
- port: 28015
targetPort: 28015
name: rethinkdb-driver-port
type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: rdb
namespace: default
spec:
serviceName: rdb
replicas: 1
template:
metadata:
name: rdb
labels:
app: rdb
spec:
terminationGracePeriodSeconds: 10
containers:
- name: rethinkdb
image: rethinkdb:2.3
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
hostPort: 8080
name: rdb-http-port
protocol: TCP
- containerPort: 28015
hostPort: 28015
name: rdb-client-port
protocol: TCP
volumeMounts:
- name: rdb-data
mountPath: /data
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 15
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 15
timeoutSeconds: 10
restartPolicy: Always
volumes:
- name: rdb-data
hostPath:
path: /disk/rdb-data


@@ -0,0 +1,18 @@
##########################################################
# This YAML file describes a k8s pvc for mongodb configDB #
##########################################################
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongo-configdb-claim
annotations:
volume.beta.kubernetes.io/storage-class: slow-configdb
spec:
accessModes:
- ReadWriteOnce
# FIXME(Uncomment when ACS supports this!)
# persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 20Gi


@@ -0,0 +1,12 @@
###################################################################
# This YAML file describes a StorageClass for the mongodb configDB #
###################################################################
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
location: westeurope


@@ -0,0 +1,18 @@
########################################################
# This YAML file describes a k8s pvc for mongodb dbPath #
########################################################
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongo-db-claim
annotations:
volume.beta.kubernetes.io/storage-class: slow-db
spec:
accessModes:
- ReadWriteOnce
# FIXME(Uncomment when ACS supports this!)
# persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 20Gi


@@ -0,0 +1,12 @@
#################################################################
# This YAML file describes a StorageClass for the mongodb dbPath #
#################################################################
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow-db
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
location: westeurope

k8s/mongodb/mongo-ss.yaml

@@ -0,0 +1,76 @@
########################################################################
# This YAML file describes a StatefulSet with a service for running and #
# exposing a MongoDB service. #
# It depends on the configdb and db k8s pvc. #
########################################################################
apiVersion: v1
kind: Service
metadata:
name: mdb
namespace: default
labels:
name: mdb
spec:
selector:
app: mdb
ports:
- port: 27017
targetPort: 27017
name: mdb-port
type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mdb
namespace: default
spec:
serviceName: mdb
replicas: 1
template:
metadata:
name: mdb
labels:
app: mdb
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongodb
image: mongo:3.4.1
args:
- --replSet=bigchain-rs
securityContext:
capabilities:
add:
- FOWNER
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
hostPort: 27017
name: mdb-port
protocol: TCP
volumeMounts:
- name: mdb-db
mountPath: /data/db
- name: mdb-configdb
mountPath: /data/configdb
resources:
limits:
cpu: 200m
memory: 768Mi
livenessProbe:
tcpSocket:
port: mdb-port
successThreshold: 1
failureThreshold: 3
periodSeconds: 15
timeoutSeconds: 1
restartPolicy: Always
volumes:
- name: mdb-db
persistentVolumeClaim:
claimName: mongo-db-claim
- name: mdb-configdb
persistentVolumeClaim:
claimName: mongo-configdb-claim

k8s/toolbox/Dockerfile

@ -0,0 +1,12 @@
# Toolbox container for debugging
# Run as:
# docker run -it --rm --entrypoint sh krish7919/toolbox
# kubectl run -it toolbox --image krish7919/toolbox --restart=Never --rm
FROM alpine:3.5
MAINTAINER github.com/krish7919
WORKDIR /
RUN apk add --no-cache curl bind-tools
ENTRYPOINT ["/bin/sh"]

k8s/toolbox/README.md

@ -0,0 +1,12 @@
## Docker container with debugging tools
* curl
* bind-tools - provides nslookup, dig
## Build
`docker build -t bigchaindb/toolbox .`
## Push
`docker push bigchaindb/toolbox`