bigchaindb/k8s/mongodb/mongo-ss.yaml
Krish 7dbd374838 Running a single node on k8s (#1269)
* Single node as a StatefulSet in k8s
- uses bigchaindb/bigchaindb:0.9.1

* Updating README

* rdb, mdb as stateful services

* [WIP] bdb as a statefulset

* [WIP] bdb w/ rdb and bdb w/ mdb backends
- does not work as of now

* Split mdb & bdb into separate pods + enhancements
*  discovery of the mongodb service by the bdb pod using its DNS name
(a sketch of this follows the commit notes below).
*  using separate storage classes to map the 2 different volumes exposed by
the mongo docker container: one for /data/db (dbPath) and the other for
/data/configdb (configDB). A sketch of the corresponding PVCs follows the
YAML file below.
*  using `persistentVolumeReclaimPolicy: Retain` in the k8s PVCs. However,
this seems to be unsupported in Azure and the disks still show a reclaim
policy of `delete`.
*  the mongodb container runs the `mongod` process as user `mongodb` and
group `mongodb`. The corresponding `uid` and `gid` of the `mongod` process
are both 999. When the container runs on a host with a mounted disk and the
host has no user with uid 999, writes fail. To avoid this, I use the
docker-provided feature of `--cap-add=FOWNER` in k8s (see the
`securityContext` in the YAML below). This bypasses the uid and gid
permission checks during writes and allows the writes to succeed.
Ref: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities

* Delete redundant k8s files, add cluster deletion steps.

* Documentation: running a single node with distinct mongodb and bigchaindb
pods on k8s

* Updates as per @ttmc's comments
2017-03-09 16:53:00 +01:00
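
A minimal sketch of the DNS-based discovery mentioned in the commit notes,
assuming BigchainDB's environment-based configuration (BIGCHAINDB_DATABASE_HOST
and related variables) and a hypothetical bdb pod spec that is not part of this
file. The `mdb` Service below, living in the `default` namespace, is resolvable
inside the cluster as `mdb.default.svc.cluster.local`, so the bdb container
only needs something like:

      containers:
      - name: bigchaindb
        image: bigchaindb/bigchaindb:0.9.1
        env:
        - name: BIGCHAINDB_DATABASE_BACKEND
          value: mongodb
        - name: BIGCHAINDB_DATABASE_HOST
          value: mdb.default.svc.cluster.local
        - name: BIGCHAINDB_DATABASE_PORT
          value: "27017"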

#########################################################################
# This YAML file describes a StatefulSet with a service for running and #
# exposing a MongoDB service.                                           #
# It depends on the configdb and db k8s pvc.                            #
#########################################################################
apiVersion: v1
kind: Service
metadata:
  name: mdb
  namespace: default
  labels:
    name: mdb
spec:
  selector:
    app: mdb
  ports:
  - port: 27017
    targetPort: 27017
    name: mdb-port
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mdb
  namespace: default
spec:
  serviceName: mdb
  replicas: 1
  template:
    metadata:
      name: mdb
      labels:
        app: mdb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongodb
        image: mongo:3.4.1
        args:
        - --replSet=bigchain-rs
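        # FOWNER lets mongod (running as uid/gid 999 inside the container)
        # write to the mounted disks even when the host has no user with
        # uid 999; see the commit notes above.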
        securityContext:
          capabilities:
            add:
            - FOWNER
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
          hostPort: 27017
          name: mdb-port
          protocol: TCP
        volumeMounts:
        - name: mdb-db
          mountPath: /data/db
        - name: mdb-configdb
          mountPath: /data/configdb
        resources:
          limits:
            cpu: 200m
            memory: 768Mi
        livenessProbe:
          tcpSocket:
            port: mdb-port
          successThreshold: 1
          failureThreshold: 3
          periodSeconds: 15
          timeoutSeconds: 1
      restartPolicy: Always
      volumes:
      - name: mdb-db
        persistentVolumeClaim:
          claimName: mongo-db-claim
      - name: mdb-configdb
        persistentVolumeClaim:
          claimName: mongo-configdb-claim
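
The two claims referenced above (mongo-db-claim and mongo-configdb-claim) are
not defined in this file. A rough sketch of what they could look like, assuming
the Azure managed-disk provisioner and hypothetical StorageClass names and
sizes; note that, as the commit notes mention, the Retain reclaim policy may
not be honored on Azure:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-db
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-db-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-db
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-configdb-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-configdb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi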