Mirror of https://github.com/bigchaindb/bigchaindb.git, synced 2024-10-13 13:34:05 +00:00

* Single node as a StatefulSet in k8s - uses bigchaindb/bigchaindb:0.9.1
* Updating README
* rdb, mdb as stateful services
* [WIP] bdb as a statefulset
* [WIP] bdb w/ rdb and bdb w/ mdb backends - does not work as of now
* Split mdb & bdb into separate pods + enhancements:
  * discovery of the mongodb service by the bdb pod using its DNS name.
  * using separate storage classes to map the 2 different volumes exposed by the mongo docker container: one for /data/db (dbPath) and the other for /data/configdb (configDB).
  * using `persistentVolumeReclaimPolicy: Retain` in the k8s pvc. However, this seems to be unsupported in Azure and the disks still show a reclaim policy of `delete`.
  * the mongodb container runs the `mongod` process as user `mongodb` and group `mongodb`. The corresponding `uid` and `gid` for the `mongod` process are both 999. When the container runs on a host with a mounted disk and there is no user with uid 999, writes fail. To avoid this, I use the Docker-provided `--cap-add=FOWNER` capability in k8s (see the sketch after this list); it bypasses the uid and gid permission checks during writes and allows them. Ref: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
* Delete redundant k8s files, add cluster deletion steps.
* Documentation: running a single node with distinct mongodb and bigchaindb pods on k8s
* Updates as per @ttmc's comments
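A minimal sketch of how the enhancements above fit together in the mongo StatefulSet. All names here (mdb-ss, mdb-svc, mdb-db, mongo-db-claim, the mongo:3.4 image) are illustrative assumptions; only `mongo-configdb-claim` comes from the PVC defined below. The bdb pod would then reach mongo through the headless service's DNS name (e.g. mdb-svc.default.svc.cluster.local).

```yaml
# Sketch only: illustrative names, not the exact manifest from this changeset.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mdb-ss
spec:
  serviceName: mdb-svc              # headless service; bdb discovers mongo via this DNS name
  replicas: 1
  template:
    metadata:
      labels:
        app: mdb-ss
    spec:
      containers:
      - name: mongodb
        image: mongo:3.4            # assumed image/tag
        securityContext:
          capabilities:
            add:
            - FOWNER                # k8s equivalent of docker --cap-add=FOWNER
        volumeMounts:
        - name: mdb-db              # dbPath
          mountPath: /data/db
        - name: mdb-configdb        # configDB
          mountPath: /data/configdb
      volumes:
      - name: mdb-db
        persistentVolumeClaim:
          claimName: mongo-db-claim        # hypothetical sibling PVC for /data/db
      - name: mdb-configdb
        persistentVolumeClaim:
          claimName: mongo-configdb-claim  # the PVC defined below
```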
The PVC for the mongodb configDB volume (19 lines, 507 B, YAML):
###########################################################
# This YAML file describes a k8s pvc for mongodb configDB #
###########################################################

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-configdb-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-configdb
spec:
  accessModes:
    - ReadWriteOnce
  # FIXME(Uncomment when ACS supports this!)
  # persistentVolumeReclaimPolicy: Retain
  resources:
    requests:
      storage: 20Gi
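The PVC above expects a storage class named `slow-configdb` to exist in the cluster. A minimal sketch of such a class for Azure (ACS), where the provisioner parameters (`skuName`, `location`) are assumptions and should be set for the target cluster:

```yaml
# Sketch only: an Azure-backed storage class matching the annotation above.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-configdb
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS   # assumed disk SKU
  location: westus        # assumed Azure region
```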