- Currently, the MongoDB container crashes because of resource constraints,
i.e. out-of-memory exceptions. This change updates the resources and provides
guidance on how to configure/calculate them (if not following the guide).
- Also, add the ability to specify the storage engine (WiredTiger) cache
size for MongoDB; this setting also helps keep the MongoDB containers'
resource usage bounded (see the sketch below).
- Minor changes in some other documents as well.
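A minimal sketch of the cache-size idea, with illustrative values (MongoDB 3.4
sizes the WiredTiger cache as roughly max(50% of (RAM - 1 GB), 256 MB) from
host RAM, ignoring container limits, so an explicit cap is needed):

```bash
# Cap the WiredTiger cache so mongod stays inside the container's
# memory limit instead of sizing its cache off the host's RAM.
mongod --dbpath /data/db \
       --bind_ip 0.0.0.0 \
       --wiredTigerCacheSizeGB 1.0   # keep well below the k8s memory limit
```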
- If a mandatory variable is not specified, the script exits with the
relevant code and an error message (as sketched below).
- For more verbosity, we also echo the values of all the mandatory
variables.
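A minimal sketch of the check, with an illustrative variable name:

```bash
# Exit with an error if a mandatory variable is missing; otherwise
# echo its value for verbosity. MONGODB_FQDN is illustrative.
if [[ -z "${MONGODB_FQDN:-}" ]]; then
  echo "ERROR: mandatory variable MONGODB_FQDN not specified" >&2
  exit 1
fi
echo "MONGODB_FQDN: ${MONGODB_FQDN}"
```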
- Documentation on how to add a new BDB node to an existing
replica set, using x.509 certificates and SSL/TLS connections, across
geographically dispersed clusters.
- Fix some documentation issues and add more references, specifically
about the signing of MongoDB member certificates.
- Minor fixes for nginx-https-dep.yaml (invalid ConfigMap variable).
- Reconfigure the nginx keepalive between the MongoDB frontend and backend ports.
- Trailing whitespace removed by the editor.
- Creating a common Secret for the CA: since all the members of the replica set
and the clients need to share a common CA, move all the relevant configuration
into a common Secret (see the sketch after this list).
- Modifying the Dockerfiles for some components; once the changes are approved,
we will publish the new images.
- No documentation changes required.
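A sketch of the common CA Secret (the Secret and file names are assumptions):

```bash
# Every replica-set member and client mounts the same CA material
# from this one Secret.
kubectl create secret generic ca-auth-conf \
  --from-file=ca.pem=pki/ca.crt \
  --from-file=crl.pem=pki/crl.pem
```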
- The MongoDB StatefulSet keeps hitting its memory limit, so k8s restarts it.
We have had multiple instances of restarts lately.
- Changing the limit to 3.5 GB; the data and reasoning to back it up
are in ticket #1655.
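Illustratively, assuming the StatefulSet is named `mdb`, the bump looks like:

```bash
# 3.5 GB = 3584 Mi; apply to the MongoDB StatefulSet's container.
kubectl set resources statefulset/mdb \
  --limits=memory=3584Mi --requests=memory=3584Mi
```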
Update MongoDB container tag to `3.0`.
Doc change to reflect bdb-config.bdb-user parameter usage.
Fix typo in configuration.md.
Add BIGCHAINDB_DATABASE_SSL parameter to bigchaindb-dep.yaml for
Kubernetes deployments.
Reference the `bdb-user` parameter from the ConfigMap in
bigchaindb-dep.yaml.
Consolidate all BigchainDB parameter values under the
`bdb-config` ConfigMap (see the sketch below).
Remove `bdb-user` from secrets.yaml.
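A sketch of the consolidated ConfigMap; `bdb-config` and `bdb-user` are the
names used above, while the SSL key name and the values are illustrative:

```bash
kubectl create configmap bdb-config \
  --from-literal=bdb-user="<bdb-user-name>" \
  --from-literal=bigchaindb-database-ssl="true"   # key name assumed
```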
* Changes to support auth on the infrastructure
* Auth over TLS/SSL support in BigchainDB, MongoDB, Monitoring Agent, Backup Agent
* Update certificates: Different OUs specified now
* Code formatting
- Make flake8 happy!
* Raise a proper authentication-failure error
* Documentation changes for auth
* Support auth in k8s deployment
* Commit certs for monitoring and backup agents
* Configuration to allow the Cloud Manager Backup Agent to back up data
* Update docs and remove authentication error
* Support for secure TLS communication in MongoDB, MongoDB Monitoring
Agent and MongoDB Backup Agent (see the sketch after this list)
- Move the entrypoint program from Golang to Bash
- Update image tag to 2.0 for Backup and Monitoring Agents and to
3.4.4 for MongoDB
- Add documentation
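For reference, the MongoDB 3.4 flags behind TLS plus x.509 cluster auth look
roughly like this (the file paths are illustrative):

```bash
mongod --auth \
       --clusterAuthMode x509 \
       --sslMode requireSSL \
       --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem \
       --sslCAFile /etc/mongod/ca/ca.pem
```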
* changed title & rewrote Step 1 of workflow.rst
* copy-edited ca-installation.rst
* copy-edited & modified structure of workflow.rst
* moved repeated Easy-RSA install & config docs to new page
* edited the sentences describing the Easy-RSA dirs
* copy-edited the page about generating server certificate
* copy-edited the page about generating client certificate
* renamed page to 'How to Set Up a Self-Signed Certificate Authority'
* copy-edited page about how to revoke a certificate
* Comments on how to uniquely name all instances in the cluster
* Added comments about the other questions when setting up a CA
* Added note about one Agent API Key per Cloud Manager backup
* docs: clarified instructions for generating server CSR
* docs: added back 'from your PKI infrastructure'
* docs: fixed step & added step re/ FQDNs & certs in workflow.rst
* docs: added note re/ the Distinguished Name
* Update docs for env vars setup
* docs: added tip: how to get help with the easyrsa command (see the sketch below)
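The Easy-RSA 3 commands these pages walk through are roughly the following
(the instance name is illustrative):

```bash
./easyrsa init-pki                        # create a fresh PKI directory
./easyrsa build-ca                        # set up the self-signed CA
./easyrsa gen-req mdb-instance-0 nopass   # generate a server key + CSR
./easyrsa sign-req server mdb-instance-0  # sign the CSR as the CA
./easyrsa help sign-req                   # built-in help for any command
```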
* Add more tools to the toolbox container
* Add mongodb monitoring agent
* Add a bigchaindb/mongodb-monitoring-agent container that includes the
monitoring agent.
* It makes use of an API key provided by MongoDB Cloud Manager. This is
included in the configuration/config-map.yaml file.
* Changes to the mongodb StatefulSet configuration
Bump the mongodb version to v3.4.3.
Add configuration settings for the mongodb instance name in the ConfigMap.
Split the mongodb service into a new configuration file.
* Modify bigchaindb deployment config
* Bugfix to remove the keyring field for the first node.
* Split the mongodb service into a new configuration file.
* Add mongodb backup agent
* Add a bigchaindb/mongodb-backup-agent container that includes the
backup agent.
* It makes use of an API key provided by MongoDB Cloud Manager. This is
included in the configuration/config-map.yaml file.
* Changes to nginx deployment config
* Allow 'all' by default for now. This is included in the
configuration/config-map.yaml file.
* Dynamically resolve DNS addresses of our backend services; cache DNS
resolution for 20s.
* Configure DNS based on a user-provided resolver. This lets the user
decide whether to use 8.8.8.8 or a custom DNS server for name resolution.
For k8s deployments, we use the hardcoded k8s DNS IP of 10.0.0.10 (see the
sketch after this list).
* Changes to nginx-3scale deployment config
* Use the common ConfigMap in configuration/config-map.yaml file.
* Removing prefix `v` from the docker tag for mongodb-monitoring-agent and mongodb containers
* Bumping up version for nginx-3scale container
* Add small helper scripts for building and pushing the docker images of the
mongodb monitoring and backup agents
* Documentation for setting up the first node with monitoring and backup
agents
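A sketch of the resolver wiring in the nginx entrypoint (the variable and
placeholder names are assumptions; `resolver ... valid=20s` is the nginx
directive that caches DNS answers for 20s):

```bash
# Fall back to the k8s cluster DNS if the user provides no resolver,
# then substitute it into nginx.conf before starting nginx.
dns_server="${DNS_SERVER:-10.0.0.10}"
sed -i "s|DNS_SERVER_PLACEHOLDER|${dns_server}|g" /etc/nginx/nginx.conf
# nginx.conf then carries:  resolver ${dns_server} valid=20s;
exec nginx -g 'daemon off;'
```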
- Added an NGINX deployment to front both BDB and MDB.
- Nginx is configured with a whitelist (read from a ConfigMap)
that allows only other MDB nodes in the cluster to communicate with it
(see the sketch after this list).
- Azure LB apparently does not support the PROXY protocol, so
whitelisting fails: nginx always observes the LB IP instead of the
real client IP in the TCP stream.
- Whitelisting source IPs for MongoDB
- Removing deprecated folder
- Better log format
- Intuitive port number usage
- README and examples
- Addressed a typo in PYTHON_STYLE_GUIDE.md
- Documentation updates
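A sketch of the whitelist plumbing (the ConfigMap name and list format are
assumptions):

```bash
# The allowed MDB node addresses live in a ConfigMap; the nginx
# entrypoint expands them into one `allow <cidr>;` rule each,
# followed by `deny all;`, on the MongoDB port.
kubectl create configmap mongodb-whitelist \
  --from-literal=allowed-hosts="1.2.3.4/32:5.6.7.8/32"
```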
- add the k8s directory to the ignore list in codecov.yml
* Combining configs
* Combining the persistent volume claims into a single file.
* Combining the storage classes into a single file.
* Updating documentation
* Multiple changes
* Support for ConfigMap
* Custom MongoDB container for BigchainDB
* Update documentation to run a single node on k8s
* Additional documentation
* Documentation to add a node to an existing BigchainDB cluster
* Commit on rolling upgrades
* Fixing minor documentation mistakes
* Documentation updates as per @ttmc's comments
* Fix a block-formatting error
* Change in ConfigMap yaml config
* Single node as a StatefulSet in k8s
- uses bigchaindb/bigchaindb:0.9.1
* Updating README
* rdb, mdb as stateful services
* [WIP] bdb as a statefulset
* [WIP] bdb w/ rdb and bdb w/ mdb backends
- does not work as of now
* Split mdb & bdb into separate pods + enhancements
* discovery of the mongodb service by the bdb pod using its DNS name.
* using separate storage classes to map the 2 different volumes exposed by the
mongo docker container: one for /data/db (dbPath) and the other for
/data/configdb (configDB).
* using `persistentVolumeReclaimPolicy: Retain` in the k8s PVC. However,
this seems to be unsupported in Azure and the disks still show a reclaim
policy of `delete`.
* the mongodb container runs the `mongod` process as user `mongodb` and group
`mongodb`. The corresponding `uid` and `gid` for the `mongod` process are 999
and 999 respectively. When the container runs on a host with a mounted disk,
writes fail if the host has no user with uid 999. To avoid this, we use the
Docker-provided `--cap-add=FOWNER` feature in k8s, which bypasses the uid and
gid permission checks during writes (see the sketch after this list).
Ref: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
* Delete redundant k8s files, add cluster deletion steps.
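Two of the points above in sketch form (names, paths and the image tag are
illustrative; the PV patch is only a possible workaround, given the Azure
note):

```bash
# Patch the provisioned PV directly, since Azure appears to ignore
# the reclaim policy requested via the PVC.
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# The FOWNER capability as a plain Docker run; in k8s this maps to
# securityContext.capabilities.add: ["FOWNER"] on the container.
docker run --cap-add=FOWNER \
  -v /mnt/mongo/db:/data/db \
  -v /mnt/mongo/configdb:/data/configdb \
  mongo:3.4.4 mongod
```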
* Documentation: running a single node with distinct mongodb and bigchaindb
pods on k8s
* Updates as per @ttmc's comments