From d3924213ee4f74be7f496065a863fc25c99826e1 Mon Sep 17 00:00:00 2001
From: Troy McConaghy
Date: Thu, 29 Jun 2017 14:35:23 +0200
Subject: [PATCH 1/6] edits in nginx-3scale service docs

---
 .../node-on-kubernetes.rst | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
index fb4219f1..92c7c424 100644
--- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
@@ -138,14 +138,17 @@ Step 4.1: Vanilla NGINX
 Step 4.2: OpenResty NGINX + 3scale
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-  * This configuration is located in the file ``nginx/nginx-3scale-svc.yaml``.
 
   * You have to enable HTTPS for this one and will need an HTTPS certificate
-    for your domain
+    for your domain.
 
-  * You should have already created the Kubernetes Secret in the previous
-    step.
+  * You should have already created the necessary Kubernetes Secrets in the previous
+    step (e.g. ``https-certs`` and ``threescale-credentials``).
+
+  * This configuration is located in the file ``nginx-3scale/nginx-3scale-svc.yaml``.
+
+  * Set the ``metadata.name`` and ``metadata.labels.name`` to the value
+    set in ``ngx-instance-name`` in the ConfigMap above.
 
   * Set the ``spec.selector.app`` to the value set in ``ngx-instance-name``
     in the ConfigMap followed by ``-dep``.
     For example, if the value set in the

From a72bf56089b794dbd1e360aa8ed95844fb7366af Mon Sep 17 00:00:00 2001
From: Troy McConaghy
Date: Thu, 29 Jun 2017 15:09:27 +0200
Subject: [PATCH 2/6] copyedited docs re assigning DNS name to NGINX public IP

---
 .../node-on-kubernetes.rst | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
index 92c7c424..0310a4df 100644
--- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
@@ -170,20 +170,18 @@ Step 5: Assign DNS Name to the NGINX Public IP
    `_
    or are using HTTPS certificates tied to a domain.
 
-  * The following command can help you find out if the nginx service started
+  * The following command can help you find out if the NGINX service started
     above has been assigned a public IP or external IP address:
 
     .. code:: bash
 
       $ kubectl --context k8s-bdb-test-cluster-0 get svc -w
 
-  * Once a public IP is assigned, you can log in to the Azure portal and map it to
+  * Once a public IP is assigned, you can map it to
     a DNS name.
-
-  * We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
+    We usually assign ``bdb-test-cluster-0``, ``bdb-test-cluster-1`` and
     so on in our documentation.
-
-  * Let us assume that we assigned the unique name of ``bdb-test-cluster-0`` here.
+    Let's assume that we assign the unique name of ``bdb-test-cluster-0`` here.
 
 **Set up DNS mapping in Azure.**
 
@@ -198,7 +196,7 @@ have the Azure DNS prefix name along with a long random string, without the
 (for example, ``bdb-test-cluster-0``), click ``Save``, and wait for the
 changes to be applied.
 
-To verify the DNS setting is operational, you can run ``nslookup ``
 from your local Linux shell.
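Patch 2 above maps the NGINX service's public IP to an Azure DNS name label. For reference, Azure builds the resulting FQDN as ``<dns prefix>.<region>.cloudapp.azure.com``. A minimal shell sketch of that naming convention, assuming the region is ``westeurope`` (an illustrative assumption, not taken from the patch):

```shell
# Build the FQDN that Azure derives from a public IP's DNS name label.
# The region ("westeurope") is an assumed example; use your cluster's region.
fqdn() {
    printf '%s.%s.cloudapp.azure.com\n' "$1" "$2"
}

fqdn bdb-test-cluster-0 westeurope
# Once the portal change is applied, the mapping could be checked with:
#   nslookup "$(fqdn bdb-test-cluster-0 westeurope)"
```

If the DNS setting is operational, ``nslookup`` against the generated name should return the public IP assigned to the NGINX service.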
This will ensure that when you scale the replica set later, other MongoDB

From 1034db1ce50ede3ced81133018c1ca48e2d1baa4 Mon Sep 17 00:00:00 2001
From: Troy McConaghy
Date: Thu, 29 Jun 2017 15:32:08 +0200
Subject: [PATCH 3/6] Fixed name of https-certs volume mount in
 nginx-3scale-dep.yaml

---
 k8s/nginx-3scale/nginx-3scale-dep.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/k8s/nginx-3scale/nginx-3scale-dep.yaml b/k8s/nginx-3scale/nginx-3scale-dep.yaml
index 1dacf617..7951e14d 100644
--- a/k8s/nginx-3scale/nginx-3scale-dep.yaml
+++ b/k8s/nginx-3scale/nginx-3scale-dep.yaml
@@ -84,7 +84,7 @@ spec:
         timeoutSeconds: 10
       restartPolicy: Always
       volumes:
-      - name: https
+      - name: https-certs
        secret:
          secretName: https-certs
          defaultMode: 0400

From 92ec8f613e67d4e670a3d26dc9b9ea6d9205306f Mon Sep 17 00:00:00 2001
From: Troy McConaghy
Date: Thu, 29 Jun 2017 16:02:34 +0200
Subject: [PATCH 4/6] Fixed spelling & grammar stuff in docs re MDB StatefulSet

---
 .../production-deployment-template/node-on-kubernetes.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
index 0310a4df..34fbfda4 100644
--- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
@@ -453,11 +453,11 @@ Step 11: Start a Kubernetes StatefulSet for MongoDB
 
   * Note how the MongoDB container uses the ``mongo-db-claim`` and the
     ``mongo-configdb-claim`` PersistentVolumeClaims for its ``/data/db`` and
-    ``/data/configdb`` diretories (mount path).
+    ``/data/configdb`` directories (mount paths).
 
   * Note also that we use the pod's ``securityContext.capabilities.add``
     specification to add the ``FOWNER`` capability to the container. That is
-    because MongoDB container has the user ``mongodb``, with uid ``999`` and
+    because the MongoDB container has the user ``mongodb``, with uid ``999`` and
     group ``mongodb``, with gid ``999``.
     When this container runs on a host with a mounted disk, the writes fail
     when there is no user with uid ``999``. To avoid this, we use the Docker

From 6b6bfe173331c862d167d5d7e8917521edcbad3c Mon Sep 17 00:00:00 2001
From: Troy McConaghy
Date: Thu, 29 Jun 2017 16:39:01 +0200
Subject: [PATCH 5/6] Explained how to log in to the MongoDB pod

---
 .../node-on-kubernetes.rst | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
index 34fbfda4..7d0a8f83 100644
--- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
@@ -491,12 +491,23 @@ Step 11: Start a Kubernetes StatefulSet for MongoDB
 Step 12: Configure Users and Access Control for MongoDB
 -------------------------------------------------------
 
-  * Create a user on MongoDB with authorization to create more users and assign
+  * In this step, you will create a user on MongoDB with authorization
+    to create more users and assign
     roles to them.
     Note: You need to do this only when setting up the first MongoDB node of
     the cluster.
 
-    Log in to the MongoDB instance and open a mongo shell using the certificates
+  * Find out the name of your MongoDB pod by reading the output
+    of the ``kubectl ... get pods`` command at the end of the last step.
+    It should be something like ``mdb-instance-0-ss-0``.
+
+  * Log in to the MongoDB pod using:
+
+    .. code:: bash
+
+       $ kubectl --context k8s-bdb-test-cluster-0 exec -it bash
+
+  * Open a mongo shell using the certificates
     already present at ``/etc/mongod/ssl/``
 
     .. code:: bash

From 69cdfd56cfa6ed8dd9d1de2e785e24731edcfc08 Mon Sep 17 00:00:00 2001
From: Troy McConaghy
Date: Thu, 29 Jun 2017 17:02:23 +0200
Subject: [PATCH 6/6] Added note about what to expect from MongoDB's db.auth()
 command

---
 .../production-deployment-template/node-on-kubernetes.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/server/source/production-deployment-template/node-on-kubernetes.rst b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
index 7d0a8f83..4237fbe3 100644
--- a/docs/server/source/production-deployment-template/node-on-kubernetes.rst
+++ b/docs/server/source/production-deployment-template/node-on-kubernetes.rst
@@ -561,6 +561,9 @@ Step 12: Configure Users and Access Control for MongoDB
     PRIMARY> use admin
     PRIMARY> db.auth("adminUser", "superstrongpassword")
 
+  ``db.auth()`` returns 0 when authentication is not successful,
+  and 1 when successful.
+
   * We need to specify the user name *as seen in the certificate* issued to the
     BigchainDB instance in order to authenticate correctly. Use the
     following ``openssl`` command to extract the user name from the
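Patches 5 and 6 above cover logging in to the MongoDB pod and what ``db.auth()`` returns. A minimal sketch of how that numeric result could be checked in a script; the pod name ``mdb-instance-0-ss-0`` and the credentials come from the docs' own examples, and the cluster-side commands are shown only as comments since they need a live cluster:

```shell
# db.auth() prints 1 on success and 0 on failure (per patch 6), so its
# output can be tested numerically. The cluster-side steps would be:
#
#   kubectl --context k8s-bdb-test-cluster-0 exec -it mdb-instance-0-ss-0 bash
#   (then, inside the pod, open a mongo shell and run db.auth(...))

auth_ok() {
    # Treat db.auth()'s output of 1 as success, anything else as failure.
    [ "$1" -eq 1 ]
}

if auth_ok 1; then echo "authenticated"; fi
if ! auth_ok 0; then echo "authentication failed"; fi
```

This makes the success/failure convention explicit: a wrapper script can branch on the captured ``db.auth()`` output instead of parsing shell transcripts.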