diff --git a/docs/root/source/timestamps.md b/docs/root/source/timestamps.md
index 5ed8d3a4..5525c01c 100644
--- a/docs/root/source/timestamps.md
+++ b/docs/root/source/timestamps.md
@@ -14,7 +14,7 @@ We advise BigchainDB nodes to run special software (an "NTP daemon") to keep the
 
 ## Converting Timestamps to UTC
 
-To convert a BigchainDB timestamp (a Unix time) to UTC, you need to know how the node providing the timestamp was set up. That's because different setups will report a different "Unix time" value around leap seconds! There's [a nice Red Hat Developer Blog post about the various setup options](http://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/). If you want more details, see [David Mills' pages about leap seconds, NTP, etc.](https://www.eecis.udel.edu/~mills/leap.html) (David Mills designed NTP.)
+To convert a BigchainDB timestamp (a Unix time) to UTC, you need to know how the node providing the timestamp was set up. That's because different setups will report a different "Unix time" value around leap seconds! There's [a nice Red Hat Developer Blog post about the various setup options](https://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/). If you want more details, see [David Mills' pages about leap seconds, NTP, etc.](https://www.eecis.udel.edu/~mills/leap.html) (David Mills designed NTP.)
 
 We advise BigchainDB nodes to run an NTP daemon with particular settings so that their timestamps are consistent.
 
diff --git a/docs/server/source/appendices/aws-setup.md b/docs/server/source/appendices/aws-setup.md
index f57997c5..0471f8af 100644
--- a/docs/server/source/appendices/aws-setup.md
+++ b/docs/server/source/appendices/aws-setup.md
@@ -69,4 +69,4 @@ aws ec2 import-key-pair \
 
 If you're curious why there's a `file://` in front of the path to the public key, see issue [aws/aws-cli#41 on GitHub](https://github.com/aws/aws-cli/issues/41).
 
-If you want to verify that your key pair was imported by AWS, go to the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), select the region you gave above when you did `aws configure` (e.g. eu-central-1), click on **Key Pairs** in the left sidebar, and check that `` is listed.
+If you want to verify that your key pair was imported by AWS, go to [the Amazon EC2 console](https://console.aws.amazon.com/ec2/v2/home), select the region you gave above when you did `aws configure` (e.g. eu-central-1), click on **Key Pairs** in the left sidebar, and check that `` is listed.
diff --git a/docs/server/source/appendices/ntp-notes.md b/docs/server/source/appendices/ntp-notes.md
index e6b81b3f..08861cb1 100644
--- a/docs/server/source/appendices/ntp-notes.md
+++ b/docs/server/source/appendices/ntp-notes.md
@@ -9,7 +9,7 @@ There are several NTP daemons available, including:
 * Maybe [Ntimed](http://nwtime.org/projects/ntimed/), once it's production-ready
 * [More](https://en.wikipedia.org/wiki/Ntpd#Implementations)
 
-We suggest you run your NTP daemon in a mode which will tell your OS kernel to handle leap seconds in a particular way: the default NTP way, so that system clock adjustments are localized and not spread out across the minutes, hours, or days surrounding leap seconds (e.g. "slewing" or "smearing"). There's [a nice Red Hat Developer Blog post about the various options](http://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/).
+We suggest you run your NTP daemon in a mode which will tell your OS kernel to handle leap seconds in a particular way: the default NTP way, so that system clock adjustments are localized and not spread out across the minutes, hours, or days surrounding leap seconds (e.g. "slewing" or "smearing"). There's [a nice Red Hat Developer Blog post about the various options](https://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/).
 
 Use the default mode with `ntpd` and `chronyd`. For another NTP daemon, consult its documentation.
diff --git a/docs/server/source/appendices/run-with-docker.md b/docs/server/source/appendices/run-with-docker.md
index 4747de56..455331ed 100644
--- a/docs/server/source/appendices/run-with-docker.md
+++ b/docs/server/source/appendices/run-with-docker.md
@@ -35,7 +35,7 @@ Let's analyze that command:
   `$HOME/bigchaindb_docker` to the container directory `/data`; this
   allows us to have the data persisted on the host machine, you can
   read more in the [official Docker
-  documentation](https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume)
+  documentation](https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume)
 * `-t` allocate a pseudo-TTY
 * `-i` keep STDIN open even if not attached
 * `bigchaindb/bigchaindb` the image to use
diff --git a/docs/server/source/clusters-feds/aws-testing-cluster.md b/docs/server/source/clusters-feds/aws-testing-cluster.md
index 942871e3..f4a94cac 100644
--- a/docs/server/source/clusters-feds/aws-testing-cluster.md
+++ b/docs/server/source/clusters-feds/aws-testing-cluster.md
@@ -26,7 +26,7 @@ pip install fabric fabtools requests boto3 awscli
 What did you just install?
 
 * "[Fabric](http://www.fabfile.org/) is a Python (2.5-2.7) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks."
-* [fabtools](https://github.com/ronnix/fabtools) are "tools for writing awesome Fabric files"
+* [fabtools](https://github.com/fabtools/fabtools) are "tools for writing awesome Fabric files"
 * [requests](http://docs.python-requests.org/en/master/) is a Python package/library for sending HTTP requests
 * "[Boto](https://boto3.readthedocs.io/en/latest/) is the Amazon Web Services (AWS) SDK for Python, which allows Python developers to write software that makes use of Amazon services like S3 and EC2."
   (`boto3` is the name of the latest Boto package.)
 * [The aws-cli package](https://pypi.python.org/pypi/awscli), which is an AWS Command Line Interface (CLI).
diff --git a/docs/server/source/clusters-feds/monitoring.md b/docs/server/source/clusters-feds/monitoring.md
index 659326f7..4a5de698 100644
--- a/docs/server/source/clusters-feds/monitoring.md
+++ b/docs/server/source/clusters-feds/monitoring.md
@@ -3,7 +3,7 @@
 BigchainDB uses [StatsD](https://github.com/etsy/statsd) for cluster monitoring. We require some additional infrastructure to take full advantage of its functionality:
 
 * an agent to listen for metrics: [Telegraf](https://github.com/influxdata/telegraf),
-* a time-series database: [InfluxDB](https://influxdata.com/time-series-platform/influxdb/), and
+* a time-series database: [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/), and
 * a frontend to display analytics: [Grafana](http://grafana.org/).
 
 We put each of those inside its own Docker container. The whole system is illustrated below.
diff --git a/docs/server/source/dev-and-test/setup-run-node.md b/docs/server/source/dev-and-test/setup-run-node.md
index e577a755..db188f71 100644
--- a/docs/server/source/dev-and-test/setup-run-node.md
+++ b/docs/server/source/dev-and-test/setup-run-node.md
@@ -35,15 +35,14 @@ The BigchainDB [CONTRIBUTING.md file](https://github.com/bigchaindb/bigchaindb/b
 
 ## Option B: Using a Dev Machine on Cloud9
 
-Ian Worrall of [Encrypted Labs](http://www.encryptedlabs.com/) wrote a document (PDF) explaining how to set up a BigchainDB (Server) dev machine on [Cloud9](https://c9.io/):
-
-[Download that document from GitHub](https://github.com/bigchaindb/bigchaindb/raw/master/docs/source/_static/cloud9.pdf)
+Ian Worrall of [Encrypted Labs](http://www.encryptedlabs.com/) wrote a document (PDF) explaining how to set up a BigchainDB (Server) dev machine on Cloud9:
+[Download that document from GitHub](https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/docs/server/source/_static/cloud9.pdf)
 
 ## Option C: Using a Local Dev Machine and Docker
 
-You need to have recent versions of [docker engine](https://docs.docker.com/engine/installation/#installation)
-and [docker-compose](https://docs.docker.com/compose/install/).
+You need to have recent versions of [Docker Engine](https://docs.docker.com/engine/installation/)
+and (Docker) [Compose](https://docs.docker.com/compose/install/).
 
 Build the images:
 
diff --git a/docs/server/source/drivers-clients/http-client-server-api.rst b/docs/server/source/drivers-clients/http-client-server-api.rst
index 17cdf429..4cab0c57 100644
--- a/docs/server/source/drivers-clients/http-client-server-api.rst
+++ b/docs/server/source/drivers-clients/http-client-server-api.rst
@@ -74,8 +74,7 @@ POST /transactions/
    Push a new transaction.
 
-   Note: The posted transaction should be a valid and signed `transaction
-   `_.
+   Note: The posted transaction should be a valid and signed :doc:`transaction <../data-models/transaction-model>`.
 
   The steps to build a valid transaction are beyond the scope of this page.
   One would normally use a driver such as the `BigchainDB Python Driver
   `_ to
diff --git a/docs/server/source/nodes/setup-run-node.md b/docs/server/source/nodes/setup-run-node.md
index 6b8a29b7..b8de7340 100644
--- a/docs/server/source/nodes/setup-run-node.md
+++ b/docs/server/source/nodes/setup-run-node.md
@@ -58,7 +58,7 @@ There are many options and tradeoffs. Don't forget to look into Amazon Elastic B
 
 ## Install RethinkDB Server
 
-If you don't already have RethinkDB Server installed, you must install it. The RethinkDB documentation has instructions for [how to install RethinkDB Server on a variety of operating systems](http://rethinkdb.com/docs/install/).
+If you don't already have RethinkDB Server installed, you must install it. The RethinkDB documentation has instructions for [how to install RethinkDB Server on a variety of operating systems](https://rethinkdb.com/docs/install/).
 
 ## Configure RethinkDB Server
 
diff --git a/docs/server/source/topic-guides/models.md b/docs/server/source/topic-guides/models.md
index e0265285..7f993feb 100644
--- a/docs/server/source/topic-guides/models.md
+++ b/docs/server/source/topic-guides/models.md
@@ -3,4 +3,4 @@
 This page about transaction concepts and data models was getting too big, so it was split into smaller pages. It will be deleted eventually, so update your links. Here's where you can find the new pages:
 
 * [Transaction Concepts](https://docs.bigchaindb.com/en/latest/transaction-concepts.html)
-* [Data Models (all of them)](https://docs.bigchaindb.com/en/latest/data-models/index.html)
+* [Data Models (all of them)](../data-models/index.html)