Added deploy_conf.py, changed code to use it, updated docs

troymc 2016-05-13 13:39:27 +02:00
parent fe93b376ca
commit 6565f7a063
5 changed files with 128 additions and 79 deletions

View File

@@ -4,37 +4,19 @@
# if any command has a non-zero exit status
set -e
USAGE="usage: ./awsdeploy.sh <number_of_nodes_in_cluster> <pypi_or_branch> <servers_or_clients>"
# Validate the values in deploy_conf.py
python validate_deploy_conf.py
if [ -z "$1" ]; then
echo $USAGE
echo "No first argument was specified"
echo "It should be a number like 3 or 15"
exit 1
else
NUM_NODES=$1
fi
if [ -z "$2" ]; then
echo $USAGE
echo "No second argument was specified, so BigchainDB will be installed from PyPI"
BRANCH="pypi"
else
BRANCH=$2
fi
if [ -z "$3" ]; then
echo $USAGE
echo "No third argument was specified, so servers will be deployed"
WHAT_TO_DEPLOY="servers"
else
WHAT_TO_DEPLOY=$3
fi
if [[ ("$WHAT_TO_DEPLOY" != "servers") && ("$WHAT_TO_DEPLOY" != "clients") ]]; then
echo "The third argument, if included, must be servers or clients"
exit 1
fi
# Read deploy_conf.py
# to set environment variables related to AWS deployment
echo "Reading deploy_conf.py"
source deploy_conf.py
echo "NUM_NODES = "$NUM_NODES
echo "BRANCH = "$BRANCH
echo "WHAT_TO_DEPLOY = "$WHAT_TO_DEPLOY
echo "USE_KEYPAIRS_FILE = "$USE_KEYPAIRS_FILE
echo "IMAGE_ID = "$IMAGE_ID
echo "INSTANCE_TYPE = "$INSTANCE_TYPE
# Check for AWS private key file (.pem file)
if [ ! -f "pem/bigchaindb.pem" ]; then
@@ -67,7 +49,7 @@ chmod 0400 pem/bigchaindb.pem
# 5. writes the shellscript add2known_hosts.sh
# 6. (over)writes a file named hostlist.py
# containing a list of all public DNS names.
python launch_ec2_nodes.py --tag $TAG --nodes $NUM_NODES
python launch_ec2_nodes.py --tag $TAG
# Make add2known_hosts.sh executable then execute it.
# This adds remote keys to ~/.ssh/known_hosts
@@ -117,7 +99,12 @@ if [ "$WHAT_TO_DEPLOY" == "servers" ]; then
# Transform the config files in the confiles directory
# to have proper keyrings, api_endpoint values, etc.
python clusterize_confiles.py confiles $NUM_NODES
if [ "$USE_KEYPAIRS_FILE" == "True" ]; then
echo "Using keypairs in keypairs.py"
python clusterize_confiles.py -k confiles $NUM_NODES
else
python clusterize_confiles.py confiles $NUM_NODES
fi
# Send one of the config files to each instance
for (( HOST=0 ; HOST<$NUM_NODES ; HOST++ )); do

View File

@@ -0,0 +1,48 @@
# AWS deployment config file
# To use in a Bash shell script:
# source deploy_conf.py
# echo $EXAMPLEVAR
# To use in a Python script:
# from deploy_conf import *
# # EXAMPLEVAR now has a value
# DON'T PUT SPACES AROUND THE =
# because that would confuse Bash.
# Values can be strings in double quotes, or integers like 23
# NUM_NODES is the number of nodes to deploy
NUM_NODES=3
# BRANCH is either "pypi" or the name of a local Git branch
# (e.g. "master" or "feat/3627/optional-delimiter-in-txfile")
# It's where to get the BigchainDB code to be deployed on the nodes
BRANCH="master"
# WHAT_TO_DEPLOY is either "servers" or "clients"
# What do you want to deploy?
WHAT_TO_DEPLOY="servers"
# USE_KEYPAIRS_FILE is either True or False
# Should node keypairs be read from keypairs.py?
# (If False, then the keypairs will be whatever is in the
# BigchainDB config files in the confiles directory.)
USE_KEYPAIRS_FILE=False
# IMAGE_ID is the Amazon Machine Image (AMI) id to use
# in all the servers/instances to be launched.
# Examples:
# "ami-accff2b1" = An Ubuntu 14.04.2 LTX "Ubuntu Cloud image" from Canonical
# 64-bit, hvm-ssd, published to eu-central-1
# See http://tinyurl.com/hkjhg46
# "ami-596b7235" = Ubuntu with IOPS storage? Does this work?
#
# See http://cloud-images.ubuntu.com/releases/14.04/release-20150325/
IMAGE_ID="ami-accff2b1"
# INSTANCE_TYPE is the type of AWS instance to launch
# i.e. How many CPUs do you want? How much storage? etc.
# Examples: "m3.2xlarge", "c3.8xlarge", "c4.8xlarge"
# For all options, see https://aws.amazon.com/ec2/instance-types/
INSTANCE_TYPE="m3.2xlarge"
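
Because deploy_conf.py is written so that it parses both as a sourceable Bash file and as an importable Python module, the same setting has different types depending on the consumer: Python sees `USE_KEYPAIRS_FILE=False` as a real boolean, while Bash only sees the string `False`, which is why `awsdeploy.sh` compares `$USE_KEYPAIRS_FILE` against the string `"True"`. A minimal sketch of the Python side (the script name is illustrative, not part of this commit):

```python
# show_conf.py -- illustrative only; the variable names come from deploy_conf.py
from __future__ import print_function
from deploy_conf import *  # brings NUM_NODES, BRANCH, USE_KEYPAIRS_FILE, etc. into scope

print(NUM_NODES + 1)           # NUM_NODES is a real int here, so arithmetic works
if USE_KEYPAIRS_FILE:          # a real bool here, unlike the string Bash sees
    print('Keypairs will be read from keypairs.py')
print(BRANCH, IMAGE_ID, INSTANCE_TYPE)
```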

View File

@@ -18,10 +18,15 @@ import socket
import argparse
import botocore
import boto3
from awscommon import get_naeips
from deploy_conf import *
# First, ensure they're using Python 2.5-2.7
# Make sure NUM_NODES is an int
assert isinstance(NUM_NODES, int)
# Ensure they're using Python 2.5-2.7
pyver = sys.version_info
major = pyver[0]
minor = pyver[1]
@@ -36,14 +41,8 @@ parser = argparse.ArgumentParser()
parser.add_argument("--tag",
help="tag to add to all launched instances on AWS",
required=True)
parser.add_argument("--nodes",
help="number of nodes in the cluster",
required=True,
type=int)
args = parser.parse_args()
tag = args.tag
num_nodes = int(args.nodes)
# Get an AWS EC2 "resource"
# See http://boto3.readthedocs.org/en/latest/guide/resources.html
@@ -81,10 +80,10 @@ print('You have {} allocated elastic IPs which are '
'not already associated with instances'.
format(len(non_associated_eips)))
if num_nodes > len(non_associated_eips):
num_eips_to_allocate = num_nodes - len(non_associated_eips)
if NUM_NODES > len(non_associated_eips):
num_eips_to_allocate = NUM_NODES - len(non_associated_eips)
print('You want to launch {} instances'.
format(num_nodes))
format(NUM_NODES))
print('so {} more elastic IPs must be allocated'.
format(num_eips_to_allocate))
for _ in range(num_eips_to_allocate):
@@ -103,22 +102,19 @@ if num_nodes > len(non_associated_eips):
raise
print('Commencing launch of {} instances on Amazon EC2...'.
format(num_nodes))
format(NUM_NODES))
for _ in range(num_nodes):
for _ in range(NUM_NODES):
# Request the launch of one instance at a time
# (so list_of_instances should contain only one item)
list_of_instances = ec2.create_instances(
ImageId='ami-accff2b1', # ubuntu-image
# 'ami-596b7235', # ubuntu w/ iops storage
MinCount=1,
MaxCount=1,
KeyName='bigchaindb',
InstanceType='m3.2xlarge',
# 'c3.8xlarge',
# 'c4.8xlarge',
SecurityGroupIds=['bigchaindb']
)
ImageId=IMAGE_ID,
MinCount=1,
MaxCount=1,
KeyName='bigchaindb',
InstanceType=INSTANCE_TYPE,
SecurityGroupIds=['bigchaindb']
)
# Tag the just-launched instances (should be just one)
for instance in list_of_instances:
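
The body of the elastic-IP allocation loop falls outside the hunk above, so it isn't shown in this diff. For orientation only, here is a rough sketch of what allocating one address with boto3 can look like; the `client` variable and the `Domain` value are assumptions for illustration, not code from this commit:

```python
import boto3

client = boto3.client('ec2')

# Ask AWS for one more elastic IP (an EC2-Classic style address).
# The real loop in launch_ec2_nodes.py may use different arguments or
# the resource's meta.client; this is only an illustration of the call.
response = client.allocate_address(Domain='standard')
print('Allocated elastic IP {}'.format(response['PublicIp']))
```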

View File

@@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
"""This script validates the values in deploy_conf.py
"""
from __future__ import unicode_literals
import sys
from deploy_conf import *
try:
assert isinstance(NUM_NODES, int)
assert isinstance(BRANCH, str)
assert isinstance(WHAT_TO_DEPLOY, str)
assert isinstance(USE_KEYPAIRS_FILE, bool)
assert isinstance(IMAGE_ID, str)
assert isinstance(INSTANCE_TYPE, str)
except NameError as e:
sys.exit('A variable with {} '.format(e.args[0]) + 'in deploy_conf.py')
if NUM_NODES > 64:
raise ValueError('NUM_NODES should be less than or equal to 64. '
'The deploy_conf.py file sets it to {}'.format(NUM_NODES))
if WHAT_TO_DEPLOY not in ['servers', 'clients']:
raise ValueError('WHAT_TO_DEPLOY should be either "servers" or "clients". '
'The deploy_conf.py file sets it to {}'.
format(WHAT_TO_DEPLOY))
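
A usage note (an observation about the code above, not part of the commit): because the settings arrive via a star import, a setting that is missing from deploy_conf.py surfaces as a NameError at its first use inside the try block, so the script exits with a message along the lines of `A variable with name 'NUM_NODES' is not defined in deploy_conf.py`. Since awsdeploy.sh runs with `set -e` and calls `python validate_deploy_conf.py` early on, that failure aborts the deployment before anything is launched on AWS.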

View File

@@ -153,43 +153,33 @@ cd deploy-cluster-aws
python3 write_keypairs_file.py 100
```
The above command generates a file with 100 keypairs. (You can generate more keypairs than you need, so you can use the same list over and over again, for different numbers of servers.) To make the `awsdeploy.sh` script read all keys from `keypairs.py`, you must _edit_ the `awsdeploy.sh` script: change the line that says `python clusterize_confiles.py confiles $NUM_NODES` to `python clusterize_confiles.py -k confiles $NUM_NODES` (i.e. add the `-k` option).
The above command generates a file with 100 keypairs. (You can generate more keypairs than you need, so you can use the same list over and over again, for different numbers of servers.) To make the `awsdeploy.sh` script read all keys from `keypairs.py`, just set `USE_KEYPAIRS_FILE=True` in `deploy_conf.py`.
### Step 2
Step 2 is to launch the nodes ("instances") on AWS, install all the necessary software on them, configure the software, run it, and more.
Here's an example of how one could launch a BigchainDB cluster of three (3) nodes on AWS:
First, edit the AWS deployment configuration file, `deploy_conf.py`, in the `bigchaindb/deploy-cluster-aws` directory. It comes with comments explaining each of the configuration settings. You may want to make a copy of `deploy_conf.py` before editing it. The defaults are (or should be):
```text
NUM_NODES=3
BRANCH="master"
WHAT_TO_DEPLOY="servers"
USE_KEYPAIRS_FILE=False
IMAGE_ID="ami-accff2b1"
INSTANCE_TYPE="m3.2xlarge"
```
Once you've edited `deploy_conf.py` to your liking:
```text
# in a Python 2.5-2.7 virtual environment where fabric, boto3, etc. are installed
cd bigchaindb
cd deploy-cluster-aws
./awsdeploy.sh 3
./awsdeploy.sh
# Only if you want to start BigchainDB on all the nodes:
fab start_bigchaindb
```
`awsdeploy.sh` is a Bash script which calls some Python and Fabric scripts. The usage is:
```text
./awsdeploy.sh <number_of_nodes_in_cluster> [pypi_or_branch] [servers_or_clients]
```
**<number_of_nodes_in_cluster>** (Required)
The number of nodes you want to deploy. Example value: 5
**[pypi_or_branch]** (Optional)
Where the nodes should get their BigchainDB source code. If it's `pypi`, then BigchainDB will be installed from the latest `bigchaindb` package in the [Python Package Index (PyPI)](https://pypi.python.org/pypi). That is, on each node, BigchainDB will be installed using `pip install bigchaindb`. You can also put the name of a local Git branch; it will be compressed and sent out to all the nodes for installation. If you don't include the second argument, then the default is `pypi`.
**[servers_or_clients]** (Optional)
If you want to deploy BigchainDB servers, then the third argument should be `servers`.
If you want to deploy BigchainDB clients, then the third argument should be `clients`.
The third argument is optional, but if you want to include it, you must also include the second argument. If you don't include the third argument, then the default is `servers`.
---
If you're curious what the `awsdeploy.sh` script does, [the source code](https://github.com/bigchaindb/bigchaindb/blob/master/deploy-cluster-aws/awsdeploy.sh) has lots of explanatory comments, so it's quite easy to read.
`awsdeploy.sh` is a Bash script which calls some Python and Fabric scripts. If you're curious what it does, [the source code](https://github.com/bigchaindb/bigchaindb/blob/master/deploy-cluster-aws/awsdeploy.sh) has lots of explanatory comments, so it's quite easy to read.
It should take a few minutes for the deployment to finish. If you run into problems, see the section on Known Deployment Issues below.