These are helper scripts for the Spark Group Demo, February 2016, Singapore.
Their main purpose is to set up a Mesos cluster for the demo.
However, there are plenty of existing tools that achieve the same thing in a more formal way:
- dcos: Mesos ecosystem bootstrap with CoreOS
- mantl: Mesos ecosystem bootstrap with CentOS
- playa-mesos: a Mesos sandbox in Vagrant
Warning: the local deployment supports only the bare-minimum features of this demo.
The local deployment is based on VirtualBox and docker-machine; both must be installed.
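Before running anything, it may help to confirm both tools are on the PATH. A minimal preflight sketch (the function name is just an illustration):

```shell
# Preflight sketch: report any required commands missing from PATH.
check_required() {
  local missing=0 cmd
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd" >&2
      missing=1
    fi
  done
  return $missing
}

# For this demo the local requirements are:
# check_required docker-machine VBoxManage
```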
After this you can
$ cd local
# Start all the docker machines and bootstrap
$ ./deploy_dm.sh bootstrap
# Remove all the docker machines
$ ./deploy_dm.sh teardown
After you create the env, you can
- Get the ip of mesos-master by
docker-machine ip mesos-master
- Access "http://$YOUR_MASTER_IP:5050" to open the Mesos web UI
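The two steps above can be combined in one line. A small sketch (the helper function is hypothetical, not part of the scripts):

```shell
# Build the Mesos web UI URL from a machine's IP.
mesos_ui_url() {
  echo "http://${1}:5050"
}

# Usage (assumes docker-machine is installed and the machine is named mesos-master):
# mesos_ui_url "$(docker-machine ip mesos-master)"
```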
For the AWS deployment, some preparation is needed:
- Find the correct Ubuntu image with an AMI finder.
- Prepare your SSH keys (.pem and .pub files), rename them to spark.pem and spark.pub, and put them in the aws folder.
- Create an instance profile (IAM role) with S3 read/write permission.
- Create an S3 bucket for Exhibitor config sync.
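For the instance-profile step, the sketch below emits a minimal S3 read/write policy. The bucket name is a placeholder, and the action list is an assumption of the common minimum for config sync (list, get, put), not the scripts' exact policy:

```shell
# Emit a minimal IAM policy granting read/write on one S3 bucket.
s3_rw_policy() {
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::$1", "arn:aws:s3:::$1/*"]
  }]
}
EOF
}

# s3_rw_policy my-exhibitor-config-bucket
```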
After this you can
$ cd aws
$ cp infra.sample.yml infra.yml
# Modify infra.yml accordingly (the S3 bucket you created)
$ cp terraform/aws.tf.tmpl aws.tf
# Modify aws.tf accordingly (AMI, credentials, instance profile)
# Start all the instances and bootstrap
$ ./deploy_aws.sh bootstrap
# Remove all instances
$ ./deploy_aws.sh teardown
After the bootstrap script creates the env, you can
- Get one of the masters' IPs by
ansible -m debug -a "var=hostvars['spark-master-001']['public_ipv4']" spark-master-001
ansible -m debug -a "var=hostvars['spark-master-001']['private_ipv4']" spark-master-001
- SSH to your master and create tunnel
ssh -i spark.pem ubuntu@$YOUR_MASTER_PUBLIC_IP -D 8108
- Use FoxyProxy to access your infra in AWS at "http://$YOUR_MASTER_PRIVATE_IP:5050"
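The same SSH tunnel also works from the command line: the `-D 8108` flag opens a SOCKS proxy on localhost, which curl can use directly. A sketch (the helper name, private IP, and endpoint path are placeholders; the tunnel from the previous step must already be up):

```shell
# Fetch a URL through the SOCKS proxy opened by `ssh ... -D 8108`.
# --socks5-hostname resolves hostnames on the proxy side as well.
socks_get() {
  curl --silent --socks5-hostname localhost:8108 "$1"
}

# socks_get "http://10.0.1.10:5050/master/state.json"
```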
I haven't tried other cloud providers, but Terraform supports them all.
TODO