Introducing swarmt: a local swarm cluster manager
Swarmt is a small project I started while dealing with the many swarm clusters I’ve deployed on my laptop. You know, those moments when docker-machine ls gives you this kind of output:
NAME          ACTIVE   DRIVER       STATE     URL   SWARM   DOCKER    ERRORS
anchore-vm    -        virtualbox   Stopped                 Unknown
dagda-vm      -        virtualbox   Stopped                 Unknown
swarm1m1      -        virtualbox   Stopped                 Unknown
swarm1m2      -        virtualbox   Stopped                 Unknown
swarm1w1      -        virtualbox   Stopped                 Unknown
swarm1w2      -        virtualbox   Stopped                 Unknown
swarm1w3      -        virtualbox   Stopped                 Unknown
rancherm1     -        virtualbox   Stopped                 Unknown
rancherm2     -        virtualbox   Stopped                 Unknown
rancherw1     -        virtualbox   Stopped                 Unknown
rancherw2     -        virtualbox   Stopped                 Unknown
rancherw3     -        virtualbox   Stopped                 Unknown
pxcm1         -        virtualbox   Stopped                 Unknown
pxcm2         -        virtualbox   Stopped                 Unknown
pxcw1         -        virtualbox   Stopped                 Unknown
pxcw2         -        virtualbox   Stopped                 Unknown
pxcw3         -        virtualbox   Stopped                 Unknown
........
Until now, I was relying on not-so-fancy shell tricks like this one:
for i in swarm1m1 swarm1m2 swarm1w1 swarm1w2 swarm1w3; do docker-machine stop $i; done
Here comes swarmt.sh to my rescue!
How does swarmt work?
Basically, swarmt wraps docker-machine with a little bit of magic.
It looks for a configuration file, which by default is named swarmt.conf; you can point it at different configuration files (see the examples below).
The file should contain these parameters:
project=[your project name]
smanager=[number of swarm managers you want]
sworker=[number of swarm workers you want]
mdriver=[docker-machine driver you want to use] (VirtualBox only for now; DigitalOcean should follow)
mimage=[name of the image you want to use]
dotoken=[DigitalOcean token]
stackfile=[name of the stack file you want swarmt to deploy at the end] (optional)
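Since the file is just key=value pairs, a wrapper like this can load it by simply sourcing it. Here is a minimal sketch of how that could look; the -c option handling is my assumption about the implementation, not swarmt’s actual code:
#!/bin/sh
# Default config file; can be overridden with -c <file> (assumed behaviour)
CONF=swarmt.conf
while getopts "c:" opt; do
  case $opt in
    c) CONF=$OPTARG ;;
  esac
done
shift $((OPTIND - 1))

# The config uses plain key=value shell syntax, so sourcing it
# populates $project, $smanager, $sworker, and friends.
. "./$CONF"
echo "project=$project managers=$smanager workers=$sworker"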
Different options are available in swarmt:
init: create the swarm nodes and bootstrap the cluster as defined in the configuration file
start: start an existing swarm cluster
stop: stop all swarm nodes
rm: stop and delete all swarm nodes
list: list all swarm nodes and give their status
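Internally, a wrapper like this typically just dispatches on the sub-command and loops over the machines whose names match the project. A hypothetical skeleton (my sketch; the nodes helper is illustrative, not swarmt’s actual source):
# $project comes from the sourced configuration file
nodes() { docker-machine ls -q --filter "name=$project"; }

case "$1" in
  init)  echo "create the machines and bootstrap the swarm (see the init sketch below)" ;;
  start) for n in $(nodes); do docker-machine start "$n"; done ;;
  stop)  for n in $(nodes); do docker-machine stop "$n"; done ;;
  rm)    for n in $(nodes); do docker-machine rm -y "$n"; done ;;
  list)  docker-machine ls --filter "name=$project" ;;
  *)     echo "usage: $0 [-c conf] {init|start|stop|rm|list}" >&2; exit 1 ;;
esac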
Examples
Single swarm cluster
Let’s start with a very simple swarm cluster (i.e. 1 manager and 2 workers)
for a MySQL Galera cluster.
Edit swarmt.conf
as below:
project=swarmG
smanager=1
sworker=2
mdriver=virtualbox
mimage=https://releases.rancher.com/os/latest/rancheros.iso
dotoken=
stackfile=swarmG.yml
In this example, swarmG.yml doesn’t exist, so swarmt won’t deploy any containers.
Time to fire up our swarm cluster:
./swarmt.sh init
<=== yes, that simple!
A few minutes later, you should see this message:
swarmG swarm cluster is up and running
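Under the hood, an init like this presumably boils down to plain docker-machine and docker swarm commands. Here is a rough hand-rolled equivalent of what just happened for this 1-manager / 2-worker example; this is my reconstruction of the flow, not swarmt’s actual code:
# Create the manager VM from the image given in the config
docker-machine create -d virtualbox \
  --virtualbox-boot2docker-url=https://releases.rancher.com/os/latest/rancheros.iso \
  swarmGm1

# Bootstrap the swarm on the manager and grab the worker join token
MANAGER_IP=$(docker-machine ip swarmGm1)
docker-machine ssh swarmGm1 "docker swarm init --advertise-addr $MANAGER_IP"
TOKEN=$(docker-machine ssh swarmGm1 "docker swarm join-token -q worker")

# Create the workers and join them to the cluster
for w in swarmGw1 swarmGw2; do
  docker-machine create -d virtualbox \
    --virtualbox-boot2docker-url=https://releases.rancher.com/os/latest/rancheros.iso \
    "$w"
  docker-machine ssh "$w" "docker swarm join --token $TOKEN $MANAGER_IP:2377"
done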
Let’s see if it really works!
eval $(docker-machine env swarmGm1)
docker node ls
Should output:
swarmGm1 * virtualbox Running tcp://192.168.99.100:2376 v17.05.0-ce
swarmGw1 - virtualbox Running tcp://192.168.99.101:2376 v17.05.0-ce
swarmGw2 - virtualbox Running tcp://192.168.99.102:2376 v17.05.0-ce
Awesome!
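By the way, had swarmG.yml existed, the final step would have been deploying that stack onto the fresh cluster. You can always do it by hand with the standard Docker CLI:
eval $(docker-machine env swarmGm1)   # point the client at the manager
docker stack deploy -c swarmG.yml swarmG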
Multiple swarm clusters
Now imagine you want swarmt to manage, let’s say, two swarm cluster configurations.
First, create myproject.conf:
project=myproject
smanager=1
sworker=2
mdriver=virtualbox
mimage=https://releases.rancher.com/os/latest/rancheros.iso
dotoken=
stackfile=
Then create mycoolproject.conf:
project=mycoolproject
smanager=2
sworker=3
mdriver=virtualbox
mimage=https://releases.rancher.com/os/latest/rancheros.iso
dotoken=
stackfile=
Let’s start the clusters for these two projects:
./swarmt.sh -c myproject.conf init
./swarmt.sh -c mycoolproject.conf init
You should see:
myproject swarm cluster is up and running
mycoolproject swarm cluster is up and running
How about listing all swarm nodes?
./swarmt.sh list
<== no need to specify any configuration file
Should output:
myproject swarm nodes:
myprojectm1 * virtualbox Running tcp://192.168.99.100:2376 v17.05.0-ce
myprojectw1 - virtualbox Running tcp://192.168.99.101:2376 v17.05.0-ce
myprojectw2 - virtualbox Running tcp://192.168.99.102:2376 v17.05.0-ce
mycoolproject swarm nodes:
mycoolprojectm1 * virtualbox Running tcp://192.168.99.103:2376 v17.05.0-ce
mycoolprojectm2 - virtualbox Running tcp://192.168.99.104:2376 v17.05.0-ce
mycoolprojectw1 - virtualbox Running tcp://192.168.99.105:2376 v17.05.0-ce
mycoolprojectw2 - virtualbox Running tcp://192.168.99.106:2376 v17.05.0-ce
mycoolprojectw3 - virtualbox Running tcp://192.168.99.107:2376 v17.05.0-ce
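For the record, since the node names are prefixed with the project name, you can get a similar per-project view with plain docker-machine:
docker-machine ls --filter "name=mycoolproject"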
Now let’s stop mycoolproject swarm nodes:
./swarmt.sh -c mycoolproject.conf stop
Should output:
Stopping "mycoolprojectm1"...
Machine "mycoolprojectm1" was stopped.
Stopping "mycoolprojectm2"...
Machine "mycoolprojectm2" was stopped.
Stopping "mycoolprojectw1"...
Machine "mycoolprojectw1" was stopped.
Stopping "mycoolprojectw2"...
Machine "mycoolprojectw2" was stopped.
Stopping "mycoolprojectw3"...
Machine "mycoolprojectw3" was stopped.
===================================
mycoolproject swarm cluster is halted
===================================
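This is essentially the shell loop from the beginning of the post, generalized. Assuming the implementation follows the sketch above, stop just iterates over every machine whose name matches the project:
for n in $(docker-machine ls -q --filter "name=mycoolproject"); do
  docker-machine stop "$n"
done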
Need to start your cluster again? Here you are:
./swarmt.sh -c mycoolproject.conf start
Should output:
.......
mycoolprojectw3 swarm node is starting :
Starting "mycoolprojectw3"...
(mycoolprojectw3) Check network to re-create if needed...
(mycoolprojectw3) Waiting for an IP...
Machine "mycoolprojectw3" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
===================================
mycoolproject swarm cluster is ready
===================================
As you can see, swarmt can be pretty useful when dealing with multiple cluster setups.
Check out my GitHub repository for more about swarmt.
As always, PRs / feature requests / issues are welcome :)
Have fun!
R.