
Using Salt for configuration and release management is great, but something as heavily depended upon as CM and RM also needs rigorous testing before changes can go live.

Fixing and eliminating typos, mistakes and the like (which can always happen with human interaction) needs to be a priority when working with configuration management tools like Salt, Puppet or Chef. Surely you have dev, QA, stage and production environments and a pull request process, but you also need to make sure it's easy and convenient to test Salt states, pillars and other configuration during development.

Salt cluster with a little Docker help

I’ve created a Dockerized Salt setup using a Dockerfile, Docker Machine and Docker Compose, which provides a locally running Salt cluster. Both master and minions run from the same Docker image, which includes salt-master, salt-minion, salt-api and salt-cloud.

Initial setup

Clone repo

Start by cloning the repo.

git clone git@github.com:jacksoncage/salt-docker.git
cd salt-docker

Needed applications

To get things running we need VirtualBox, Docker Machine, Docker Compose and boot2docker, which we use to set up a VirtualBox VM called salt and connect the local Docker client to it.

brew install docker-machine docker-compose
docker-machine create --driver virtualbox salt
eval "$(docker-machine env salt)"

Compose cluster

We then use Compose to specify our cluster; see below or the example YAML in the repo.

master:
  image: jacksoncage/salt:latest
  environment:
    SALT_USE: master
    SALT_NAME: master
    SALT_GRAINS: "{'test': 'test'}"
    LOG_LEVEL: debug
  hostname: master
  expose:
    - "4505"
    - "4506"
    - "8080"
    - "8081"
  ports:
    - "8080:8080"
    - "8081:8081"
  volumes:
    - ./srv/salt:/srv/salt/:rw
minion1:
  image: jacksoncage/salt:latest
  links:
   - master
  environment:
    SALT_USE: minion
    SALT_NAME: minion1
    SALT_GRAINS: "{'test': 'test'}"
  hostname: minion1
minion2:
  image: jacksoncage/salt:latest
  links:
   - master
  environment:
    SALT_USE: minion
    SALT_NAME: minion2
    SALT_GRAINS: "{'test': 'test'}"
  hostname: minion2

In this case we’ll be starting three containers: one master and two minions. As you can see, usage, name and grains can be set with environment variables, and a volume is used to mount ./srv/salt into the master so you can run your own Salt states. The Compose template can be extended to replicate your production environment.
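
To have something to apply later, you can drop a couple of files into ./srv/salt on the host; they appear under /srv/salt/ on the master through the volume mount. The top.sls and motd state below are only a minimal sketch, not files shipped with the repo:

# ./srv/salt/top.sls - assign the motd state to every minion
base:
  '*':
    - motd

# ./srv/salt/motd/init.sls - manage /etc/motd with inline contents
/etc/motd:
  file.managed:
    - contents: |
        Managed by Salt in a local Docker test cluster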

Run cluster

All you need to do to get the cluster up and running is a docker-compose up:

docker-compose up  
Creating saltdocker_master_1...
Pulling image jacksoncage/salt:latest...
Pulling repository jacksoncage/salt
9375a2464ad6: Download complete
Status: Downloaded newer image for jacksoncage/salt:latest
Creating saltdocker_minion1_1...
Creating saltdocker_minion2_1...
Attaching to saltdocker_master_1, saltdocker_minion1_1, saltdocker_minion2_1
minion1_1 | INFO: Set grains on minion1 to: {'testname': 'minion1'}
minion1_1 | INFO: Starting salt-minion with log level debug with hostname minion1
minion1_1 | [DEBUG   ] Reading configuration from /etc/salt/minion
minion1_1 | [DEBUG   ] Using cached minion ID from /etc/salt/minion_id: minion1
minion1_1 | [DEBUG   ] Configuration file path: /etc/salt/minion
minion1_1 | [INFO    ] Setting up the Salt Minion "minion1"
minion1_1 | [DEBUG   ] Created pidfile: /var/run/salt-minion.pid
minion1_1 | [DEBUG   ] Reading configuration from /etc/salt/minion
minion2_1 | INFO: Set grains on minion2 to: {'testname': 'minion2'}
minion2_1 | INFO: Starting salt-minion with log level debug with hostname minion2
master_1  | INFO: Starting salt-minion and auto connect to salt-master
master_1  |  * Starting salt minion control daemon salt-minion
master_1  |    ...done.
master_1  | INFO: Set grains on master to: {'testname': 'master'}
master_1  | INFO: Starting salt-master with log level debug with hostname master
master_1  | [DEBUG   ] Reading configuration from /etc/salt/master
master_1  | [DEBUG   ] Using cached minion ID from /etc/salt/minion_id: master
master_1  | [DEBUG   ] Configuration file path: /etc/salt/master
master_1  | [INFO    ] Setting up the Salt Master
master_1  | [INFO    ] Generating master keys: /etc/salt/pki/master
master_1  | [INFO    ] Preparing the root key for local communication
master_1  | [DEBUG   ] Created pidfile: /var/run/salt-master.pid
master_1  | [INFO    ] salt-master is starting as user 'root'
master_1  | [INFO    ] Current values for max open files soft/hard setting: 1048576/1048576
master_1  | [DEBUG   ] Started '<bound method Master._clear_old_jobs of <salt.master.Master object at 0x7f34fe976a10>>'(*[], **{} with pid 36
master_1  | [DEBUG   ] Reading configuration from /etc/salt/master

The cluster is now up and all minions are connected to the Salt master.
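
If you want to confirm that the minion keys have been accepted before jumping into the container, you can run salt-key through docker exec (using the container name from the output above):

docker exec saltdocker_master_1 salt-key -L

All three IDs (master, minion1 and minion2) should be listed under Accepted Keys, since the minions connect to the master automatically.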

Test and troubleshoot

By jumping in with docker exec -i -t saltdocker_master_1 bash you’re now able to test and troubleshoot just as you would from a normal Salt master.

docker exec -i -t saltdocker_master_1 bash

root@master:/# salt \* test.ping
minion2:
    True
master:
    True
minion1:
    True

Now let’s write some states and test them out!
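
For example, still inside the master container, you could check the grain set through SALT_GRAINS and apply the sketched motd state from earlier (assuming you added those files under ./srv/salt):

# Show the custom grain defined via the SALT_GRAINS environment variable
salt \* grains.item test

# Apply everything assigned in the top file, including the example motd state
salt \* state.highstate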
