Ansible Linux lab environment

1. Installing the libvirtd stack

To take full advantage of the virtualization capabilities of a Linux host, it is recommended to install the “libvirtd” stack.

Installing the prerequisites:

echo "Go to the home folder"
cd
echo "Install Git"
apt-get -y install git
echo "Clone the virt-script repo on Github"
git clone https://github.com/goffinet/virt-scripts
echo "Go to the virt-scripts folder"
cd virt-scripts
echo "Install the requirements"
./autoprep.sh
systemctl stop apache2
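
Before going further, it can be worth confirming that the libvirt stack is up and that its default network exists; the checks below are a simple sketch and the exact network names may vary:

systemctl status libvirtd
virsh net-list --all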

Test by launching three guests:

echo "include virt-scripts in the PATH"
export PATH=$PATH:~/virt-scripts/ ; echo "PATH=$PATH:~/virt-scripts/" >> ~/.bashrc
echo "Download images Centos and Ubuntu images"
download-images.sh centos7 --force
download-images.sh ubuntu1804 --force
echo "Launch three Centos 7 guests"
for x in app1 app2 db ; do define-guest-image.sh $x centos7 ; done
echo "Add the libvirt bridge as nameserver"
sed -i '1 i\nameserver 192.168.122.1' /etc/resolv.conf
echo "Check the guests"
virsh list
echo "Get disk usage"
du -h /var/lib/libvirt/images/*
echo "Test the connectivity to the hosts"
sleep 60 ; for x in app1 app2 db ; do ping -c10 $x ; done
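
If name resolution to the guests fails, the DHCP leases handed out on the libvirt default network can be inspected (assuming the guests are attached to the default network):

virsh net-dhcp-leases default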

Create a ~/test-ansible working directory and create an inventory file:

cd
mkdir test-ansible
cd test-ansible
cat << EOF > inventory.ini
app1
app2
db

[all:vars]
ansible_connection=ssh
ansible_user=root
ansible_ssh_pass=testtest
EOF
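
Password-based SSH in Ansible relies on sshpass being available on the control node; it is also worth checking that the inventory parses as expected. The commands below are an illustrative check:

apt-get -y install sshpass
ansible-inventory -i inventory.ini --graph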

In the same directory, create a configuration file:

cat << EOF > ansible.cfg
[defaults]
inventory = ./inventory.ini
host_key_checking = False
private_key_file = /root/.ssh/id_rsa
callback_whitelist = profile_tasks
forks = 20
#strategy = free
gathering = explicit
become = True
[callback_profile_tasks]
task_output_limit = 100
EOF
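
From inside ~/test-ansible, a quick way to confirm that this ansible.cfg is the one being picked up (output paths will vary):

ansible --version
ansible-config dump --only-changed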

Test Ansible against all hosts in the inventory:

ansible -m ping all

Result:

app1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
db | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
app2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
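
Beyond the ad-hoc ping, a minimal playbook can be run against the same inventory. The file name, package and service below are illustrative (the guests are CentOS 7, so the service is chronyd):

cat << EOF > site.yml
---
- name: Minimal smoke test on the lab guests
  hosts: all
  gather_facts: true
  tasks:
    - name: Install chrony
      package:
        name: chrony
        state: present
    - name: Enable and start chronyd
      service:
        name: chronyd
        state: started
        enabled: true
EOF
ansible-playbook site.yml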

2. Cloud services

  • Amazon Web Services
  • Google Cloud Platform
  • OpenStack : OVH
  • Microsoft Azure
  • Digital Ocean
  • Packet
  • Linode
  • Rackspace
  • Scaleway
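
Most of these providers can be targeted with Ansible dynamic inventory plugins rather than a static inventory file. As a sketch only, assuming a recent Ansible with the amazon.aws collection and valid AWS credentials (the region, file name and grouping below are placeholders), an EC2 inventory source could look like this:

cat << EOF > lab.aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - eu-west-1
keyed_groups:
  - key: tags.Name
    prefix: tag
EOF
ansible-inventory -i lab.aws_ec2.yml --graph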

3. Vagrant Up notes

Installing Vagrant for KVM (Ubuntu 16.04)

Source: Vagrant Libvirt Provider

apt-get update && apt-get -yy upgrade
apt-get -yy build-dep vagrant ruby-libvirt
apt-get -yy install qemu libvirt-bin ebtables dnsmasq
apt-get -yy install libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev
wget https://releases.hashicorp.com/vagrant/2.1.2/vagrant_2.1.2_x86_64.deb
dpkg -i vagrant_2.1.2_x86_64.deb
vagrant plugin install vagrant-libvirt
echo "export VAGRANT_DEFAULT_PROVIDER=libvirt" >> ~/.bashrc
vagrant plugin install vagrant-mutate
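
A quick check that the plugins are installed and that the default provider is exported (simple verification, output will vary):

vagrant plugin list
grep VAGRANT_DEFAULT_PROVIDER ~/.bashrc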

Deploying a CentOS 7 box

List of KVM boxes.

vagrant box add geerlingguy/centos7
vagrant mutate geerlingguy/centos7 libvirt
vagrant init geerlingguy/centos7
vagrant up
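
Once the box is up, its state and OS release can be checked from the project directory (a simple verification):

vagrant status
vagrant ssh -c "cat /etc/redhat-release"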

For example:

mkdir -p vproj/centos-7
cd vproj/centos-7
vagrant init magneticone/centos-7
grep -v '^$\|^\s*\#' Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.box = "magneticone/centos-7"
end
vagrant up
vagrant ssh

Provisioning

cat << EOF > bootstrap.sh
#!/usr/bin/env bash

yum install -y httpd
systemctl enable httpd
systemctl start httpd
EOF
cat << EOF > Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "magneticone/centos-7"
  config.vm.provision :shell, path: "bootstrap.sh"
end
EOF
vagrant provision
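
Since bootstrap.sh installs and starts httpd, the result of the provisioning can be checked from the host (a simple verification):

vagrant ssh -c "systemctl is-active httpd"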

Basic management

vagrant suspend
vagrant resume
vagrant halt
vagrant destroy
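
To list the machines of the current project, or of every Vagrant project on the host:

vagrant status
vagrant global-status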

Deploying a Windows box

cd ~/vproj
mkdir windows-server-2016-standard-x64-eval
cd windows-server-2016-standard-x64-eval
vagrant init peru/windows-server-2016-standard-x64-eval
vagrant up

Deploying multiple boxes

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure('2') do |config|
  config.vm.box = "magneticone/centos-7"

  # Application server 1.
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.test"
    app.vm.network :private_network, ip: "192.168.60.4"
  end

  # Application server 2.
  config.vm.define "app2" do |app|
    app.vm.hostname = "orc-app2.test"
    app.vm.network :private_network, ip: "192.168.60.5"
  end

  # Database server.
  config.vm.define "db" do |db|
    db.vm.hostname = "orc-db.test"
    db.vm.network :private_network, ip: "192.168.60.6"
  end
end

For the following inventory:

https://github.com/geerlingguy/ansible-for-devops/blob/master/orchestration/inventory

inventory file

# Application servers
[app]
192.168.60.4
192.168.60.5

# Database server
[db]
192.168.60.6

# Group 'multi' with all servers
[multi:children]
app
db

[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

vagrant up
ssh-keygen -y -f ~/.vagrant.d/insecure_private_key
ssh-keygen -y -f ~/.vagrant.d/insecure_private_key > ~/.vagrant.d/vagrant.pub
ssh-copy-id -f -i ~/.vagrant.d/vagrant.pub vagrant@192.168.60.4
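
After repeating the key copy for each guest if needed, connectivity can be tested against this inventory (assuming it is saved as ./inventory in the project directory):

ansible -i inventory multi -m ping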

Ansible as a Vagrant provisioner
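
Vagrant can run Ansible at "vagrant up" time through its ansible provisioner; a minimal Vagrantfile sketch, where the playbook name is illustrative:

Vagrant.configure("2") do |config|
  config.vm.box = "magneticone/centos-7"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end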

Ansible as a Packer provisioner
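
Packer can run Ansible against the image being built through its ansible provisioner; a minimal template fragment, with the builder section omitted and an illustrative playbook name:

{
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "playbook.yml"
    }
  ]
}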