On GitHub: agallou / presentation-provisionning
Apéro PHP on the 29th of each month at l'Antre Autre
Talks in this room every two months
Radically simple IT automation platform
Python, GPL v3
via SSH
ansible prod -m shell -a "yum upgrade -y"
YAML + bits of Jinja
---
- hosts: jboss_servers
  serial: 10%
  tasks:
    - nagios: action=disable_alerts service=host host={{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - shell: yum upgrade -y
    - shell: reboot
    - wait_for: port=80 delay=10 host={{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - nagios: action=enable_alerts service=host host={{ inventory_hostname }}
      delegate_to: 127.0.0.1
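The `serial: 10%` keyword turns this into a rolling update: Ansible only works on 10% of the hosts at a time. A rough Python sketch of the batching (the exact rounding rule here is an assumption, not Ansible's documented algorithm):

```python
import math

def batches(hosts, serial="10%"):
    """Split a host list into rolling-update batches, Ansible-style.

    A percentage is applied to the total host count, with a minimum
    of one host per batch (rounding behaviour is an approximation).
    """
    if serial.endswith("%"):
        size = max(1, math.floor(len(hosts) * int(serial[:-1]) / 100))
    else:
        size = int(serial)
    return [hosts[i:i + size] for i in range(0, len(hosts), size)]

hosts = ["jboss%02d.example.org" % i for i in range(1, 21)]
print(batches(hosts)[0])  # first batch: 2 hosts (10% of 20)
```

Each batch runs the full task list (disable alerts, upgrade, reboot, wait, re-enable) before the next batch starts.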
---
- hosts: web_servers
  roles:
    - role: website
      url: www.example.org
      document_root: /srv/sites/www/
    - role: website
      url: webmail.example.org
      document_root: /srv/sites/webmail/
      modules:
        - php
    - role: website
      url: calendar.example.org
      redirect: webmail.example.org
Idempotent
Cover a wide range of needs
[web_servers]
www[1:5].example.org ansible_ssh_user=root

[redis_servers]
redis[1:3].example.org ansible_ssh_user=centos ansible_sudo=yes
redis-test.example.org ansible_ssh_user=fedora ansible_sudo=yes
Usable in Jinja
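Inventory variables can be interpolated anywhere Jinja is evaluated. A deliberately simplified stand-in for `{{ var }}` expansion (real Ansible uses the full Jinja2 engine, not this regex):

```python
import re

def render(template, variables):
    """Minimal sketch of Jinja-style '{{ var }}' interpolation."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables[m.group(1)]),
                  template)

print(render("port=80 host={{ inventory_hostname }}",
             {"inventory_hostname": "www1.example.org"}))
```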
---
- hosts: webbuilder
  vars:
    builder_user: builder_middleman
    error_mail: misc@example.org
  roles:
    - role: builder
      name: manageiq
      git_url: "https://git.example.com/example/website.git"
      git_version: master
      remote_location: /var/www/html
      remote_server: www.example.org
      remote_user: root
$ ansible all -m setup -c local -i '127.0.0.1,'
127.0.0.1 | success >> {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.76.131"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::ea2a:eaff:fe15:9d20"
        ],
        "ansible_architecture": "x86_64",
        "ansible_bios_date": "03/28/2014",
        "ansible_bios_version": "CJFT85RW (2.24 )",
        "ansible_cmdline": {
            "BOOT_IMAGE": "/vmlinuz-3.10.0-210.el7.x86_64",
            "LANG": "fr_FR.UTF-8",
$ ls *
deploy.yml  hosts  requirements.yml

group_vars:
webservers.yml  redis_servers.yml

host_vars:
redis-test.yml

roles:
builder  website  redis
$ ls roles/builder/*
roles/builder/defaults:
mail.yml

roles/builder/files:
builder.sh

roles/builder/handlers:
mail.yml

roles/builder/meta:
main.yml

roles/builder/tasks:
main.yml

roles/builder/templates:
config.ini
- name: install EPEL
  yum: pkg=epel-release state=installed
  when: ansible_distribution == 'CentOS'
- name: install base rpms
  yum: pkg={{ item }} state=installed
  with_items:
    - screen
    - htop
    - iftop
    - iotop
    - strace
    - vim-enhanced
    - tcpdump
    - chrony
- user: name=builder_middleman generate_ssh_key=yes
  register: result
- authorized_key: key="{{ result.ssh_public_key }}" user=copy_user
  delegate_to: www.example.org
http://www.ansible.com/
#ansible on Freenode
ansible-users on Google Groups
http://docs.chef.io/attributes.html
Roughly speaking, it's a configurable and/or platform-dependent variable.
# my_app/attributes/default.rb
default['my_app']['environment'] = 'prod'
# my_app/recipes/default.rb
puts node['my_app']['environment']
# my_app/recipes/packages.rb
package 'hhvm' do
  action :install
end
“Chef, install the hhvm package if it isn't already installed. Thanks!”
A recipe contains one or more resources
# recipes/packages.rb
yum_repository 'epel' do
  mirrorlist 'http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch'
  gpgkey 'http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6'
  action :create
  notifies :run, 'execute[yum-clean-all]', :immediately
end

execute 'yum-clean-all' do
  command 'yum clean all'
  action :nothing
end
package 'php'
“Chef, to install the php package, start by adding the EPEL repository. Then you can install php.”
├── attributes/
│ └── default.rb
├── files/
│ └── default/
├── libraries/
├── metadata.rb
├── providers/
├── recipes/
│ ├── default.rb
│ └── packages.rb
├── resources/
└── templates/
└── default/
A bit of JSON
{
  "run_list": [
    "recipe[etc_environment]",
    "recipe[my_app::packages]",
    "recipe[my_app]"
  ],
  "etc_environment": {
    "SYMFONY_ENV": "prod",
    "SYMFONY_DEBUG": 0
  }
}
The list of recipes to run (the run_list) plus the attributes
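The node file is plain JSON, so any JSON library can read it; a sketch with Python's standard library:

```python
import json

# Parse the node document: the run_list plus free-form attributes.
node = json.loads("""
{
  "run_list": [
    "recipe[etc_environment]",
    "recipe[my_app::packages]",
    "recipe[my_app]"
  ],
  "etc_environment": {"SYMFONY_ENV": "prod", "SYMFONY_DEBUG": 0}
}
""")

print(node["run_list"][1])                      # recipe[my_app::packages]
print(node["etc_environment"]["SYMFONY_ENV"])   # prod
```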
For standalone mode
# /opt/chef/solo.rb
cookbook_path ['/opt/chef/cookbooks']
json_attribs '/opt/chef/solo.json'
$ chef-solo -c /opt/chef/solo.rb
Starting Chef Client, version 11.12.8
resolving cookbooks for run list: ["ohai", "proxmox-ohai", "resolver", "timezone", "consul", ...]
Synchronizing Cookbooks:
  - ohai
  - proxmox-ohai
  - resolver
  - timezone
  - consul
  ...
Compiling Cookbooks...
Converging 26 resources
Recipe: resolver::default
  * template[/etc/resolv.conf] action create (up to date)
Recipe: timezone::default
  * package[tzdata] action install (up to date)
  * template[/etc/timezone] action create (up to date)
  * bash[dpkg-reconfigure tzdata] action nothing (skipped due to action :nothing)
Recipe: consul::default
  ...
Chef Client finished, 2/31 resources updated in 3.391670365 seconds
# my_app/recipes/default.rb

# A Ruby function
puts 'Yo'

# A Chef resource
log 'Man'

puts 'Hello'
log 'World'
Yo
Hello
[2015-01-22T20:51:04+00:00] INFO: Man
[2015-01-22T20:51:04+00:00] INFO: World

The puts calls run while the recipes are compiled; the log resources only run later, at converge time, hence the ordering.
For client/server mode
chef_server_url "https://chef.myorg.com"
validation_client_name "chef-validator"
node_name "my_app_web"
environment "production"
$ chef-client
Thanks to the :node index and the search method.
search(:node, "role:db",
       :filter_result => { 'ip' => [ 'ipaddress' ] }).each do |result|
  puts result['ip']
end
https://docs.chef.io/resource_<resource-name>.html
Central repository of community cookbooks
Source code of the community cookbooks
https://github.com/opscode-cookbooks
https://github.com/chef-cookbooks
From a set of rules
The catalog can also be generated on the targets themselves, at the cost of duplicating everything.
node default {
  service { 'sshd':
    enable => true,
  }
  package { 'sshd':
    ensure => latest,
    name   => 'openssh-server',
  }
  file { '/etc/ssh/sshd_config':
    source  => 'puppet:///files/sshd_config',
    notify  => Service['sshd'],
    require => Package['sshd'],
  }
}
An exhaustive list of the supported native types.
define monmodule::montype (
  $variable1 = 'default value',
  $variable2,
) {
  file { "/etc/httpd/conf.d/${variable1}":
    content => $variable2,
  }
}
class monmodule::maclasse (
  $variable1 = 'default value',
  $variable2,
) {
  package { 'httpd': }
  service { 'httpd': }
  file { '/etc/httpd/httpd.conf':
    content => template('monmodule/httpd.conf.erb'),
    notify  => Service['httpd'],
  }
  monmodule::montype { 'default':
    variable2 => '',
  }
}
Two ways: the right one and the wrong one
include monmodule::maclasse

class { 'monmodule::maclasse': }

class { 'monmodule::maclasse':
  variable1 => 'value',
}
Instead, put the class variables in hiera.
/etc/puppet
├── auth.conf
├── environments
│   └── production
│       ├── manifests
│       │   └── site.pp
│       └── modules
├── fileserver.conf   <!-- used by puppet://[server]/loc requests -->
├── hiera.yaml        <!-- configuration of the variable store -->
├── manifests
│   └── site.pp
├── modules
├── puppet.conf
│   <!-- my recommendations: -->
├── hieradata
└── sensitive
/etc/puppet/fileserver.conf
[sensitive]
path /etc/puppet/sensitive/%d/%h
allow *
hieradata/
├── common.yaml
├── developement.yaml
├── afpy.org.yaml
├── integration.yaml
└── server1.afpy.org.yaml
common.yaml
---
nginx::confd_purge: true
postgresql::globals::encoding: 'UTF-8'
default_vhost_name: "%{fqdn}"
root_keys:
  - user1
jvm_properties:
  Xmx: 5g
The functions
hiera('key', 'default', 'override')
hiera_array('key', 'default', 'override')
hiera_hash('key', 'default', 'override')
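A minimal Python sketch of the lookup semantics (the hierarchy and data are made up for illustration; this is not hiera's actual implementation): `hiera` returns the first match walking from the most specific file down to common.yaml, while `hiera_array` concatenates the matches from every level:

```python
# Most specific first, common.yaml last, mimicking a hiera hierarchy.
hierarchy = [
    ("server1.afpy.org.yaml", {"default_vhost_name": "server1.afpy.org"}),
    ("afpy.org.yaml", {"root_keys": ["admin1"]}),
    ("common.yaml", {"nginx::confd_purge": True, "root_keys": ["user1"]}),
]

def hiera(key, default=None):
    """First-match lookup: the most specific level wins."""
    for _name, data in hierarchy:
        if key in data:
            return data[key]
    return default

def hiera_array(key):
    """Array lookup: concatenate the values found at every level."""
    out = []
    for _name, data in hierarchy:
        out.extend(data.get(key, []))
    return out

print(hiera("root_keys"))            # ['admin1'] -- most specific wins
print(hiera("missing", "fallback"))  # fallback
print(hiera_array("root_keys"))      # ['admin1', 'user1']
```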
Override is used to redefine the hiera hierarchy.
Lets you retrieve information about the nodes.
architecture => x86_64
bios_version => MWPNT10N.86A.0083.2011.0524.1600
blockdevice_sda_model => TOSHIBA DT01ACA1
blockdevice_sda_size => 1000204886016
blockdevices => sda sdb sdc
domain => afpy.org
fqdn => exemple1.afpy.org
hostname => exemple1
interfaces => br0,eth0,lo,virbr0,virbr0_nic
ipaddress => 10.10.12.12
ipaddress6_br0 => 2001::1
is_virtual => false
kernel => Linux
kernelmajversion => 3.10
memoryfree => 2.85 GB
memorysize_mb => 3775.95
operatingsystem => CentOS
operatingsystemmajrelease => 7
os => {"name"=>"CentOS", "family"=>"RedHat", "release"=>{"major"=>"7", "minor"=>"0", "full"=>"7.0.1406"}}
osfamily => RedHat
partitions => {"sda1"=>{"uuid"=>"aa6f8f14-9e42-495d-8b1b-2a17849494d0", "size"=>"40957952", "mount"=>"/", "filesystem"=>"ext3"}, "sda2"=>{"uuid"=>"5bca139c-67a4-416c-8d6d-b2eda0404b64", "size"=>"1046528", "filesystem"=>"swap"}}
processorcount => 2
selinux => true
virtual => physical
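A sketch of gathering a few facter-style facts with the Python standard library (real facter exposes many more, as the listing above shows):

```python
import os
import platform
import socket

# Collect a handful of facts analogous to facter's output.
facts = {
    "architecture": platform.machine(),
    "kernel": platform.system(),
    "hostname": socket.gethostname(),
    "processorcount": os.cpu_count(),
}

for key, value in sorted(facts.items()):
    print(f"{key} => {value}")
```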
Exported resources
@@dns::record::aaaa { 'wsgi':
  data => $::ipaddress6,
  zone => $::domain,
}
Dns::Record::Aaaa <<| zone == $::domain |>>
modules/monmodule/
├── files
│   └── sshd_config
├── lib
│   ├── facter
│   └── puppet
│       ├── parser
│       ├── provider
│       └── type
├── manifests
│   ├── init.pp       <!-- monmodule -->
│   ├── maclasse.pp   <!-- monmodule::maclasse -->
│   ├── montype.pp    <!-- monmodule::montype -->
│   └── montype
│       └── autre.pp  <!-- monmodule::montype::autre -->
├── Modulefile
├── spec / tests
└── templates
    └── httpd.conf.erb
Gaston TJEBBES @majerti.fr>
Three levels of complexity:
Install a “master”
yum install salt-master
Install “minions”
yum install salt-minion
Register the minion
salt-key -a minion.example.com
Modules provide a range of runnable commands
salt '*' test.ping
salt 'minion.example.com' pkg.upgrade
# /srv/salt/_modules/hello.py
def message(filepath, message):
    with open(filepath, 'w') as file_buffer:
        file_buffer.write(message)
salt 'minion.example.com' hello.message /tmp/test "Hello world"
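Since a custom module is plain Python, it can be exercised locally before being synced to the minions. A sketch, duplicating the `message` function from hello.py above:

```python
import os
import tempfile

# Same function as in /srv/salt/_modules/hello.py.
def message(filepath, message):
    with open(filepath, 'w') as file_buffer:
        file_buffer.write(message)

# Exercise it against a throwaway file instead of a minion.
path = os.path.join(tempfile.mkdtemp(), "test")
message(path, "Hello world")
with open(path) as f:
    print(f.read())  # Hello world
```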
By default, minions expose environment variables, the ‘grains’:
They provide information about:
- hardware
- software
They are customizable
salt 'minion.example.com' grains.get os
# /srv/salt/nginx.sls
{% if grains['node_type'] == 'django' %}
nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - name: nginx
    - require:
      - pkg: nginx
{% endif %}
salt 'minion.example.com' state.sls nginx
top.sls
the entry file that maps states to machines
# /srv/salt/top.sls
base:
  'minion.example.com':
    - django_project
    - nginx
Which lets us run
salt '*' state.highstate
require lets you declare dependencies:
include:
  - nginx

collect_static:
  cmd.run:
    - name: /root/collect_static.sh
    - require:
      - sls: nginx
Watches for the changes made by another state
gunicorn_conf_file:
  file.managed:
    - source: salt://django/source/etc/gunicorn.d/project.conf
    - name: /etc/gunicorn.d/project.conf

gunicorn:
  service.running:
    - enable: True
    - reload: True
    - watch:
      - file: gunicorn_conf_file
gunicorn_conf_file:
  file.managed:
    - source: salt://django/source/etc/gunicorn.d/project.conf
    - name: /etc/gunicorn.d/project.conf
    - template: jinja
A component for distributing configuration variables:
# /srv/pillar/top.sls
base:
  'minion.example.com':
    - db_pass

# /srv/pillar/db_pass.sls
sql_user: django
sql_password: my super secret data
# /srv/salt/django_project/sources/etc/django/settings.py