From 754bbf7a25a8dda49b5d08ef0d0443bbf5af0e36 Mon Sep 17 00:00:00 2001
From: Craig Jennings
Date: Sun, 7 Apr 2024 13:41:34 -0500
Subject: new repository

---
 devdocs/vagrant/provisioning%2Fansible_local.html | 99 +++++++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100644 devdocs/vagrant/provisioning%2Fansible_local.html

(limited to 'devdocs/vagrant/provisioning%2Fansible_local.html')

diff --git a/devdocs/vagrant/provisioning%2Fansible_local.html b/devdocs/vagrant/provisioning%2Fansible_local.html
new file mode 100644
index 00000000..48742f8b
--- /dev/null
+++ b/devdocs/vagrant/provisioning%2Fansible_local.html
@@ -0,0 +1,99 @@

Ansible Local Provisioner

Provisioner name: ansible_local

The Vagrant Ansible Local provisioner allows you to provision the guest using Ansible playbooks by executing ansible-playbook directly on the guest machine.

Warning: If you are not already familiar with Ansible and Vagrant, we recommend starting with the shell provisioner. However, if you are comfortable with Vagrant already, Vagrant is a great way to learn Ansible.


Setup Requirements

The main advantage of the Ansible Local provisioner in comparison to the Ansible (remote) provisioner is that it does not require any additional software on your Vagrant host.

On the other hand, Ansible must obviously be installed on your guest machine(s).

Note: By default, Vagrant will try to automatically install Ansible if it is not yet present on the guest machine (see the install option below for more details).
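If you want to control this behavior explicitly rather than rely on the default, the provisioner's install and version options can be set in the Vagrantfile. A minimal sketch (the values shown are illustrative, not requirements):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.install  = true       # let Vagrant install Ansible on the guest (the default)
    ansible.version  = "latest"   # or pin a specific version string
  end
end
```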

Usage

This page only documents the parts specific to the ansible_local provisioner. General Ansible concepts like Playbook or Inventory are briefly explained in the introduction to Ansible and Vagrant.

The Ansible Local provisioner requires that all the Ansible Playbook files are available on the guest machine, at the location referred by the provisioning_path option. Usually these files are initially present on the host machine (as part of your Vagrant project), and it is quite easy to share them with a Vagrant Synced Folder.
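For example, keeping the playbook in the Vagrant project root and sharing it through the default /vagrant synced folder could look like the sketch below (the provisioning_path value shown matches the synced folder target and is spelled out only for clarity):

```ruby
Vagrant.configure("2") do |config|
  # The project root (containing playbook.yml) is synced to /vagrant on the guest
  config.vm.synced_folder ".", "/vagrant"

  config.vm.provision "ansible_local" do |ansible|
    ansible.provisioning_path = "/vagrant"      # where the playbook files live on the guest
    ansible.playbook          = "playbook.yml"  # resolved relative to provisioning_path
  end
end
```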

Simplest Configuration

To run Ansible from your Vagrant guest, the basic Vagrantfile configuration looks like:

Vagrant.configure("2") do |config|
  # Run Ansible from the Vagrant VM
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

Requirements: the playbook.yml file is stored in your Vagrant project home directory, and the default shared directory is enabled (".", "/vagrant").

Options

This section lists the specific options for the Ansible Local provisioner. In addition to the options listed below, this provisioner supports the common options for both Ansible provisioners.
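As an illustration of how the two kinds of options combine, the sketch below mixes a provisioner-specific option (provisioning_path) with a few of the options shared by both Ansible provisioners (verbose, extra_vars); the variable name and value passed to extra_vars are hypothetical:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook          = "playbook.yml"
    ansible.provisioning_path = "/vagrant"  # ansible_local-specific option
    # Options common to both Ansible provisioners:
    ansible.verbose    = true
    ansible.extra_vars = { ntp_server: "pool.ntp.org" }  # illustrative variable
  end
end
```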

Tips and Tricks

Install Galaxy Roles in a path owned by root

Disclaimer: This tip is not a recommendation to install galaxy roles outside the vagrant user space, especially if you rely on SSH agent forwarding to fetch the roles.

Be aware that the ansible-galaxy command is executed as the vagrant user by default. Setting galaxy_roles_path to a root-owned folder like /etc/ansible/roles will fail, and ansible-galaxy will extract the roles a second time into /home/vagrant/.ansible/roles/. If your playbook then uses become to run as root, it will fail with a "role was not found" error.

To work around that, you can use ansible.galaxy_command to prepend the command with sudo, as illustrated in the example below:

Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  config.vm.provision "ansible_local" do |ansible|
    ansible.become = true
    ansible.playbook = "playbook.yml"
    ansible.galaxy_role_file = "requirements.yml"
    ansible.galaxy_roles_path = "/etc/ansible/roles"
    ansible.galaxy_command = "sudo ansible-galaxy install --role-file=%{role_file} --roles-path=%{roles_path} --force"
  end
end

Ansible Parallel Execution from a Guest

With the following configuration pattern, you can install and execute Ansible only on a single guest machine (the "controller") to provision all your machines.

Vagrant.configure("2") do |config|

  config.vm.box = "ubuntu/trusty64"

  config.vm.define "node1" do |machine|
    machine.vm.network "private_network", ip: "172.17.177.21"
  end

  config.vm.define "node2" do |machine|
    machine.vm.network "private_network", ip: "172.17.177.22"
  end

  config.vm.define "controller" do |machine|
    machine.vm.network "private_network", ip: "172.17.177.11"

    machine.vm.provision :ansible_local do |ansible|
      ansible.playbook       = "example.yml"
      ansible.verbose        = true
      ansible.install        = true
      ansible.limit          = "all" # or only "nodes" group, etc.
      ansible.inventory_path = "inventory"
    end
  end

end

You need to create a static inventory file that corresponds to your Vagrantfile machine definitions:

controller ansible_connection=local
node1      ansible_ssh_host=172.17.177.21 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/node1/virtualbox/private_key
node2      ansible_ssh_host=172.17.177.22 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/node2/virtualbox/private_key

[nodes]
node[1:2]

And finally, you also have to create an ansible.cfg file to fully disable SSH host key checking. Further SSH configuration can be added to the ssh_args parameter (e.g. agent forwarding).

[defaults]
host_key_checking = no

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes
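Rather than relying on Ansible picking ansible.cfg up from the working directory, the shared config_file option can point the provisioner at it explicitly. A sketch, assuming ansible.cfg sits next to the Vagrantfile and is synced to the guest:

```ruby
Vagrant.configure("2") do |config|
  config.vm.define "controller" do |machine|
    machine.vm.provision "ansible_local" do |ansible|
      ansible.playbook       = "example.yml"
      ansible.config_file    = "ansible.cfg"  # exported as ANSIBLE_CONFIG for the run
      ansible.inventory_path = "inventory"
      ansible.limit          = "all"
    end
  end
end
```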

© 2010–2018 Mitchell Hashimoto
Licensed under the MPL 2.0 License.
https://www.vagrantup.com/docs/provisioning/ansible_local.html
