tf-ansible-workflow

Terraform/Ansible Workflow for Libvirt
git clone https://git.in0rdr.ch/tf-ansible-workflow.git

commit 9a7063871700c3294b52b52528801785674932fb
parent a68a4b1f9a8ea05ddee16dd398bfbbefa44a94a2
Author: Andreas Gruhler <andreas.gruhler@adfinis-sygroup.ch>
Date:   Mon, 19 Aug 2019 09:24:17 +0200

new structure

Diffstat:
M Readme.md | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------
A ansible/defaults/all.yml | 26 ++++++++++++++++++++++++++
R get-ips.yml -> ansible/get-ips.yml | 0
A ansible/playbook.yml | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A ansible/templates/config.j2 | 18 ++++++++++++++++++
D defaults/all.yml | 9 ---------
D defaults/qemu.yml | 2 --
D group_vars/all.yml | 9 ---------
D inventory.tpl | 8 --------
D playbook.yml | 55 -------------------------------------------------------
D qemu-config.yml.tpl | 13 -------------
D templates/ssh_config.j2 | 18 ------------------
A terraform/outputs.tf | 17 +++++++++++++++++
A terraform/templates/inventory.tpl | 11 +++++++++++
A terraform/templates/qemu-config.yml.tpl | 7 +++++++
A terraform/terraform.tfvars.example | 11 +++++++++++
A terraform/variables.tf | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
A terraform/vms.tf | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D vms.tf | 609 -------------------------------------------------------------------------------
19 files changed, 341 insertions(+), 747 deletions(-)
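The reworked Readme below asks the reader to generate a unicast MAC for each VM with a small shell script, but the function body is elided by the diff's hunk context. A minimal sketch of such a generator (my own, not the repository's script; it assumes `od`, `awk`, and `/dev/urandom` are available):

```shell
#!/bin/sh
# Print a random locally-administered unicast MAC address.
# The leading octet 02 has the locally-administered bit set and the
# multicast bit cleared, so the address is always unicast.
macaddr() {
  od -An -N5 -tx1 /dev/urandom |
    awk '{ printf "02:%s:%s:%s:%s:%s\n", $1, $2, $3, $4, $5 }'
}
macaddr
```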

diff --git a/Readme.md b/Readme.md @@ -1,16 +1,19 @@ # Proxmox VE (PVE) Terraform and Ansible Workflow -## 1 Preparation: Unicast MAC Address and TF Files -Generate a unicast MAC foreach VM: +This repository describes a workflow which helps me to (re)create multiple similar VMs for testing purposes. + +## 1 Preparation +### 1.1 PVE Variables +Define authentication variables for the [PVE API](https://github.com/Telmate/proxmox-api-go). The Terraform provider relies on the PVE API, which requires you to define the following environment variables: ``` -network { - .. - macaddr = "A2:15:1C:5C:D8:31" - .. -} +export PM_USER=test@pve +export PM_PASS="secret" +export PM_API_URL="https://cloud.proxmox.org/api2/json" +#export TF_LOG=DEBUG ``` -For instance, use a bash script to do so: +### 1.2 Unicast MAC Addresses and Terraform Variables +Generate a unicast MAC for each VM. For instance, use a bash script to do so: ``` #!/bin/sh function macaddr() { @@ -18,7 +21,14 @@ function macaddr() { } ``` -Also, make any changes to the `.tf` files for your infrastructure. +Use environment variables or create a new file `./terraform/terraform.tfvars` to specify the details for the VMs (see also `./terraform/variables.tf`). Among other variables, insert the MAC addresses from above: +``` +# terraform.tfvars +hosts = ["host0"] +macaddr = { + host0 = "02:a2:1d:38:31:1c" +} +``` ## 2 Run Terraform @@ -28,29 +38,44 @@ terraform plan terraform apply ``` +* Terraform automatically recreates the Ansible inventory and the mapping of Qemu VM ids to hostnames (see next section) whenever one of the hosts is added or removed (i.e., the Terraform `id` changes). +* Terraform writes the SSH private key into the file `./ssh/id_rsa`. 
+ ## 3 Terraform Outputs -Save following outputs: +If the Ansible inventory or the mapping of Qemu VM ids to hostnames needs to be updated manually, the values can be retrieved from the Terraform output at any time: ``` -terraform output inventory > inventory -terraform output qemu_config > qemu-config.yml +terraform output inventory > ../ansible/inventory +terraform output qemu_config > ../ansible/qemu-config.yml # inspect the name of the key file, see instructions below terraform output ssh_keyfile ``` -Adjust variables in `group_vars/all.yml`: -* `ssh.identity_file`: Output of `terraform output ssh_keyfile` -* also, set `ssh.proxy_jump` and `user` if needed -* make sure `pve_api` points to your compiled pve api binary (https://github.com/Telmate/proxmox-api-go) +## 4 Ansible -Make sure that `id_rsa` matches both: -1. the name of `identity_file` in the file `ssh_config` -2. and `terraform output ssh_keyfile` +### 4.1 Preconditions and Preparations +Ansible depends on the following files written by Terraform, see sections "2 Run Terraform" and "3 Terraform Outputs": +1. `./ansible/inventory`: The Ansible inventory containing a local connection and one group of remote hosts +2. `./ansible/qemu-config.yml`: The mapping of Qemu VM ids to hostnames -## 4 Ansible +Adjust variables in `./ansible/group_vars/all.yml`: +* `ssh_identity_file`: Relative path to the SSH private key (output of `terraform output ssh_keyfile`) +* Set `ssh_proxy_jump` and `ssh_user` if necessary +* Ensure `pve_api` points to your compiled PVE API binary +* Define `additional_users` as needed + +### 4.2 Run Ansible -Run playbook: +The Ansible playbook runs the following tasks: +1. Retrieve the IPs of the VMs via the Qemu guest agent +2. Write the IPs to the file `./ansible/qemu-config.yml` +3. Build the SSH client configuration `./ssh/config` based on the information from the previous step +4. 
Add the additional users defined in `additional_users` + +It is necessary to run Ansible because the IP addresses of the hosts cannot be retrieved by Terraform (the PVE provider is not mature enough yet). Therefore, we retrieve the IP addresses of the hosts via the Qemu guest agents running in the VMs. This process is automated, and it appends the IPs to the file `./ansible/qemu-config.yml`. Furthermore, the playbook sets the hostname and restarts networking inside the VMs, such that the hostnames are published to the DNS server and the hosts become addressable by name. + +Run the playbook: ``` ansible-playbook playbook.yml -i inventory ``` @@ -73,7 +98,12 @@ terraform refresh ### 5.3 Retrieve private key without running Terraform If needed, retrieve the SSH key (again) without re-applying changes: ``` -terraform output ssh_key > id_rsa +terraform output ssh_key > ../ssh/id_rsa ``` -Terraform takes care of writing this private key file the first time you run `terraform apply`, however, you might want to retrieve the key again without re-running Terraform. -\ No newline at end of file +Terraform takes care of writing this private key file the first time you run `terraform apply`; however, you might want to retrieve the key again without re-running Terraform. 
+ +--- +## Dependencies +* PVE API: https://github.com/Telmate/proxmox-api-go +* Terraform provider for Proxmox: https://github.com/Telmate/terraform-provider-proxmox diff --git a/ansible/defaults/all.yml b/ansible/defaults/all.yml @@ -0,0 +1,26 @@ +--- + +# path to proxmox api binary +# this default is used if not otherwise specified +pve_api: 'proxmox-api-go' + +# ssh configuration to reach the VMs +# this is only an example, each line +# needs to be enabled explicitly in group_vars +ssh_user: root +ssh_identity_file: '../ssh/id_rsa' +ssh_proxy_jump: proxyhost +ssh_include_config: '~/.ssh/config' + +# example of adding additional users +# additional_users: +# - name: user1 +# # comma separated list of additional groups +# additional_groups: 'wheel' +# # mutually exclusive with ssh_key below +# generate_ssh_key: yes +# # mutually exclusive with generate_ssh_key above +# # lets you reuse an existing ssh key +# #ssh_key: '../ssh/id_rsa' +# # adds this key as authorized key +# authorized_key: '~/.ssh/id_rsa.pub' diff --git a/get-ips.yml b/ansible/get-ips.yml diff --git a/ansible/playbook.yml b/ansible/playbook.yml @@ -0,0 +1,84 @@ +--- + +# local task that generates an ssh config to connect to VMs +# - input/requires: './qemu-config.yml' +- hosts: local + vars: + qemu_config: "{{ lookup('file', 'qemu-config.yml') | from_yaml }}" + tasks: + + - name: get ips for inventory hostnames + include_tasks: 'get-ips.yml' + loop_control: + index_var: i + loop: '{{ qemu_config }}' + + - name: copy ip4 address to qemu config file + copy: + content: '{{ qemu_config | to_nice_yaml }}' + dest: 'qemu-config.yml' + + - name: create ssh config + template: + src: 'templates/config.j2' + dest: '../ssh/config' + +- hosts: qemu + tasks: + - name: set hostname + command: 'hostnamectl set-hostname {{ inventory_hostname }}' + register: hostname_update + + - name: restart network to register hostname with dns server + service: + name: network + state: restarted + when: 
hostname_update.changed + + - name: set ssh private key + copy: + src: '{{ ssh_identity_file }}' + dest: '{{ ansible_env.HOME }}/.ssh/id_rsa' + owner: '{{ ssh_user }}' + group: '{{ ssh_user }}' + mode: '0600' + + - name: add additional users + user: + name: '{{ item.name }}' + shell: /bin/bash + groups: '{{ item.additional_groups }}' + append: yes + loop: '{{ additional_users }}' + + - name: generate additional users ssh keys + user: + name: '{{ item.name }}' + generate_ssh_key: '{{ item.generate_ssh_key }}' + loop: '{{ additional_users }}' + when: item.generate_ssh_key | default(false, true) and not item.ssh_key | default(false, true) + + - name: ensure ssh directory for additional users exists + file: + path: '/home/{{ item.name }}/.ssh' + state: directory + mode: '0700' + loop: '{{ additional_users }}' + + - name: set additional users ssh keys from existing key + copy: + src: '{{ ssh_identity_file }}' + dest: '/home/{{ item.name }}/.ssh/id_rsa' + owner: '{{ item.name }}' + group: '{{ item.name }}' + mode: '0600' + loop: '{{ additional_users }}' + when: item.ssh_key | default(false, true) and not item.generate_ssh_key | default(false, true) + + - name: set authorized key for user + authorized_key: + user: '{{ item.name }}' + state: present + key: '{{ lookup("file", item.authorized_key) }}' + loop: '{{ additional_users }}' + when: item.authorized_key | default(false, true) diff --git a/ansible/templates/config.j2 b/ansible/templates/config.j2 @@ -0,0 +1,17 @@ +{% if ssh_include_config | default(false, true) %} +Include {{ ssh_include_config }} +{% endif %} + +{% for host in qemu_config %} +Host {{ host.fqdn }} + HostName {{ host.ip4 }} +{% if ssh_proxy_jump | default(false, true) %} + ProxyJump {{ ssh_proxy_jump }} +{% endif %} +{% if ssh_identity_file | default(false, true) %} + IdentityFile {{ ssh_identity_file }} +{% endif %} +{% if ssh_user | default(false, true) %} + User {{ ssh_user }} +{% endif %} +{% endfor %} +\ No newline at end of file diff --git 
a/defaults/all.yml b/defaults/all.yml @@ -1,8 +0,0 @@ ---- - pve_api: 'proxmox-api-go' - - ssh: - user: root - identity_file: '~/ssh/id_rsa' - proxy_jump: proxyhost - include_config: '~/.ssh/config' -\ No newline at end of file diff --git a/defaults/qemu.yml b/defaults/qemu.yml @@ -1,2 +0,0 @@ ---- -ansible_ssh_common_args: "-F ssh_config" diff --git a/group_vars/all.yml b/group_vars/all.yml @@ -1,8 +0,0 @@ ---- - pve_api: '/home/andi/gocode/src/github.com/Telmate/proxmox-api-go/proxmox-api-go' - - ssh: - user: root - identity_file: './id_rsa' - proxy_jump: proxmox - include_config: '~/.ssh/config' -\ No newline at end of file diff --git a/inventory.tpl b/inventory.tpl @@ -1,7 +0,0 @@ -[local] -localhost ansible_connection=local - -[qemu] -%{ for host in hosts ~} -${host.fqdn} -%{ endfor ~} -\ No newline at end of file diff --git a/playbook.yml b/playbook.yml @@ -1,54 +0,0 @@ ---- - -- hosts: local - vars: - qemu_config: "{{ lookup('file', 'qemu-config.yml') | from_yaml }}" - tasks: - - - name: get ips for invenotry hostnames - include_tasks: 'get-ips.yml' - loop_control: - index_var: i - loop: '{{ qemu_config }}' - - - name: copy ip4 adress to qemu config file - copy: - content: '{{ qemu_config | to_nice_yaml }}' - dest: 'qemu-config.yml' - - - name: create ssh config - template: - src: 'templates/ssh_config.j2' - dest: 'ssh_config' - - - name: create group_vars folder - file: - path: 'group_vars' - state: directory - - - name: add ssh_config to qemu vm group_vars - copy: - content: | - --- - ansible_ssh_common_args: "-F ssh_config" - dest: 'group_vars/qemu.yml' - -- hosts: qemu - tasks: - - name: set hostname - command: 'hostnamectl set-hostname {{ inventory_hostname }}' - register: hostname_update - - - name: restart network to register hostname on dns server - service: - name: network - state: restarted - when: hostname_update.changed - - - name: set ssh private key - copy: - src: '{{ ssh.identity_file }}' - dest: '/root/.ssh/id_rsa' - owner: '{{ ssh.user }}' - 
group: '{{ ssh.user }}' - mode: '0600' -\ No newline at end of file diff --git a/qemu-config.yml.tpl b/qemu-config.yml.tpl @@ -1,12 +0,0 @@ ---- - -# qemu_hosts: -# %{ for host in hosts ~} -# - fqdn: ${host.fqdn} -# id: ${split("/",host.id)[2]} -# %{ endfor ~} - -%{ for host in hosts ~} -- fqdn: ${host.fqdn} - id: ${split("/",host.id)[2]} -%{ endfor ~} -\ No newline at end of file diff --git a/templates/ssh_config.j2 b/templates/ssh_config.j2 @@ -1,17 +0,0 @@ -{% if ssh.include_config | default(false, true) %} -Include {{ ssh.include_config }} -{% endif %} - -{% for host in qemu_config %} -Host {{ host.fqdn }} - HostName {{ host.ip4 }} -{% if ssh.proxy_jump | default(false, true) %} - ProxyJump {{ ssh.proxy_jump }} -{% endif %} -{% if ssh.identity_file | default(false, true) %} - IdentityFile {{ ssh.identity_file }} -{% endif %} -{% if ssh.user | default(false, true) %} - User {{ ssh.user }} -{% endif %} -{% endfor %} -\ No newline at end of file diff --git a/terraform/outputs.tf b/terraform/outputs.tf @@ -0,0 +1,16 @@ +output "inventory" { + value = "${templatefile("${path.module}/templates/inventory.tpl", { hosts = proxmox_vm_qemu.host })}" +} + +output "qemu_config" { + value = "${templatefile("${path.module}/templates/qemu-config.yml.tpl", { hosts = proxmox_vm_qemu.host })}" +} + +output "ssh_key" { + value = "${tls_private_key.id_rsa.private_key_pem}" + sensitive = true +} + +output "ssh_keyfile" { + value = "${local_file.ssh_key.filename}" +} +\ No newline at end of file diff --git a/terraform/templates/inventory.tpl b/terraform/templates/inventory.tpl @@ -0,0 +1,10 @@ +[local] +localhost ansible_connection=local + +[qemu] +%{ for host in hosts ~} +${host.name} +%{ endfor ~} + +[qemu:vars] +ansible_ssh_common_args='-F ../ssh/config' +\ No newline at end of file diff --git a/terraform/templates/qemu-config.yml.tpl b/terraform/templates/qemu-config.yml.tpl @@ -0,0 +1,6 @@ +--- + +%{ for host in hosts ~} +- fqdn: ${host.name} + id: ${split("/",host.id)[2]} +%{ 
endfor ~} +\ No newline at end of file diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example @@ -0,0 +1,11 @@ +hosts = ["host0"] +macaddr = { + host0 = "02:ea:8e:e2:1e:25" +} +pool = "pool-name" +cores = 1 +sockets = 1 +memory = 2048 +disk = 30 +target_node = "cloudname" +clone = "CentOS7-GenericCloud" diff --git a/terraform/variables.tf b/terraform/variables.tf @@ -0,0 +1,47 @@ +variable "hosts" { + type = list(string) + default = ["node0", "node1"] +} + +variable "macaddr" { + type = map(string) + default = { + node0 = "02:e6:df:96:00:d6" + node1 = "02:98:f7:29:3a:82" + } +} + +variable "pool" { + type = string + default = "pool-name" +} + +variable "cores" { + type = number + default = 1 +} + +variable "sockets" { + type = number + default = 1 +} + +variable "memory" { + type = number + default = 2048 +} + +variable "disk" { + type = number + default = 30 +} + +variable "target_node" { + type = string + default = "node-name" +} + +variable "clone" { + type = string + default = "CentOS7-GenericCloud" +} +\ No newline at end of file diff --git a/terraform/vms.tf b/terraform/vms.tf @@ -0,0 +1,65 @@ +resource "proxmox_vm_qemu" "host" { + # create each host + for_each = toset(var.hosts) + + name = "${each.value}" + + cores = var.cores + sockets = var.sockets + memory = var.memory + + bootdisk = "scsi0" + scsihw = "virtio-scsi-pci" + disk { + id = 0 + size = var.disk + type = "scsi" + storage = "local-lvm" + storage_type = "lvm" + } + + network { + id = 0 + model = "virtio" + bridge = "vmbr0" + macaddr = var.macaddr[each.value] + } + + target_node = var.target_node + pool = var.pool + clone = var.clone + agent = 1 + + os_type = "cloud-init" + ipconfig0 = "ip=dhcp" + ciuser = "root" + cipassword = "root" + sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" +} + +resource "null_resource" "update_inventory" { + triggers = { + # when a host id changes + host_ids = "${join(" ", values(proxmox_vm_qemu.host)[*].id)}" + } + provisioner "local-exec" 
{ + # recreate ansible inventory + command = "echo '${templatefile("${path.module}/templates/inventory.tpl", { hosts = proxmox_vm_qemu.host })}' > ../ansible/inventory" + } + provisioner "local-exec" { + # recreate mapping of qemu VM id to hostnames + command = "echo '${templatefile("${path.module}/templates/qemu-config.yml.tpl", { hosts = proxmox_vm_qemu.host })}' > ../ansible/qemu-config.yml" + } +} + +# ssh private key +resource "tls_private_key" "id_rsa" { + algorithm = "RSA" +} +resource "local_file" "ssh_key" { + sensitive_content = "${tls_private_key.id_rsa.private_key_pem}" + filename = "${path.module}/../ssh/id_rsa" + provisioner "local-exec" { + command = "chmod 600 ${path.module}/../ssh/id_rsa" + } +} +\ No newline at end of file diff --git a/vms.tf b/vms.tf @@ -1,608 +0,0 @@ -# variable "mac_id" { -# default = "0" -# } - -# resource "random_id" "macaddr" { -# keepers = { -# # Generate a new id each time we switch to a new MAC id -# mac_id = "${var.mac_id}" -# } -# byte_length = 1 -# } - -# network { -# # Read the MAC id "through" the mac_id resource to ensure that -# # both will change together. -# # macaddr = "${random_id.macaddr.hex}" -# } - - - -# Can we use "count" for multiple instancances and seed the Mac addr -# from a pool of mac addresses ? 
- - -resource "tls_private_key" "id_rsa" { - algorithm = "RSA" -} - -resource "local_file" "ssh_key" { - sensitive_content = "${tls_private_key.id_rsa.private_key_pem}" - filename = "${path.module}/id_rsa" - provisioner "local-exec" { - command = "chmod 600 ${path.module}/id_rsa" - } -} - -resource "proxmox_vm_qemu" "bastion0" { - name = "bastion0" - desc = "Dacadoo jumphost" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:6f:c0:8f:1e:4a" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" - - # provisioner "file" { - # content = "${tls_private_key.id_rsa.private_key_pem}" - # destination = "/home/${self.ciuser}/.ssh/id_rsa" - # } - - # provisioner "remote-exec" { - # inline = [ - # "chmod 600 /home/${self.ciuser}/.ssh/id_rsa" - # ] - # } -} - -resource "proxmox_vm_qemu" "elastic0" { - name = "elastic0" - desc = "Elasticsearch with Logstash and Kibana" - - cores = 2 - sockets = 2 - memory = 4096 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 100 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:0e:1e:34:c7:23" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "kubernetes0" { - name = "kubernetes0" - desc = "Kubernets master" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = 
"virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:0c:21:fc:c7:9b" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "kubernetes1" { - name = "kubernetes1" - desc = "Kubernets worker" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:4f:82:ed:7a:27" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "mongodb0" { - name = "mongodb0" - desc = "MongoDB Node 0" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:d1:b5:ed:45:4f" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "mongodb1" { - name = "mongodb1" - desc = "MongoDB Node 1" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - 
network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:f5:99:e8:9c:5c" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "mongodb2" { - name = "mongodb2" - desc = "MongoDB Node 2" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:94:38:0f:05:3d" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "consul0" { - name = "consul0" - desc = "Consul Node 0" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:65:50:da:ae:af" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "consul1" { - name = "consul1" - desc = "Consul Node 1" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:7a:5b:7a:25:f3" - } - - target_node = "wolke4" - pool = 
"dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "consul2" { - name = "consul2" - desc = "Consul Node 2" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:20:34:63:b8:1e" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "vault0" { - name = "vault0" - desc = "Vault Node 0" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:05:4e:c7:b6:9d" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "vault1" { - name = "vault1" - desc = "Vault Node 1" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:79:99:60:ba:ab" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" 
- sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -resource "proxmox_vm_qemu" "vault2" { - name = "vault2" - desc = "Vault Node 2" - - cores = 2 - sockets = 2 - memory = 2048 - - bootdisk = "scsi0" - scsihw = "virtio-scsi-pci" - disk { - id = 0 - size = 40 - type = "scsi" - storage = "local-lvm" - storage_type = "lvm" - } - - network { - id = 0 - model = "virtio" - bridge = "vmbr0" - macaddr = "02:c2:ea:eb:f0:b9" - } - - target_node = "wolke4" - pool = "dacadoo" - clone = "CentOS7-GenericCloud" - agent = 1 - - os_type = "cloud-init" - ipconfig0 = "ip=dhcp" - ciuser = "root" - cipassword = "root" - sshkeys = "${tls_private_key.id_rsa.public_key_openssh}" -} - -# locals { -# ids = "[]" -# } - -# data "templatefile"${file("${path.module}/inventory.tpl")}" -# vars = { -# ids = -# } -# }inventory_hostname - - - -# resource "null_resource" "update_inventory" { - -# triggers = { -# template = "${data.template_file.inventory.rendered}" -# } - -# provisioner "local-exec" { -# command = "echo '${templatefile("${path.module}/inventory.tpl", { ids = [proxmox_vm_qemu.bastion0.id, proxmox_vm_qemu.elastic0.id] })}' > inventory" -# } -# } - -locals { - qemu_hosts = [ - { - id: proxmox_vm_qemu.bastion0.id, - fqdn: proxmox_vm_qemu.bastion0.name }, - { - id: proxmox_vm_qemu.elastic0.id, - fqdn: proxmox_vm_qemu.elastic0.name }, - { - id: proxmox_vm_qemu.kubernetes0.id, - fqdn: proxmox_vm_qemu.kubernetes0.name }, - { - id: proxmox_vm_qemu.kubernetes1.id, - fqdn: proxmox_vm_qemu.kubernetes1.name }, - { - id: proxmox_vm_qemu.mongodb0.id, - fqdn: proxmox_vm_qemu.mongodb0.name }, - { - id: proxmox_vm_qemu.mongodb1.id, - fqdn: proxmox_vm_qemu.mongodb1.name }, - { - id: proxmox_vm_qemu.mongodb2.id, - fqdn: proxmox_vm_qemu.mongodb2.name }, - { - id: proxmox_vm_qemu.consul0.id, - fqdn: proxmox_vm_qemu.consul0.name }, - { - id: proxmox_vm_qemu.consul1.id, - fqdn: proxmox_vm_qemu.consul1.name }, - { - id: proxmox_vm_qemu.consul2.id, - fqdn: proxmox_vm_qemu.consul2.name }, - { - id: 
proxmox_vm_qemu.vault0.id, - fqdn: proxmox_vm_qemu.vault0.name }, - { - id: proxmox_vm_qemu.vault1.id, - fqdn: proxmox_vm_qemu.vault1.name }, - ] -} - -output "inventory" { - value = "${templatefile("${path.module}/inventory.tpl", { hosts = local.qemu_hosts })}" -} - -output "qemu_config" { - value = "${templatefile("${path.module}/qemu-config.yml.tpl", { hosts = local.qemu_hosts })}" -} - -output "ssh_key" { - value = "${tls_private_key.id_rsa.private_key_pem}" - sensitive = true -} - -output "ssh_keyfile" { - value = "${local_file.ssh_key.filename}" -} -\ No newline at end of file
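The new `terraform/templates/qemu-config.yml.tpl` derives the numeric Qemu VM id with `split("/", host.id)[2]`. Judging by that index, the provider's resource id is assumed to have the shape `<target_node>/<type>/<vmid>` (the concrete value below is a hypothetical example); the same extraction expressed in shell makes the zero-based indexing explicit:

```shell
# Terraform's split("/", id) yields a zero-indexed list, so [2] is the
# third field. The equivalent with cut (fields are one-indexed there):
id="wolke4/qemu/101"   # hypothetical resource id
vmid=$(printf '%s\n' "$id" | cut -d/ -f3)
echo "$vmid"   # prints 101
```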