Need to spin up some virtual machines using Proxmox / QEMU? Below is my approach for using a cloud-init Debian image to make creating those VMs nice and simple.

Create the template VM image

Start with a cloud-specific image. Each distribution has its own set of these. For Debian, you can find them at https://cloud.debian.org/images/cloud/.

Read the instructions on that page for which image to download. I’m using the generic Debian 12 Bookworm image for an Intel (AMD64) processor in the qcow2 format.

On your Proxmox node, SSH in and run:

wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
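Before using the image, it's worth verifying the download. Here's a minimal sketch; it assumes the SHA512SUMS file Debian publishes alongside the image in the same directory:

```shell
# Sketch: verify a downloaded image against a known SHA-512 checksum.
# (The SHA512SUMS URL below is an assumption based on the image's directory layout.)
verify_image() {
  file="$1"; expected="$2"
  actual=$(sha512sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: checksum matches"
  else
    echo "FAIL: checksum mismatch for $file" >&2
    return 1
  fi
}

# On the Proxmox node, something like:
# wget https://cloud.debian.org/images/cloud/bookworm/latest/SHA512SUMS
# verify_image debian-12-generic-amd64.qcow2 \
#   "$(awk '/debian-12-generic-amd64.qcow2$/ {print $1}' SHA512SUMS)"
```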

The downloaded image does not include qemu-guest-agent, which I want so Proxmox can show details about the VM in the web GUI. So let’s add that in.

On your Proxmox node:

apt update
apt install -y libguestfs-tools

This only needs to be done once for the life of the Proxmox install.

Now customize that downloaded debian-12-generic-amd64.qcow2 VM image:

virt-customize --install qemu-guest-agent -a debian-12-generic-amd64.qcow2

This will install qemu-guest-agent into the image we downloaded earlier. (Note: The same process can be used to add other packages to the image.)

Now we will create a VM template to be cloned into a new VM.

qm create 9001 --name debian12-template --memory 1024 --net0 virtio,bridge=vmbr0

This will:

  • Create a new VM with ID 9001 (if you already have a VM with this ID on your Proxmox host/cluster, pick a different value!)
  • Name it debian12-template
  • Assign 1024MB of RAM
  • Assign network bridge vmbr0, which is usually the default one created by Proxmox (if you have a differently named bridge, use that value instead)

Next we import the cloud image we downloaded earlier as a disk for the VM template. local-zfs is the name of the Proxmox storage where the disk should live; it may be named differently on your system (local, rpool, or something else you named).

# Import the VM cloud image as a disk to the VM template.
qm importdisk 9001 debian-12-generic-amd64.qcow2 local-zfs

# Set the VM's disk to the one we just imported. `vm-9001-disk-0` is the name of the disk generated by the import.
qm set 9001 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-9001-disk-0

# Create the cloud-init CD-ROM drive which activates the cloud-init options for the VM.
qm set 9001 --ide2 local-zfs:cloudinit

# Define the boot disk and boot order.
qm set 9001 --boot c --bootdisk scsi0

# Configure a serial console as the display; otherwise we won't see anything in the "Console" view in Proxmox.
qm set 9001 --serial0 socket --vga serial0

# By default, have the image use DHCP for obtaining an IP address (we can override this later, per VM).
qm set 9001 --ipconfig0 ip=dhcp

# Enable the guest agent so Proxmox can show more info about the VM while it's running.
qm set 9001 --agent enabled=1

Second to last step: define the default boot disk size. This will be the minimum size of any future VM we clone from this template. It needs to be at least as big as the debian-12-generic-amd64.qcow2 image used to create the boot disk, and can be larger. Be careful not to make it too large! You won’t be able to create a VM with a smaller boot disk from this template.

qm resize 9001 scsi0 15G

And the final step is to make it a template!

qm template 9001
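All of the template steps above can be bundled into one reusable function. This is a sketch: the defaults (VMID, name, storage, bridge, disk size) mirror the examples in this post and should be adjusted for your node. The `run` helper prints each command instead of executing it when DRY_RUN=1, so you can preview the whole run first.

```shell
#!/bin/sh
# Sketch: the template-creation steps from this post bundled into one function.
# Defaults mirror the examples above; adjust them for your node.
set -eu

# Print each command instead of executing it when DRY_RUN=1 (preview mode).
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

create_template() {
  vmid="${VMID:-9001}"
  name="${NAME:-debian12-template}"
  storage="${STORAGE:-local-zfs}"
  bridge="${BRIDGE:-vmbr0}"
  image="${IMAGE:-debian-12-generic-amd64.qcow2}"
  disk_size="${DISK_SIZE:-15G}"

  run qm create "$vmid" --name "$name" --memory 1024 --net0 "virtio,bridge=$bridge"
  run qm importdisk "$vmid" "$image" "$storage"
  run qm set "$vmid" --scsihw virtio-scsi-pci --scsi0 "$storage:vm-$vmid-disk-0"
  run qm set "$vmid" --ide2 "$storage:cloudinit"
  run qm set "$vmid" --boot c --bootdisk scsi0
  run qm set "$vmid" --serial0 socket --vga serial0
  run qm set "$vmid" --ipconfig0 ip=dhcp
  run qm set "$vmid" --agent enabled=1
  run qm resize "$vmid" scsi0 "$disk_size"
  run qm template "$vmid"
}

# Preview first:        DRY_RUN=1 create_template
# On the Proxmox node:  create_template
```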

At the end of this you can remove the qcow2 image, as it’s no longer needed:

rm debian-12-generic-amd64.qcow2

We now have a template VM image we can clone to create a new VM from!
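You don't even need Terraform for a quick test; the template can be cloned straight from the CLI. A sketch, where VMID 100, the name, and the addresses are placeholder values (and the `run` helper prints commands instead of executing them when DRY_RUN=1):

```shell
# Sketch: clone the template into a new VM from the CLI.
# VMID 100, the hostname, and the IP addresses below are placeholders.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

clone_vm() {
  run qm clone 9001 100 --name test-vm --full
  run qm set 100 --ipconfig0 ip=10.0.0.50/24,gw=10.0.0.1
  run qm start 100
}

# Preview first:        DRY_RUN=1 clone_vm
# On the Proxmox node:  clone_vm
```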

Using Terraform or OpenTofu to create a VM from this template

Below is my main.tf file I use to clone the VM template and create a new VM.

This first main.tf uses the bpg/proxmox provider; it works with either Terraform or OpenTofu.

resource "proxmox_virtual_environment_vm" "sample-server" {
  node_name   = "pve"
  name        = "sample-server-hostname"
  description = "Sample Server.  Managed by Terraform."
  tags        = ["sample"]
  started     = true
  on_boot     = true

  agent {
    enabled = true
  }

  clone {
    node_name = "pve"
    vm_id = 9001
  }

  operating_system {
    type = "l26"
  }

  cpu {
    cores = 2
  }
  memory {
    dedicated = 4096
  }

  disk {
    datastore_id = "pve-vms"
    discard      = "on"
    interface    = "scsi0"
    size         = 15  # disk size in gigabytes (GB)
  }

  vga {
    type = "serial0"
  }

  network_device {
    bridge        = "vmbr0"
    enabled       = true
    # mac_address   = ""  # Set this following first creation of VM.
  }

  initialization {
    datastore_id = "pve-vms"

    ip_config {
      ipv4 {
        address = "10.0.0.100/24"
        gateway = "10.0.0.1"
      }
    }

    dns {
      servers = ["10.0.0.1", "8.8.8.8"]
    }

    user_account {
      keys     = var.public_keys
      username = "matt"
    }
  }

}
variable "public_keys" {
  type = list(string)
  default = [
    "ssh-ed25519 ...",
    "ecdsa-sha2-nistp256 ..."
  ]
}
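A typo in one of those keys may only show up once the VM boots and you can't SSH in, so I find a quick local sanity check worthwhile before running apply. A sketch; the accepted key types here are just common ones, so extend the list as needed:

```shell
# Sketch: check that a string looks like an OpenSSH public key before feeding it
# to cloud-init via public_keys. Key types listed are common ones, not exhaustive.
is_ssh_key() {
  case "$1" in
    ssh-ed25519\ *|ssh-rsa\ *|ecdsa-sha2-nistp256\ *|ecdsa-sha2-nistp384\ *) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage: is_ssh_key "ssh-ed25519 AAAA... user@host" && echo "looks valid"
```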

A few notes about the above:

  • The clone block is telling Terraform/OpenTofu to clone the VM template 9001 we created.
  • The initialization/ip_config block is overriding the IPv4 DHCP setup with a static IP. IPv6 will still use DHCP, unless overridden.
  • In the disk block, we can set the boot disk to be 15GB or larger. As mentioned previously in this post, we cannot make it smaller.
  • Also in the disk block, I’m using a different datastore on my Proxmox cluster pve-vms. This is just a different ZFS pool I have specifically for storing VM images. Your setup will differ.
  • After the resource is created, we can get its MAC address and add it to the network_device block. Doing this ensures subsequent runs don’t attempt to modify or destroy/re-create the resource.
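One way to grab that MAC address after the first apply is to pull it out of `terraform state show` output. A sketch; the exact output format can vary by provider version, so the helper just greps for anything MAC-shaped:

```shell
# Sketch: extract the first MAC-address-shaped string from stdin, e.g. from
# `terraform state show` output for the VM resource defined above.
extract_mac() {
  grep -oEi '([0-9a-f]{2}:){5}[0-9a-f]{2}' | head -n 1
}

# After the first apply:
# terraform state show proxmox_virtual_environment_vm.sample-server | extract_mac
```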

An alternative approach is using the telmate/proxmox provider, which I no longer recommend. Here is how I did that previously.

# This works for me using Proxmox v7.* and telmate/proxmox v2.9.14
resource "proxmox_vm_qemu" "sample-server" {
  target_node      = "pve"
  name             = "server-host-name"
  desc             = "Description of this server."
  os_type          = "cloud-init"
  full_clone       = true
  clone            = "debian12-template"
  memory           = 2048
  sockets          = 1
  cores            = 2
  ssh_user         = "matt"
  ciuser           = "matt"
  ipconfig0        = "ip=10.0.0.100/24,gw=10.0.0.1"
  nameserver       = "10.0.0.1 8.8.8.8"
  automatic_reboot = true
  onboot = true

  disk {
    storage = "vmdata"
    type    = "scsi"
    size    = "15G"
    discard = "on"
  }

  network {
    bridge   = "vmbr0"
    model    = "virtio"
    mtu      = 0
    queues   = 0
    rate     = 0
  }

  # sshkeys set using variables. the variable contains the text of the key.
  sshkeys = <<EOF
ssh-ed25519 ...
EOF

}

Wrap up

I hope you find this useful!

It’s been incredibly useful to have (most of) my homelab infrastructure defined in code like this. I can easily lose or nuke a machine and spin up a replacement very quickly using this approach. The second piece of secret sauce that makes all of this work is the set of Ansible playbooks that configure the machines once they’re created using the above approach. There’s nothing on my blog I can link back to showing that, though, so I’ll make a mental note to write up some future posts about how I’m using Ansible playbooks to configure all of my self-hosted services.