Effortlessly Create Proxmox VE Debian Templates at Lightning Speed with Cloud-Init

Nov 13, 2023 · 13 mins read

In the video below, we show you how to create and use Debian templates quicker with Cloud-Init


If you ever plan on creating virtual machines in a hypervisor, it makes sense to create a template and then clone that, as it saves you a lot of time

But creating a template by installing an operating system from an ISO image, for instance, can be time-consuming in itself

And that’s where Cloud-Init comes to the rescue, because it saves you time when creating templates

Now most Linux distros support this, and although it’s aimed at cloud providers, you can also use it with Proxmox VE

So in this video we show you how to create Debian templates and deploy VMs from them using Cloud-init

Useful links:
https://cloud-init.io/
https://cloud.debian.org/images/cloud/
https://pve.proxmox.com/wiki/Cloud-Init_Support
https://pve.proxmox.com/wiki/Serial_Terminal

Create Generic Linux Template:
Now it might sound like overkill, but the first thing we’re going to do is to create a generic template

This will then be used as the basis for distro-specific templates, as well as any future templates

One reason for doing this is that the hardware requirements across distros will usually be the same so why repeat the process?

Also when a new version of the OS is released a new template should be created, because it’s better to start afresh than upgrade an existing OS

Now granted, I could save even more time by doing everything from the CLI, but that would mean looking up information, spending time copying and pasting, and so on, and to me it’s quicker to do this through the GUI

To create the generic template we’ll start by creating a basic VM without a hard drive

So we’ll click the Create VM button in the top right corner and walk through the setup process

To keep templates and VMs better separated we’ll give this a VM ID of 9000 and a Name of LinuxTemplate, then click Next

We don’t need any media, so we’ll select Do not use any media. The Guest OS defaults to Linux so we’ll just click Next

Now in a cloud environment there won’t be a monitor, keyboard or mouse, so we’ll set the Graphic card option to Serial terminal 0, as the Proxmox documentation suggests

Linux operating systems usually support the Qemu Agent out of the box so we’ll enable this then click Next

We don’t need a hard drive at this stage so we’ll delete the default one by clicking the rubbish bin icon then click Next

The Linux VMs I create don’t usually need a lot of compute resources, so I leave them with 1 core. The Sockets option applies when the bare-metal computer itself has multiple sockets and a VM needs a lot of CPUs, but I just leave it set to 1

Because I use the same servers in a cluster I prefer to set the CPU Type to host to get better CPU performance

As per Proxmox’s documentation, you can have problems if you have mixed CPUs when it comes to security and live migrations, and so what they recommend for this setting is as follows:
If you don’t care about live migration or have a homogeneous cluster where all nodes have the same CPU and same microcode version, set the CPU type to host, as in theory this will give your guests maximum performance
If you care about live migration and security, and you have only Intel CPUs or only AMD CPUs, choose the lowest generation CPU model of your cluster
If you care about live migration without security, or have a mixed Intel/AMD cluster, choose the lowest compatible virtual QEMU CPU type

There are other settings you might want to set here, but in my case I’ll just click Next

Linux usually doesn’t need much memory if all you’re doing is running an application or service but I leave it set to 2GB

Ballooning only makes sense to me if you’re really low on RAM, because it can cause performance issues

I don’t want to be in a situation where a VM suddenly needs more physical memory and some of its RAM is actually disk space that the hypervisor has been holding in reserve, so I always disable this option

Now click Next

The last thing to do is to define the network settings. From what I’ve experienced over the years in IT, it’s good to have a dedicated build VLAN for creating new computers. This should be an isolated VLAN behind a firewall where new computers are created, in order to protect them as they’re being built, and then they should be moved to the proper VLAN when they’re ready

So in a production environment I’d set the Bridge and VLAN Tag for the Build VLAN, but in this lab that I’m using to create videos there are no VLANs, so I’ll leave the settings as is

In any case, click Next

To finish off our VM we’ll check the summary then click Finish
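For reference, if you’d rather script this step, a rough CLI equivalent of the wizard settings above would look something like this, run on the Proxmox VE host. It’s only a sketch: the bridge name vmbr0 is an assumption, so adapt it to your setup

qm create 9000 --name LinuxTemplate --ostype l26 \
  --cores 1 --sockets 1 --cpu host \
  --memory 2048 --balloon 0 \
  --serial0 socket --vga serial0 --agent enabled=1 \
  --net0 virtio,bridge=vmbr0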

While you can clone an existing VM, it’s better to convert this into a template, because a template can’t be started or changed internally. Everything will remain as is, which is exactly what we want

So with this VM selected in the left hand pane, click the More drop-down menu and select Convert to template and then click Yes
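For reference, the CLI equivalent of this is a single command

qm template 9000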

Create Debian Template:
Now the real intention here is to create a template for Debian computers

So with the generic template selected, click the More drop-down menu and select Clone

Again we want a high VM ID for templates so we’ll give this one a value of 9001

We’ll then give this a name of Debian12Template so it’s easily identified

In this instance there is no hard drive to clone and so it will take just as long to create the VM regardless of whether we choose a Linked clone or Full clone

But the main reason for avoiding linked clones is that any VM cloned this way will stop working if the VM or template it was cloned from is ever deleted or damaged, so I’d say it’s better to always opt for a Full clone

In which case, we’ll change the mode to Full Clone and then click Clone
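If you prefer the shell, the same thing can be done with qm clone, where the --full flag requests a full clone rather than a linked one

qm clone 9000 9001 --name Debian12Template --full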

Now the next thing to do is to import a hard drive which already has the OS installed and is prepared for Cloud-Init

For Debian images, you can find these on their website
https://cloud.debian.org/images/cloud/

At the time of recording, we want the Bookworm version of this, so we’ll navigate to that release and select Latest

What we’re looking for is the generic version that will run on our bare-metal hypervisor. Typically this will be the debian-12-generic-amd64.qcow2 file which can be used on a 64-bit Intel or AMD CPU

As far as I’m aware, Proxmox VE doesn’t offer a GUI option to import images, so right click the filename and select Copy link address

Next we’ll open a Shell session on our Proxmox VE server, or alternatively you can SSH into the server

First we’ll download the file

wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2

Next we’ll create a hard drive for our VM by importing this

qm importdisk 9001 debian-12-generic-amd64.qcow2 VMS --format qcow2

In this case, what Proxmox VE will do is create a hard drive for our VM, which has a VM ID of 9001, from the file we downloaded, and it will create this in my VMS storage with a format of qcow2

As the storage name will likely be different for you, do change this to what is appropriate for you

One thing to note is that if your local storage, typically local-lvm, is LVM-based, it won’t support qcow2. In that case you’ll get a raw image instead, which doesn’t support snapshots, and those are extremely useful for VMs

But if that’s your only storage option then you’ll likely use something like this instead

qm importdisk 9001 debian-12-generic-amd64.qcow2 local-lvm

Now head back to the GUI, select the VM and navigate to Hardware

You should now see an unused hard disk so we need to attach this to the VM

To do that, select the hard drive, click the Edit button and then click Add
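This can also be done from the shell with qm set. The volume name below is an assumption based on the import above; check the actual name with qm config 9001 or in the storage view first

qm set 9001 --scsi0 VMS:9001/vm-9001-disk-0.qcow2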

Now unless you don’t have much need for disk space, or you plan on adding a second drive, you will probably want to increase the hard drive so that it has more than 2GB

To do that, make sure the hard drive is selected, then click the Disk Action drop-down menu and select Resize

For the Size Increment, set this to 30, for instance, to make the drive 32GB, and then click Resize Disk
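The shell equivalent grows the disk by a relative amount, so +30G assumes the original 2GB image and gives a drive of roughly 32GB

qm resize 9001 scsi0 +30G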

Now navigate to Options and select Boot Order. Click Edit then de-select the ide2 and net0 devices and select the scsi0 one. As an extra measure you can also drag the scsi0 hard drive to the top of the list

What this does is tell Proxmox VE to boot from the SCSI drive we’ve added and that none of the other options should be tried

Now click OK
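From the shell, the same boot order can be set in one go, so that only the scsi0 drive is tried

qm set 9001 --boot order=scsi0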

For Proxmox VE we need to add support for Cloud-Init and to do that navigate to Hardware

From the Add drop-down menu select CloudInit drive

You can change the Bus/Device type if you like but IDE should be fine

Select the Storage option for this drive from the drop down menu, ideally where the hard drive was created, then click Add
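The shell equivalent is shown below; change VMS to whatever your storage is called

qm set 9001 --ide2 VMS:cloudinit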

The next thing to do is to configure our Cloud-Init options so navigate to Cloud-Init

Here you can define a user account and password by clicking on the entries and then clicking the Edit button

NOTE: Even though we’ll be storing username and password credentials here, my aim will be to remove these from any VM created using Ansible. As such, I’m not too concerned about doing this as the account won’t have access to anything other than to a new VM and only during its initial build. The alternative would be to have a default user account created which I don’t want at all

You can change the DNS domain and servers if you like, but by default it will use whatever Proxmox VE is configured with

For security reasons it’s best to use SSH keys for authentication. In which case it makes sense to add any public SSH keys here and now

To do this select the SSH public key entry, click the Edit button and paste in the relevant keys. Alternatively you can click the Load SSH Key File button to upload keys

In any case, once the keys are added, click OK

Every VM will need its own IP settings and you can set these for IPv4 and/or IPv6

By default these are set up to expect a static IP address, and it would be better to set this up for DHCP

To do that, select the IP Config entry then click Edit and change the IPv4 option at least to DHCP

For whatever reason, you can’t disable IPv6 here if you don’t want to use it, so in that case you might want to leave it set to static

When you’re ready, click OK
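All of these Cloud-Init options can also be set from the shell. The username and key file path below are placeholders, and note that --sshkeys expects a file containing the public keys

qm set 9001 --ciuser youruser --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp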

Now we can convert this to a template, so click the More drop-down menu and select Convert to template and then click Yes

Create Debian VM:
Since we have a template to use we can clone this to create virtual machines

So with the Debian template selected, click the More drop-down menu and select Clone

This time we should either accept the VM ID suggested, or enter something different

Add a meaningful name and again I would suggest changing this to a Full Clone before clicking Clone

You can, if you want, change the hardware and Cloud-init settings for this particular VM, e.g. you could give it a static IP address, but otherwise you can now start the VM

From what I’ve picked up on forums, logging into the console before Cloud-Init has finished may cause issues. In which case, I prefer to leave the VM for a while, then check whether the DHCP server has assigned an IP address and whether the activity suggests the VM is idling

TIP: Having DDNS is better, as then you can connect to the VM via its FQDN rather than having to find out what its IP address is

In any case, you should be able to SSH into the VM using the account and a public key configured in the Cloud-Init settings
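For example, assuming a user called youruser was defined in the Cloud-Init settings, something like this should get you in, and cloud-init status will confirm whether the first boot has finished

ssh youruser@192.168.1.50
cloud-init status --wait

Replace the IP address with your VM’s address or FQDN; the --wait flag blocks until Cloud-Init has completed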

One thing to note is that, unlike the normal Debian images, the root password is unknown; however, the user account will have sudo rights

In addition, the hostname of the VM will be set to the name used in the cloning process. Normally I’ve found that when creating a template from an ISO image, the hostname is the same as the template’s, and so it needs to be changed every time a VM is cloned

Another gotcha is that the timezone will be set to UTC, and you can run into issues when running commands where they complain that the locale setting is not defined
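Both are quick to fix once you’re logged in. The timezone below is just an example, so pick your own

sudo timedatectl set-timezone Europe/London
sudo dpkg-reconfigure locales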

One thing I’m not overly fond of with these images is that they use Netplan. I prefer the simpler method of editing a plain config file

Also bear in mind, for this image you will have to install the Qemu Guest Agent afterwards
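Installing it is straightforward once you can SSH in

sudo apt update
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent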

In any case, these are all things you can deal with post install, and for me that should be relatively simple by using Ansible

One thing to note is that now the VM is built, you should remove the CloudInit drive
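Assuming the drive was added as ide2, as in this guide, it can be removed from the shell like so

qm set <your-vm-id> --delete ide2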

Summary:
Now although this method saves me a lot of time versus creating a template from an ISO image, it is still a manual process

Now there are several tools to help you automate the building of virtual machines, but these usually involve learning new scripting languages, getting the key presses correct to create the template and so on

So for me, the long-term goal is to hand over as much as possible to Ansible

In other words, once the hypervisors are up and running, it should be possible to get Ansible to create these Cloud-Init templates and then use them to create the virtual machines

Sharing is caring!