Proxmox VE 8 to 9 Upgrade: Complete Guide

What’s New in Proxmox VE 9.0

Proxmox VE 9.0, released on August 5, 2025, represents a major update to the virtualization platform with numerous significant improvements and new features.

Key Innovations

Updated Foundation

  • Based on Debian 13 “Trixie” for improved security and modern hardware support
  • Linux kernel 6.14.8-2 by default for better hardware compatibility
  • QEMU 10.0.2 for enhanced virtual machine performance
  • LXC 6.0.4 with improved container resource management
  • Ceph Squid 19.2.3 for distributed storage

ZFS 2.3.3 and RAIDZ Expansion

One of the most anticipated features is the ability to expand RAIDZ arrays while the pool stays online. You can now add new disks to existing RAIDZ pools without recreating the array.
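A minimal sketch of the workflow, assuming an existing pool named tank with a raidz1 vdev and a spare disk (all names below are placeholders):

# check the current layout and note the exact vdev name (e.g. raidz1-0)
zpool status tank
# attach the new disk to the existing raidz1 vdev; the pool stays online while data is redistributed
zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK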

Snapshots for Thick-Provisioned LVM Volumes

Added support for creating virtual machine snapshots on thick-provisioned shared LVM volumes, particularly beneficial for enterprise users with Fibre Channel (FC) or iSCSI SAN infrastructure.
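The snapshot commands themselves are the standard qm workflow. A brief example, assuming a VM with ID 100 whose disks live on the shared LVM storage (the ID and snapshot name are placeholders):

# create, list and roll back a snapshot
qm snapshot 100 before-maintenance --description "state before maintenance"
qm listsnapshot 100
qm rollback 100 before-maintenance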

Software-Defined Networking (SDN) with Fabric Support

The new SDN Fabric feature simplifies configuration and management of complex routed networks. Supports OpenFabric and OSPF routing protocols for creating resilient two-tier spine-leaf architectures.

High Availability (HA) Resource Affinity Rules

Introduced HA affinity rules for precise control over resource placement in clusters. This allows grouping related virtual machines on the same node or distributing them across different nodes for fault tolerance.

Network Interface Pinning Tool

The new proxmox-network-interface-pinning tool allows binding MAC addresses to interface names, preventing issues with interface name changes after upgrades.

ZFS ARC Memory Usage Display

ZFS ARC cache memory consumption is now displayed in the memory resources tab of the web interface.

Upgrade Preparation

Important Prerequisites

Mandatory Requirements:

  • Upgrade to the latest Proxmox VE 8.4.1 or newer on all nodes
  • Create and verify backups of all virtual machines and containers
  • Test backups in a laboratory environment
  • Minimum 5 GB of free space on the root partition (10+ GB recommended); see the quick check after this list
  • Access to nodes via independent channel (IKVM/IPMI) or physical access
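The version and free-space requirements are easy to verify from a shell on each node:

# confirm the node is already on the latest Proxmox VE 8.4 release
pveversion
# confirm free space on the root partition (at least 5 GB, ideally 10+ GB)
df -h /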

For Hyperconverged Ceph Clusters:

  • Upgrade Ceph Quincy or Reef to Ceph 19.2 Squid before starting the Proxmox VE upgrade (a quick verification is shown below)
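Before touching the Proxmox packages, confirm that every Ceph daemon already runs Squid and that the cluster is healthy:

ceph versions   # all daemons should report 19.2.x (Squid)
ceph -s         # overall status should be HEALTH_OK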

For Co-installed Proxmox Backup Server:

  • Upgrade to version 4.x before upgrading Proxmox VE

Compatibility Check

Before starting the upgrade, run the check script:

pve8to9 --full

This script will identify potential issues and provide recommendations for resolving them.

Important Compatibility Changes

Removal of cgroupv1 Support: Proxmox VE 9 no longer supports legacy cgroupv1. Containers with systemd version 230 and older (e.g., CentOS 7, Ubuntu 16.04) will not be supported.
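To find affected containers, check the systemd version inside each one; for example (the container ID 101 is a placeholder and the container must be running):

pct exec 101 -- systemctl --version | head -n 1

Any container reporting systemd 230 or older must be upgraded to a newer distribution release, or moved to a virtual machine, before the host is upgraded.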

LVM Auto-activation Changes: For existing LVM volumes, it’s recommended to run the migration script to disable auto-activation:

/usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation

Upgrade Process

Method 1: In-Place Upgrade (Recommended)

This method is suitable for most cases and is performed via APT.

Step 1: System Preparation

Ensure the system is using the latest Proxmox VE 8.4 packages:

apt update
apt dist-upgrade
pveversion

The last command should show version 8.4.1 or newer.

Step 2: Update Debian Repositories

Replace Bookworm repositories with Trixie:

sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/*
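Afterwards, confirm that no bookworm entries remain:

grep -rn bookworm /etc/apt/

If the command prints nothing, all repository files were switched successfully.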

Step 3: Add Proxmox VE 9 Repository

Create a new repository file:

cat > /etc/apt/sources.list.d/proxmox.sources << EOF
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF

Then remove or comment out the old Proxmox VE 8 repository entries in the existing files under /etc/apt/sources.list and /etc/apt/sources.list.d/.
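The exact filenames vary between installations; a quick way to find the leftovers, with the default enterprise repository file shown as a typical example:

# look for remaining Proxmox entries in the old-style .list files
grep -rn proxmox /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# remove what you find, e.g. the default enterprise repository file
rm /etc/apt/sources.list.d/pve-enterprise.list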

Step 4: Update Ceph Repository (if applicable)

For hyperconverged clusters:

cat > /etc/apt/sources.list.d/ceph.sources << EOF
Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF

Remove the old /etc/apt/sources.list.d/ceph.list file.

Step 5: Update Package Index

apt update

Ensure the command executes without errors.

Step 6: Perform the Upgrade

apt dist-upgrade

Important: The process can take from 5 minutes on high-performance servers to 60+ minutes on slower systems.
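If you are connected over SSH, run the upgrade inside a terminal multiplexer so a dropped connection does not interrupt it. A minimal example with tmux (install it beforehand if it is not already present):

tmux new -s pve-upgrade    # start a named session
apt dist-upgrade           # run the upgrade inside the session
# after a dropped connection, log back in and run: tmux attach -t pve-upgrade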

Step 7: Configuration File Responses

During the upgrade, the system may request confirmation for configuration file changes:

  • /etc/issue – choose “No” (keep current version)
  • /etc/lvm/lvm.conf – recommended “Yes” (install maintainer’s version)
  • /etc/ssh/sshd_config – if no changes were made, choose “Yes”
  • /etc/default/grub – be careful, recommended “No” if unsure

Step 8: Reboot

After successful upgrade completion:

pve8to9
reboot

Important: Reboot is mandatory, even if kernel 6.14 was already in use in Proxmox VE 8.

Step 9: Post-Upgrade Verification

  • Force-reload the web interface to clear cached assets (Ctrl + Shift + R)
  • Verify all cluster nodes are working correctly
  • Ensure all virtual machines and containers are functioning normally (example commands below)
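These checks can also be run from a shell on each node:

pveversion            # should now report pve-manager/9.x
pvecm status          # quorum and membership on clustered setups
qm list && pct list   # VMs and containers are visible and in the expected state
systemctl --failed    # no Proxmox services should appear here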

Method 2: Clean Installation

This method is recommended for heavily customized systems:

  1. Create backups of all VMs and containers
  2. Save configuration files from /etc/pve/ (see the example after this list)
  3. Perform clean installation of Proxmox VE 9.0
  4. Restore cluster configuration
  5. Restore VMs from backups
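For step 2, a simple tar archive of the relevant configuration is usually sufficient; the paths below are the common ones, so adjust them to your setup and copy the archive to external storage before wiping the node:

tar czf /root/pve-config-backup.tar.gz \
    /etc/pve /etc/network/interfaces /etc/hosts /etc/fstab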

Cluster Upgrade

When upgrading a cluster, follow these rules:

  1. Planning: Upgrade nodes one at a time
  2. Migration: Move critical VMs from the node being upgraded (see the example after this list)
  3. Compatibility: Migration from newer to older versions is not supported
  4. HA Groups: Existing HA groups are converted to the new HA rules automatically once all nodes have been upgraded
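For step 2, VMs can be live-migrated individually or, on HA-enabled clusters, the whole node can be drained by putting it into maintenance mode. The VM ID and node names below are placeholders:

# migrate a single running VM to another node
qm migrate 100 pve-node2 --online
# or, with HA enabled, drain the whole node before upgrading it
ha-manager crm-command node-maintenance enable pve-node1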

Known Issues and Solutions

GRUB Issues in UEFI Mode

For systems with root partition on LVM in UEFI mode:

[ -d /sys/firmware/efi ] && apt install grub-efi-amd64

Network Interface Name Changes

Use the new interface pinning tool:

proxmox-network-interface-pinning

NVIDIA vGPU Compatibility

Update GRID/vGPU drivers to version 18.3 or newer (570.158.02+).

Legacy Hardware

Thoroughly test compatibility on hardware older than 10 years before upgrading production systems.

Security Recommendations

  1. Testing: Always test upgrades on identical non-production hardware
  2. Backups: Create and verify backups before starting
  3. Access: Ensure independent server access (IPMI/KVM)
  4. Terminal Multiplexer: Use tmux or screen for SSH connections
  5. Monitoring: Monitor the upgrade process and service status

Post-Upgrade Optimization

After successful upgrade, it’s recommended to:

  1. Modernize repositories: apt modernize-sources
  2. Check cluster status: pvecm status
  3. Update firewall configuration (if necessary)
  4. Test VM migration between nodes
  5. Verify new features (ZFS ARC monitoring, HA rules)
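The first two items in this list map directly to shell commands:

apt modernize-sources   # converts remaining legacy .list entries to the deb822 .sources format
pvecm status            # confirm quorum and that all nodes rejoined the cluster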

Conclusion

Upgrading to Proxmox VE 9.0 brings numerous significant improvements, especially in storage, networking, and high availability management. With proper preparation and by following these instructions, the upgrade process goes smoothly and gives you access to the platform's latest capabilities.

Remember the importance of testing and creating backups before starting production system upgrades. New features such as RAIDZ expansion, LVM volume snapshots, and improved network management make this upgrade particularly attractive for enterprise environments.