A comprehensive guide to solving VM storage limitations by leveraging external drives with QEMU/KVM and Virt-Manager on Fedora
Problem: Limited internal storage preventing multiple VM deployments
Solution: Dedicated external drive partition for VM storage with proper libvirt integration
# Quick verification of your setup
lsblk # Check drive layout
virsh pool-list --all # Verify storage pools
df -h # Report filesystem space usage

- Fedora Linux (tested on 42+) or any other Linux distribution
- External drive with sufficient space (100GB+ recommended)
- CPU with virtualization support (Intel VT-x or AMD-V)
- Administrative privileges
- QEMU/KVM installation and configuration
- External drive partitioning without data loss
- Libvirt storage pool setup
- VM creation using external storage
- Performance optimization tips
- Common troubleshooting scenarios
- Problem
- System Requirements
- Installation & Setup
- External Drive Configuration
- Libvirt Storage Pool
- VM Creation Process
- Testing & Verification
- Troubleshooting
- Best Practices
- Contributing
- Limited Internal Storage: System's internal drive insufficient for multiple VMs
- Space Consumption: ISO files alone consuming significant space
- Resource Availability: CPU capable of handling VMs, storage is the bottleneck
- Future Scalability: Need for expandable VM storage solution
- Utilize existing external drive for VM storage
- Create dedicated partition for VM-related files
- Maintain separation between personal data and VM storage
- Leverage external drive's larger capacity
- CPU: Intel VT-x or AMD-V virtualization support
- RAM: 8GB+ (4GB for host + VM allocations)
- Storage: External drive with 100GB+ available space
- USB: USB 3.0+ port for optimal performance
- OS: Fedora Linux 39+ (adaptable to other distributions)
- Packages: QEMU/KVM, libvirt, virt-manager
- Permissions: User account with sudo access
# Check virtualization support
lscpu | grep Virtualization
# Check whether virtualization is enabled in the BIOS/UEFI
grep -Ec '(vmx|svm)' /proc/cpuinfo
# If 0 is returned, reboot into your BIOS/UEFI settings and enable virtualization
# Verify that the KVM kernel modules are loaded by running:
lsmod | grep kvm
# KVM requires a CPU with virtualization extensions, found on most consumer CPUs. These extensions are called Intel VT or AMD-V. To check whether you have CPU support, run the following command:
grep -E '^flags.*(vmx|svm)' /proc/cpuinfo
# If this command results in nothing printed, your system does not support the relevant virtualization extensions. You can still use QEMU/KVM, but the emulator will fall back to software virtualization, which is much slower.
# Verify available space
df -h
# Check USB port speed
lsusb -t
# If the checks were positive, you're good to go.

# Install the complete virtualization group
sudo dnf install @virtualization
# Alternative: Install individual components
sudo dnf install qemu-kvm libvirt virt-manager virt-viewer virt-install

# Enable and start the libvirt daemon
sudo systemctl enable --now libvirtd
# Add user to libvirt group
sudo usermod -aG libvirt $USER
# Verify installation
systemctl status libvirtd
virsh version
⚠️ Important: Log out and back in after adding user to libvirt group
# List all storage devices
lsblk
# Detailed partition information
sudo fdisk -l

Expected output example (yours won't show an extra partition if you haven't created one yet):
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 16M 0 part
├─sda2 8:2 0 186.3G 0 part /run/media/gr3ytrac3/500 GB
└─sda3 8:3 0 279.5G 0 part /mnt/vm_storage
zram0 251:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 238.5G 0 disk
├─nvme0n1p1 259:1 0 600M 0 part /boot/efi
├─nvme0n1p2 259:2 0 1G 0 part /boot
└─nvme0n1p3 259:3 0 236.9G 0 part /home
/
Here, sda2 is the external drive's existing data partition, automounted by default without a permanent mount point. We'll create and properly mount sda3, which will be dedicated to VM storage.
🚨 CRITICAL: Back up all important data before proceeding with partitioning. Resizing is usually safe, but a backup protects you if the shrinking process goes wrong.
# Install GParted
sudo dnf install gparted
# Launch with administrative privileges
sudo gparted

Steps in GParted:
- Select your external drive from the dropdown in the top-right corner (e.g., /dev/sda)
- Right-click the existing partition → "Resize/Move"

🚨 This is an important stage. Decide how much space to shrink off and dedicate to the new partition. Take note of the used space on your disk before entering the amount to shrink; if the free space isn't enough, transfer or delete unwanted data first, then return to GParted. Skipping this check can cost you important data. Once you're done, proceed.
Learn to calculate and convert storage units for VM partitioning
Need to quickly convert storage units for VM planning? Jump to the Quick Reference Table or use our Storage Calculator.
- Understanding Storage Units
- Conversion Formulas
- Quick Reference Table
- VM Size Planning
- Storage Calculator
- Practical Examples
When working with VM storage, you'll encounter two different measurement systems:
| Decimal (SI Units) | Binary (IEC Units) |
|---|---|
| Used by drive manufacturers | Used by operating systems |
| Base-10 (powers of 1000) | Base-2 (powers of 1024) |
| KB, MB, GB, TB | KiB, MiB, GiB, TiB |
| Unit | Value (Bytes) | Type | Common Usage |
|---|---|---|---|
| 1 KB (Kilobyte) | 1,000 | Decimal | Drive specifications |
| 1 KiB (Kibibyte) | 1,024 | Binary | OS reporting |
| 1 MB (Megabyte) | 1,000,000 | Decimal | File sizes |
| 1 MiB (Mebibyte) | 1,048,576 (1024²) | Binary | RAM allocation |
| 1 GB (Gigabyte) | 1,000,000,000 | Decimal | Drive capacity |
| 1 GiB (Gibibyte) | 1,073,741,824 (1024³) | Binary | Actual usable space |
💡 Key Insight: Linux and virtualization tools (QEMU, virt-manager) typically use binary units (MiB, GiB)
MiB = (GB × 1,000,000,000) ÷ 1,048,576
Example: Convert 20 GB to MiB
(20 × 1,000,000,000) ÷ 1,048,576 ≈ 19,073 MiB
GB = (MiB × 1,048,576) ÷ 1,000,000,000
Example: Convert 4096 MiB to GB
(4096 × 1,048,576) ÷ 1,000,000,000 ≈ 4.3 GB
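These formulas are easy to script. Here is a minimal shell sketch using awk for the floating-point math (the helper names gb_to_mib and mib_to_gb are my own):

```shell
#!/bin/sh
# Convert decimal gigabytes (GB) to binary mebibytes (MiB)
gb_to_mib() {
    awk -v gb="$1" 'BEGIN { printf "%d\n", gb * 1000000000 / 1048576 }'
}

# Convert binary mebibytes (MiB) to decimal gigabytes (GB)
mib_to_gb() {
    awk -v mib="$1" 'BEGIN { printf "%.1f\n", mib * 1048576 / 1000000000 }'
}

gb_to_mib 20     # 19073 (matches the worked example above)
mib_to_gb 4096   # 4.3
```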
For rough calculations, you can use these approximation factors:
| Conversion | Factor |
|---|---|
| GB to MiB | Multiply by ~953.7 |
| MiB to GB | Divide by ~1024, then multiply by 1.0737 |
| MiB to GiB | Divide by 1024 |
| Decimal (GB) | Binary Equivalent (GiB) | Binary Equivalent (MiB) |
|---|---|---|
| 5 GB | ≈ 4.66 GiB | ≈ 4,768 MiB |
| 10 GB | ≈ 9.31 GiB | ≈ 9,537 MiB |
| 20 GB | ≈ 18.63 GiB | ≈ 19,073 MiB |
| 50 GB | ≈ 46.57 GiB | ≈ 47,684 MiB |
| 100 GB | ≈ 93.13 GiB | ≈ 95,367 MiB |
| 200 GB | ≈ 186.26 GiB | ≈ 190,735 MiB |
| 500 GB | ≈ 465.66 GiB | ≈ 476,837 MiB |
| 1000 GB (1TB) | ≈ 931.32 GiB | ≈ 953,674 MiB |
| VM Type | Minimum | Recommended | With Development Tools |
|---|---|---|---|
| Alpine Linux | 2 GiB | 4 GiB | 8 GiB |
| Ubuntu Server | 8 GiB | 15 GiB | 25 GiB |
| Ubuntu Desktop | 15 GiB | 25 GiB | 40 GiB |
| Fedora Workstation | 20 GiB | 30 GiB | 50 GiB |
| Windows 10 | 32 GiB | 50 GiB | 80 GiB |
| Windows 11 | 40 GiB | 60 GiB | 100 GiB |
| Kali Linux | 15 GiB | 25 GiB | 40 GiB |
| CentOS/RHEL | 10 GiB | 20 GiB | 35 GiB |
| Component | Typical Size | Notes |
|---|---|---|
| ISO Files | 1-6 GB each | Store in dedicated folder |
| VM Snapshots | 10-50% of VM size | Per snapshot |
| Swap Space | Equal to VM RAM | If enabled in guest |
| Log Files | 1-5 GiB | Over time |
| Growth Buffer | 20-30% extra | For updates and data |
Scenario: Setting up a development environment with multiple VMs
Planned VMs:
├── Ubuntu Server (Web Dev) : 20 GiB
├── Windows 11 (Testing) : 60 GiB
├── Kali Linux (Security) : 25 GiB
├── CentOS (Production Test) : 20 GiB
└── Alpine (Container Test) : 5 GiB
Additional Storage:
├── ISO Files : 15 GiB
├── VM Snapshots (estimated) : 30 GiB
├── Templates : 10 GiB
└── Growth Buffer (25%) : 46 GiB
Total Required: 231 GiB ≈ 248 GB
- Sum VM requirements: 130 GiB
- Add additional storage: 55 GiB
- Calculate subtotal: 185 GiB
- Add growth buffer (25%): 46 GiB
- Total needed: 231 GiB
- Convert to decimal: ~248 GB
For the above scenario, create a 250-300 GB partition to ensure adequate space.
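The same arithmetic can be sketched in shell. The VM sizes below are the scenario's hypothetical values; applying the 25% buffer exactly gives roughly 231 GiB:

```shell
#!/bin/sh
# Planned VM disk sizes from the scenario above (GiB)
vm_total=$((20 + 60 + 25 + 20 + 5))   # Ubuntu, Windows 11, Kali, CentOS, Alpine
extra=$((15 + 30 + 10))               # ISOs + snapshots + templates
subtotal=$((vm_total + extra))        # 185 GiB

# Apply the 25% growth buffer and convert the GiB total to decimal GB
awk -v s="$subtotal" 'BEGIN {
    gib = s * 1.25
    gb  = gib * 1073741824 / 1000000000
    printf "Total required: %.0f GiB (~%.0f GB)\n", gib, gb
}'
```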
Problem: You have a 500 GB external drive. How much usable space for VMs?
Solution:
500 GB × 0.9313 = 465.66 GiB usable space
Planning: You can comfortably fit 8-10 moderate-sized VMs.
Problem: Creating a VM with virt-manager showing "20 GB" option.
Reality:
20 GB = 18.63 GiB actual usable space in guest OS
Best Practice: Plan for slightly larger sizes than guest OS requirements.
Problem: 1TB external drive, want to keep 400 GB for personal data.
Available for VMs:
600 GB available = 558.79 GiB for VM storage
Can Support:
- 15-20 lightweight VMs, or
- 8-12 full desktop VMs, or
- 4-6 Windows VMs with development tools
# ❌ Wrong assumption
"My 1TB drive should give me 1024 GB of VM space"
# ✅ Reality
"My 1TB drive gives me ~931 GiB of actual VM storage"

# ❌ Tight planning
VM Size: Exactly what guest OS needs
# ✅ Smart planning
VM Size: Guest OS needs + 30% buffer for updates/data

# ❌ Missing components
Total = Sum of VM disk sizes
# ✅ Complete calculation
Total = VM disks + ISOs + snapshots + templates + buffer

# qcow2 sparse allocation (default)
qemu-img create -f qcow2 disk.qcow2 20G
# Creates 20G capacity but uses minimal actual space initially
# Pre-allocated (uses full space immediately)
qemu-img create -f qcow2 -o preallocation=full disk.qcow2 20G

- Always add 20-30% buffer for growth and snapshots
- Use sparse allocation (qcow2 default) to save initial space
- Monitor usage regularly to prevent space exhaustion
- Plan for snapshots - they can be 10-50% of VM size each
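You can see sparse allocation at work without qemu-img: a sparse file reports a large apparent size while consuming almost no disk blocks. A throwaway demo (the path is an arbitrary example):

```shell
#!/bin/sh
# Create a sparse 1 GiB file: apparent size vs. actual blocks used
truncate -s 1G /tmp/sparse-demo.img

ls -lh /tmp/sparse-demo.img   # apparent size: 1.0G
du -h  /tmp/sparse-demo.img   # actual usage: 0 (no data blocks allocated yet)

rm /tmp/sparse-demo.img
```

A qcow2 image with the default allocation behaves the same way: the guest sees the full capacity, but the host file grows only as data is written.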
/mnt/vm_storage/
├── images/ # VM disk files
├── isos/ # Installation media
├── templates/ # Base VM templates
├── snapshots/ # VM snapshots
└── backups/ # VM backups
- Binary Prefix (Wikipedia) - Understanding unit differences
- QEMU Disk Images - Official QEMU documentation
- Libvirt Storage - Storage pool management
- Virt-Manager Guide - GUI management
- Drag the boundary to create unallocated space after entering the amount you want to shrink off for the new partition. Before you proceed: confirm and complete the shrinking process by clicking the "operations pending" area at the bottom-left corner of GParted. You should see the following display; wait until it's done.

- Right-click the unallocated space → "New"
- Configure the new partition:
  - Label: vm_storage
  - File System: ext4
  - Size: Remaining space
- Apply all operations (same "operations pending" area at the bottom-left corner). Once the process is done, be sure to mount the partition (Part 4)
# Launch fdisk
sudo fdisk /dev/sda
# Create new partition (follow interactive prompts)
# n (new) -> p (primary) -> partition number -> first sector -> last sector
# w (write changes)
# Format new partition
sudo mkfs.ext4 -L vm_storage /dev/sda3

# Create mount point
sudo mkdir -p /mnt/vm_storage
# Get partition UUID
sudo blkid /dev/sda3
# Add to fstab for automatic mounting
echo "UUID=$(sudo blkid -s UUID -o value /dev/sda3) /mnt/vm_storage ext4 defaults 0 2" | sudo tee -a /etc/fstab
# Mount the partition
sudo mount -a
# Verify mount
df -h /mnt/vm_storage

# Change ownership to current user
sudo chown $USER:$USER /mnt/vm_storage
# Create directory structure
mkdir -p /mnt/vm_storage/{isos,images,templates}
# Verify setup
ls -la /mnt/vm_storage/
- Launch virt-manager
virt-manager
- Open Connection Details
  - Edit → Connection Details
  - Navigate to the Storage tab
- Create New Storage Pool
  - Click "+" (Add Pool)
  - Name: vm_storage (the name of your new partition)
  - Type: dir: Filesystem Directory
  - Target Path: /mnt/vm_storage
  - Click Finish
- Start and Auto-start the Pool
  - Select the vm_storage pool
  - Click Start Pool
  - Check the Autostart checkbox
# Define storage pool
virsh pool-define-as vm_storage dir --target /mnt/vm_storage
# Start the pool
virsh pool-start vm_storage
# Set autostart
virsh pool-autostart vm_storage
# Verify pool creation
virsh pool-list --all
virsh pool-info vm_storage

Expected output:
Name: vm_storage
UUID: [uuid-string]
State: running
Persistent: yes
Autostart: yes
Capacity: 279.40 GiB
Allocation: 1.20 GiB
Available: 278.20 GiB
You can create a folder in your new partition to store your .iso files. Do this only if you allocated enough space to the partition (not less than 250 GB).

# Navigate to ISO directory
cd /mnt/vm_storage/isos
# Example downloads
wget https://releases.ubuntu.com/22.04/ubuntu-22.04.3-live-server-amd64.iso
wget https://download.fedoraproject.org/pub/fedora/linux/releases/39/Server/x86_64/iso/Fedora-Server-netinst-x86_64-39-1.5.iso
- Launch virt-manager and create a new VM
virt-manager
- VM Creation Wizard
  - Step 1: "Local install media (ISO image or CDROM)"
  - Step 2: Browse → /mnt/vm_storage/isos/ → Select ISO
  - Step 3: Memory and CPU allocation
  - Step 4: Critical - Storage configuration
- Storage Configuration (never change the default name given by virt-manager)
  - ✅ Check "Enable storage for this virtual machine"
  - Click "Manage..."
  - Select the vm_storage pool
  - Click "+" to create a new volume
  - Name: vm-name.qcow2
  - Format: qcow2
  - Max Capacity: As needed (e.g., 20 GB)
  - Allocation: 0 (sparse allocation)
- Complete VM Setup
- Step 5: Review and customize hardware
- Click "Begin Installation" (top right)
# Create VM using virt-install
virt-install \
--name ubuntu-server \
--ram 2048 \
--vcpus 2 \
--disk path=/mnt/vm_storage/images/ubuntu-server.qcow2,size=20,format=qcow2 \
--cdrom /mnt/vm_storage/isos/ubuntu-22.04.3-live-server-amd64.iso \
--network network=default \
--graphics spice \
--os-variant ubuntu22.04

By default, virt-manager lets you share external peripherals with your VMs, but redirecting a device makes it show up in the VM instead of on the host. A better approach is to attach the partition as storage hardware. The image below shows how to add your partition as a storage device; it will then appear when you run the VM.
Note: Avoid letting multiple VMs write to the partition; concurrent writes can corrupt its contents for the other VMs that use it. The safest approach is to attach it read-only. With that set, you can still copy files and documents into your VMs, but the guests cannot modify the partition, which avoids inconsistencies with the main VM that has write access.
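If you prefer editing the domain XML directly (virsh edit vm-name), a read-only block-device attachment might look like the sketch below; the source device /dev/sda3 and target vdb are assumptions based on this guide's example layout:

```xml
<!-- Read-only attachment of the VM-storage partition (example values) -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sda3'/>
  <target dev='vdb' bus='virtio'/>
  <readonly/>
</disk>
```

The same result can be achieved from the CLI with `virsh attach-disk vm-name /dev/sda3 vdb --mode readonly`.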

Make sure you enter the correct partition mount point

You can edit the XML directly in case of permission issues

Enter your user password in order to access the added partition

# Check storage pool status
virsh pool-info vm_storage
# List all volumes in pool
virsh vol-list vm_storage
# Check disk usage on external drive
df -h /mnt/vm_storage
du -sh /mnt/vm_storage/*

# List running VMs
virsh list
# Check VM disk usage
ls -lah /mnt/vm_storage/images/
# Monitor VM performance
virsh dominfo vm-name

# Monitor disk I/O during VM operation
iostat -x 1
# Check external drive performance
sudo hdparm -t /dev/sda3
# Monitor VM resource usage
virt-top

# Restart libvirt service
sudo systemctl restart libvirtd
# Redefine storage pool
virsh pool-define-as vm_storage dir --target /mnt/vm_storage
virsh pool-start vm_storage
virsh pool-autostart vm_storage

# Fix ownership and permissions (755 is usually sufficient)
sudo chown -R $USER:libvirt /mnt/vm_storage
sudo chmod -R 755 /mnt/vm_storage
# Check SELinux context (if enabled)
sudo restorecon -R /mnt/vm_storage
# Check groups. kvm, libvirt and qemu should be listed among your groups
groups

# Check fstab entry
grep vm_storage /etc/fstab
# Manual mount
sudo mount UUID=$(sudo blkid -s UUID -o value /dev/sda3) /mnt/vm_storage
# Check filesystem for errors
sudo fsck /dev/sda3
# Alternatively, mount it from GParted

# Check VM configuration
virsh dumpxml vm-name
# Verify storage path exists
ls -la /mnt/vm_storage/images/
# Check libvirt logs
sudo journalctl -u libvirtd -f

# Enable virtio drivers in VM hardware settings
virsh edit vm-name
# Check if external drive is USB 3.0+
lsusb -t
# Monitor I/O performance
sudo iotop -o

- Regular backups of critical VM images
- Monitor external drive health with smartctl
- Use sparse allocation for VM disks to save space
- Plan storage expansion before reaching capacity limits
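The "monitor regularly, plan ahead" advice can be scripted. A minimal sketch; the 80% default threshold is an arbitrary choice, and the function name is my own:

```shell
#!/bin/sh
# Warn when a mount point crosses a usage threshold (default 80%)
check_usage() {
    path="$1"
    threshold="${2:-80}"
    # df --output=pcent prints e.g. " 43%"; strip everything but the digits
    used=$(df --output=pcent "$path" | tail -n 1 | tr -dc '0-9')
    if [ "$used" -ge "$threshold" ]; then
        echo "WARNING: $path is ${used}% full, plan expansion"
    else
        echo "OK: $path is ${used}% full"
    fi
}

# Example: check the root filesystem; point it at /mnt/vm_storage once mounted
check_usage /
```

Run it from cron (e.g. weekly) against /mnt/vm_storage to catch space exhaustion before it breaks running VMs.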
# Create VM backup
virsh vol-download --pool vm_storage vm-name.qcow2 /backup/location/vm-name.qcow2
# Check drive health
sudo smartctl -a /dev/sda

- Use fastest available port (USB 3.0+ or Thunderbolt)
- Enable virtio drivers for all VM components
- Allocate memory wisely - don't over-commit
- Consider SSD external drives for better I/O performance
# Optimize VM for performance
virsh edit vm-name
# Change to virtio:
# <disk type='file' device='disk'>
# <driver name='qemu' type='qcow2' cache='none' io='native'/>
# <target dev='vda' bus='virtio'/>
# </disk>

- Encrypt external drive if storing sensitive VMs
- Regular security updates for host and guest systems
- Network isolation for untrusted VMs
- Backup encryption keys separately
# Weekly maintenance script
#!/bin/bash
echo "=== VM Storage Health Check ==="
df -h /mnt/vm_storage
virsh pool-info vm_storage
sudo smartctl --health /dev/sda
echo "=== Running VMs ==="
virsh list --all
echo "=== Storage Pool Contents ==="
virsh vol-list vm_storage

Partition structure with each VM's .qcow2 file:

This setup successfully addresses storage limitations by:
✅ Leveraging existing hardware - no need for expensive internal storage upgrades
✅ Organized separation - clean distinction between personal and VM data
✅ Scalable solution - easy to add more VMs or storage
✅ Cost-effective - utilize external storage you likely already own
✅ Maintainable - standard libvirt tools work seamlessly
- Storage efficiency: 50-80% reduction in internal drive usage
- VM capacity: Support for 5-10+ VMs depending on external drive size
- Performance: Acceptable I/O performance with USB 3.0+ drives
- Flexibility: Easy VM management with standard tools
Contributions are welcome! Please feel free to:
- Submit issues for problems or questions
- Create pull requests for improvements
- Share your experience with different hardware/configurations
- Add support for other Linux distributions
- Fork this repository
- Create a feature branch (git checkout -b feature/improvement)
- Commit your changes (git commit -am 'Add some improvement')
- Push to the branch (git push origin feature/improvement)
- Create a Pull Request
- Libvirt Storage Management - Official documentation
- QEMU/KVM Performance Tuning - Optimization guide
- Virt-Manager Documentation - GUI management
- KVM Best Practices - Community wiki
If you encounter any problems or have questions:
- Check the Troubleshooting section
- Search existing Issues
- Create a new issue with:
- Your system specs (Linux distribution and version, hardware)
- Complete error messages
- Steps you've already tried
⭐ If this guide helped you, consider giving it a star!