What Is the VMD Controller?
Last updated: April 2, 2026
Key Facts
- Intel VMD 2.0 technology provides up to 64 PCIe lanes subdivided into four management domains, each domain managing x16 lanes for NVMe SSD control
- Intel VMD 1.0 provides up to 48 PCIe lanes subdivided into three domains, with each domain managing x16 lanes
- VMD enables surprise insertion and removal of NVMe drives isolated from the main PCIe bus, managed entirely by the controller without OS notification
- Intel VROC (Virtual RAID on CPU) supports RAID 0, 1, 5, and 10 configurations with VMD NVMe SSDs, available on Intel Xeon Scalable processors starting with 1st generation
- VMD can be configured on x4 or x2 lane granularity (x2 starting with Intel VROC 8.6), supporting either NVMe SSD devices or PCIe switch devices in each domain
Overview and Purpose
Intel Volume Management Device (VMD) represents a fundamental architectural advancement in data center storage management, integrating storage virtualization directly into Intel Xeon Scalable processor silicon. Rather than requiring discrete hardware RAID controllers connected via PCIe buses, VMD consolidates storage management functions into the processor itself, reducing complexity, latency, and power consumption. The controller manages NVMe SSDs and other storage devices through PCIe lanes allocated directly from the processor's root complex, bypassing the traditional I/O architecture that would normally expose individual drives to the operating system. This approach brings sophisticated storage management capabilities previously reserved for expensive external hardware to a wider range of deployments.

VMD emerged from Intel's recognition that modern data center workloads required greater flexibility in storage configuration than traditional hardware RAID solutions provided. Cloud computing platforms, virtualized environments, and hyperconverged infrastructure (HCI) systems demanded the ability to dynamically manage storage resources without costly external controllers. Intel VMD addresses these requirements by embedding storage virtualization logic directly into the processor, making it available on supported Xeon Scalable systems without additional hardware investment. The technology has become increasingly important as NVMe SSDs have replaced SATA drives in performance-critical applications, where fully exploiting the speed of modern storage media requires sophisticated management.
Technical Architecture and Domain Management
VMD's technical implementation centers on the concept of virtual domains: logical groupings of PCIe lanes that the controller manages as isolated units. Each domain presents a single unified interface to the operating system, which loads a single storage driver to manage all devices within that domain. This architecture provides several critical advantages: operating systems only need to support one driver instance per domain rather than individually managing dozens of NVMe SSDs, device firmware updates can be coordinated across multiple SSDs within a domain, and configuration changes can be applied consistently to grouped devices.

The PCIe lane allocation varies based on VMD version. Intel VMD 2.0, available on newer Xeon Scalable processor generations, provides 64 total PCIe lanes subdivided into four domains, with each domain managing x16 lanes. Intel VMD 1.0, on earlier Xeon generations, provides 48 total PCIe lanes subdivided into three domains. Each domain can be independently configured or disabled, allowing flexible allocation based on workload requirements. For example, a system might dedicate one VMD domain exclusively to database tier storage, another to caching tier storage, and a third to backup storage, with each tier independently managed by separate drivers optimized for that specific workload.

VMD's design isolates specific devices and root ports on the root bus, making them invisible to the operating system except through the unified VMD driver. This invisibility ensures that guest operating systems in virtualized environments cannot directly access underlying storage infrastructure, maintaining isolation guarantees critical for cloud and virtualization platforms. Access to hidden target devices flows entirely through the VMD controller, which enforces per-domain policy and security boundaries.
The controller can be configured to operate on x4 or x2 lane granularity (with x2 support added in Intel VROC version 8.6), allowing flexible subdivision of larger lane allocations to support diverse storage configurations ranging from single-socket to multi-socket systems.
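The arithmetic behind lane granularity is simple, and a short sketch makes it concrete. The helper below is purely illustrative (the function name and structure are this article's invention, not an Intel API): it computes how many NVMe endpoints a single domain can expose at a given granularity.

```python
def endpoints_per_domain(domain_lanes: int = 16, granularity: int = 4) -> int:
    """How many NVMe endpoints one VMD domain can expose when its
    lanes are subdivided at x4 or x2 granularity.

    Illustrative helper only; x2 granularity requires Intel VROC 8.6+.
    """
    if granularity not in (2, 4):
        raise ValueError("VMD supports x4 granularity, or x2 with VROC >= 8.6")
    return domain_lanes // granularity

# A VMD 2.0 processor: four x16 domains, 64 lanes total.
total = 4 * endpoints_per_domain(16, 4)   # 16 NVMe SSDs at x4 each
```

At x2 granularity the same x16 domain can instead expose eight narrower endpoints, trading per-drive bandwidth for drive count.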
Hot-Plug Capability and Storage Management
One of VMD's most significant capabilities is surprise insertion and removal support for NVMe SSDs, enabling hot-plug operations without requiring system downtime or OS-level intervention. In traditional systems, inserting or removing an NVMe SSD while the system operates risks corrupting data or causing system instability because the OS may not gracefully handle disappearing storage devices. VMD isolates these insertion and removal events from the PCIe bus, preventing ripple effects through the broader system.

When an NVMe drive is inserted into a U.2 (SFF-8639) hot-plug bay, VMD detects the insertion event and notifies the unified driver, which can then initialize the device following protocol-defined procedures. When an SSD is removed, VMD similarly isolates the removal event, preventing OS-level PCIe error reporting that would normally cascade through system logs and cause application interruption. Intel VROC, the software layer sitting above VMD, takes appropriate action on device changes based on SFF-8639 interface specifications. For RAID configurations, VROC automatically handles device failure scenarios—if an SSD in a RAID 1 (mirrored) configuration fails, VROC immediately detects the failure, marks the device degraded, and can automatically initiate rebuild operations onto replacement devices inserted into empty hot-plug bays.

This hot-plug capability proves essential for data center environments where minimizing downtime is critical. Rather than scheduling maintenance windows to replace failed drives, data center operators can simply insert a replacement SSD and allow VROC to automatically rebalance the RAID configuration during ongoing operation. For RAID 10 configurations managing millions of IOPS (input/output operations per second), the ability to replace drives without interrupting service translates directly to improved availability and reduced operational costs.
The SFF-8639 interface standard defines the mechanical and electrical specifications for hot-plug NVMe enclosures, ensuring compatibility across vendors and generations of drives. VMD's integration with this standard enables ecosystem compatibility, allowing administrators to mix drives from different manufacturers within the same systems as long as they comply with U.2 specifications.
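The isolation behavior described above can be summarized as a toy model: the domain absorbs surprise insert/remove events and notifies only its own unified driver, so the OS-visible PCIe error path stays quiet. Everything below is an illustrative sketch of that contract—class and attribute names are hypothetical, and nothing here reflects Intel's actual silicon or driver code.

```python
class VMDDomain:
    """Toy model of a VMD domain absorbing surprise hot-plug events.

    Hypothetical sketch: real VMD handles these events in hardware
    and notifies the unified VMD driver, never the wider PCIe bus.
    """

    def __init__(self):
        self.drives = set()
        self.driver_events = []   # what the unified VMD driver sees
        self.os_pcie_errors = []  # what the OS would see -- stays empty

    def surprise_insert(self, drive_id: str):
        # Insertion is handled in-domain; only the domain driver is told.
        self.drives.add(drive_id)
        self.driver_events.append(("insert", drive_id))

    def surprise_remove(self, drive_id: str):
        # Removal is isolated: no OS-level PCIe error is ever raised.
        self.drives.discard(drive_id)
        self.driver_events.append(("remove", drive_id))

domain = VMDDomain()
domain.surprise_insert("nvme0")
domain.surprise_remove("nvme0")
assert domain.os_pcie_errors == []  # the OS never saw a PCIe error
```

The point of the model is the empty `os_pcie_errors` list: both events were consumed inside the domain, which is exactly what prevents the log cascades described above.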
RAID Configuration and Virtual RAID on CPU
VMD serves as the foundation for Intel's Virtual RAID on CPU (VROC) technology, which implements software-defined RAID directly on Xeon processors. Intel VROC supports RAID 0 (striping for maximum performance with no redundancy), RAID 1 (mirroring for maximum redundancy), RAID 5 (striping with single parity, balancing performance and redundancy for 3–16 drives), and RAID 10 (striped mirrors for both performance and redundancy). The Intel Rapid Storage Technology (RST) driver version 18.7.6.1010.3 provides VROC functionality for Windows environments, while Linux environments use native MD-RAID or other software RAID implementations layered above VMD.

RAID configuration decisions significantly impact overall system performance and data protection. RAID 0 striping distributes data across multiple drives, allowing parallel reads and writes that can deliver up to 8–10 GB/s throughput with modern NVMe SSDs, but offers no protection against drive failure—a single failed drive loses all data. RAID 1 mirroring maintains duplicate copies of all data, guaranteeing data preservation even if one drive completely fails, but consumes 50% of raw capacity for redundancy and does not accelerate writes (reads, however, can be balanced across both copies). RAID 5 uses parity calculations to protect against single drive failures while consuming only 1/N of capacity for redundancy (where N is the number of drives), making it popular for 4–8 drive systems. RAID 10 combines the benefits of both striping and mirroring, protecting against up to two drive failures in certain configurations while maintaining the performance advantages of striping.

The rebuild process—reconstructing data on a failed drive's replacement—is particularly important in RAID deployments. RAID 5 with large-capacity modern SSDs can require 24–48 hours for a complete rebuild, during which the system operates in a degraded state with reduced performance.
RAID 10 rebuild times are typically 30–50% faster due to simpler mathematical reconstruction. VMD's hardware-assisted acceleration can reduce rebuild times by 10–20% compared to pure software implementations, making it particularly valuable for RAID 5 deployments with large drive counts.
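RAID 5's single-parity protection ultimately reduces to XOR across the blocks of a stripe, which is also why reconstruction is possible from any N-1 survivors. A minimal sketch of parity generation and rebuild after a single-drive loss (generic RAID 5 math, not VROC's implementation):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together (RAID 5 single-parity math)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe on a 4-drive array: three data blocks plus one parity block.
data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = xor_blocks(data)

# Drive holding data[1] fails: rebuild its block from survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because every surviving block must be read to regenerate each lost block, rebuild time scales with total array capacity—the reason large RAID 5 rebuilds stretch into the 24–48 hour range noted above.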
Virtual Machine Integration and Direct Assignment
VMD's most powerful capability for virtualization environments is direct assignment to virtual machines, enabling guest operating systems to directly control entire VMD domains without hypervisor intervention. This direct assignment completely bypasses the hypervisor's I/O virtualization layer, dramatically reducing latency and increasing throughput for storage-intensive workloads.

When a VMD domain is assigned directly to a virtual machine, the guest OS perceives the domain as local hardware directly attached to the virtual system. The guest OS runs VROC drivers or native storage drivers and directly manages all RAID configuration, device hot-plug events, and storage optimization within that domain—the hypervisor remains completely uninvolved in storage operations.

This architecture proves particularly valuable for hyperconverged infrastructure (HCI) implementations where compute and storage resources are tightly coupled in single systems. A typical HCI deployment runs multiple virtual machines on a single physical server, with each VM managing a portion of the server's NVMe storage for application-local data. Direct VMD assignment enables each VM to achieve near-native storage performance despite sharing the underlying physical system with other VMs. Throughput improves by 30–50% compared to traditional hypervisor-mediated storage access, and latency decreases from 10–20 microseconds to 2–5 microseconds—critical improvements for latency-sensitive applications like databases or real-time analytics.

The guest OS inherits the full capabilities of the underlying VMD controller, allowing the guest to implement its own RAID strategies, perform advanced storage tiering, or use sophisticated caching mechanisms. This flexibility enables virtualized environments to achieve performance characteristics approaching bare-metal deployments, making HCI architectures viable for performance-critical workloads previously requiring dedicated storage arrays.
Direct assignment does impose constraints—a domain assigned to a VM cannot be simultaneously accessed by the hypervisor or other VMs, requiring careful planning of domain allocation across the VM population.
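That exclusivity constraint can be expressed as a simple allocator: each domain has at most one owner at a time, and a second assignment must be rejected. This is a capacity-planning sketch with hypothetical names, not a hypervisor API:

```python
class DomainAllocator:
    """Sketch of exclusive VMD domain assignment: a domain passed
    through to one VM is unavailable to the hypervisor and other VMs.
    Hypothetical planning model, not a real hypervisor interface."""

    def __init__(self, domains):
        self.owner = {d: None for d in domains}

    def assign(self, domain: str, vm: str):
        if self.owner[domain] is not None:
            raise RuntimeError(
                f"{domain} already owned by {self.owner[domain]}")
        self.owner[domain] = vm

    def release(self, domain: str):
        self.owner[domain] = None

# Four domains on a VMD 2.0 processor, assigned one-per-workload.
alloc = DomainAllocator(["vmd0", "vmd1", "vmd2", "vmd3"])
alloc.assign("vmd0", "db-vm")
# alloc.assign("vmd0", "cache-vm") would raise: the domain is exclusive.
```

Planning the domain-to-VM map up front matters because reassigning a domain means releasing it from one VM (and its storage) before another can claim it.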
Common Misconceptions and Technical Clarifications
A widespread misconception positions VMD as a replacement for external hardware RAID controllers—while VMD reduces reliance on external controllers, the comparison is not direct. External hardware RAID controllers offer advanced features like battery-backed write caches, sophisticated hot-spare management, and predictive analytics that VMD's software-defined approach doesn't provide. VMD represents a shift toward software-defined storage where policy and management logic run on processors rather than dedicated controller chips, but this architectural shift doesn't imply VMD is inherently superior to quality hardware controllers.

Another misunderstanding involves VMD's relationship to the NVMe protocol. VMD operates above the NVMe protocol layer—it groups NVMe drives into management domains and applies RAID logic across those groups, but doesn't fundamentally change how individual drives operate. Each NVMe drive still executes standard NVMe command processing, authentication, and error handling independent of VMD. VMD's management logic applies at the domain level, not the individual drive level.

A third confusion relates to VMD's performance characteristics. While direct assignment to VMs can dramatically improve performance compared to hypervisor-mediated access, this doesn't mean VMD automatically makes all workloads faster. Workloads that are already compute-bound rather than storage-bound see minimal benefit from VMD optimization. Additionally, RAID 5 with VMD still incurs parity overhead—usable capacity is approximately (N-1)/N of raw capacity (where N is the number of drives), and every write requires a corresponding parity update.

Finally, users sometimes assume VMD is available on all Intel systems—VMD requires specific processor generations and BIOS support, limiting availability to Xeon Scalable processors and some newer desktop/laptop processors.
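The (N-1)/N capacity fraction for RAID 5 is worth seeing in numbers. A small helper (plain arithmetic, not tied to any particular tool):

```python
def raid5_usable_tb(n_drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 5 array: one drive's worth of raw
    space across the array is consumed by rotating parity."""
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (n_drives - 1) * drive_tb

# Eight 3.84 TB NVMe SSDs: 7/8 of the raw capacity remains usable.
usable = raid5_usable_tb(8, 3.84)
```

For that eight-drive example, roughly 26.9 TB of the 30.7 TB raw capacity is usable; the fraction improves as drive count grows, which is one reason RAID 5 scales well to larger arrays.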
Related Questions
How does VMD improve storage performance compared to traditional hardware RAID?
VMD reduces latency by eliminating the intermediate hardware controller layer—storage commands travel directly to NVMe drives rather than being processed through a separate controller's firmware. This typically reduces latency from 10–20 microseconds (hardware RAID) to 2–5 microseconds (VMD direct assignment). The software-defined approach also enables CPU-accelerated RAID parity calculations using advanced processor instructions like AVX-512, achieving 30–50% faster rebuild times than traditional controllers. However, dedicated hardware RAID controllers offer battery-backed write caches unavailable in pure software implementations, which some workloads depend on.
What is the difference between VMD 1.0 and VMD 2.0?
VMD 2.0 provides 64 PCIe lanes subdivided into four domains (each managing x16 lanes), while VMD 1.0 provides 48 lanes subdivided into three domains. VMD 2.0's additional lanes and domains enable larger RAID configurations and more flexible workload partitioning across a single processor. VMD 2.0 also typically includes firmware improvements and better integration with modern NVMe specifications. Both versions support the same RAID 0/1/5/10 capabilities, but VMD 2.0 appears on newer Xeon Scalable generations.
Can VMD be disabled if not required for a workload?
Yes, VMD can be completely disabled in BIOS, reverting the system to traditional PCIe SSD enumeration where the operating system directly manages individual drives. This flexibility allows operators to choose between VMD's centralized management (useful for RAID and hot-plug scenarios) or traditional per-device management (simpler for single-SSD deployments). Disabling VMD requires BIOS reconfiguration and typically a system reboot, so the decision should be made during initial system setup rather than changed frequently.
How does VMD handle drive failures in RAID configurations?
When an NVMe drive fails within a RAID configuration, Intel VROC immediately detects the failure through monitoring drive health status and S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) metrics. For RAID 5, the system continues operating in degraded state, reconstructing missing data from parity information at reduced performance. For RAID 1 or RAID 10, redundant copies ensure data availability. When a replacement drive is inserted into a hot-plug bay, VROC automatically begins rebuild operations, reconstructing the lost data without administrator intervention.
Is VMD suitable for consumer or small business systems?
VMD is primarily engineered for data center and enterprise environments where NVMe SSDs, RAID requirements, and hot-plug capability justify the additional complexity. Consumer and small business systems rarely benefit from VMD because they typically use single-SSD configurations, don't require RAID protection, and rarely need hot-swap capability. VMD support centers on Intel Xeon Scalable processors, with some newer desktop and laptop platforms also exposing it, but those processors and platforms are generally not used in consumer or small business computers. For most non-enterprise users, direct NVMe enumeration provides simpler, equally effective storage management.