What is EVMS?

EVMS brings a new model of volume management to Linux®. EVMS integrates all aspects of volume management, such as disk partitioning, Linux logical volume manager (LVM) and multi-disk (MD) management, and file system operations, into a single cohesive package. With EVMS, various volume management technologies are accessible through one interface, and new technologies can be added as plug-ins as they are developed.

Why choose EVMS?

EVMS lets you manage storage space in a way that is more intuitive and flexible than many other Linux volume management systems. Practical tasks, such as migrating disks or adding new disks to your Linux system, become more manageable with EVMS because EVMS can recognize and read from different volume types and file systems. EVMS provides additional safety controls by not allowing commands that are unsafe; these controls help maintain the integrity of the data stored on the system.

You can use EVMS to create and manage data storage. With EVMS, you can use multiple volume management technologies under one framework while ensuring your system still interacts correctly with stored data. With EVMS, you can use drive linking, shrink and expand volumes, create snapshots of your volumes, and set up RAID (redundant array of independent disks) features for your system. You can also use many types of file systems and manipulate these storage pieces in ways that best meet the needs of your particular work environment.

EVMS also provides the capability to manage data on storage that is physically shared by nodes in a cluster. This shared storage allows data to be highly available from different nodes in the cluster.

The EVMS user interfaces

There are currently three user interfaces available for EVMS: graphical (GUI), text mode (Ncurses), and the Command Line Interpreter (CLI). Additionally, you can use the EVMS Application Programming Interface to implement your own customized user interface. The following table tells more about each of the EVMS user interfaces.

EVMS user interfaces

User interface: GUI
Typical user: All
Types of use: All uses except automation
Function: Allows you to choose from available options only, instead of having to sort through all the options, including ones that are not available at that point in the process.

User interface: Ncurses
Typical user: Users who don't have GTK libraries or the X Window System on their machines
Types of use: All uses except automation
Function: Allows you to choose from available options only, instead of having to sort through all the options, including ones that are not available at that point in the process.

User interface: Command Line
Typical user: Expert
Types of use: All uses
Function: Allows easy automation of tasks
EVMS terminology

To avoid confusion with other terms that describe volume management in general, EVMS uses a specific set of terms. These terms are listed, from most fundamental to most comprehensive, as follows:

Logical disk
A representation of anything EVMS can access as a physical disk. In EVMS, physical disks are logical disks.

Sector
The lowest level of addressability on a block device. This definition is in keeping with the standard meaning found in other management systems.

Disk segment
An ordered set of physically contiguous sectors residing on the same storage object. The general analogy for a segment is a traditional disk partition, such as a DOS or OS/2® partition.

Storage region
An ordered set of logically contiguous sectors that are not necessarily physically contiguous.

Storage object
Any persistent memory structure in EVMS that can be used to build objects or create a volume. Storage object is a generic term for disks, segments, regions, and feature objects.

Storage container
A collection of storage objects. A storage container consumes one set of storage objects and produces new storage objects. One common subset of storage containers is volume groups, such as AIX® or LVM volume groups. Storage containers can be either of type private or cluster.

Cluster storage container
A specialized storage container that consumes only disk objects that are physically accessible from all nodes of a cluster.

Private storage container
A collection of disks that are physically accessible from all nodes of a cluster, managed as a single pool of storage, and owned and accessed by a single node of the cluster at any given time.

Shared storage container
A collection of disks that are physically accessible from all nodes of a cluster, managed as a single pool of storage, and owned and accessed by all nodes of the cluster simultaneously.

Deported storage container
A shared cluster container that is not owned by any node of the cluster.

Feature object
A storage object that contains an EVMS native feature. An EVMS native feature is a function of volume management designed and implemented by EVMS. These features are not intended to be backward compatible with other volume management technologies.

Logical volume
A volume that consumes a storage object and exports something mountable. There are two varieties of logical volumes: EVMS volumes and compatibility volumes. EVMS volumes contain EVMS native metadata and can support all EVMS features; /dev/evms/my_volume is an example of an EVMS volume. Compatibility volumes do not contain any EVMS native metadata; they are backward compatible with their particular scheme, but they cannot support EVMS features. /dev/evms/md/md0 is an example of a compatibility volume.

What makes EVMS so flexible?

There are numerous drivers in the Linux kernel, such as Device Mapper and MD (software RAID), that implement volume management schemes. EVMS is built on top of these drivers to provide one framework for combining and accessing their capabilities.

The EVMS Engine handles the creation, configuration, and management of volumes, segments, and disks. The EVMS Engine is a programmatic interface to the EVMS system; user interfaces and programs that use EVMS must go through the Engine. EVMS supports plug-in modules to the Engine that allow EVMS to perform specialized tasks without altering the core code. These plug-in modules make EVMS more extensible and customizable than other volume management systems.
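As a rough illustration of this plug-in model, the following Python sketch shows an engine that passes a pool of storage objects through a stack of registered plug-ins, each of which consumes objects and produces new ones. The names used here (StorageObject, Plugin, Engine, discover, TwoWaySegmentManager) are hypothetical and are not the actual EVMS Engine API.

    # Hypothetical sketch of a plug-in driven engine; not the real EVMS API.
    class StorageObject:
        def __init__(self, name, sectors):
            self.name = name
            self.sectors = sectors              # size in sectors

    class Plugin:
        """A plug-in consumes storage objects and produces new ones."""
        def discover(self, objects):
            return objects                      # default: pass objects through

    class TwoWaySegmentManager(Plugin):
        """Toy 'segment manager': splits each object into two segments."""
        def discover(self, objects):
            segments = []
            for obj in objects:
                half = obj.sectors // 2
                segments.append(StorageObject(obj.name + "_seg1", half))
                segments.append(StorageObject(obj.name + "_seg2", obj.sectors - half))
            return segments

    class Engine:
        """Runs each registered plug-in, in order, over the object pool."""
        def __init__(self):
            self.plugins = []
        def register(self, plugin):
            self.plugins.append(plugin)
        def discover(self, disks):
            objects = disks
            for plugin in self.plugins:
                objects = plugin.discover(objects)
            return objects

    engine = Engine()
    engine.register(TwoWaySegmentManager())
    for obj in engine.discover([StorageObject("sda", 1_000_000)]):
        print(obj.name, obj.sectors)

Because user interfaces talk only to the Engine, a new plug-in added to the stack in this style becomes available to every interface without changes to the core code.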
Plug-in layer definitions

EVMS defines a layered architecture where plug-ins in each layer create abstractions of the layer or layers below. EVMS also allows most plug-ins to create abstractions of objects within the same layer. The following list defines these layers from the bottom up.

Device managers
The first (bottom) layer consists of device managers. These plug-ins communicate with hardware device drivers to create the first EVMS objects. Currently, all devices are handled by a single plug-in. Future releases of EVMS might need additional device managers for network device management (for example, to manage disks on a storage area network (SAN)).

Segment managers
The second layer consists of segment managers. These plug-ins handle the segmenting, or partitioning, of disk drives. The Engine components can replace partitioning programs, such as fdisk and Disk Druid, and EVMS uses Device Mapper to replace the in-kernel disk partitioning code. Segment managers can also be "stacked," meaning that one segment manager can take as input the output from another segment manager. EVMS provides the following segment managers: DOS, GPT, System/390® (S/390), Cluster, BSD, Mac, and BBR. Other segment manager plug-ins can be added to support other partitioning schemes.

Region managers
The third layer consists of region managers. This layer provides a place for plug-ins that ensure compatibility with existing volume management schemes in Linux and other operating systems. Region managers are intended to model systems that provide a logical abstraction above disks or partitions. Like segment managers, region managers can also be stacked, so the input objects to a region manager can be disks, segments, or other regions. There are currently three region manager plug-ins in EVMS: Linux LVM, LVM2, and Multi-Disk (MD).

Linux LVM
The Linux LVM plug-in provides compatibility with the Linux LVM and allows the creation of volume groups (known in EVMS as containers) and logical volumes (known in EVMS as regions).

LVM2
The LVM2 plug-in provides compatibility with the new volume format introduced by the LVM2 tools from Red Hat. This plug-in is very similar in functionality to the LVM plug-in; the primary difference is the new, improved metadata format.

MD
The Multi-Disk (MD) plug-in for RAID provides RAID levels linear, 0, 1, 4, and 5 in software. MD is one plug-in that displays as four region managers that you can choose from.

EVMS features
The next layer consists of EVMS features. This layer is where new EVMS-native functionality is implemented. EVMS features can be built on any object in the system, including disks, segments, regions, or other feature objects. All EVMS features share a common type of metadata, which makes discovery of feature objects much more efficient and recovery of broken feature objects much more reliable. There are three features currently available in EVMS: drive linking, Bad Block Relocation, and snapshotting.

Drive Linking
Drive linking allows any number of objects to be linearly concatenated together into a single object. A drive-linked volume can be expanded by adding another storage object to the end or shrunk by removing the last object. (A sketch of this address mapping follows the plug-in layer definitions.)

Bad Block Relocation
Bad Block Relocation (BBR) monitors its I/O path and detects write failures, which can be caused by a damaged disk. In the event of such a failure, the data from that request is stored in a new location. BBR keeps track of this remapping, and additional I/Os to that location are redirected to the new location.
Snapshotting
The Snapshotting feature provides a mechanism for creating a "frozen" copy of a volume at a single instant in time, without having to take that volume offline. This is useful for performing backups on a live system. Snapshots work with any volume (EVMS or compatibility), and can use any other available object as a backing store. After a snapshot is created and made into an EVMS volume, writes to the "original" volume cause the original contents of that location to be copied to the snapshot's storage object. Reads from the snapshot volume look like they come from the original at the time the snapshot was created. (A copy-on-write sketch also follows the plug-in layer definitions.)

File System Interface Modules
File System Interface Modules (FSIMs) provide coordination with the file systems during certain volume management operations. For instance, when expanding or shrinking a volume, the file system must also be expanded or shrunk to the appropriate size. Ordering in this example is also important: a file system cannot be expanded before the volume, and a volume cannot be shrunk before the file system. The FSIMs allow EVMS to ensure this coordination and ordering. FSIMs also perform file system operations from one of the EVMS user interfaces. For instance, a user can make new file systems and check existing file systems by interacting with the FSIM.

Cluster Manager Interface Modules
Cluster Manager Interface Modules, also known as the EVMS Clustered Engine (ECE), interface with the local cluster manager installed on the system. The ECE provides a standardized ECE API to the Engine while hiding cluster manager details from the Engine.
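The drive linking feature described above is, at its core, a mapping from a logical sector of the combined object to an offset within one of the linked child objects. The following Python sketch illustrates that mapping under assumed names (DriveLink, map_sector); it is not EVMS code.

    # Hypothetical sketch of drive linking: children concatenated into one
    # linear address space, with logical sectors mapped back to a child.
    from bisect import bisect_right

    class DriveLink:
        def __init__(self, children):
            # children: ordered list of (name, size_in_sectors) to concatenate
            self.children = children
            self.starts = []                    # first logical sector of each child
            total = 0
            for _name, size in children:
                self.starts.append(total)
                total += size
            self.total_sectors = total

        def map_sector(self, logical_sector):
            """Map a sector of the linked object to (child name, child offset)."""
            if not 0 <= logical_sector < self.total_sectors:
                raise ValueError("sector out of range")
            i = bisect_right(self.starts, logical_sector) - 1
            name, _size = self.children[i]
            return name, logical_sector - self.starts[i]

    link = DriveLink([("sda_seg1", 1000), ("sdb_seg1", 500)])
    print(link.map_sector(1200))                # ('sdb_seg1', 200)

The snapshot copy-on-write behavior can be sketched in a similar spirit: before a location on the original volume is overwritten for the first time, its old contents are saved to the snapshot's backing store; reads from the snapshot return the saved copy when one exists and otherwise fall through to the unchanged original. Again, the names here (Snapshot, write_original, read_snapshot) are illustrative only, not EVMS code.

    # Hypothetical copy-on-write sketch for snapshots.
    class Snapshot:
        def __init__(self, original):
            self.original = original            # dict: sector -> data
            self.store = {}                     # backing store for saved originals

        def write_original(self, sector, data):
            # Save the old contents the first time a sector is overwritten,
            # then let the write to the original proceed.
            if sector not in self.store:
                self.store[sector] = self.original.get(sector)
            self.original[sector] = data

        def read_snapshot(self, sector):
            # Return the frozen copy if the sector has changed since the
            # snapshot was taken; otherwise read through to the original.
            if sector in self.store:
                return self.store[sector]
            return self.original.get(sector)

    volume = {0: "boot", 1: "data-v1"}
    snap = Snapshot(volume)
    snap.write_original(1, "data-v2")           # original changes after the snapshot
    print(volume[1])                            # data-v2
    print(snap.read_snapshot(1))                # data-v1, frozen at snapshot time

In an actual deployment the saved copies would live in whatever backing-store object was chosen when the snapshot was created, and the remapping metadata would itself need to be persistent; the sketch only shows the ordering of the copy and the write.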