Anaconda Storage Filtering

This is the main page for information about the continuing cleanup of anaconda's storage rewrite. The entire storage subsystem was rewritten from scratch in F11; this page tracks the remaining cleanup and the last bits of filtering support.

Summary

Anaconda's storage configuration code was rewritten from scratch in F11 to address several design limitations and replace outdated chunks of code. A number of outstanding problem areas remain, and this page tracks progress. It also documents the storage device filtering design and test requirements.

Owner

Current status

Detailed Description

At the request of many folks with large amounts of storage available, we plan to add the ability to prune the storage presented as install targets.
We want to give users the ability to select which devices to use during installation, provide a way to specify disks in kickstart by something other than the device node name, and investigate whether there is a way to control which LUNs are scanned and activated by the kernel.

This is a pervasive change. Work items include:

Figure out how to use udev to identify and filter various types of devices.

Add support to ignoredisk/clearpart/etc. kickstart commands for shell globs and non-device-node methods of referring to disks.

Review all storage code to make sure globs work (this can probably be added right into our udev code).

Create UIs for selecting and filtering out devices, and make sure these work right with kickstart.

Update various dialogs to work better on systems with many, many disks visible. The goal here is to make sure you don't get 9000 popups asking for confirmation.

Update storage UI to show more information than just device nodes, as WWIDs or other pieces of information might be much more descriptive to the user. Also, device nodes can change between installs.
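The first three work items above amount to matching the names udev reports for each disk against user-supplied specs. A minimal sketch of that idea, assuming a hypothetical dict shape for udev properties (the `filter_disks` helper and the sample disks are illustrative, not anaconda's actual API):

```python
import fnmatch

def filter_disks(devices, ignore_specs):
    """Return the devices not matched by any ignore spec.

    devices: list of dicts of udev-style properties (hypothetical shape).
    ignore_specs: device node names, shell globs, or /dev/disk/by-* paths.
    """
    kept = []
    for dev in devices:
        # Every name udev knows for this disk: the node plus its symlinks,
        # so /dev/disk/by-path and by-id references work the same as nodes.
        names = [dev["DEVNAME"]] + dev.get("DEVLINKS", [])
        if any(fnmatch.fnmatch(name, spec)
               for name in names for spec in ignore_specs):
            continue
        kept.append(dev)
    return kept

# Example with made-up udev property dicts:
disks = [
    {"DEVNAME": "/dev/sda",
     "DEVLINKS": ["/dev/disk/by-id/ata-DISK1"]},
    {"DEVNAME": "/dev/mapper/mpatha",
     "DEVLINKS": ["/dev/disk/by-id/wwn-0x600508b4000971a1"]},
]
print([d["DEVNAME"] for d in filter_disks(disks, ["/dev/disk/by-id/wwn-*"])])
# → ['/dev/sda']
```

Because the match runs over all of a device's names at once, glob support lands in one place instead of being re-implemented per kickstart command.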

Benefit to Fedora

Advanced storage filtering will mean that people have a higher degree of confidence that anaconda is not about to destroy certain disks. It should also mean that people with very large storage configurations will have faster, easier installs.

Scope

This is a major change early on in the installer, and it also affects exactly which devices anaconda's storage system will work with later.

How To Test

Testing will need to be extensive.

We will have to verify the following areas of functionality as a starting point:

Ideally, find a testbed with MANY LUNs.

Verify all the normal stuff in the test matrix still works.

Don't select certain devices in the filtering UI and make sure they do not show up in later storage UI.

On machines with advanced storage capabilities, verify those don't show up in the simple filtering UI.

On the cleardisks UI, aggregate devices (RAID, multipath) should appear as one thing, not as their component parts.

Lots of kickstart testing:

Referencing disks by globs, /dev/disk/by-path, etc. works.

The filter, filtertype, and cleardisks screens should all be skipped if clearpart and ignoredisk are used.

ignoredisk not being specified should still result in a fully automated install.

An interactive kickstart install should stop at every filtering screen with the UI correctly populated.

A kickstart file with no partitioning commands should stop at both the filtering UI and the partitioning UI.
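As a sketch of the kind of kickstart input these tests should cover (the disk names and the by-path value below are hypothetical; glob support in these commands is exactly what this feature adds, not something existing kickstart guarantees):

```
# Use only one disk, named by its persistent by-path link rather
# than a device node that may change between installs.
ignoredisk --only-use=/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0

# Clear partitions on every matching disk via a shell glob.
clearpart --all --drives=sd*
```

A test matrix should pair each referencing style (plain node, glob, /dev/disk/by-path, /dev/disk/by-id) with each command that accepts disks.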

User Experience

The user will potentially see several new screens early on in anaconda, as well as another screen later during partitioning. Some work still needs to be done to minimize the number of screens shown in the typical single-disk home user case, but that is polish that can be worked in later.

This feature should improve the user's experience regarding the storage configuration portion of installation, and give them the ability to filter out unwanted storage devices as potential installation targets.

Dependencies

Anaconda's storage code uses general storage utilities such as mdadm, parted, dmraid. If liblvm becomes available in time it would be an optional work item to replace existing calls to the lvm command line with library references, but this is more likely to happen in F13.

We need people with many storage devices to test, as we don't have 600 LUNs ourselves.

Contingency Plan

Revert to the previous codebase and selectively backport bugfixes. This will be difficult, since the bugfixes are intertwined with the new functionality. Basically, this is a lousy option.