authorMichael Tokarev <mjt@tls.msk.ru>2012-01-11 18:04:06 +0400
committerMichael Tokarev <mjt@tls.msk.ru>2012-01-11 18:04:06 +0400
commit92119fa0f98e33fd9994994cdd26d5df12eefee6 (patch)
tree4e8907ee17e0a236dad61910726fd2a2ad3cabab /debian
parentaa9f8538e3707cc0e35aacddfd682e43038101a0 (diff)
move all files from contrib/docs/ topgit branches to debian/docs/
Diffstat (limited to 'debian')
-rw-r--r--  debian/docs/RAID5_versus_RAID10.txt    177
-rw-r--r--  debian/docs/md.txt                     600
-rw-r--r--  debian/docs/md_superblock_formats.txt  534
-rw-r--r--  debian/docs/rebuilding-raid.html       561
-rw-r--r--  debian/mdadm.docs                        2
-rwxr-xr-x  debian/rules                             2
6 files changed, 1874 insertions, 2 deletions
diff --git a/debian/docs/RAID5_versus_RAID10.txt b/debian/docs/RAID5_versus_RAID10.txt
new file mode 100644
index 00000000..8278ab26
--- /dev/null
+++ b/debian/docs/RAID5_versus_RAID10.txt
@@ -0,0 +1,177 @@
+# from http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
+# also see http://www.miracleas.com/BAARF/BAARF2.html
+#
+# Note: I, the Debian maintainer, do not agree with some of the arguments,
+# especially not with the total condemning of RAID5. Anyone who talks about
+# data loss and blames the RAID system should spend time reading up on Backups
+# instead of trying to evangelise, but that's only my opinion. RAID5 has its
+# merits and its shortcomings, just like any other method. However, the author
+# of this argument puts forth a good case and thus I am including the
+# document. Remember that you're the only one that can decide which RAID level
+# to use.
+#
+
+RAID5 versus RAID10 (or even RAID3 or RAID4)
+
+First let's get on the same page so we're all talking about apples.
+
+What is RAID5?
+
+OK, here is the deal: RAID5 uses ONLY ONE parity drive per stripe, and many
+RAID5 arrays are 5 drives (4 data and 1 parity, though it is not a single
+drive that holds all of the parity as in RAID3 & RAID4, but read on; if
+your counts are different, adjust the calculations appropriately). If you
+have 10 drives of say 20GB each for 200GB raw, RAID5 will use 20% for
+parity (assuming you set it up as two 5-drive arrays), so you will have
+160GB of storage. Now since RAID10, like mirroring (RAID1), uses 1 (or
+more) mirror drive for each primary drive, you are using 50% for
+redundancy, so to get the same 160GB of storage you will need 8 pairs, or
+16 20GB drives, which is why RAID5 is so popular. This intro is just to
+put things into perspective.
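The capacity arithmetic above can be sketched in a few lines; a hypothetical illustration using the article's example figures (the helper names are invented):

```python
# Illustrative capacity math for the article's example; not mdadm code.

def raid5_usable_gb(drives: int, size_gb: int, arrays: int = 1) -> int:
    """Each RAID5 array gives up one drive's worth of space to parity."""
    per_array = drives // arrays
    return arrays * (per_array - 1) * size_gb

def raid10_usable_gb(drives: int, size_gb: int) -> int:
    """Mirrored pairs: half the raw space goes to redundancy."""
    return (drives // 2) * size_gb

# Ten 20GB drives as two 5-drive RAID5 arrays -> 160GB usable (20% parity).
print(raid5_usable_gb(10, 20, arrays=2))   # 160
# Sixteen 20GB drives as eight RAID10 pairs -> the same 160GB usable.
print(raid10_usable_gb(16, 20))            # 160
```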
+
+RAID5 is physically a stripe set like RAID0 but with data recovery
+included. RAID5 reserves one disk block out of each stripe block for
+parity data. The parity block contains an error correction code which can
+correct any error in the RAID5 block; in effect it is used in combination
+with the remaining data blocks to recreate any single missing block, gone
+missing because a drive has failed. The innovation of RAID5 over RAID3 &
+RAID4 is that the parity is distributed on a round robin basis so that
+there can be independent reading of different blocks from the several
+drives. This is why RAID5 became more popular than RAID3 & RAID4, which
+must synchronously read the same block from all drives together. So, if
+Drive2 fails blocks 1,2,4,5,6 & 7 are data blocks on this drive and blocks
+3 and 8 are parity blocks on this drive. So that means that the parity on
+Drive5 will be used to recreate the data block from Disk2 if block 1 is
+requested before a new drive replaces Drive2 or during the rebuilding of
+the new Drive2 replacement. Likewise the parity on Drive1 will be used to
+repair block 2 and the parity on Drive3 will repair block 4, etc. For block
+2 all the data is safely on the remaining drives but during the rebuilding
+of Drive2's replacement a new parity block will be calculated from the
+block 2 data and will be written to Drive 2.
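The reconstruction described above can be sketched with the usual XOR parity scheme (the block contents here are made-up example bytes; the kernel does this at a much lower level):

```python
# XOR parity demo: the parity block is the XOR of the data blocks in a
# stripe, so any single missing block is the XOR of the survivors.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 data blocks
parity = xor_blocks(stripe)

# Suppose the drive holding the third block fails: rebuild it from the
# remaining data blocks plus the parity block.
rebuilt = xor_blocks([stripe[0], stripe[1], stripe[3], parity])
assert rebuilt == stripe[2]
```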
+
+Now when a disk block is read from the array the RAID software/firmware
+calculates which RAID block contains the disk block, which drive the disk
+block is on and which drive contains the parity block for that RAID block
+and reads ONLY the one data drive. It returns the data block. If you
+later modify the data block it recalculates the parity by subtracting the
+old block and adding in the new version then in two separate operations it
+writes the data block followed by the new parity block. To do this it must
+first read the parity block from whichever drive contains the parity for
+that stripe block and reread the unmodified data for the updated block from
+the original drive. This read-read-write-write sequence is known as the
+RAID5 write penalty. Since these two writes are sequential and synchronous,
+the write system call cannot return until the reread and both writes
+complete, for safety, so writing to RAID5 is up to 50% slower than RAID0
+for an array of the same capacity. (Some software RAID5's avoid the
+re-read by keeping an unmodified copy of the original block in memory.)
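With XOR parity, the "subtract the old block and add in the new version" shortcut is new_parity = old_parity XOR old_data XOR new_data; a toy sketch with invented example bytes:

```python
# Read-modify-write parity update: remove the old data block and add the
# new one, both via XOR. Toy byte strings, not real disk blocks.

def update_parity(old_parity, old_data, new_data):
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

old_data   = b"\x0f\x0f"
other_data = b"\xf0\x11"                          # rest of the stripe
old_parity = bytes(a ^ b for a, b in zip(old_data, other_data))
new_data   = b"\xaa\xbb"

new_parity = update_parity(old_parity, old_data, new_data)
# The shortcut agrees with recomputing parity over the whole stripe:
assert new_parity == bytes(a ^ b for a, b in zip(new_data, other_data))
```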
+
+Now what is RAID10:
+
+RAID10 is one of the combinations of RAID1 (mirroring) and RAID0
+(striping) which are possible. There used to be confusion about what
+RAID01 or RAID10 meant and different RAID vendors defined them
+differently. About five years or so ago I proposed the following standard
+language which seems to have taken hold. When N mirrored pairs are
+striped together this is called RAID10 because the mirroring (RAID1) is
+applied before striping (RAID0). The other option is to create two stripe
+sets and mirror them one to the other; this is known as RAID01 (because
+the RAID0 is applied first). In either a RAID01 or RAID10 system each and
+every disk block is completely duplicated on its drive's mirror.
+Performance-wise both RAID01 and RAID10 are functionally equivalent. The
+difference comes in during recovery where RAID01 suffers from some of the
+same problems I will describe affecting RAID5 while RAID10 does not.
+
+Now if a drive in the RAID5 array dies, is removed, or is shut off data is
+returned by reading the blocks from the remaining drives and calculating
+the missing data using the parity, assuming the defunct drive is not the
+parity block drive for that RAID block. Note that it takes 4 physical
+reads to replace the missing disk block (for a 5 drive array) for four out
+of every five disk blocks leading to a 64% performance degradation until
+the problem is discovered and a new drive can be mapped in to begin
+recovery. Performance is degraded further during recovery because all
+drives are being actively accessed in order to rebuild the replacement
+drive (see below).
+
+If a drive in the RAID10 array dies data is returned from its mirror drive
+in a single read with only minor (6.25% on average for a 4 pair array as a
+whole) performance reduction when two non-contiguous blocks are needed from
+the damaged pair (since the two blocks cannot be read in parallel from both
+drives) and none otherwise.
+
+One begins to get an inkling of what is going on and why I dislike RAID5,
+but, as they say on late night infomercials, there's more.
+
+What's wrong besides a bit of performance I don't know I'm missing?
+
+OK, so that brings us to the final question of the day which is: What is
+the problem with RAID5? It does recover a failed drive right? So writes
+are slower, I don't do enough writing to worry about it and the cache
+helps a lot also, I've got LOTS of cache! The problem is that despite the
+improved reliability of modern drives and the improved error correction
+codes on most drives, and even despite the additional 8 bytes of error
+correction that EMC puts on every Clariion drive disk block (if you are
+lucky enough to use EMC systems), it is more than a little possible that a
+drive will become flaky and begin to return garbage. This is known as
+partial media failure. Now SCSI controllers reserve several hundred disk
+blocks to be remapped to replace fading sectors with unused ones, but if
+the drive is going, these will not last very long and will run out; and
+SCSI does NOT report correctable errors back to the OS! Therefore you will not
+know the drive is becoming unstable until it is too late and there are no
+more replacement sectors and the drive begins to return garbage. [Note
+that the recently popular IDE/ATA drives do not (TMK) include bad sector
+remapping in their hardware so garbage is returned that much sooner.]
+When a drive returns garbage, since RAID5 does not EVER check parity on
+read (RAID3 & RAID4 do, BTW, and both perform better for databases than
+RAID5 to boot), then when you write the garbage sector back, garbage
+parity will be calculated and your RAID5 integrity is lost! Similarly, if
+a drive fails and one of the remaining drives is flaky, the replacement
+will be rebuilt with garbage too, propagating the problem to two blocks
+instead of just one.
+
+Need more? During recovery, read performance for a RAID5 array is
+degraded by as much as 80%. Some advanced arrays let you configure the
+preference more toward recovery or toward performance. However, doing so
+will increase recovery time and increase the likelihood of losing a second
+drive in the array before recovery completes, resulting in catastrophic
+data loss. RAID10, on the other hand, will only be recovering one drive out
+of 4 or more pairs, with performance ONLY of reads from the recovering pair
+degraded, making the performance hit to the array overall only about 20%!
+Plus there is no parity calculation time used during recovery - it's a
+straight data copy.
+
+What about that thing about losing a second drive? Well with RAID10 there
+is no danger unless the one mirror that is recovering also fails and
+that's 80% or more less likely than that any other drive in a RAID5 array
+will fail! And since most multiple drive failures are caused by
+undetected manufacturing defects you can make even this possibility
+vanishingly small by making sure to mirror every drive with one from a
+different manufacturer's lot number. ("Oh", you say, "this scenario does
+not seem likely!" Pooh, we lost 50 drives over two weeks when a batch of
+200 IBM drives began to fail. IBM discovered that the single lot of
+drives would have their spindle bearings freeze after so many hours of
+operation. Fortunately, due in part to RAID10 and in part to a herculean
+effort by DG techs and our own people over 2 weeks, no data was lost.
+HOWEVER, one RAID5 filesystem was a total loss after a second drive failed
+during recovery. Fortunately everything was on tape.)
+
+Conclusion? For safety and performance favor RAID10 first, RAID3 second,
+RAID4 third, and RAID5 last! The original reason for the RAID2-5 specs
+was that the high cost of disks was making RAID1, mirroring, impractical.
+That is no longer the case! Drives are commodity priced, even the biggest
+fastest drives are cheaper in absolute dollars than drives were then and
+cost per MB is a tiny fraction of what it was. Does RAID5 make ANY sense
+anymore? Obviously I think not.
+
+To put things into perspective: If a drive costs $1000US (and most are far
+less expensive than that) then switching from a 4 pair RAID10 array to a 5
+drive RAID5 array will save 3 drives or $3000US. What is the cost of
+overtime, wear and tear on the technicians, DBAs, managers, and customers
+of even a recovery scare? What is the cost of reduced performance and
+possibly reduced customer satisfaction? Finally what is the cost of lost
+business if data is unrecoverable? I maintain that the drives are FAR
+cheaper! Hence my mantra:
+
+NO RAID5! NO RAID5! NO RAID5! NO RAID5! NO RAID5! NO RAID5! NO RAID5!
+
+Art S. Kagel
+
diff --git a/debian/docs/md.txt b/debian/docs/md.txt
new file mode 100644
index 00000000..fc94770f
--- /dev/null
+++ b/debian/docs/md.txt
@@ -0,0 +1,600 @@
+Tools that manage md devices can be found at
+ http://www.kernel.org/pub/linux/utils/raid/
+
+
+Boot time assembly of RAID arrays
+---------------------------------
+
+You can boot with your md device with the following kernel command
+lines:
+
+for old raid arrays without persistent superblocks:
+ md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
+
+for raid arrays with persistent superblocks
+ md=<md device no.>,dev0,dev1,...,devn
+or, to assemble a partitionable array:
+ md=d<md device no.>,dev0,dev1,...,devn
+
+md device no. = the number of the md device ...
+ 0 means md0,
+ 1 md1,
+ 2 md2,
+ 3 md3,
+ 4 md4
+
+raid level = -1 linear mode
+ 0 striped mode
+ other modes are only supported with persistent super blocks
+
+chunk size factor = (raid-0 and raid-1 only)
+ Set the chunk size as 4k << n.
+
+fault level = totally ignored
+
+dev0-devn: e.g. /dev/hda1,/dev/hdc1,/dev/sda1,/dev/sdb1
+
+A possible loadlin line (Harald Hoyer <HarryH@Royal.Net>) looks like this:
+
+e:\loadlin\loadlin e:\zimage root=/dev/md0 md=0,0,4,0,/dev/hdb2,/dev/hdc3 ro
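The md= formats above can be composed mechanically; a hypothetical helper (the function name and defaults are invented, only the output format comes from the text):

```python
# Builds the md= kernel argument strings documented above. Illustrative only.

def md_boot_arg(md_no, devices, raid_level=None, chunk_factor=0,
                partitionable=False):
    unit = f"d{md_no}" if partitionable else str(md_no)
    if raid_level is None:
        fields = [unit]                      # persistent superblocks
    else:
        # old arrays: level, chunk size factor, fault level (ignored)
        fields = [unit, str(raid_level), str(chunk_factor), "0"]
    return "md=" + ",".join(fields + list(devices))

print(md_boot_arg(0, ["/dev/hdb2", "/dev/hdc3"], raid_level=0, chunk_factor=4))
# -> md=0,0,4,0,/dev/hdb2,/dev/hdc3  (matches the loadlin example above)
print(md_boot_arg(1, ["/dev/sda1", "/dev/sdb1"]))
# -> md=1,/dev/sda1,/dev/sdb1
```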
+
+
+Boot time autodetection of RAID arrays
+--------------------------------------
+
+When md is compiled into the kernel (not as module), partitions of
+type 0xfd are scanned and automatically assembled into RAID arrays.
+This autodetection may be suppressed with the kernel parameter
+"raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
+superblock can be autodetected and run at boot time.
+
+The kernel parameter "raid=partitionable" (or "raid=part") means
+that all auto-detected arrays are assembled as partitionable.
+
+Boot time assembly of degraded/dirty arrays
+-------------------------------------------
+
+If a raid5 or raid6 array is both dirty and degraded, it could have
+undetectable data corruption. This is because the fact that it is
+'dirty' means that the parity cannot be trusted, and the fact that it
+is degraded means that some datablocks are missing and cannot reliably
+be reconstructed (due to no parity).
+
+For this reason, md will normally refuse to start such an array. This
+requires the sysadmin to take action to explicitly start the array
+despite possible corruption. This is normally done with
+ mdadm --assemble --force ....
+
+This option is not really available if the array has the root
+filesystem on it. In order to support booting from such an
+array, md supports a module parameter "start_dirty_degraded" which,
+when set to 1, bypasses the checks and allows dirty degraded
+arrays to be started.
+
+So, to boot with a root filesystem of a dirty degraded raid[56], use
+
+ md-mod.start_dirty_degraded=1
+
+
+Superblock formats
+------------------
+
+The md driver can support a variety of different superblock formats.
+Currently, it supports superblock formats "0.90.0" and the "md-1" format
+introduced in the 2.5 development series.
+
+The kernel will autodetect which format superblock is being used.
+
+Superblock format '0' is treated differently to others for legacy
+reasons - it is the original superblock format.
+
+
+General Rules - apply for all superblock formats
+------------------------------------------------
+
+An array is 'created' by writing appropriate superblocks to all
+devices.
+
+It is 'assembled' by associating each of these devices with a
+particular md virtual device. Once it is completely assembled, it can
+be accessed.
+
+An array should be created by a user-space tool. This will write
+superblocks to all devices. It will usually mark the array as
+'unclean', or with some devices missing so that the kernel md driver
+can create appropriate redundancy (copying in raid1, parity
+calculation in raid4/5).
+
+When an array is assembled, it is first initialized with the
+SET_ARRAY_INFO ioctl. This contains, in particular, a major and minor
+version number. The major version number selects which superblock
+format is to be used. The minor number might be used to tune handling
+of the format, such as suggesting where on each device to look for the
+superblock.
+
+Then each device is added using the ADD_NEW_DISK ioctl. This
+provides, in particular, a major and minor number identifying the
+device to add.
+
+The array is started with the RUN_ARRAY ioctl.
+
+Once started, new devices can be added. They should have an
+appropriate superblock written to them, and then be passed in with
+ADD_NEW_DISK.
+
+Devices that have failed or are not yet active can be detached from an
+array using HOT_REMOVE_DISK.
+
+
+Specific Rules that apply to format-0 super block arrays, and
+ arrays with no superblock (non-persistent).
+-------------------------------------------------------------
+
+An array can be 'created' by describing the array (level, chunksize
+etc) in a SET_ARRAY_INFO ioctl. This must have major_version==0 and
+raid_disks != 0.
+
+Then uninitialized devices can be added with ADD_NEW_DISK. The
+structure passed to ADD_NEW_DISK must specify the state of the device
+and its role in the array.
+
+Once started with RUN_ARRAY, uninitialized spares can be added with
+HOT_ADD_DISK.
+
+
+
+MD devices in sysfs
+-------------------
+md devices appear in sysfs (/sys) as regular block devices,
+e.g.
+ /sys/block/md0
+
+Each 'md' device will contain a subdirectory called 'md' which
+contains further md-specific information about the device.
+
+All md devices contain:
+ level
+ a text file indicating the 'raid level'. e.g. raid0, raid1,
+ raid5, linear, multipath, faulty.
+ If no raid level has been set yet (array is still being
+ assembled), the value will reflect whatever has been written
+ to it, which may be a name like the above, or may be a number
+ such as '0', '5', etc.
+
+ raid_disks
+ a text file with a simple number indicating the number of devices
+ in a fully functional array. If this is not yet known, the file
+ will be empty. If an array is being resized this will contain
+ the new number of devices.
+ Some raid levels allow this value to be set while the array is
+ active. This will reconfigure the array. Otherwise it can only
+ be set while assembling an array.
+ A change to this attribute will not be permitted if it would
+ reduce the size of the array. To reduce the number of drives
+ in an e.g. raid5, the array size must first be reduced by
+ setting the 'array_size' attribute.
+
+ chunk_size
+ This is the size in bytes for 'chunks' and is only relevant to
+ raid levels that involve striping (0,4,5,6,10). The address space
+ of the array is conceptually divided into chunks and consecutive
+ chunks are striped onto neighbouring devices.
+ The size should be at least PAGE_SIZE (4k) and should be a power
+ of 2. This can only be set while assembling an array.
+
+ layout
+ The "layout" for the array for the particular level. This is
+ simply a number that is interpreted differently by different
+ levels. It can be written while assembling an array.
+
+ array_size
+ This can be used to artificially constrain the available space in
+ the array to be less than is actually available on the combined
+ devices. Writing a number (in Kilobytes) which is less than
+ the available size will set the size. Any reconfiguration of the
+ array (e.g. adding devices) will not cause the size to change.
+ Writing the word 'default' will cause the effective size of the
+ array to be whatever size is actually available based on
+ 'level', 'chunk_size' and 'component_size'.
+
+ This can be used to reduce the size of the array before reducing
+ the number of devices in a raid4/5/6, or to support external
+ metadata formats which mandate such clipping.
+
+ reshape_position
+ This is either "none" or a sector number within the devices of
+ the array where "reshape" is up to. If this is set, the three
+ attributes mentioned above (raid_disks, chunk_size, layout) can
+ potentially have 2 values, an old and a new value. If these
+ values differ, reading the attribute returns
+ new (old)
+ and writing will set the 'new' value, leaving the 'old'
+ unchanged.
+
+ component_size
+ For arrays with data redundancy (i.e. not raid0, linear, faulty,
+ multipath), all components must be the same size - or at least
+ there must be a size that they all provide space for. This is a key
+ part of the geometry of the array. It is measured in sectors
+ and can be read from here. Writing to this value may resize
+ the array if the personality supports it (raid1, raid5, raid6),
+ and if the component drives are large enough.
+
+ metadata_version
+ This indicates the format that is being used to record metadata
+ about the array. It can be 0.90 (traditional format), 1.0, 1.1,
+ 1.2 (newer format in varying locations) or "none" indicating that
+ the kernel isn't managing metadata at all.
+ Alternately it can be "external:" followed by a string which
+ is set by user-space. This indicates that metadata is managed
+ by a user-space program. Any device failure or other event that
+ requires a metadata update will cause array activity to be
+ suspended until the event is acknowledged.
+
+ resync_start
+ The point at which resync should start. If no resync is needed,
+ this will be a very large number (or 'none' since 2.6.30-rc1). At
+ array creation it will default to 0, though starting the array as
+ 'clean' will set it much larger.
+
+ new_dev
+ This file can be written but not read. The value written should
+ be a block device number as major:minor. e.g. 8:0
+ This will cause that device to be attached to the array, if it is
+ available. It will then appear at md/dev-XXX (depending on the
+ name of the device) and further configuration is then possible.
+
+ safe_mode_delay
+ When an md array has seen no write requests for a certain period
+ of time, it will be marked as 'clean'. When another write
+ request arrives, the array is marked as 'dirty' before the write
+ commences. This is known as 'safe_mode'.
+ The 'certain period' is controlled by this file which stores the
+ period as a number of seconds. The default is 200msec (0.200).
+ Writing a value of 0 disables safemode.
+
+ array_state
+ This file contains a single word which describes the current
+ state of the array. In many cases, the state can be set by
+ writing the word for the desired state, however some states
+ cannot be explicitly set, and some transitions are not allowed.
+
+ Select/poll works on this file. All changes except between
+ active_idle and active (which can be frequent and are not
+ very interesting) are notified. active->active_idle is
+ reported if the metadata is externally managed.
+
+ clear
+ No devices, no size, no level
+ Writing is equivalent to STOP_ARRAY ioctl
+ inactive
+ May have some settings, but array is not active
+ all IO results in error
+ When written, doesn't tear down array, but just stops it
+ suspended (not supported yet)
+ All IO requests will block. The array can be reconfigured.
+ Writing this, if accepted, will block until array is quiescent
+ readonly
+ no resync can happen. no superblocks get written.
+ write requests fail
+ read-auto
+ like readonly, but behaves like 'clean' on a write request.
+
+ clean - no pending writes, but otherwise active.
+ When written to inactive array, starts without resync
+ If a write request arrives then
+ if metadata is known, mark 'dirty' and switch to 'active'.
+ if not known, block and switch to write-pending
+ If written to an active array that has pending writes, then fails.
+ active
+ fully active: IO and resync can be happening.
+ When written to inactive array, starts with resync
+
+ write-pending
+ clean, but writes are blocked waiting for 'active' to be written.
+
+ active-idle
+ like active, but no writes have been seen for a while (safe_mode_delay).
+
+ bitmap/location
+ This indicates where the write-intent bitmap for the array is
+ stored.
+ It can be one of "none", "file" or "[+-]N".
+ "file" may later be extended to "file:/file/name"
+ "[+-]N" means that many sectors from the start of the metadata.
+ This is replicated on all devices. For arrays with externally
+ managed metadata, the offset is from the beginning of the
+ device.
+ bitmap/chunksize
+ The size, in bytes, of the chunk which will be represented by a
+ single bit. For RAID456, it is a portion of an individual
+ device. For RAID10, it is a portion of the array. For RAID1, it
+ is both (they come to the same thing).
+ bitmap/time_base
+ The time, in seconds, between looking for bits in the bitmap to
+ be cleared. In the current implementation, a bit will be cleared
+ between 2 and 3 times "time_base" after all the covered blocks
+ are known to be in-sync.
+ bitmap/backlog
+ When write-mostly devices are active in a RAID1, write requests
+ to those devices proceed in the background - the filesystem (or
+ other user of the device) does not have to wait for them.
+ 'backlog' sets a limit on the number of concurrent background
+ writes. If there are more than this, new writes will be
+ synchronous.
+ bitmap/metadata
+ This can be either 'internal' or 'external'.
+ 'internal' is the default and means the metadata for the bitmap
+ is stored in the first 256 bytes of the allocated space and is
+ managed by the md module.
+ 'external' means that bitmap metadata is managed externally to
+ the kernel (i.e. by some userspace program)
+ bitmap/can_clear
+ This is either 'true' or 'false'. If 'true', then bits in the
+ bitmap will be cleared when the corresponding blocks are thought
+ to be in-sync. If 'false', bits will never be cleared.
+ This is automatically set to 'false' if a write happens on a
+ degraded array, or if the array becomes degraded during a write.
+ When metadata is managed externally, it should be set to true
+ once the array becomes non-degraded, and this fact has been
+ recorded in the metadata.
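During a reshape, the raid_disks, chunk_size and layout attributes read back in the "new (old)" form described under reshape_position; a small sketch of parsing that form (the function name is invented):

```python
# Parses the "new (old)" form those attributes take during a reshape.

def parse_reshape_attr(text):
    """Return (new, old); the two are equal when no reshape is in progress."""
    text = text.strip()
    if "(" in text:
        new, old = text.split("(")
        return int(new), int(old.rstrip(")"))
    return int(text), int(text)

assert parse_reshape_attr("6 (4)") == (6, 4)   # mid-reshape: growing 4 -> 6
assert parse_reshape_attr("4") == (4, 4)       # no reshape in progress
```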
+
+
+
+
+As component devices are added to an md array, they appear in the 'md'
+directory as new directories named
+ dev-XXX
+where XXX is a name that the kernel knows for the device, e.g. hdb1.
+Each directory contains:
+
+ block
+ a symlink to the block device in /sys/block, e.g.
+ /sys/block/md0/md/dev-hdb1/block -> ../../../../block/hdb/hdb1
+
+ super
+ A file containing an image of the superblock read from, or
+ written to, that device.
+
+ state
+ A file recording the current state of the device in the array
+ which can be a comma separated list of
+ faulty - device has been kicked from active use due to
+ a detected fault or it has unacknowledged bad
+ blocks
+ in_sync - device is a fully in-sync member of the array
+ writemostly - device will only be subject to read
+ requests if there are no other options.
+ This applies only to raid1 arrays.
+ blocked - device has failed, and the failure hasn't been
+ acknowledged yet by the metadata handler.
+ Writes that would write to this device if
+ it were not faulty are blocked.
+ spare - device is working, but not a full member.
+ This includes spares that are in the process
+ of being recovered to
+ write_error - device has ever seen a write error.
+ This list may grow in future.
+ This can be written to.
+ Writing "faulty" simulates a failure on the device.
+ Writing "remove" removes the device from the array.
+ Writing "writemostly" sets the writemostly flag.
+ Writing "-writemostly" clears the writemostly flag.
+ Writing "blocked" sets the "blocked" flag.
+ Writing "-blocked" clears the "blocked" flags and allows writes
+ to complete and possibly simulates an error.
+ Writing "in_sync" sets the in_sync flag.
+ Writing "write_error" sets writeerrorseen flag.
+ Writing "-write_error" clears writeerrorseen flag.
+
+ This file responds to select/poll. Any change to 'faulty'
+ or 'blocked' causes an event.
+
+ errors
+ An approximate count of read errors that have been detected on
+ this device but have not caused the device to be evicted from
+ the array (either because they were corrected or because they
+ happened while the array was read-only). When using version-1
+ metadata, this value persists across restarts of the array.
+
+ This value can be written while assembling an array thus
+ providing an ongoing count for arrays with metadata managed by
+ userspace.
+
+ slot
+ This gives the role that the device has in the array. It will
+ either be 'none' if the device is not active in the array
+ (i.e. is a spare or has failed) or an integer less than the
+ 'raid_disks' number for the array indicating which position
+ it currently fills. This can only be set while assembling an
+ array. A device for which this is set is assumed to be working.
+
+ offset
+ This gives the location in the device (in sectors from the
+ start) where data from the array will be stored. Any part of
+ the device before this offset is not touched, unless it is
+ used for storing metadata (Formats 1.1 and 1.2).
+
+ size
+ The amount of the device, after the offset, that can be used
+ for storage of data. This will normally be the same as the
+ component_size. This can be written while assembling an
+ array. If a value less than the current component_size is
+ written, it will be rejected.
+
+ recovery_start
+ When the device is not 'in_sync', this records the number of
+ sectors from the start of the device which are known to be
+ correct. This is normally zero, but during a recovery
+ operation it will steadily increase, and if the recovery is
+ interrupted, restoring this value can cause recovery to
+ avoid repeating the earlier blocks. With v1.x metadata, this
+ value is saved and restored automatically.
+
+ This can be set whenever the device is not an active member of
+ the array, either before the array is activated, or before
+ the 'slot' is set.
+
+ Setting this to 'none' is equivalent to setting 'in_sync'.
+ Setting to any other value also clears the 'in_sync' flag.
+
+ bad_blocks
+ This gives the list of all known bad blocks in the form of
+ start address and length (both in sectors). If output
+ is too big to fit in a page, it will be truncated. Writing
+ "sector length" to this file adds new acknowledged (i.e.
+ recorded to disk safely) bad blocks.
+
+ unacknowledged_bad_blocks
+ This gives the list of known-but-not-yet-saved-to-disk bad
+ blocks in the same form as 'bad_blocks'. If output is too big
+ to fit in a page, it will be truncated. Writing to this file
+ adds bad blocks without acknowledging them. This is largely
+ for testing.
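The listing format of bad_blocks and unacknowledged_bad_blocks can be consumed as below; a sketch with invented sample data:

```python
# Parses "start length" lines (both in sectors) as produced by the
# bad_blocks and unacknowledged_bad_blocks files.

def parse_bad_blocks(text):
    ranges = []
    for line in text.splitlines():
        if line.strip():
            start, length = map(int, line.split())
            ranges.append((start, start + length))   # half-open sector range
    return ranges

sample = "2048 8\n409600 16\n"                       # made-up example output
assert parse_bad_blocks(sample) == [(2048, 2056), (409600, 409616)]
```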
+
+
+
+An active md device will also contain an entry for each active device
+in the array. These are named
+
+ rdNN
+
+where 'NN' is the position in the array, starting from 0.
+So for a 3 drive array there will be rd0, rd1, rd2.
+These are symbolic links to the appropriate 'dev-XXX' entry.
+Thus, for example,
+ cat /sys/block/md*/md/rd*/state
+will show 'in_sync' on every line.
+
+
+
+Active md devices for levels that support data redundancy (1,4,5,6)
+also have
+
+ sync_action
+ a text file that can be used to monitor and control the rebuild
+ process. It contains one word which can be one of:
+ resync - redundancy is being recalculated after unclean
+ shutdown or creation
+ recover - a hot spare is being built to replace a
+ failed/missing device
+ idle - nothing is happening
+ check - A full check of redundancy was requested and is
+ happening. This reads all blocks and checks
+ them. A repair may also happen for some raid
+ levels.
+ repair - A full check and repair is happening. This is
+ similar to 'resync', but was requested by the
+ user, and the write-intent bitmap is NOT used to
+ optimise the process.
+
+ This file is writable, and each of the strings that could be
+ read are meaningful for writing.
+
+ 'idle' will stop an active resync/recovery etc. There is no
+ guarantee that another resync/recovery may not be automatically
+ started again, though some event will be needed to trigger
+ this.
+ 'resync' or 'recovery' can be used to restart the
+ corresponding operation if it was stopped with 'idle'.
+ 'check' and 'repair' will start the appropriate process
+ providing the current state is 'idle'.
+
+ This file responds to select/poll. Any important change in the value
+ triggers a poll event. Sometimes the value will briefly be
+ "recover" if a recovery seems to be needed, but cannot be
+ achieved. In that case, the transition to "recover" isn't
+ notified, but the transition away is.
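As a sketch (the array name md0 is an assumption, and the snippet is guarded so it is a no-op on machines without such an array), a scrub could be started and later interrupted like this:

```shell
# Sketch: start a 'check' scrub on an assumed array md0, then stop it.
# Guarded so it does nothing on machines without that array.
MD=/sys/block/md0/md
status=ok
if [ -d "$MD" ] && [ "$(cat "$MD/sync_action")" = "idle" ]; then
  echo check > "$MD/sync_action"   # accepted only while the state is 'idle'
  # ... later, interrupt the scrub:
  echo idle > "$MD/sync_action"
fi
echo "$status"
```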
+
+ degraded
+ This contains a count of the number of devices by which the
+ array is degraded. So an optimal array will show '0'. A
+ single failed/missing drive will show '1', etc.
+ This file responds to select/poll, any increase or decrease
+ in the count of missing devices will trigger an event.
+
+ mismatch_cnt
+ When performing 'check' and 'repair', and possibly when
+ performing 'resync', md will count the number of errors that are
+ found. The count in 'mismatch_cnt' is the number of sectors
+ that were re-written, or (for 'check') would have been
+ re-written. As most raid levels work in units of pages rather
+ than sectors, this may be larger than the number of actual errors
+ by a factor of the number of sectors in a page.
+
+ bitmap_set_bits
+ If the array has a write-intent bitmap, then writing to this
+ attribute can set bits in the bitmap, indicating that a resync
+ would need to check the corresponding blocks. Either individual
+ numbers or start-end pairs can be written. Multiple numbers
+ can be separated by a space.
+ Note that the numbers are 'bit' numbers, not 'block' numbers.
+ They should be scaled by the bitmap_chunksize.
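As a sketch of that scaling (the 64MiB bitmap chunk, i.e. 131072 sectors, and the sector range are illustrative assumptions):

```shell
# Sketch: convert a sector range into the bit numbers expected by
# bitmap_set_bits, assuming a bitmap chunk of 64MiB (131072 sectors).
CHUNK_SECTORS=131072
start_sector=1048576
end_sector=2097151
start_bit=$(( start_sector / CHUNK_SECTORS ))
end_bit=$(( end_sector / CHUNK_SECTORS ))
range="$start_bit-$end_bit"
echo "$range"
# On a live array one would then write it, e.g.:
#   echo "$range" > /sys/block/md0/md/bitmap_set_bits
```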
+
+ sync_speed_min
+ sync_speed_max
+ These are similar to /proc/sys/dev/raid/speed_limit_{min,max},
+ but they apply only to the particular array.
+ If no value has been written to these, or if the word 'system'
+ is written, then the system-wide value is used. If a value,
+ in kibibytes-per-second, is written, then it is used.
+ When the files are read, they show the currently active value
+ followed by "(local)" or "(system)" depending on whether it is
+ a locally set or system-wide value.
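A guarded sketch of setting a per-array minimum (the array name md0 and the 50000 KiB/s figure are assumptions):

```shell
# Sketch: raise the minimum rebuild speed for one assumed array (md0)
# without touching the system-wide /proc/sys/dev/raid/speed_limit_min.
F=/sys/block/md0/md/sync_speed_min
status=ok
if [ -w "$F" ]; then
  echo 50000 > "$F"   # kibibytes per second, local to this array
  cat "$F"            # shows the value followed by "(local)"
fi
echo "$status"
```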
+
+ sync_completed
+ This shows the number of sectors that have been completed of
+ whatever the current sync_action is, followed by the number of
+ sectors in total that could need to be processed. The two
+ numbers are separated by a '/', effectively showing the
+ fraction of the process that is complete.
+ A 'select' on this attribute will return when resync completes,
+ when it reaches the current sync_max (below) and possibly at
+ other times.
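The '/'-separated pair is easy to turn into a percentage; this sketch uses a sample value rather than reading a live array:

```shell
# Sketch: compute percent complete from a sync_completed-style "done / total"
# pair. On a real array: completed=$(cat /sys/block/md0/md/sync_completed)
completed="1250000 / 5000000"
pct=$(echo "$completed" | awk '{ printf "%.1f%%", 100 * $1 / $3 }')
echo "$pct"
```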
+
+ sync_max
+ This is the number of sectors at which a resync/recovery
+ process will pause. When a resync is active, the value can
+ only ever be increased, never decreased. The value of 'max'
+ effectively disables the limit.
+
+
+ sync_speed
+ This shows the current actual speed, in K/sec, of the current
+ sync_action. It is averaged over the last 30 seconds.
+
+ suspend_lo
+ suspend_hi
+ The two values, given as numbers of sectors, indicate a range
+ within the array where IO will be blocked. This is currently
+ only supported for raid4/5/6.
+
+ sync_min
+ sync_max
+ The two values, given as numbers of sectors, indicate a range
+ within the array where 'check'/'repair' will operate. Both must
+ be multiples of chunk_size. When the operation reaches "sync_max"
+ it will pause, rather than complete.
+ You can use 'select' or 'poll' on "sync_completed" to wait for
+ that number to reach sync_max. Then you can either increase
+ "sync_max", or can write 'idle' to "sync_action".
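Putting the pieces together, a sketch of checking only the first region of an assumed array md0 (the sector values are illustrative and must suit the real chunk size):

```shell
# Sketch: scrub only sectors 0..1048576 of an assumed array md0.
# Guarded so it is a no-op when no such array exists.
MD=/sys/block/md0/md
status=ok
if [ -d "$MD" ]; then
  echo 0       > "$MD/sync_min"
  echo 1048576 > "$MD/sync_max"    # pause once this sector is reached
  echo check   > "$MD/sync_action"
  # ... poll sync_completed until it reaches sync_max, then either raise
  # sync_max to continue, or write 'idle' to sync_action to stop.
fi
echo "$status"
```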
+
+
+Each active md device may also have attributes specific to the
+personality module that manages it.
+These are specific to the implementation of the module and could
+change substantially if the implementation changes.
+
+These currently include
+
+ stripe_cache_size (currently raid5 only)
+ number of entries in the stripe cache. This is writable, but
+ there are upper and lower limits (32768, 16). Default is 128.
+ stripe_cache_active (currently raid5 only)
+ number of active entries in the stripe cache
+ preread_bypass_threshold (currently raid5 only)
+ number of times a stripe requiring preread will be bypassed by
+ a stripe that does not require preread. For fairness defaults
+ to 1. Setting this to 0 disables bypass accounting and
+ requires preread stripes to wait until all full-width stripe-
+ writes are complete. Valid values are 0 to stripe_cache_size.
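For example, the stripe cache could be enlarged before a rebuild on an assumed raid5 array md0 (the value 4096 is an illustrative choice within the 16..32768 range):

```shell
# Sketch: enlarge the raid5 stripe cache of an assumed array md0.
# Each entry costs memory per member device, so this trades RAM for speed.
F=/sys/block/md0/md/stripe_cache_size
status=ok
if [ -w "$F" ]; then
  echo 4096 > "$F"
  cat /sys/block/md0/md/stripe_cache_active   # how much of it is in use
fi
echo "$status"
```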
diff --git a/debian/docs/md_superblock_formats.txt b/debian/docs/md_superblock_formats.txt
new file mode 100644
index 00000000..f9c3eb81
--- /dev/null
+++ b/debian/docs/md_superblock_formats.txt
@@ -0,0 +1,534 @@
+# From: http://linux-raid.osdl.org/index.php/RAID_superblock_formats
+
+RAID superblock formats
+
+From Linux-raid
+
+Contents
+
+ • 1 RAID superblock formats
+ □ 1.1 The version-0.90 Superblock Format
+ □ 1.2 The version-1 Superblock Format
+ □ 1.3 Sub-versions of the version-1 superblock
+ □ 1.4 The version-1 superblock format on-disk layout
+ ☆ 1.4.1 Total Size of superblock
+ ☆ 1.4.2 Section: Superblock/"Magic-Number" Identification area
+ ☆ 1.4.3 Section: Per-Array Identification & Configuration area
+ ☆ 1.4.4 Section: RAID-Reshape In-Process Metadata Storage/Recovery
+ area
+ ☆ 1.4.5 Section: This-Component-Device Information area
+ ☆ 1.4.6 Section: Array-State Information area
+ ☆ 1.4.7 Section: Device-Roles (Positions-in-Array) area
+
+RAID superblock formats
+
+Currently, the Linux RAID subsystem recognizes two distinct superblock
+variants.
+
+They are known as "version-0.90" and "version-1" Superblock formats.
+
+The version-0.90 Superblock Format
+
+The version-0.90 superblock format has several limitations. It limits the
+number of component devices within an array to 28, and limits each component
+device to a maximum size of 2TB.
+
+The version-1 Superblock Format
+
+The version-1 superblock format represents a more-expandable format, capable of
+supporting arrays with 384+ devices, with 64-bit sector lengths.
+
+Sub-versions of the version-1 superblock
+
+The "version-1" superblock format is currently used in three different
+"sub-versions".
+
+The sub-versions differ primarily (solely?) in the location on each component
+device at which they actually store the superblock.
+
+┌───────────┬───────────────────────────────────┐
+│Sub-Version│ Superblock Position on Device │
+├───────────┼───────────────────────────────────┤
+│1.0 │At the end of the device │
+├───────────┼───────────────────────────────────┤
+│1.1 │At the beginning of the device │
+├───────────┼───────────────────────────────────┤
+│1.2 │4K from the beginning of the device│
+└───────────┴───────────────────────────────────┘
+
+The version-1 superblock format on-disk layout
+
+Total Size of superblock
+
+Total Size of superblock: 256 Bytes, plus 2 bytes per device in the array
+
+Section: Superblock/"Magic-Number" Identification area
+
+16 Bytes, Offset 0-15 (0x00 - 0x0F)
+
+┌──────┬──────┬──────┬─────────────┬───────────┬─────┬──────────────────────────┬───────┐
+│Offset│Offset│Length│ │ Usage/ │Data │ │ │
+│(Hex) │(Dec) │ (in │ Field Name │ Meaning │Type │ Data Value │ Notes │
+│ │ │bytes)│ │ │ │ │ │
+├──────┼──────┼──────┼─────────────┼───────────┼─────┼──────────────────────────┼───────┤
+│ │ │ │ │"Magic │ │ │ │
+│0x00 -│0 - 3 │4 │magic │Number" │__u32│0xa92b4efc │ │
+│0x03 │ │ │ │(Superblock│ │(little-endian) │ │
+│ │ │ │ │ID) │ │ │ │
+├──────┼──────┼──────┼─────────────┼───────────┼─────┼──────────────────────────┼───────┤
+│ │ │ │ │Major │ │ │ │
+│0x04 -│4 - 7 │4 │major_version│Version │__u32│1 │ │
+│0x07 │ │ │ │of the │ │ │ │
+│ │ │ │ │Superblock │ │ │ │
+├──────┼──────┼──────┼─────────────┼───────────┼─────┼──────────────────────────┼───────┤
+│ │ │ │ │ │ │0 │ │
+│ │ │ │ │ │ │Bit-Mapped Field │ │
+│ │ │ │ │ │ │ │ │
+│ │ │ │ │ │ │┌─────┬──────────────────┐│ │
+│ │ │ │ │ │ ││ Bit │ Meaning ││ │
+│ │ │ │ │ │ ││Value│ ││ │
+│ │ │ │ │ │ │├─────┼──────────────────┤│ │
+│ │ │ │ │ │ ││1 │RAID Bitmap is ││ │
+│ │ │ │ │ │ ││ │used ││ │
+│ │ │ │ │ │ │├─────┼──────────────────┤│ │
+│ │ │ │ │Feature Map│ ││ │RAID Recovery is ││ │
+│ │ │ │ │- which │ ││2 │in progress ││ │
+│ │ │ │ │extended │ ││ │(See ││ │
+│ │ │ │ │features │ ││ │"recovery_offset")││ │
+│ │ │ │ │(such as │ │├─────┼──────────────────┤│ │
+│0x08 -│ │ │ │volume │ ││4 │RAID Reshape is in││ │
+│0x0B │8 - 11│4 │feature_map │bitmaps, │__u32││ │progress ││ │
+│ │ │ │ │recovery, │ │├─────┼──────────────────┤│ │
+│ │ │ │ │or reshape)│ ││8 │undefined/reserved││ │
+│ │ │ │ │are in use │ ││ │(0) ││ │
+│ │ │ │ │on this │ │├─────┼──────────────────┤│ │
+│ │ │ │ │array │ ││16 │undefined/reserved││ │
+│ │ │ │ │ │ ││ │(0) ││ │
+│ │ │ │ │ │ │├─────┼──────────────────┤│ │
+│ │ │ │ │ │ ││32 │undefined/reserved││ │
+│ │ │ │ │ │ ││ │(0) ││ │
+│ │ │ │ │ │ │├─────┼──────────────────┤│ │
+│ │ │ │ │ │ ││64 │undefined/reserved││ │
+│ │ │ │ │ │ ││ │(0) ││ │
+│ │ │ │ │ │ │├─────┼──────────────────┤│ │
+│ │ │ │ │ │ ││128 │undefined/reserved││ │
+│ │ │ │ │ │ ││ │(0) ││ │
+│ │ │ │ │ │ │└─────┴──────────────────┘│ │
+├──────┼──────┼──────┼─────────────┼───────────┼─────┼──────────────────────────┼───────┤
+│ │ │ │ │ │ │ │Always │
+│0x0C -│12 - │ │ │Padding │ │ │set to │
+│0x0F │15 │4 │pad0 │Block 0 │__u32│0 │zero │
+│ │ │ │ │ │ │ │when │
+│ │ │ │ │ │ │ │writing│
+└──────┴──────┴──────┴─────────────┴───────────┴─────┴──────────────────────────┴───────┘
+
+
+Section: Per-Array Identification & Configuration area
+
+48 Bytes, Offset 16-63 (0x10 - 0x3F)
+
+┌──────┬──────┬──────┬─────────────┬──────────┬─────┬────────────────┬───────────┐
+│Offset│Offset│Length│ │ Usage/ │Data │ │ │
+│(Hex) │(Dec) │ (in │ Field Name │ Meaning │Type │ Data Value │ Notes │
+│ │ │bytes)│ │ │ │ │ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│0x10 -│16 - │ │ │UUID for │__u8 │Set by │ │
+│0x1F │31 │16 │set_uuid │the Array │[16] │user-space │ │
+│ │ │ │ │(?) │ │formatting util │ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│0x20 -│32 - │ │ │Name for │char │Set and used by │ │
+│0x3F │63 │32 │set_name │the Array │[32] │user-space utils│Nt │
+│ │ │ │ │(?) │ │ │ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │low 40-bits are │ │
+│0x40 -│64 - │8 │ctime │Creation │__u64│seconds │ │
+│0x47 │71 │ │ │Time(?) │ │high 24-bits are│ │
+│ │ │ │ │ │ │uSeconds │ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │┌──┬───────────┐│ │
+│ │ │ │ │ │ ││-4│Multi-Path ││ │
+│ │ │ │ │ │ │├──┼───────────┤│ │
+│ │ │ │ │ │ ││-1│Linear ││ │
+│ │ │ │ │ │ │├──┼───────────┤│ │
+│ │ │ │ │ │ ││0 │RAID-0 ││ │
+│ │ │ │ │ │ ││ │(Striped) ││ │
+│ │ │ │ │ │ │├──┼───────────┤│ │
+│ │ │ │ │ │ ││1 │RAID-1 ││ │
+│ │ │ │ │ │ ││ │(Mirrored) ││mdadm │
+│ │ │ │ │ │ │├──┼───────────┤│versions │
+│ │ │ │ │ │ ││ │RAID-4 ││(as of │
+│ │ │ │ │ │ ││ │(Striped ││v2.6.4) │
+│0x48 -│72 - │ │ │RAID Level│ ││4 │with ││limit │
+│0x4B │75 │4 │level │of the │__u32││ │Dedicated ││RAID-6 │
+│ │ │ │ │Array │ ││ │Block-Level││(creation) │
+│ │ │ │ │ │ ││ │Parity) ││to 256 │
+│ │ │ │ │ │ │├──┼───────────┤│disks or │
+│ │ │ │ │ │ ││ │RAID-5 ││less │
+│ │ │ │ │ │ ││ │(Striped ││ │
+│ │ │ │ │ │ ││5 │with ││ │
+│ │ │ │ │ │ ││ │Distributed││ │
+│ │ │ │ │ │ ││ │Parity) ││ │
+│ │ │ │ │ │ │├──┼───────────┤│ │
+│ │ │ │ │ │ ││ │RAID-6 ││ │
+│ │ │ │ │ │ ││6 │(Striped ││ │
+│ │ │ │ │ │ ││ │with Dual ││ │
+│ │ │ │ │ │ ││ │Parity) ││ │
+│ │ │ │ │ │ │└──┴───────────┘│ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │┌─┬────────────┐│ │
+│ │ │ │ │ │ ││0│left ││ │
+│ │ │ │ │ │ ││ │asymmetric ││ │
+│ │ │ │ │ │ │├─┼────────────┤│Controls │
+│ │ │ │ │ │ ││1│right ││the │
+│ │ │ │ │layout of │ ││ │asymmetric ││relative │
+│0x4C -│76 - │4 │layout │array │__u32│├─┼────────────┤│arrangement│
+│0x4F │79 │ │ │(RAID5(and│ ││ │left ││of data and│
+│ │ │ │ │6?) only) │ ││2│symmetric ││parity │
+│ │ │ │ │ │ ││ │(default) ││blocks on │
+│ │ │ │ │ │ │├─┼────────────┤│the disks. │
+│ │ │ │ │ │ ││3│right ││ │
+│ │ │ │ │ │ ││ │symmetric ││ │
+│ │ │ │ │ │ │└─┴────────────┘│ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │size of │ │
+│ │ │ │ │used-size │ │component │ │
+│0x50 -│80 - │8 │size │of │__u64│devices │ │
+│0x57 │87 │ │ │component │ │(in # of │ │
+│ │ │ │ │devices │ │512-byte │ │
+│ │ │ │ │ │ │sectors) │ │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │ │default is │
+│ │ │ │ │ │ │ │64K? for │
+│ │ │ │ │ │ │ │raid levels│
+│ │ │ │ │ │ │ │0, 10, 4, │
+│ │ │ │ │ │ │ │5, and 6 │
+│ │ │ │ │ │ │ │chunksize │
+│ │ │ │ │ │ │ │not used in│
+│ │ │ │ │ │ │ │raid levels│
+│ │ │ │ │ │ │chunk-size of │1, linear, │
+│ │ │ │ │chunk-size│ │the array │and │
+│0x58 -│88 - │4 │chunksize │of the │__u32│(in # of │multi-path │
+│0x5B │91 │ │ │array │ │512-byte │ │
+│ │ │ │ │ │ │sectors) │Note: │
+│ │ │ │ │ │ │ │During │
+│ │ │ │ │ │ │ │creation │
+│ │ │ │ │ │ │ │this │
+│ │ │ │ │ │ │ │appears to │
+│ │ │ │ │ │ │ │be created │
+│ │ │ │ │ │ │ │as a │
+│ │ │ │ │ │ │ │multiple of│
+│ │ │ │ │ │ │ │1024 rather│
+│ │ │ │ │ │ │ │than 512. │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │ │raid4 │
+│ │ │ │ │ │ │ │requires a │
+│ │ │ │ │ │ │ │minimum of │
+│ │ │ │ │ │ │ │2 member │
+│ │ │ │ │ │ │ │devs │
+│ │ │ │ │ │ │ │raid5 │
+│ │ │ │ │ │ │ │requires a │
+│ │ │ │ │ │ │ │minimum of │
+│ │ │ │ │(?)number │ │ │2 member │
+│0x5C -│92 - │4 │raid_disks │of disks │__u32│# │devs │
+│0x5F │95 │ │ │in array │ │ │raid6 │
+│ │ │ │ │(?) │ │ │requires a │
+│ │ │ │ │ │ │ │minimum of │
+│ │ │ │ │ │ │ │4 member │
+│ │ │ │ │ │ │ │devs │
+│ │ │ │ │ │ │ │raid6 │
+│ │ │ │ │ │ │ │limited to │
+│ │ │ │ │ │ │ │a max of │
+│ │ │ │ │ │ │ │256 member │
+│ │ │ │ │ │ │ │devs │
+├──────┼──────┼──────┼─────────────┼──────────┼─────┼────────────────┼───────────┤
+│ │ │ │ │ │ │ │This is │
+│ │ │ │ │# of │ │ │only valid │
+│ │ │ │ │sectors │ │ │if │
+│ │ │ │ │after │ │ │feature_map│
+│ │ │ │ │superblock│ │ │[1] is set │
+│ │ │ │ │that │ │ │ │
+│0x60 -│96 - │4 │bitmap_offset│bitmap │__u32│(signed) │Signed │
+│0x63 │99 │ │ │starts │ │ │value │
+│ │ │ │ │(See note │ │ │allows │
+│ │ │ │ │about │ │ │bitmap │
+│ │ │ │ │signed │ │ │to appear │
+│ │ │ │ │value) │ │ │before │
+│ │ │ │ │ │ │ │superblock │
+│ │ │ │ │ │ │ │on the disk│
+└──────┴──────┴──────┴─────────────┴──────────┴─────┴────────────────┴───────────┘
+
+
+Section: RAID-Reshape In-Process Metadata Storage/Recovery area
+
+64 Bytes, Offset 100-163 (0x64 - 0x7F)
+(Note: Only contains valid data if feature_map bit '4' is set)
+
+┌──────┬──────┬──────┬────────────────┬───────────┬─────┬─────────────┬───────┐
+│Offset│Offset│Length│ │ Usage/ │Data │ │ │
+│(Hex) │(Dec) │ (in │ Field Name │ Meaning │Type │ Data Value │ Notes │
+│ │ │bytes)│ │ │ │ │ │
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+│ │ │ │ │the new │ │ │ │
+│0x64 -│100 - │4 │new_level │RAID level │__u32│see level │ │
+│0x67 │103 │ │ │being │ │field (above)│ │
+│ │ │ │ │reshaped-to│ │ │ │
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+│ │ │ │ │Next │ │current │ │
+│0x68 -│104 - │8 │reshape_position│address of │__u64│position of │ │
+│0x6F │111 │ │ │the array │ │the reshape │ │
+│ │ │ │ │to reshape │ │operation │ │
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+│ │ │ │ │this holds │ │ │ │
+│0x70 -│112 - │4 │delta_disks │the change │__u32│change in # │ │
+│0x73 │115 │ │ │in # of │ │of raid disks│ │
+│ │ │ │ │raid disks │ │ │ │
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+│0x74 -│116 - │4 │new_layout │new layout │__u32│see layout │ │
+│0x77 │119 │ │ │for array │ │field (above)│ │
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+│0x78 -│120 - │4 │new_chunk │new chunk │__u32│see chunksize│ │
+│0x7B │123 │ │ │size │ │field (above)│ │
+├──────┼──────┼──────┼────────────────┼───────────┼─────┼─────────────┼───────┤
+│ │ │ │ │ │ │ │Always │
+│0x7C -│124 - │ │ │Padding │__u8 │ │set to │
+│0x7F │127 │4 │pad1 │Block #1 │[4] │0 │zero │
+│ │ │ │ │ │ │ │when │
+│ │ │ │ │ │ │ │writing│
+└──────┴──────┴──────┴────────────────┴───────────┴─────┴─────────────┴───────┘
+
+
+
+Section: This-Component-Device Information area
+
+64 Bytes, Offset 128-191 (0x80 - 0xbf)
+
+┌──────┬──────┬──────┬──────────────────┬────────────┬─────┬────────────────────┬────────────┐
+│Offset│Offset│Length│ │ Usage/ │Data │ │ │
+│(Hex) │(Dec) │ (in │ Field Name │ Meaning │Type │ Data Value │ Notes │
+│ │ │bytes)│ │ │ │ │ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│0x80 -│128 - │ │ │the sector #│ │sector # where data │ │
+│0x87 │135 │8 │data_offset │upon which │__u64│begins │ │
+│ │ │ │ │data starts │ │(Often 0) │ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │sectors in │ │ │ │
+│0x88 -│136 - │ │ │the device │ │# of sectors that │ │
+│0x8F │143 │8 │data_size │that are │__u64│can be used for data│ │
+│ │ │ │ │used for │ │ │ │
+│ │ │ │ │data │ │ │ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │# of the │ │ │ │
+│0x90 -│144 - │ │ │sector upon │ │# of the sector upon│ │
+│0x97 │151 │8 │super_offset │which this │__u64│which this │ │
+│ │ │ │ │superblock │ │superblock starts │ │
+│ │ │ │ │starts │ │ │ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │sectors │ │ │ │
+│ │ │ │ │before this │ │ │ │
+│0x98 -│152 - │ │ │offset │ │ │ │
+│0x9F │159 │8 │recovery_offset │(from │__u64│sector # │ │
+│ │ │ │ │data_offset)│ │ │ │
+│ │ │ │ │have been │ │ │ │
+│ │ │ │ │recovered │ │ │ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│0xA0 -│160 - │ │ │ │ │Permanent identifier│ │
+│0xA3 │163 │4 │dev_number │Fm │__u32│of this device (Not │ │
+│ │ │ │ │ │ │its role in RAID(?))│ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │Number of │ │ │ │
+│0xA4 -│164 - │ │ │read-errors │ │ │ │
+│0xA7 │167 │4 │cnt_corrected_read│that were │__u32│Dv │ │
+│ │ │ │ │corrected by│ │ │ │
+│ │ │ │ │re-writing │ │ │ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │UUID of the │ │ │Set by │
+│0xA8 -│168 - │16 │device_uuid │component │__u8 │ │User-Space │
+│0xB7 │183 │ │ │device │[16] │ │Ignored by │
+│ │ │ │ │ │ │ │kernel │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │ │ │Bit-Mapped Field │ │
+│ │ │ │ │ │ │ │ │
+│ │ │ │ │ │ │┌─────┬────────────┐│ │
+│ │ │ │ │ │ ││ Bit │ Meaning ││ │
+│ │ │ │ │ │ ││Value│ ││WriteMostly1│
+│ │ │ │ │ │ │├─────┼────────────┤│indicates │
+│ │ │ │ │ │ ││1 │WriteMostly1││that this │
+│ │ │ │ │ │ │├─────┼────────────┤│device │
+│ │ │ │ │ │ ││2 │(?) ││should only │
+│ │ │ │ │Per-Device │ │├─────┼────────────┤│be updated │
+│0xB8 │184 │1 │devflags │Flags │__u8 ││4 │(?) ││on writes, │
+│ │ │ │ │(Bit-Mapped │ │├─────┼────────────┤│not read │
+│ │ │ │ │Field) │ ││8 │(?) ││from. │
+│ │ │ │ │ │ │├─────┼────────────┤│(Useful with│
+│ │ │ │ │ │ ││16 │(?) ││slow devices│
+│ │ │ │ │ │ │├─────┼────────────┤│in RAID1 │
+│ │ │ │ │ │ ││32 │(?) ││arrays?) │
+│ │ │ │ │ │ │├─────┼────────────┤│ │
+│ │ │ │ │ │ ││64 │(?) ││ │
+│ │ │ │ │ │ │├─────┼────────────┤│ │
+│ │ │ │ │ │ ││128 │(?) ││ │
+│ │ │ │ │ │ │└─────┴────────────┘│ │
+├──────┼──────┼──────┼──────────────────┼────────────┼─────┼────────────────────┼────────────┤
+│ │ │ │ │ │ │ │Always set │
+│0xB9 -│185 - │7 │pad2 │Padding │__u8 │0 │to │
+│0xBF │191 │ │ │block 2 │[7] │ │zero when │
+│ │ │ │ │ │ │ │writing │
+└──────┴──────┴──────┴──────────────────┴────────────┴─────┴────────────────────┴────────────┘
+
+
+Section: Array-State Information area
+
+64 Bytes, Offset 192-255 (0xC0 - 0xFF)
+
+┌──────┬──────┬──────┬─────────────┬─────────────┬─────┬────────┬─────────────┐
+│Offset│Offset│Length│ │ │Data │ Data │ │
+│(Hex) │(Dec) │ (in │ Field Name │Usage/Meaning│Type │ Value │ Notes │
+│ │ │bytes)│ │ │ │ │ │
+├──────┼──────┼──────┼─────────────┼─────────────┼─────┼────────┼─────────────┤
+│ │ │ │ │ │ │low │ │
+│ │ │ │ │ │ │40-bits │ │
+│ │ │ │ │ │ │are │ │
+│0xC0 -│192 - │8 │utime │Fm │__u64│seconds │Nt │
+│0xC7 │199 │ │ │ │ │high │ │
+│ │ │ │ │ │ │24-bits │ │
+│ │ │ │ │ │ │are │ │
+│ │ │ │ │ │ │uSeconds│ │
+├──────┼──────┼──────┼─────────────┼─────────────┼─────┼────────┼─────────────┤
+│ │ │ │ │ │ │ │Updated │
+│ │ │ │ │ │ │ │whenever the │
+│ │ │ │ │ │ │ │superblock is│
+│ │ │ │ │ │ │ │updated. │
+│ │ │ │ │ │ │ │Used by mdadm│
+│0xC8 -│200 - │8 │events │Event Count │__u64│# │in │
+│0xCF │207 │ │ │for the Array│ │ │re-assembly │
+│ │ │ │ │ │ │ │to detect │
+│ │ │ │ │ │ │ │failed/ │
+│ │ │ │ │ │ │ │out-of-sync │
+│ │ │ │ │ │ │ │component │
+│ │ │ │ │ │ │ │devices. │
+├──────┼──────┼──────┼─────────────┼─────────────┼─────┼────────┼─────────────┤
+│ │ │ │ │Offsets │ │ │ │
+│ │ │ │ │before this │ │ │ │
+│ │ │ │ │one (starting│ │ │ │
+│0xD0 -│208 - │8 │resync_offset│from │__u64│offset #│ │
+│0xD7 │215 │ │ │data_offset) │ │ │ │
+│ │ │ │ │are 'known' │ │ │ │
+│ │ │ │ │to be in │ │ │ │
+│ │ │ │ │sync. │ │ │ │
+├──────┼──────┼──────┼─────────────┼─────────────┼─────┼────────┼─────────────┤
+│ │ │ │ │ │ │ │This value │
+│ │ │ │ │Checksum of │ │ │will be │
+│0xD8 -│216 - │ │ │this │ │ │different for│
+│0xDB │219 │4 │sb_csum │superblock up│__u32│# │each │
+│ │ │ │ │to devs │ │ │component │
+│ │ │ │ │[max_dev] │ │ │device's │
+│ │ │ │ │ │ │ │superblock. │
+├──────┼──────┼──────┼─────────────┼─────────────┼─────┼────────┼─────────────┤
+│ │ │ │ │How many │ │ │ │
+│0xDC -│220 - │ │ │devices are │ │ │ │
+│0xDF │223 │4 │max_dev │part of (or │__u32│# │ │
+│ │ │ │ │related to) │ │ │ │
+│ │ │ │ │the array │ │ │ │
+├──────┼──────┼──────┼─────────────┼─────────────┼─────┼────────┼─────────────┤
+│0xE0 -│224 - │ │ │Padding Block│__u8 │ │Always set to│
+│0xFF │255 │32 │pad3 │3 │[32] │0 │zero when │
+│ │ │ │ │ │ │ │writing │
+└──────┴──────┴──────┴─────────────┴─────────────┴─────┴────────┴─────────────┘
+
+
+Section: Device-Roles (Positions-in-Array) area
+
+Length: Variable number of bytes (but at least 768 bytes?)
+2 Bytes per device in the array, including both spare-devices and
+faulty-devices
+
+┌──────────────────────────────────────────────────────────────────────────────┐
+│ Section: Device-Roles (Positions-in-Array) area │
+├──────────────────────────────────────────────────────────────────────────────┤
+│(Variable length - 2 Bytes per Device in Array (including Spares/Faulty-Devs) │
+├──────────────────────────────────────────────────────────────────────────────┤
+│ │
+├────────┬───────┬──────┬─────────┬────────┬─────┬───────────────────────┬─────┤
+│ Offset │Offset │Length│ Field │ Usage/ │Data │ │ │
+│ (Hex) │ (Dec) │ (in │ Name │Meaning │Type │ Data Value │Notes│
+│ │ │bytes)│ │ │ │ │ │
+├────────┴───────┴──────┴─────────┴────────┴─────┴───────────────────────┴─────┤
+│ ?? Bytes, Offset 256-??? (0x100 - 0x???) │
+├────────┬───────┬──────┬─────────┬────────┬─────┬───────────────────────┬─────┤
+│ │ │ │ │ │ │Role or Position of │ │
+│0x100 - │256 │? │dev_roles│Fm │__u16│device in the array. │ │
+│0x??? │- ??? │ │ │ │ │0xFFFF means "spare". │ │
+│ │ │ │ │ │ │0xFFFE means "faulty". │ │
+└────────┴───────┴──────┴─────────┴────────┴─────┴───────────────────────┴─────┘
+Retrieved from "http://linux-raid.osdl.org/index.php/RAID_superblock_formats"
+
+ • This page was last modified 04:50, 3 June 2008.
+ • Content is available under GNU Free Documentation License 1.2.
+
diff --git a/debian/docs/rebuilding-raid.html b/debian/docs/rebuilding-raid.html
new file mode 100644
index 00000000..1d7b8c0d
--- /dev/null
+++ b/debian/docs/rebuilding-raid.html
@@ -0,0 +1,561 @@
+<?xml version="1.0" standalone="no"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" xmlns:dc="http://purl.org/dc/elements/1.1/" xml:lang="en" lang="en">
+ <head>
+ <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
+ <title>JD : /linux/rebuilding-raid</title>
+ <link rel="stylesheet" media="screen" type="text/css" href="http://www.davidpashley.com/css/ie.css" />
+ <link rel="stylesheet" media="print" type="text/css" href="http://www.davidpashley.com/css/print.css" />
+ <link xmlns="" rel="alternate" type="application/rss+xml" title="RSS 2.0" href="http://www.davidpashley.com/blog/?flav=rss" />
+ <link xmlns="" rel="alternate" type="text/xml" title="RSS .92" href="http://www.davidpashley.com/blog/?flav=rss" />
+ <link xmlns="" rel="alternate" type="application/atom+xml" title="Atom 0.3" href="http://www.davidpashley.com/blog/?flav=atom" />
+ </head>
+ <body>
+ <script type="text/javascript">
+<!--
+ function addGlobalStyle(css) {
+ var head, style;
+ head = document.getElementsByTagName('head')[0];
+ if (!head) { return; }
+ style = document.createElement('style');
+ style.type = 'text/css';
+ style.innerHTML = css;
+ head.appendChild(style);
+ }
+
+ var full_width = 0;
+
+ function make_max_width() {
+ if (!full_width) {
+ full_width = 1;
+ addGlobalStyle(' #page { max-width:100%; }');
+ } else {
+ full_width = 0;
+ addGlobalStyle(' #page { max-width:70em; }');
+ }
+ }
+ -->
+ </script>
+ <div id="page" style="width: expression(Math.min(parseInt(this.offsetWidth), 70*12 ) + 'px');">
+ <div id="header">
+ DavidPashley.com
+ </div>
+ <div xmlns="" id="menu">
+ <ul>
+ <li>
+ <a href="/" title="Home">Home</a>
+ </li>
+ <li>
+ <a href="/blog/" title="Blog">Blog</a>
+ </li>
+ <li>
+ <a href="/articles/" title="Articles">Articles</a>
+ </li>
+ <li>
+ <a href="/projects/" title="Projects">Projects</a>
+ </li>
+ <li>
+ <a href="/about/" title="About">About</a>
+ </li>
+ <li class="last">
+ <a href="/contact/" title="Contact">Contact</a>
+ </li>
+ </ul>
+ </div>
+ <div id="main">
+ <div xmlns="" id="sidebar">
+ <a href="javascript:make_max_width()">Use full width</a>
+ <div id="about">
+ <h1>About</h1>
+ <p><a href="mailto:david@davidpashley.com">David Pashley</a><br />
+ Systems Support<br />
+ Runtime Collective
+ </p>
+ <p>
+ <a href="http://www.davidpashley.com/blog/?flav=rss">
+ <img class="button" src="/images/rss10.png" width="80" height="15" alt="[RSS 1.0 Feed]" />
+ </a>
+ <a href="http://www.davidpashley.com/blog/?flav=atom">
+ <img class="button" src="/images/atom.png" width="80" height="15" alt="[RSS 2.0 Feed]" />
+ </a>
+ <a href="/foaf.rdf">
+ <img class="button" src="/images/foaf.png" width="80" height="15" alt="[FOAF Subscriptions]" />
+ </a>
+ <a href="http://www.catb.org/hacker-emblem/">
+ <img class="button" src="/images/hacker.png" width="80" height="15" alt="[Hacker]" />
+ </a>
+ </p>
+ </div>
+ </div>
+ <div xmlns="" id="body">
+ <div id="content">
+ <h1>Sat, 12 Jul 2008</h1>
+ <div class="entry"><h2>Rebuilding a RAID array</h2><div><p>I recently had a failed drive in my RAID1 array. I've just installed
+the replacement drive and thought I'd share the method.</p><p>Let's look at the current situation:</p><pre>
+root@ace:~# <b>cat /proc/mdstat</b>
+Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
+md1 : active raid1 sda3[1]
+ 483403776 blocks [2/1] [_U]
+
+md0 : active raid1 sda1[1]
+ 96256 blocks [2/1] [_U]
+
+unused devices: &lt;none&gt;
+</pre><p>So we can see we have two mirrored arrays, each with one drive missing.</p><p>Let's check that the kernel has recognised the second drive:</p><pre>
+root@ace:~# <b>dmesg | grep sd</b>
+[ 21.465395] Driver 'sd' needs updating - please use bus_type methods
+[ 21.465486] sd 2:0:0:0: [sda] 976773168 512-byte hardware sectors (500108 MB)
+[ 21.465496] sd 2:0:0:0: [sda] Write Protect is off
+[ 21.465498] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
+[ 21.465512] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+[ 21.465562] sd 2:0:0:0: [sda] 976773168 512-byte hardware sectors (500108 MB)
+[ 21.465571] sd 2:0:0:0: [sda] Write Protect is off
+[ 21.465573] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
+[ 21.465587] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+[ 21.465590] sda: sda1 sda2 sda3
+[ 21.487248] sd 2:0:0:0: [sda] Attached SCSI disk
+[ 21.487303] sd 2:0:1:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
+[ 21.487314] sd 2:0:1:0: [sdb] Write Protect is off
+[ 21.487317] sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
+[ 21.487331] sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+[ 21.487371] sd 2:0:1:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
+[ 21.487381] sd 2:0:1:0: [sdb] Write Protect is off
+[ 21.487382] sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
+[ 21.487403] sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+[ 21.487407] sdb: unknown partition table
+[ 21.502763] sd 2:0:1:0: [sdb] Attached SCSI disk
+[ 21.506690] sd 2:0:0:0: Attached scsi generic sg0 type 0
+[ 21.506711] sd 2:0:1:0: Attached scsi generic sg1 type 0
+[ 21.793835] md: bind&lt;sda1&gt;
+[ 21.858027] md: bind&lt;sda3&gt;
+</pre><p>So, sda has three partitions, sda1, sda2 and sda3, and sdb has no partition
+table. Let's give it the same partition table as sda. The easiest way to do this is with
+<tt>sfdisk</tt>:</p><pre>
+root@ace:~# <b>sfdisk -d /dev/sda | sfdisk /dev/sdb</b>
+Checking that no-one is using this disk right now ...
+OK
+
+Disk /dev/sdb: 60801 cylinders, 255 heads, 63 sectors/track
+
+sfdisk: ERROR: sector 0 does not have an MSDOS signature
+ /dev/sdb: unrecognised partition table type
+Old situation:
+No partitions found
+New situation:
+Units = sectors of 512 bytes, counting from 0
+
+ Device Boot Start End #sectors Id System
+/dev/sdb1 * 63 192779 192717 fd Linux RAID autodetect
+/dev/sdb2 192780 9960299 9767520 82 Linux swap / Solaris
+/dev/sdb3 9960300 976768064 966807765 fd Linux RAID autodetect
+/dev/sdb4 0 - 0 0 Empty
+Successfully wrote the new partition table
+
+Re-reading the partition table ...
+
+If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
+to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
+(See fdisk(8).)
+</pre><p>If we look at <tt>dmesg</tt> now to check that it's worked, we'll see:</p><pre>
+root@ace:~# <b>dmesg | grep sd</b>
+...
+[ 224.246102] sd 2:0:1:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
+[ 224.246322] sd 2:0:1:0: [sdb] Write Protect is off
+[ 224.246325] sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
+[ 224.246547] sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+[ 224.246686] sdb: unknown partition table
+[ 227.326278] sd 2:0:1:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
+[ 227.326504] sd 2:0:1:0: [sdb] Write Protect is off
+[ 227.326507] sd 2:0:1:0: [sdb] Mode Sense: 00 3a 00 00
+[ 227.326703] sd 2:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
+[ 227.326708] sdb: sdb1 sdb2 sdb3
+</pre><p>So, now we have identical partition tables on both disks. The next thing to do is to add the new partitions to the arrays:</p><pre>
+root@ace:~# <b>mdadm /dev/md0 --add /dev/sdb1</b>
+mdadm: added /dev/sdb1
+root@ace:~# <b>mdadm /dev/md1 --add /dev/sdb3</b>
+mdadm: added /dev/sdb3
+</pre><p>Everything looks good. Let's check <tt>dmesg</tt>:</p><pre>
+[ 323.941542] md: bind&lt;sdb1&gt;
+[ 324.038183] RAID1 conf printout:
+[ 324.038189] --- wd:1 rd:2
+[ 324.038192] disk 0, wo:1, o:1, dev:sdb1
+[ 324.038195] disk 1, wo:0, o:1, dev:sda1
+[ 324.038300] md: recovery of RAID array md0
+[ 324.038303] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
+[ 324.038305] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
+[ 324.038310] md: using 128k window, over a total of 96256 blocks.
+[ 325.417219] md: md0: recovery done.
+[ 325.453629] RAID1 conf printout:
+[ 325.453632] --- wd:2 rd:2
+[ 325.453634] disk 0, wo:0, o:1, dev:sdb1
+[ 325.453636] disk 1, wo:0, o:1, dev:sda1
+[ 347.970105] md: bind&lt;sdb3&gt;
+[ 348.004566] RAID1 conf printout:
+[ 348.004571] --- wd:1 rd:2
+[ 348.004573] disk 0, wo:1, o:1, dev:sdb3
+[ 348.004574] disk 1, wo:0, o:1, dev:sda3
+[ 348.004657] md: recovery of RAID array md1
+[ 348.004659] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
+[ 348.004660] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
+[ 348.004664] md: using 128k window, over a total of 483403776 blocks.
+</pre><p>Everything still looks good. Let's sit back and watch it rebuild using the wonderfully useful <tt>watch</tt> command:</p><pre>
+root@ace:~# <b>watch -n 1 cat /proc/mdstat</b>
+Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
+md1 : active raid1 sdb3[2] sda3[1]
+ 483403776 blocks [2/1] [_U]
+ [=====&gt;...............] recovery = 26.0% (126080960/483403776) finish=96.2min speed=61846K/sec
+
+md0 : active raid1 sdb1[0] sda1[1]
+ 96256 blocks [2/2] [UU]
+
+unused devices: &lt;none&gt;
+</pre><p>The Ubuntu and Debian installers will allow you to create RAID1 arrays
+with fewer drives than you actually have, so you can use this technique
+if you plan to add an additional drive after you've installed the
+system. Just tell it the eventual number of drives, but only select the
+available partitions during RAID setup. I used this method when a new machine
+didn't have enough SATA power cables and I had to wait for an adaptor to
+be delivered.</p><p><small>(Why did no one tell me about <tt>watch</tt> until recently? I wonder
+how many more incredibly useful programs I've not discovered, even after 10
+years of using Linux.)</small></p></div>
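The whole procedure above boils down to three commands. Here is a small shell sketch that only prints the commands for review rather than running them; the device names (`/dev/sda` as the surviving disk, `/dev/sdb` as the replacement) and the partition-to-array mapping (partition 1 for md0, partition 3 for md1) are taken from this particular machine, so adjust them before running anything for real:

```shell
#!/bin/sh
# Print the rebuild commands for review rather than executing them.
# $1 = surviving disk, $2 = replacement disk. The partition numbers
# below match this article's layout (md0 on partition 1, md1 on
# partition 3) -- adapt them to your own arrays.
rebuild_commands() {
    surviving=$1
    replacement=$2
    # Clone the partition table from the surviving disk.
    echo "sfdisk -d $surviving | sfdisk $replacement"
    # Re-add the replacement's partitions to the degraded arrays.
    echo "mdadm /dev/md0 --add ${replacement}1"
    echo "mdadm /dev/md1 --add ${replacement}3"
    # Watch the resync until both arrays show [UU].
    echo "watch -n 1 cat /proc/mdstat"
}

rebuild_commands /dev/sda /dev/sdb
```

Piping `sfdisk -d` output straight into `sfdisk` on the new disk is exactly what the transcript above does; saving the dump to a file first (`sfdisk -d /dev/sda > sda.dump`) is a cheap extra safety net.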
+ [<a href="http://www.davidpashley.com/blog/tags/linux" rel="tag">linux</a>, <a href="http://www.davidpashley.com/blog/tags/mdadm" rel="tag">mdadm</a>, <a href="http://www.davidpashley.com/blog/tags/RAID" rel="tag">RAID</a>] | <a href="http://www.davidpashley.com/blog/linux/rebuilding-raid" title="Permalink"># Read Comments (3)</a> |
+ <div class="archives"><div id="relatedstories"><h2>Related Stories</h2><p><a href="http://www.davidpashley.com/blog/linux/copying-files-with-netcat">Copying files with netcat</a><br /><a href="http://www.davidpashley.com/blog/computing/network-troubleshooting">Network Troubleshooting Article</a><br /></p></div></div>
+ <div class="archives">
+
+ </div>
+
+</div>
+ <h2>Comments</h2>
+ <div class="blosxomComments">
+ <div class="blosxomComment"><!-- Rebuilding a RAID array --><a name="1215931866.99" id="1215931866.99"></a>
+ One extra step that I do is install an MBR on the new disk, to make it bootable:<br />
+<br />
+install-mbr /dev/sdb<br />
+ Posted by <a href="http://gedmin.as">Marius Gedminas</a> at Sun Jul 13 07:51:06 2008
+ </div>
+ <div class="blosxomComment"><!-- Rebuilding a RAID array --><a name="1215971647.83" id="1215971647.83"></a>
+ Great article!<br />
+Maybe it would be even more useful if merged here:<br />
+<a href="http://linux-raid.osdl.org/index.php/Reconstruction">http://linux-raid.osdl.org/index.php/Reconstruction</a><br />
+ Posted by <a href="http://edpeur.blogspot.com/">Eduardo Pérez Ureta</a> at Sun Jul 13 18:54:07 2008
+ </div>
+ <div class="blosxomComment"><!-- Rebuilding a RAID array --><a name="1216035050.47" id="1216035050.47"></a>
+ Semi-tangential note about performance: on my home (== partly "play") machine, I've found that "mdadm --manage .. --fail"-ing the root partition before doing lots of package upgrades (installing KDE 4/experimental and lots of other updates in my case, on a mostly etch system; dual-screen support sucks if the screens aren't the same size, btw!) speeds up apt considerably, while the subsequent reconstruction step (--remove and then --add the partition) doesn't slow the system down much under a light desktop workload.<br />
+<br />
+My system is a few years old (no SATA, and probably not much cache on the disks either) and has only 512M of RAM, so maybe a better-equipped system would make this less noticeable.<br />
+<br />
+(... and no, I probably wouldn't force-fail part of my /home partition for any length of time :-)<br />
+ Posted by <a href="http://fortytwo.ch/">cmot</a> at Mon Jul 14 12:30:50 2008
+ </div>
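<p>The fail/remove/add cycle the commenter describes can be sketched as below. This is a hypothetical sequence, not from the post: the array and partition names (<tt>/dev/md0</tt>, <tt>/dev/sda2</tt>) are placeholders, and the commands are built as strings rather than executed, since running them deliberately degrades a live array.</p>

```shell
# Build the three mdadm invocations as strings; running them for real
# requires root and degrades a working mirror, so don't do this on data
# you care about.
fail_cmd="mdadm --manage /dev/md0 --fail /dev/sda2"     # degrade the mirror
remove_cmd="mdadm --manage /dev/md0 --remove /dev/sda2" # detach the member
add_cmd="mdadm --manage /dev/md0 --add /dev/sda2"       # re-add: resync starts

# The heavy apt run would happen between failing and re-adding the member.
printf '%s\n' "$fail_cmd" "$remove_cmd" "$add_cmd"
```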
+ <br />
+ <div class="blosxomCommentForm">
+ <form method="post" action="http://www.davidpashley.com/blog/linux/rebuilding-raid" id="comments_form">
+ <p><input type="hidden" name="secretToken" value="pleaseDontSpam" /><input name="parent" type="hidden" value="linux/rebuilding-raid" /><input name="title" type="hidden" value="Rebuilding a RAID array" />
+ Name:<br />
+ <input maxlength="50" name="author" size="50" type="text" value="" /><br />
+ <br />
+ E-mail:<br />
+ <input maxlength="75" name="email" size="50" type="text" value="" /><br />
+ <br />
+ URL:<br />
+
+ <input maxlength="100" name="url" size="50" type="text" value="" /><br />
+ <br />
+ Comment:<br />
+ <textarea cols="50" name="body" rows="12"></textarea><br />
+ <br />
+ Please enter "fudge" to prove you are a human
+ <input size="50" type="text" name="human" />
+ <br />
+ <input name="Submit" type="submit" value="Submit" />
+ <!-- <input name="preview" type="submit" value="Preview" />
+ <input type="button" onclick="forgetMe(this.form)" value="Clear Info" />
+ <input type="checkbox" name="bakecookie" />Remember info?</input> -->
+ </p>
+ </form>
+ </div>
+ </div>
+ </div>
+ </div>
+ <div xmlns="" id="footer">Copyright 2004,2005,2006,2007,2008 David Pashley<br />
+ All Rights Reserved
+ </div>
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/debian/mdadm.docs b/debian/mdadm.docs
index 1d1cdc50..39333b61 100644
--- a/debian/mdadm.docs
+++ b/debian/mdadm.docs
@@ -1,4 +1,4 @@
-docs/*
+debian/docs/*
TODO
debian/README.recipes
debian/README.initramfs-transition
diff --git a/debian/rules b/debian/rules
index 0d0c2800..d9c36f0f 100755
--- a/debian/rules
+++ b/debian/rules
@@ -6,7 +6,7 @@
#export DH_VERBOSE=1
-TG_BRANCHES="contrib/docs/raid5-vs-raid10 contrib/docs/superblock_formats contrib/docs/md.txt contrib/docs/jd-rebuilding-raid debian/conffile-location debian/disable-udev-incr-assembly debian/no-Werror"
+TG_BRANCHES="debian/conffile-location debian/disable-udev-incr-assembly debian/no-Werror"
-include /usr/share/topgit/tg2quilt.mk
RUNDIR = /run/mdadm