Recently I've had to back up and restore data from a failing drive with LVM over RAID.
Luckily I had access to a backup of the current metadata configuration, located in "/etc/lvm/backup/".
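If the files under /etc/lvm/backup/ are ever missing, LVM usually keeps older copies of the same metadata under /etc/lvm/archive/, and both can be listed with vgcfgrestore (assuming the volume group is named vg0, as here):

# list all known metadata backups/archives for vg0
vgcfgrestore --list vg0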
Below is what the volume group looked like:
vg0 {
    id = "xvni1W-24Xu-dVoR-PlXh-gQvQ-62fL-QX64O3"
    seqno = 9
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 65536 # 32 Megabytes
    max_lv = 0
    max_pv = 0

    physical_volumes {
        pv0 {
            id = "9gbyhX-Owvj-u4Q4-wR1E-IEf2-gyUA-CJBCJK"
            device = "/dev/md3" # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 1928892288 # 919.768 Gigabytes
            pe_start = 384
            pe_count = 29432 # 919.75 Gigabytes
        }
    }

    logical_volumes {
        lv0_sites {
            id = "Sg1fYr-NTzr-8AA2-v29K-tcz5-rUMj-uRoXY1"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 1280 # 40 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }

        lv0_m {
            id = "scNeN4-4bmg-Y6kq-zKuO-n8B8-s8mw-FTUYqk"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 12800 # 400 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 1280
                ]
            }
        }
    }
}
Now, to extract the data with dd, use the formula below (this will only work for a linear volume, i.e. stripe_count = 1). All values are in 512-byte sectors, and stripes here means the starting extent offset, i.e. the second value in the segment's stripes array:
skip=$[extent_size*stripes+pe_start] count=$[extent_size*(extent_count-1)]
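As a quick sanity check, plugging in the lv0_sites numbers from the metadata above (stripe offset 0, extent_count 1280):

# skip: 65536*0 + 384 = 384 sectors
echo $[65536*0+384]
# count: 65536*(1280-1) = 83820544 sectors
echo $[65536*(1280-1)]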
So to get the lv0_m data off of the volume:
dd if=/dev/sdb4 of=/opt/bak/lv0_m.iso bs=512 skip=$[65536*1280+384] count=$[65536*(12800-1)] conv=sync,noerror
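Once dd finishes, it's worth checking that the extraction actually landed on a filesystem before trying to mount it; if the offsets were right, file should report an ext3 signature:

# should report something like "Linux rev 1.0 ext3 filesystem data"
file /opt/bak/lv0_m.iso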
The image can then be loop-mounted via:
mount -o loop -t ext3 /opt/bak/lv0_m.iso /mnt/lv0_m
You should then be able to see all the files in the mount point, which can then be used for data restoration.
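Since the image came off a failing drive, a read-only filesystem check and a read-only mount are probably safer before relying on the data; something along these lines (same paths as above):

# read-only check of the image; reports problems, fixes nothing
e2fsck -fn /opt/bak/lv0_m.iso
# mount read-only so nothing writes to the recovered image
mount -o loop,ro -t ext3 /opt/bak/lv0_m.iso /mnt/lv0_m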
Why extent_count - 1?
I'm curious why you're subtracting 1 from the extent_count. Doesn't this end up leaving the final extent of the lv out of the output image?
For example, say the source lv has two segments instead of one:
lv0_m {
    id = "scNeN4-4bmg-Y6kq-zKuO-n8B8-s8mw-FTUYqk"
    status = ["READ", "WRITE", "VISIBLE"]
    flags = []
    segment_count = 2

    segment1 {
        start_extent = 0
        extent_count = 12000 # 375 Gigabytes
        type = "striped"
        stripe_count = 1 # linear
        stripes = [
            "pv0", 1280
        ]
    }

    segment2 {
        start_extent = 12000
        extent_count = 800 # 375 + 25 = 400 Gigabytes
        type = "striped"
        stripe_count = 1 # linear
        stripes = [
            "pv0", 13280
        ]
    }
}
...so to handle this properly (if the segments actually were, for instance, on different PVs), one would need to use two dd's:
dd if=/dev/sdb4 of=/opt/bak/lv0_m.iso bs=512 skip=$[65536*1280+384] count=$[65536*(12000-1)] conv=sync,noerror
dd if=/dev/sdb4 of=/opt/bak/lv0_m.iso bs=512 skip=$[65536*13280+384] seek=$[65536*12000] count=$[65536*(800-1)] conv=sync,noerror
(note: I have no idea if seek on an output file will append properly; presumably the second dd would also want conv=notrunc, to be sure it doesn't truncate the already-written data)
If you map this out, you'll notice that the image effectively ends up with an extent-sized hole between the first and second segments. For an even more obvious example: if for some reason a segment were only one extent large, the command above would copy nothing at all.
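For what it's worth, here's a rough sketch of how the per-segment extraction could be scripted, assuming my reading is right and the full extent_count should be copied (no -1). The copy_segment helper is just illustrative; the constants are the ones from the metadata above:

#!/bin/sh
# constants from the metadata backup, in 512-byte sectors
EXTENT_SIZE=65536
PE_START=384

# copy_segment <pv_device> <image> <stripe_offset> <start_extent> <extent_count>
copy_segment() {
    pv=$1; img=$2; offset=$3; start=$4; count=$5
    dd if="$pv" of="$img" bs=512 \
        skip=$((EXTENT_SIZE * offset + PE_START)) \
        seek=$((EXTENT_SIZE * start)) \
        count=$((EXTENT_SIZE * count)) \
        conv=sync,noerror,notrunc
}

# the two segments of the hypothetical lv0_m above
copy_segment /dev/sdb4 /opt/bak/lv0_m.iso 1280  0     12000
copy_segment /dev/sdb4 /opt/bak/lv0_m.iso 13280 12000 800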