LVM thin pools are great. I love them. They offer a great deal of flexibility, especially combined with autoextend functionality. The idea is to keep your pools as small as possible and let them autoextend whenever needed. So if the pool is always as small as possible, why would you need to shrink it?
In my case I made the mistake of putting my backup disk in the same volume group as my main disk pool. When the backup disk was almost full, it automatically autoextended onto my main disk pool, which is RAID. My backup was now at greater risk because it was spread over more disks: if one disk failed, the backup was gone. So I reduced the size of the backups and wanted to shrink this backup pool so it could fit onto one disk again (and move it to its own volume group).
Obviously there is nothing that important on this server, since I was willing to risk it for this experiment. And in fact, I did screw up once by selecting the wrong pool in one of the steps; I had to restore the system from the backup while it still worked.
thin_shrink
In my research I stumbled across a very recent tool called thin_shrink. It is part of the same family of thin provisioning tools as thin_check, which you are probably already familiar with. However, it is just not released yet. The tool is based on previous work by Nikhil Kshirsagar and will do most of the heavy lifting of this process.
So I compiled the tools using the latest commit and went experimenting with thin_shrink. If you decide to experiment with it yourself, I will provide my pdata_tools binary (commit: 749c86a) for x86_64 so you don’t have to compile it from source. This binary contains all the tools as subcommands (e.g. pdata_tools thin_check). Only use the binary if you trust me though… Although the binary is not static, as compiled it only depends on Glibc version 2.28 or later, Linux kernel 4.11 or later and the GCC support library version 4.2 or later. Any decently recent system (Debian Buster era or later) should be fine.
Furthermore, I assume you already have LVM2 and thin-provisioning-tools installed. Otherwise, what are you even doing here?
About this experiment
In case the title wasn’t clear: the operations performed in this experiment are really dangerous. I expect you to be very familiar with LVM thin pools and to know how they are made up, stored on disk, and so on. If you screw up even once, your data will likely be gone. Or if you, like me, specify the wrong disk, that data will be gone. You should consider this experiment dd-level dangerous. Therefore it’s best to do it from some sort of live CD system or in the initramfs. And while being well rested… If this doesn’t sound like you, use this purely as reading material to gain some insight into LVM thin pool internals.
I performed it successfully on a live system, so it can be done, but especially if there is some LVM process actively managing your volumes, things could go wrong. This risk could be reduced by locking the volume groups you are working on, but I have no idea how this is done (or how to allow yourself access to locked groups).
Simple scenario
The first example is a quite simple scenario to explain the idea. Afterwards there is a slightly more complicated and realistic scenario. The current situation is a single physical volume fully used by a thin pool (pvdisplay -m):
--- Physical volume ---
PV Name /dev/sda2
VG Name vg
PV Size 11.34 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2903
Free PE 0
Allocated PE 2903
PV UUID 2QsyDS-VPvq-U1GJ-1DKl-y1qP-YDx0-XQ8wnm
--- Physical Segments ---
Physical extent 0 to 2:
Logical volume /dev/vg/lvol0_pmspare
Logical extents 0 to 2
Physical extent 3 to 2899:
Logical volume /dev/vg/tpool0_tdata
Logical extents 0 to 2896
Physical extent 2900 to 2902:
Logical volume /dev/vg/tpool0_tmeta
Logical extents 0 to 2
And an overview of lvs -a as well:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
[lvol0_pmspare] vg ewi------- 12.00m
mythin vg Vwi-a-tz-- 50.00g tpool0 1.40
tpool0 vg twi-aotz-- <11.32g 6.20 13.57
[tpool0_tdata] vg Twi-ao---- <11.32g
[tpool0_tmeta] vg ewi-ao---- 12.00m
Prerequisites
First we need to unmount all filesystems using this thin pool. In this case that means umount /dev/vg/mythin.
Then make all logical volumes using this thin pool inactive: lvchange -an vg/mythin. However, leave the thin pool itself (tpool0) active.
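Put together, the preparation for this simple scenario boils down to the following commands (names from the example above):

umount /dev/vg/mythin       # unmount every filesystem that lives on the thin pool
lvchange -an vg/mythin      # deactivate the thin volume(s)
# the pool itself (tpool0) stays active for now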
Some dmsetup magic
LVM does not let us keep the tdata and tmeta hidden volumes active (and writable) while the tpool is inactive. Therefore we have to use a more low-level tool called dmsetup, which LVM sort of uses under the hood. Where LVM uses a forward slash / to separate volume group and logical volume, dmsetup uses a hyphen -.
Confirm no one is using the thin pool anymore: dmsetup info -c vg-tpool0. Make sure the number in the Open column is zero:
Name Maj Min Stat Open Targ Event UUID
vg-tpool0 254 3 L--r 0 1 0 LVM-L2NmDISLluoqDkXVxMzmFMMIsUIReAqumfROKSFGPko4S6uWQfckRy3hwjmQlcFN-pool
Now remove the thin pool table from the system: dmsetup remove vg-tpool0. This does not remove anything on disk, it just unmaps it from the system. You could read ‘forget’ instead of ‘remove’.
Confirm no one is using the pool device itself anymore: dmsetup info -c vg-tpool0-tpool (note the extra tpool). Again, make sure the number in the Open column is zero:
Name Maj Min Stat Open Targ Event UUID
vg-tpool0-tpool 254 2 L--w 0 1 0 LVM-L2NmDISLluoqDkXVxMzmFMMIsUIReAqumfROKSFGPko4S6uWQfckRy3hwjmQlcFN-tpool
Remove the pool device itself as well: dmsetup remove vg-tpool0-tpool.
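So the whole dmsetup dance for this example, in order:

dmsetup info -c vg-tpool0          # the Open column must be 0
dmsetup remove vg-tpool0           # 'forget' the thin pool table
dmsetup info -c vg-tpool0-tpool    # the Open column must be 0 here as well
dmsetup remove vg-tpool0-tpool     # 'forget' the pool device itself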
Modifying the metadata
Now we have to dump the metadata of the pool: thin_dump --format xml -o metadata /dev/mapper/vg-tpool0_tmeta. This will create a file called metadata. Note that we can’t use the /dev/vg.../... notation anymore and have to use the underlying dm-thin device /dev/mapper/... instead.
Next there will be some math involved. All the calculations in the next part are manual, so if you miscalculate anything, well, you’re basically screwed. While experimenting I was always able to recover by restoring the original files (metadata or volume group backups), but this can (read: will) still erase your data, especially if there are still volumes active on the same volume group.
We have to get the size of a PE in KiB. I am using a PE size of 4 MiB = 4096 KiB. The data in the thin pool is also split into so-called data blocks. By default (on Debian at least) data blocks are 64 KiB. Get your data block size from the metadata file we created: head metadata | grep superblock. The data_block_size entry contains the size of the data blocks in 512-byte sectors (e.g. 128 512-byte sectors is 64 KiB). From this same command we also need to extract the number of data blocks: look for the nr_data_blocks entry.
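The line you are grepping for will look roughly like this (values from this example; the other superblock attributes are elided here and may differ between versions of the tools):

$ head metadata | grep superblock
<superblock ... data_block_size="128" nr_data_blocks="185408">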
For my example we now have the following data:
- PE_SIZE: 4096 KiB
- DATA_BLOCK_SIZE: 64 KiB
- NR_DATA_BLOCKS: 185408
Let’s say we want to shrink the thin pool by 1000 extents (= 1000 * PE_SIZE = 4096000 KiB = 4096000 / DATA_BLOCK_SIZE data blocks = 64000 data blocks). Therefore our new NR_DATA_BLOCKS is NR_DATA_BLOCKS - 64000 = 121408.
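The same arithmetic as a small shell sketch (values from this example; the extra variable names are mine):

PE_SIZE=4096                 # PE size in KiB (4 MiB)
DATA_BLOCK_SECTORS=128       # data_block_size from the metadata dump, in 512-byte sectors
NR_DATA_BLOCKS=185408        # nr_data_blocks from the metadata dump
SHRINK_EXTENTS=1000          # number of PEs we want to give back

DATA_BLOCK_SIZE=$(( DATA_BLOCK_SECTORS * 512 / 1024 ))            # 64 KiB
SHRINK_BLOCKS=$(( SHRINK_EXTENTS * PE_SIZE / DATA_BLOCK_SIZE ))   # 64000 data blocks
NEW_NR_DATA_BLOCKS=$(( NR_DATA_BLOCKS - SHRINK_BLOCKS ))          # 121408
echo "$NEW_NR_DATA_BLOCKS"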
Modifying the data
Now we are going to use the new, unreleased tool thin_shrink from thin-provisioning-tools. It is a subcommand of the pdata_tools binary. It takes the metadata file as input and writes an updated metadata file as output (metadata_shrinked). It also needs the new number of data blocks we just calculated and the location of the actual thin pool data device: ./pdata_tools thin_shrink --input metadata --output metadata_shrinked --data /dev/mapper/vg-tpool0_tdata --nr-blocks 121408.
What this tool basically does is check for mappings in your pool beyond the new number of data blocks you specified, copy the referenced data to a region below the new number of data blocks that is not already in use, and then update the mapping in the metadata. In the end there should be no more mappings beyond your specified number of data blocks (assuming you have enough free space in your pool, of course). I’m unsure how flexible the tool already is with mappings (can it split ranges?), but looking at the TODO there are no unchecked thin_shrink items anymore. As the tool only copies data to unused locations, it should be pretty safe to use if you specify the correct data device.
After the tool is done copying your data, you’ll have a new metadata file (metadata_shrinked).
Restoring the metadata
Now we have to restore the updated metadata, but also with the new number of data blocks: thin_restore -i metadata_shrinked -o /dev/mapper/vg-tpool0_tmeta --nr-data-blocks 121408.
We can now make the tdata and tmeta volumes inactive as we don’t need them anymore: lvchange -an vg/tpool0_tdata and lvchange -an vg/tpool0_tmeta.
Modifying the volume group
Now that the actual thin pool is shrunk, we need to make the volume group aware of these changes as well. This might be the trickiest part in my opinion, as it requires manual editing of the volume group backup file.
Back up the volume group (vgcfgbackup vg -f vg_backup) and copy this file (cp vg_backup vg_shrinked). The original might be useful if you screw up…
Now, in this file, look for the tpool0_tdata section and reduce the extent_count of the last (in this example, only) segment by the number of PEs we reduced our pool by. So in this case from 2897 to 1897. The last segment is not necessarily the last numbered segment: look for the segment with the highest start_extent in this section. In the more advanced example later on there will be some example files.
In the same tpool0_tdata section you might need to reduce the segment_count if you have removed complete segments. I’ll mention it here for completeness, but that is not the case for this example. It will be in the advanced example.
Finally we need to reduce the extent_count in the tpool0 section by the number of PEs we reduced our pool by as well. Also from 2897 to 1897 in this case.
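To make this concrete, here is a sketch of what the two edited snippets in vg_shrinked would look like for this simple scenario (only the relevant lines are shown; the surrounding fields stay untouched):

# in the tpool0 section:
segment1 {
        start_extent = 0
        extent_count = 1897        # was 2897
        type = "thin-pool"
        ...
}

# in the tpool0_tdata section:
segment1 {
        start_extent = 0
        extent_count = 1897        # was 2897
        type = "striped"
        ...
}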
Now we restore this changed volume group definition: vgcfgrestore --force -f vg_shrinked vg.
Profit
In theory we should be done! You can activate the volumes (lvchange -ay vg/mythin) and make sure no errors occur. If they do, restore the original metadata and volume group and hope your data isn’t lost.
lvs -a afterwards:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
[lvol0_pmspare] vg ewi------- 12.00m
mythin vg Vwi-a-tz-- 50.00g tpool0 1.40
tpool0 vg twi-aotz-- 7.41g 9.47 13.54
[tpool0_tdata] vg Twi-ao---- 7.41g
[tpool0_tmeta] vg ewi-ao---- 12.00m
pvdisplay -m afterwards, which now shows the 1000 free PEs:
--- Physical volume ---
PV Name /dev/sda2
VG Name vg
PV Size 11.34 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 2903
Free PE 1000
Allocated PE 1903
PV UUID 2QsyDS-VPvq-U1GJ-1DKl-y1qP-YDx0-XQ8wnm
--- Physical Segments ---
Physical extent 0 to 2:
Logical volume /dev/vg/lvol0_pmspare
Logical extents 0 to 2
Physical extent 3 to 1899:
Logical volume /dev/vg/tpool0_tdata
Logical extents 0 to 1896
Physical extent 1900 to 2899:
FREE
Physical extent 2900 to 2902:
Logical volume /dev/vg/tpool0_tmeta
Logical extents 0 to 2
More advanced scenario
In this more advanced scenario I will only explain the differences from the previous scenario.
This is our new situation. The thin pool data is now divided into three segments with dummy volumes in between (pvdisplay -m):
--- Physical volume ---
PV Name /dev/sda2
VG Name vg
PV Size 11.34 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2903
Free PE 0
Allocated PE 2903
PV UUID 2QsyDS-VPvq-U1GJ-1DKl-y1qP-YDx0-XQ8wnm
--- Physical Segments ---
Physical extent 0 to 2:
Logical volume /dev/vg/lvol0_pmspare
Logical extents 0 to 2
Physical extent 3 to 1899:
Logical volume /dev/vg/tpool0_tdata
Logical extents 0 to 1896
Physical extent 1900 to 1949:
Logical volume /dev/vg/lvol1
Logical extents 0 to 49
Physical extent 1950 to 2449:
Logical volume /dev/vg/tpool0_tdata
Logical extents 1897 to 2396
Physical extent 2450 to 2499:
Logical volume /dev/vg/lvol2
Logical extents 0 to 49
Physical extent 2500 to 2899:
Logical volume /dev/vg/tpool0_tdata
Logical extents 2397 to 2796
Physical extent 2900 to 2902:
Logical volume /dev/vg/tpool0_tmeta
Logical extents 0 to 2
- Dump the metadata: thin_dump --format xml -o metadata /dev/mapper/vg-tpool0_tmeta
This is the new data:
- PE_SIZE: 4096 KiB
- DATA_BLOCK_SIZE: 64 KiB
- NR_DATA_BLOCKS: 179008
We are going to shrink the thin pool by 750 PEs this time (= 3072000 KiB = 48000 data blocks).
- Recalculate the new NR_DATA_BLOCKS: 179008 - 48000 = 131008.
- Thin-shrink the data of the pool: ./pdata_tools thin_shrink --input metadata --output metadata_shrinked --data /dev/mapper/vg-tpool0_tdata --nr-blocks 131008.
- Restore the new metadata with the new number of data blocks: thin_restore -i metadata_shrinked -o /dev/mapper/vg-tpool0_tmeta --nr-data-blocks 131008.
- Back up the volume group (vgcfgbackup vg -f vg_backup) and copy it (cp vg_backup vg_shrinked).
Our volume group now looks like this:
vg_backup:
# Generated by LVM2 version 2.03.11(2) (2021-01-08): Tue Aug 9 17:22:02 2022
contents = "Text Format Volume Group"
version = 1
description = "vgcfgbackup vg -f vg_backup"
creation_host = "debian" # Linux debian 5.10.0-16-amd64 #1 SMP Debian 5.10.127-2 (2022-07-23) x86_64
creation_time = 1660058522 # Tue Aug 9 17:22:02 2022
vg {
id = "L2NmDI-SLlu-oqDk-XVxM-zmFM-MIsU-IReAqu"
seqno = 11
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "2QsyDS-VPvq-U1GJ-1DKl-y1qP-YDx0-XQ8wnm"
device = "/dev/sda2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 23789568 # 11.3438 Gigabytes
pe_start = 2048
pe_count = 2903 # 11.3398 Gigabytes
}
}
logical_volumes {
tpool0 {
id = "mfROKS-FGPk-o4S6-uWQf-ckRy-3hwj-mQlcFN"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 2797 # 10.9258 Gigabytes
type = "thin-pool"
metadata = "tpool0_tmeta"
pool = "tpool0_tdata"
transaction_id = 1
chunk_size = 128 # 64 Kilobytes
discards = "passdown"
zero_new_blocks = 1
}
}
mythin {
id = "gmg1PY-Qfgf-RTMh-z041-GeRi-plMq-tMkE8y"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660053424 # 2022-08-09 15:57:04 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 12800 # 50 Gigabytes
type = "thin"
thin_pool = "tpool0"
transaction_id = 0
device_id = 1
}
}
lvol1 {
id = "6qrwIW-HysC-U6dg-7heq-sIDU-dPSI-WD2I98"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660058009 # 2022-08-09 17:13:29 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50 # 200 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 1900
]
}
}
lvol2 {
id = "woY9hl-Lr7u-PAb7-J5W8-cJvy-gY8I-3aNr28"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660058040 # 2022-08-09 17:14:00 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50 # 200 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 2450
]
}
}
lvol0_pmspare {
id = "HP00dk-NWL9-6aAl-DA3n-fumB-s2In-nsCU0L"
status = ["READ", "WRITE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 3 # 12 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
tpool0_tmeta {
id = "rG2jw4-kqCW-YSSp-ItRo-aaca-wywp-7YjaKh"
status = ["READ", "WRITE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 3 # 12 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 2900
]
}
}
tpool0_tdata {
id = "kJBwlz-vLMx-PlTG-fSZo-O2w1-FPDU-v11II1"
status = ["READ", "WRITE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 3
segment1 {
start_extent = 0
extent_count = 1897 # 7.41016 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 3
]
}
segment2 {
start_extent = 1897
extent_count = 500 # 1.95312 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 1950
]
}
segment3 {
start_extent = 2397
extent_count = 400 # 1.5625 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 2500
]
}
}
}
}
Now our last segment is not big enough to hold all the PEs we are removing: segment3 is only 400 PEs. Therefore we can remove segment3 completely. Now we have 750 - 400 = 350 PEs left to remove.
segment2 is now the last segment and is 500 PEs. We therefore have to keep this segment, but reduce it to 500 - 350 = 150 PEs. Because we also removed an entire segment from tpool0_tdata, we have to reduce its segment_count by 1, to 2.
Same as before, we also need to reduce the total extent_count by 750 PEs, from 2797 to 2047.
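The same bookkeeping as a quick shell sketch (variable names are mine):

REMOVE_EXTENTS=750              # PEs to shrink the pool by
SEG3_EXTENTS=400                # segment3, the current last segment
SEG2_EXTENTS=500                # segment2

LEFT=$(( REMOVE_EXTENTS - SEG3_EXTENTS ))   # 350 left: segment3 is removed entirely
NEW_SEG2=$(( SEG2_EXTENTS - LEFT ))         # 150: new extent_count of segment2
# tpool0_tdata: segment_count 3 -> 2, segment2 extent_count 500 -> 150
# tpool0:       extent_count 2797 -> 2047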
So our volume group now looks like this:
vg_shrinked:
# Generated by LVM2 version 2.03.11(2) (2021-01-08): Tue Aug 9 17:22:02 2022
contents = "Text Format Volume Group"
version = 1
description = "vgcfgbackup vg -f vg_backup"
creation_host = "debian" # Linux debian 5.10.0-16-amd64 #1 SMP Debian 5.10.127-2 (2022-07-23) x86_64
creation_time = 1660058522 # Tue Aug 9 17:22:02 2022
vg {
id = "L2NmDI-SLlu-oqDk-XVxM-zmFM-MIsU-IReAqu"
seqno = 11
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0
physical_volumes {
pv0 {
id = "2QsyDS-VPvq-U1GJ-1DKl-y1qP-YDx0-XQ8wnm"
device = "/dev/sda2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 23789568 # 11.3438 Gigabytes
pe_start = 2048
pe_count = 2903 # 11.3398 Gigabytes
}
}
logical_volumes {
tpool0 {
id = "mfROKS-FGPk-o4S6-uWQf-ckRy-3hwj-mQlcFN"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 2047 # 10.9258 Gigabytes
type = "thin-pool"
metadata = "tpool0_tmeta"
pool = "tpool0_tdata"
transaction_id = 1
chunk_size = 128 # 64 Kilobytes
discards = "passdown"
zero_new_blocks = 1
}
}
mythin {
id = "gmg1PY-Qfgf-RTMh-z041-GeRi-plMq-tMkE8y"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660053424 # 2022-08-09 15:57:04 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 12800 # 50 Gigabytes
type = "thin"
thin_pool = "tpool0"
transaction_id = 0
device_id = 1
}
}
lvol1 {
id = "6qrwIW-HysC-U6dg-7heq-sIDU-dPSI-WD2I98"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660058009 # 2022-08-09 17:13:29 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50 # 200 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 1900
]
}
}
lvol2 {
id = "woY9hl-Lr7u-PAb7-J5W8-cJvy-gY8I-3aNr28"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1660058040 # 2022-08-09 17:14:00 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 50 # 200 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 2450
]
}
}
lvol0_pmspare {
id = "HP00dk-NWL9-6aAl-DA3n-fumB-s2In-nsCU0L"
status = ["READ", "WRITE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 3 # 12 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
tpool0_tmeta {
id = "rG2jw4-kqCW-YSSp-ItRo-aaca-wywp-7YjaKh"
status = ["READ", "WRITE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 3 # 12 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 2900
]
}
}
tpool0_tdata {
id = "kJBwlz-vLMx-PlTG-fSZo-O2w1-FPDU-v11II1"
status = ["READ", "WRITE"]
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
segment_count = 2
segment1 {
start_extent = 0
extent_count = 1897 # 7.41016 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 3
]
}
segment2 {
start_extent = 1897
extent_count = 150 # 1.95312 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 1950
]
}
}
}
}
For your convenience, here is also a diff:
diff:
--- vg_backup 2022-08-09 17:22:02.656000000 +0200
+++ vg_shrinked 2022-08-09 17:27:25.492000000 +0200
@@ -45,7 +45,7 @@
segment1 {
start_extent = 0
- extent_count = 2797 # 10.9258 Gigabytes
+ extent_count = 2047 # 10.9258 Gigabytes
type = "thin-pool"
metadata = "tpool0_tmeta"
@@ -166,7 +166,7 @@
flags = []
creation_time = 1660053194 # 2022-08-09 15:53:14 +0200
creation_host = "debian"
- segment_count = 3
+ segment_count = 2
segment1 {
start_extent = 0
@@ -181,7 +181,7 @@
}
segment2 {
start_extent = 1897
- extent_count = 500 # 1.95312 Gigabytes
+ extent_count = 150 # 1.95312 Gigabytes
type = "striped"
stripe_count = 1 # linear
@@ -190,17 +190,6 @@
"pv0", 1950
]
}
- segment3 {
- start_extent = 2397
- extent_count = 400 # 1.5625 Gigabytes
-
- type = "striped"
- stripe_count = 1 # linear
-
- stripes = [
- "pv0", 2500
- ]
- }
}
}
- Restore the volume group: vgcfgrestore --force -f vg_shrinked vg.
And the result:
--- Physical volume ---
PV Name /dev/sda2
VG Name vg
PV Size 11.34 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 2903
Free PE 750
Allocated PE 2153
PV UUID 2QsyDS-VPvq-U1GJ-1DKl-y1qP-YDx0-XQ8wnm
--- Physical Segments ---
Physical extent 0 to 2:
Logical volume /dev/vg/lvol0_pmspare
Logical extents 0 to 2
Physical extent 3 to 1899:
Logical volume /dev/vg/tpool0_tdata
Logical extents 0 to 1896
Physical extent 1900 to 1949:
Logical volume /dev/vg/lvol1
Logical extents 0 to 49
Physical extent 1950 to 2099:
Logical volume /dev/vg/tpool0_tdata
Logical extents 1897 to 2046
Physical extent 2100 to 2449:
FREE
Physical extent 2450 to 2499:
Logical volume /dev/vg/lvol2
Logical extents 0 to 49
Physical extent 2500 to 2899:
FREE
Physical extent 2900 to 2902:
Logical volume /dev/vg/tpool0_tmeta
Logical extents 0 to 2
Although this scenario is more advanced than the first one, it can get a lot more complicated when segments are not in order in the volume group, or even spread over multiple disks in RAID-like situations. I will leave these scenarios as an exercise for the reader; I’ve already spent more than enough time on this article, which is already really niche.
Conclusion
Should you do this? Probably not. Was it an interesting process? For me, it sure was! I can’t wait until there is some sort of official/recommended way to do this. It will make LVM thin pools even better than they already are.