So now I read the whole thing😄

You can attach a disk to a mirrored pair, making it a 3- or 4-way mirror.
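
For example, something like this (a rough sketch; "tank" and the /dev/sd* names are placeholders for your own pool and disks):

    # grow an existing two-way mirror into a three-way mirror by
    # attaching a third disk alongside one of the current members
    zpool attach tank /dev/sda /dev/sdc
    zpool status tank    # watch the resilver onto the new disk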

Is that more what you are looking for?

Sent from my iPhone

> On Mar 21, 2015, at 12:26 PM, T L <tlunde at gmail.com> wrote:
> 
> Linda -
> 
> Yup, I understand that the top level vdevs can't be removed from a pool. In fact, farther down in the original note, I'd written "I think that it is true that one cannot remove a vdev from a ZFS pool. "
> 
> That is, however, not what I'm trying to do.
> 
> If anyone is willing to read beyond the first sentence of the original message, I'm interested in your input. :-)
> 
> Thanks
> Thomas
> 
>> On Mar 21, 2015 10:03 AM, "Linda Kateley" <lkateley at kateley.com> wrote:
>> The answer is no. Top-level vdevs can't be removed from a pool, and RAIDZ sets can't have their number of disks changed. You can, however, replace disks with larger ones.
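>> 
>> For instance (a minimal sketch; the pool and device names are made up):
>> 
>>     # swap a 2T member out for a larger disk; ZFS resilvers onto the new one
>>     zpool replace tank /dev/sdb /dev/sdf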
>> 
>> If you want more help, feel free to contact me directly; I teach ZFS classes for a living :)
>> 
>> Sent from my iPhone
>> 
>>> On Mar 20, 2015, at 9:12 PM, T L <tlunde at gmail.com> wrote:
>>> 
>>> Does anyone know if I can "stack" vdevs in a ZFS pool and (later) "unstack" them?
>>> 
>>> (Apropos my last message, this question would be germane to ZFS on Linux as well as on BSD, FreeNAS, et al.)
>>> 
>>> I think that it is true that one cannot remove a vdev from a ZFS pool. (If that's wrong, please correct me and the rest is irrelevant.)
>>> 
>>> Any pool (similar to LVM on Linux) that is larger than a single drive must span multiple drives, organized into one or more vdevs. For redundancy, a vdev is often (but not necessarily) more than a single drive; a vdev can be two or three mirrored drives, or 3+ drives in RAIDZn.
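>>> 
>>> To make the vdev shapes concrete, two hypothetical examples (pool and device names are placeholders):
>>> 
>>>     # a pool whose single vdev is a 2-disk mirror
>>>     zpool create tank mirror /dev/sda /dev/sdb
>>>     # or a pool whose single vdev is a 3-disk RAIDZ1
>>>     zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc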
>>> 
>>> So, if one has a RAIDZ1 set of three 2T drives, one would have 4T of usable space (two drives' worth of data plus one of parity). To go to 6T of usable space, one would fail and replace each 2T drive with a 3T drive. When the last drive is replaced, the space would expand to 6T.
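>>> 
>>> The expansion step, concretely, is roughly this (a sketch; the pool name is a placeholder):
>>> 
>>>     zpool set autoexpand=on tank   # let the vdev grow once every member is larger
>>>     # ...replace each 2T drive with a 3T drive in turn, letting each resilver finish...
>>>     zpool list tank                # capacity reflects the 3T drives after the last replace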
>>> 
>>> My concern is the limited number of drives that can fit in a case. Say that I can have up to 8 drives. I could use 4 mirrored pairs of 2T drives, each pair being a vdev. When I start upgrading to 3T or 4T drives, I've still got to have 4 vdevs in my pool.
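>>> 
>>> That starting layout, spelled out with placeholder pool and device names:
>>> 
>>>     # 8 x 2T drives as four 2-way mirror vdevs, striped at the pool level (~8T usable)
>>>     zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd \
>>>                      mirror /dev/sde /dev/sdf mirror /dev/sdg /dev/sdh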
>>> 
>>> Would it be possible to set the drives up so that each pair of drives (striped) makes up a vdev, then create a vdev made up of a mirrored pair of those striped pairs, and then build the pool from that mirrored pair of vdevs? (In this way, there would be only 2 vdevs at the pool level, rather than 4.)
>>> 
>>> 
>>> The point is that, when I go from 2T drives to 4T drives, I could replace a striped pair of 2Ts with a single 4T drive (i.e. a vdev with a single member). Thus, after replacing all the 2s with 4s, my pool would still have a pair of (mirrored) vdevs. That, in turn, would let me add more drives to the box (and space to the pool) by adding 4 more drives, each being part of a mirrored pair, making up 2 more vdevs that get added to the original 2 vdevs in the pool.
>>> But the cool thing is that I'd get the benefit of the upgrade without having to replace every drive; I'd see more space in the pool as soon as I start adding the third vdev (either as the 9th and 10th devices, or after replacing 4 of the 2s with four 4T drives).
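>>> 
>>> The add-more-vdevs step, at least, maps onto an existing command; roughly, with placeholder names:
>>> 
>>>     # add another mirrored pair as a new top-level vdev; the pool grows right away
>>>     zpool add tank mirror /dev/sdi /dev/sdj
>>> 
>>> (Though zpool add is one-way: once a vdev is in the pool it can't be taken back out, which is the constraint this whole scheme is trying to work around.)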
>>> 
>>> What happens when I go from 4T drives to 8T drives across the board is too far in the future to worry about now. I have a bunch of 2s and have started buying 4s, so thinking about how to handle that upgrade as the 2s age & fail is on my mind.
>>> 
>>> Advice and comments appreciated.
>>> 
>>> Thanks
>>> Thomas 
>>> 
>>> _______________________________________________
>>> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
>>> tclug-list at mn-linux.org
>>> http://mailman.mn-linux.org/mailman/listinfo/tclug-list