You set that as a property on the filesystem:

# zfs set readonly=on filesystemname
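
To check whether it took, and to turn it back off later, something like this
should work ("tank/media" below is just a placeholder for your own dataset):

# zfs get readonly tank/media
# zfs set readonly=off tank/media

The get output also shows the SOURCE, i.e. whether the property was set
locally or inherited from a parent dataset.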


On Tue, Mar 11, 2014 at 7:07 PM, <tclug at freakzilla.com> wrote:

> That is cool, and I might end up doing that with this pool some day.
>
> But is there somewhere where it specifies that the filesystem should be
> mounted read-only?
>
>
>
> On Tue, 11 Mar 2014, Linda Kateley wrote:
>
>> Yeah, it's binary... only zfs can read it.
>>
>> The cool thing is that if you use another server to read the pool, it
>> will be able to build a new cache file for it.
>>
>> lk
>>
>>
>> On 3/11/14, 6:08 PM, tclug at freakzilla.com wrote:
>>
>>> Finally, the expert (:
>>>
>>> Yeah, I have an /etc/zpool/zpool.cache, but it seems to be in a (mostly)
>>> binary format rather than an editable text file.
>>>
>>> I think last time I rebooted it came up read-write, so maybe that...
>>> resolved itself... which is cool but I wish I had a bit more info on what
>>> the heck was going on!
>>>
>>> On Tue, 11 Mar 2014, Linda Kateley wrote:
>>>
>>>> So yes, ZFS caches everything it can in the ARC. Most distros have an
>>>> ARC max setting or some tunables for ARC management. One of the easiest is
>>>> setting primarycache=metadata, which tells ZFS to cache only metadata.
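>>>>
>>>> A rough sketch of both knobs on ZFS on Linux (the dataset name and the
>>>> 8 GiB figure are placeholders, not recommendations):
>>>>
>>>> # cap the ARC via the kernel module parameter (read at module load)
>>>> echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
>>>>
>>>> # per-dataset: keep only metadata in the ARC, not file data
>>>> zfs set primarycache=metadata tank/media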
>>>>
>>>> On Solaris there is a file in /etc/zfs called zpool.cache. It functions
>>>> similarly to the fstab/vfstab files in that, if it exists, it is read in
>>>> at boot and contains all the device and filesystem info; but it differs
>>>> in that, if it doesn't exist, it gets rebuilt from the info on the disks
>>>> themselves.
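>>>>
>>>> For what it's worth, this is roughly how you'd rebuild or bypass that
>>>> file by hand ("tank" is just a placeholder pool name):
>>>>
>>>> # scan the disks directly and import the pool without a cache file
>>>> zpool import -d /dev tank
>>>>
>>>> # regenerate /etc/zfs/zpool.cache from the imported pool's current state
>>>> zpool set cachefile=/etc/zfs/zpool.cache tank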
>>>>
>>>> Sent from my iPhone
>>>>
>>>>  On Mar 11, 2014, at 3:09 PM, tclug at freakzilla.com wrote:
>>>>>
>>>>> That's why I was asking (: Someone wake Linda up!
>>>>>
>>>>>  On Tue, 11 Mar 2014, Jake Vath wrote:
>>>>>>
>>>>>> Call me stupid, but I forgot we were talking about ZFS on Linux...
>>>>>> you're
>>>>>> right. No fstab and no vfstab.
>>>>>> Sorry about that.
>>>>>> -> Jake
>>>>>> On Tue, Mar 11, 2014 at 3:01 PM, Jeremy MountainJohnson
>>>>>> <jeremy.mountainjohnson at gmail.com> wrote:
>>>>>>      Not sure which distro you have, but with ZFSonLinux you don't
>>>>>>      use fstab. For example, in Arch there is a service to handle
>>>>>>      this if enabled at boot (part of the zfs package). The file
>>>>>>      system mount point is configured with the zfs user-space tools,
>>>>>>      or defaults to what you set originally when you created the
>>>>>>      volume.
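>>>>>>
>>>>>>      For example, something like this should work ("tank/media" and
>>>>>>      /srv/media are just placeholders for your dataset and mount point):
>>>>>>
>>>>>>      # see where the dataset thinks it should be mounted
>>>>>>      zfs get mountpoint tank/media
>>>>>>
>>>>>>      # change it, then mount anything that isn't mounted yet
>>>>>>      zfs set mountpoint=/srv/media tank/media
>>>>>>      zfs mount -a
>>>>>>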
>>>>>> Also, I'm curious about the RAM problems. Arch, the distro I use, is
>>>>>> tweaked to cache heavily to RAM. So often, when I'm working with
>>>>>> extensive I/O and large files, 90% of memory will be dedicated to
>>>>>> caching and swap is never touched (ext4, sw raid1). If something else
>>>>>> needs that cached RAM, the kernel reclaims it from the cache
>>>>>> automatically. The free command shows how RAM is allocated. I'm no zfs
>>>>>> expert, but perhaps zfs is caching like crazy to RAM, although now that
>>>>>> you're stable with more RAM, that kinda debunks the theory.
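>>>>>>
>>>>>> If it helps, on ZFSonLinux you can see what the ARC itself is holding
>>>>>> (note that the ARC usually shows up as plain "used" memory in free
>>>>>> rather than as cache, which may explain the 20 gigs):
>>>>>>
>>>>>> # current ARC size and its configured ceiling, in bytes
>>>>>> grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats
>>>>>>
>>>>>> # overall memory picture for comparison
>>>>>> free -m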
>>>>>> --
>>>>>> Jeremy MountainJohnson
>>>>>> Jeremy.MountainJohnson at gmail.com
>>>>>> On Tue, Mar 11, 2014 at 2:42 PM, <tclug at freakzilla.com> wrote:
>>>>>>      Course I'm not using ECC RAM. This is a home system (:
>>>>>>
>>>>>>      The data is... well, it'd be nice if it didn't get corrupted,
>>>>>>      but if a video file gets a small glitch in it, it's not a
>>>>>>      huge deal. I can always re-rip one disc if I need to. I
>>>>>>      also figured that's why I have two smaller raidz1 pools
>>>>>>      (which is equivalent to raid5, right?) - they should be able
>>>>>>      to fix the occasional checksum error.
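>>>>>>
>>>>>>      For reference, this is roughly how I look at those errors
>>>>>>      ("tank" standing in for the real pool name):
>>>>>>
>>>>>>      # per-device READ/WRITE/CKSUM counters, plus any files zfs
>>>>>>      # couldn't repair
>>>>>>      zpool status -v tank
>>>>>>
>>>>>>      # re-walk every block, repairing from redundancy, then reset
>>>>>>      # the error counters
>>>>>>      zpool scrub tank
>>>>>>      zpool clear tank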
>>>>>>
>>>>>>      I've not seen any crop up on this setup until that scrub,
>>>>>>      which was after I copied and erased about 8TB a couple of
>>>>>>      times. So not super worried.
>>>>>>
>>>>>>      I can't really not use the filesystem during a scrub,
>>>>>>      since a scrub takes over 24 hours. I could restrict it to
>>>>>>      read-only.
>>>>>>
>>>>>>      Hey, that reminds me, for some reason the thing mounts as
>>>>>>      read-only when I reboot. And since it's not in fstab I
>>>>>>      don't know where to fix that... anyone?...
>>>>>>
>>>>>>      On Tue, 11 Mar 2014, Jake Vath wrote:
>>>>>>
>>>>>>                  Now, I am seeing occasional checksum errors. I
>>>>>>                  stress-tested the heck out of the thing for a week
>>>>>>                  or so (filled up the filesystem, then deleted most
>>>>>>                  of the junk I used for that, etc) and when I ran a
>>>>>>                  scrub it found 12 of them. I'm assuming that since
>>>>>>                  I am running multiple redundancies, that's not a
>>>>>>                  huge problem. Is this correct? Should I cronjob a
>>>>>>                  scrub once a month?
>>>>>>
>>>>>>            Are you using ECC RAM? If you're not, then you'll see
>>>>>>            some checksumming/parity calculation errors. Is this a
>>>>>>            huge problem? I guess it could be, when you consider how
>>>>>>            important your data is to you. Your ZPool(s) could get
>>>>>>            really screwed up if you're getting checksumming errors.
>>>>>>
>>>>>>            A cron job to scrub the system isn't a bad idea; I guess
>>>>>>            you'd have to make sure that nothing is going to try to
>>>>>>            use the system during the scrubbing process, though.
>>>>>>
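>>>>>>            A minimal sketch of that cron job, as a root crontab
>>>>>>            entry (pool name "tank" and the zpool path are
>>>>>>            placeholders; adjust for your distro). This one runs at
>>>>>>            03:00 on the 1st of each month:
>>>>>>
>>>>>>            # crontab -e  (as root)
>>>>>>            0 3 1 * * /sbin/zpool scrub tank
>>>>>>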
>>>>>>            -> Jake
>>>>>>
>>>>>>            On Tue, Mar 11, 2014 at 2:24 PM,
>>>>>>            <tclug at freakzilla.com> wrote:
>>>>>>
>>>>>>                  This is a follow-up to my ZFS woes from a month or
>>>>>>                  so ago.
>>>>>>
>>>>>>                  Funny thing. When that machine had 16 gigs of RAM +
>>>>>>                  16 gigs of swap, it was using 15 gigs of RAM and
>>>>>>                  not touching swap at all, and ZFS performance was
>>>>>>                  horrible.
>>>>>>
>>>>>>                  So I threw another 16 gigs of RAM in there.
>>>>>>
>>>>>>                  Now it uses 20 gigs of RAM (still not touching
>>>>>>                  swap, obviously) and ZFS performance is fine.
>>>>>>
>>>>>>                  Now, I am seeing occasional checksum errors. I
>>>>>>                  stress-tested the heck out of the thing for a week
>>>>>>                  or so (filled up the filesystem, then deleted most
>>>>>>                  of the junk I used for that, etc) and when I ran a
>>>>>>                  scrub it found 12 of them. I'm assuming that since
>>>>>>                  I am running multiple redundancies, that's not a
>>>>>>                  huge problem. Is this correct? Should I cronjob a
>>>>>>                  scrub once a month?
>>>>>>
>>>>>>                  I'm pretty glad I didn't need to move away from
>>>>>>                  ZFS...
>>>>>>
>>  _______________________________________________
> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
> tclug-list at mn-linux.org
> http://mailman.mn-linux.org/mailman/listinfo/tclug-list
>