The one variable I see that makes me suspicious is

media  failmode               wait                   default

I don't do a lot of work with the Linux version, but this variable was 
added early on. It was originally set to panic (yeah, that was a lot 
of fun)... It answers the question: if a number of I/Os don't commit, 
what should the pool do? There are now three options: wait, continue, 
or panic.

Failmode wait says that if a write doesn't get committed, wait until 
it does. This might make sense for this scenario: a drive is not 
failing but is flaky, and ZFS is holding or waiting on completion. Not 
sure how to see spins in Linux. I'm not even sure I could write the 
DTrace script to see it; I guess I might go after the PID.
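On Linux, one rough stand-in for that DTrace script is to look for tasks 
stuck in uninterruptible sleep (state D), which is where a process sits 
while it waits on stalled block I/O. A minimal sketch (the wchan and 
command names below are just whatever your system happens to report):

```shell
# Show the header plus any task whose state starts with "D"
# (uninterruptible sleep). "wchan" is the kernel function the
# task is sleeping in, which hints at what it is waiting on.
ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
```

If the same PIDs keep showing up in D state sleeping in an I/O-related 
wchan, that's consistent with the pool holding writes under failmode=wait.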

Whenever I saw high CPU, it was usually something either locking, 
waiting, or managing queues.

try #zpool set failmode=continue poolname

and see if that helps
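For reference, a minimal sequence (assuming the pool really is named 
"media", as the property listing above suggests). With 
failmode=continue, ZFS returns EIO on new writes to a suspended pool 
instead of blocking them indefinitely:

```shell
# Check the current setting, switch it, then verify.
zpool get failmode media
zpool set failmode=continue media
zpool get failmode media    # VALUE should now be "continue", SOURCE "local"
```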

lk

On 2/8/14, 9:36 PM, tclug at freakzilla.com wrote:
> Scratch that; all the clients are using NFSv4:
>
> /usr/local/media from cockerel:/usr/local/media
>  Flags: 
> rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=0.0.0.0,local_lock=none,addr=192.168.0.71
>
>
> On Sat, 8 Feb 2014, tclug at freakzilla.com wrote:
>
>> Looks like it's supported on both server and clients. Not sure if 
>> it's enforced...
>>
>> On Sat, 8 Feb 2014, Jake Vath wrote:
>>
>>>
>>> Are you using NFSv4?
>>>
>>> -> Jake
>>>
>>> On Feb 8, 2014 9:09 PM, <tclug at freakzilla.com> wrote:
>>>       On Sat, 8 Feb 2014, Jake Vath wrote:
>>>
>>>             What kernel version are you running?
>>>
>>>
>>>         sterling at cockerel:/home/sterling> uname -a
>>>         Linux cockerel 3.5.0-45-generic #68-Ubuntu SMP Mon Dec 2
>>>         21:58:52 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>>>
>>>       This is several Ubuntus ago, I'm tempted to do an upgrade but I
>>>       have a bunch of custom code things running there and I've not
>>>       had time to make a full OS backup...
>>>
>>>       And in case you want this too:
>>>
>>>         sterling at cockerel:/home/sterling> dpkg --list|grep zfs
>>>         ii  dkms        2.2.0.3-1.1ubuntu1.1+zfs6~quantal1
>>>                                             all    Dynamic Kernel Module Support Framework
>>>         ii  libzfs1     0.6.2-1~quantal     amd64  Native ZFS filesystem library for Linux
>>>         ii  mountall    2.42ubuntu0.4-zfs2  amd64  filesystem mounting tool
>>>         ii  ubuntu-zfs  7~quantal           amd64  Native ZFS filesystem metapackage for Ubuntu.
>>>         ii  zfs-dkms    0.6.2-1~quantal     amd64  Native ZFS filesystem kernel modules for Linux
>>>         ii  zfsutils    0.6.2-1~quantal     amd64  Native ZFS management utilities for Linux
>>>
>>>
>>>       There is something I noticed when I looked at the FS options; I
>>>       have this guy:
>>>
>>>         media  sharenfs              off  default
>>>
>>>       I am sharing this guy over NFS, I'm not sure if setting this
>>>       option somehow optimises the pool for it or if it's just a way
>>>       to write the /etc/exports/whatever file. Google/Oracle's
>>>       documentation on the subject is lacking (:
>>>
>>>
>>>       _______________________________________________
>>>       TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
>>>       tclug-list at mn-linux.org
>>>       http://mailman.mn-linux.org/mailman/listinfo/tclug-list
>>>
>>>
>>
>
>
