There is a tunable called zfs_vdev_max_pending. It sets the I/O queue 
depth for the disks. I only know the Solaris side well, but I know there 
is something similar in Linux. It is set to 10 by default. You generally 
want around 2-5 outstanding I/Os per disk in the queue; with 10 across 
8 disks, that is pretty low. Maybe try 24. If this number is set too 
high you may see spiky CPU, because the CPU will be busy managing the 
queue.
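
On Solaris you'd set it in /etc/system (or poke a live kernel with mdb); 
on ZFS-on-Linux I believe the same knob is exposed as a module 
parameter, but check the exact name against your version. Something 
like:

  # Solaris: persistent across reboots, in /etc/system
  set zfs:zfs_vdev_max_pending = 24

  # Solaris: change it on the running kernel
  echo zfs_vdev_max_pending/W0t24 | mdb -kw

  # ZFS-on-Linux: change it at runtime
  echo 24 > /sys/module/zfs/parameters/zfs_vdev_max_pending

  # ZFS-on-Linux: persistent, in /etc/modprobe.d/zfs.conf
  options zfs zfs_vdev_max_pending=24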

linda

On 11/11/13 11:37 PM, tclug at freakzilla.com wrote:
> Hehe. One external, 8-bay enclosure, using two SATA ports. The ports 
> go directly to the motherboard - no additional controller. Server 
> software is Ubuntu 12.10, with ZFS added on from the zfs-native PPA.
>
> When I was using this as md software RAID5, I had two disks in each 
> half of the enclosure. No performance issues. Now this is 8 disks 
> rather than 4, and raidz2 (so RAID6) rather than RAID5, but still... 
> hitting play on a video and waiting 6 seconds for it to start is a 
> bit... off. No errors except the three checksum errors I've had.
>
>
> On Mon, 11 Nov 2013, Thomas Lunde wrote:
>
>>
>> All of the drives are in a single external enclosure?
>>
>> How is that enclosure connected to the rest of the PC? USB (2? 3?)? 
>> eSATA? FireWire? Something else?
>>
>> If eSATA, then you may be having issues with a port multiplier.
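>>
>> If it is eSATA with a port multiplier, the kernel normally logs the 
>> multiplier when the link comes up, so (assuming libata, which Ubuntu 
>> uses) something like this should show whether one is in play:
>>
>>   dmesg | grep -i 'port multiplier'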
>>
>> In any case, it's really hard to troubleshoot by guessing. So, if 
>> you'd like further help addressing the performance issues, maybe you 
>> could provide a full hardware and software description of the system. :)
>>
>> Thomas
>>
>>> On Nov 11, 2013, at 9:09 PM, tclug at freakzilla.com wrote:
>>>
>>> No idea what most of what you said is, no (:
>>>
>>> These are all identical drives in an external enclosure, so none of 
>>> my own SATA cables are involved. And again, there were no errors 
>>> when they were in a software RAID5 (though there were half as many 
>>> drives) and nothing in the system logs, which is why I am concerned...
>>>
>>>> On Mon, 11 Nov 2013, Thomas Lunde wrote:
>>>>
>>>>
>>>> Bit flips like this helped me to discover that two of my 10 SATA 
>>>> cables were marginal.
>>>>
>>>> Since these are >2T drives, did you do anything with ashift? 
>>>> Depending on which ZFS implementation you're using, this question 
>>>> might not make sense?
>>>>
>>>> An array that mixes drives faking 512-byte sectors with drives that 
>>>> either really use 512-byte sectors or use native 4K sectors can 
>>>> cause abysmal performance.
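>>>>
>>>> If you're on ZFS-on-Linux, one way to check (from memory, so verify 
>>>> against your version's docs) is to dump the pool config with zdb 
>>>> and look for ashift: 9 means 512-byte alignment, 12 means 4K. It 
>>>> can only be set at pool creation time. Here "tank" and the device 
>>>> names are just placeholders:
>>>>
>>>>   # show the ashift the pool was created with
>>>>   zdb -C tank | grep ashift
>>>>
>>>>   # force 4K alignment when (re)creating the pool
>>>>   zpool create -o ashift=12 tank raidz2 /dev/sd[b-i]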
>>>>
>>>> Thomas
>>>>
> _______________________________________________
> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
> tclug-list at mn-linux.org
> http://mailman.mn-linux.org/mailman/listinfo/tclug-list