On Tue, 1 Apr 2014, Ben wrote:

> -h will always be different from the actual disk usage, you might also 
> want to play around with -B option too.

I've done that.  Using --si -sB GB gives the same result as --si -sh. 
Did you think that they would be different?
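
For what it's worth, my understanding is that -b is shorthand for 
--apparent-size --block-size=1 (so it sums file sizes), while -h, -m and 
-B report disk usage based on allocated blocks.  One way to see both 
numbers for a single file is something like this (the path is just a 
placeholder):

$ stat -c 'apparent: %s bytes   allocated: %b blocks of %B bytes' somefile

For sparse files, or files still being written, the two can differ quite 
a bit.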


> Honestly though, NFS has never been particularly good with stuff like 
> this.

Well, there is the HP-UX bug documented in the du info page and recapped 
below, but are there other problems?
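
One sanity check I might try, assuming I can get a shell on the machine 
that exports /project (I'm not sure I can), is to run the same du over 
NFS and locally on the server and compare.  The server-side path below is 
purely hypothetical:

$ du -sh /project/guanwh/miller/CHoP/intensity/
$ du -sh /export/project/guanwh/miller/CHoP/intensity/

If the local number looks right and the NFS-mounted one doesn't, that 
would put the blame on NFS rather than on du.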


> What happens when you use the --apparent-size option?
> --apparent-size
>   print apparent sizes, rather than disk usage; although the
>   apparent size is usually smaller, it may be larger due to holes
>   in ('sparse') files, internal fragmentation, indirect blocks,
>   and the like

I want to try that, but I'm having this problem right now:

$ ls /project/guanwh
ls: cannot access /project/guanwh: Stale file handle

I tried logging out and logging back in, but that didn't help.  I've 
reported the issue, so they may be working on it now.  If it had been 
something simple, they probably would have told me by now.
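
Once the mount comes back, the comparison I have in mind is roughly this, 
on the same directory as before:

$ du -sh /project/guanwh/miller/CHoP/intensity/
$ du -sh --apparent-size /project/guanwh/miller/CHoP/intensity/

If --apparent-size lines up with the -sb total from before while plain 
-sh stays lower, that would point at block accounting rather than a bug 
in du.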

Mike


> On Tue, Apr 1, 2014 at 3:40 PM, Mike Miller <mbmiller+l at gmail.com> wrote:
>
>> I thought this issue was caused by a bug in the GNU du code, and I'm still
>> not sure that it isn't, but it might be caused by bugs in file systems or
>> NFS.  I'm using this version of du:
>>
>> $ du --version
>> du (GNU coreutils) 8.4
>> Copyright (C) 2010 Free Software Foundation, Inc.
>>
>> I'm using one of the MSI supercomputers.  The /project/guanwh directory is
>> NFS mounted.  Here's the kind of thing I'm seeing:
>>
>> $ du -sh /project/guanwh/miller/CHoP/intensity/
>> 41G     /project/guanwh/miller/CHoP/intensity/
>>
>> $ du -sm /project/guanwh/miller/CHoP/intensity/
>> 41171   /project/guanwh/miller/CHoP/intensity/
>>
>> $ du -sb /project/guanwh/miller/CHoP/intensity/
>> 65435522887     /project/guanwh/miller/CHoP/intensity/
>>
>> $ du -sm /project/guanwh/miller/CHoP/intensity/
>> 41299   /project/guanwh/miller/CHoP/intensity/
>>
>> $ du -sh /project/guanwh/miller/CHoP/intensity/
>> 41G     /project/guanwh/miller/CHoP/intensity/
>>
>> Those commands were run seconds apart while a file transfer was increasing
>> the amount of disk used.
>>
>> What you are seeing is that the result with -b (bytes) is correct, or at
>> least nearly so, while the results with -m and -h are off by many
>> gigabytes.  I am in the process of transferring files into that directory,
>> but I don't see why options -m, -h and -b should give wildly different
>> numbers!
>>
>> I'm wondering if the problem I'm having has to do with NFS mounting. There
>> is a known issue:
>>
>> https://www.gnu.org/software/coreutils/manual/html_node/du-invocation.html
>>
>> "On BSD systems, du reports sizes that are half the correct values for
>> files that are NFS-mounted from HP-UX systems. On HP-UX systems, it reports
>> sizes that are twice the correct values for files that are NFS-mounted from
>> BSD systems. This is due to a flaw in HP-UX; it also affects the HP-UX du
>> program."
>>
>> I am seeing usage with -sb that is 50% larger than that with -sB KB (or
>> -sB MB or -sB GB).
>>
>> For me, the message is to use -sb instead of -sh.  The latter gives a nice
>> compact result, but it is probably based on counts of file blocks, and
>> block counts can mean different things on different systems and so give
>> different results.
>>
>> Mike
>> _______________________________________________
>> TCLUG Mailing List - Minneapolis/St. Paul, Minnesota
>> tclug-list at mn-linux.org
>> http://mailman.mn-linux.org/mailman/listinfo/tclug-list
>>
>
>
>
> -- 
> Ben Lutgens
> Linux / Unix System Administrator
>
> Three of your friends throw up after eating chicken salad.  Do you think:
> "I should find more robust friends" or "we should check that refrigerator"?
>       -- Donald Becker, on vortex-bug, suspecting a network-wide problem
>