This can be a good thread.

I use rsync all the time, mostly without provisions for partial transfers,
and it works great. I recommend the "-avz" switch combination: -a for archive
mode, -v for verbose output, and -z for zlib compression of the data prior
to transmission.
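
For example, something along these lines is what I typically run (the host
and paths are made up for illustration):

    # archive mode, verbose, compressed in transit
    rsync -avz /data/projects/ backup-host:/srv/backups/projects/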

Your conclusions are highly subjective, and I mean this in a good way.
What is a BIGFILE to you can be tiny to me. I routinely move GBs of data
around our Gigabit networks with rsync and with NFS as the underlying
filesystem. It costs me little to re-send files when rsync fails, which
seldom happens. But when I move files to systems outside our network, I make
rsync allow partial transfers.
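
In those cases it is roughly this (again, the names are just placeholders):

    # --partial keeps a partially transferred file around so a re-run can pick it up
    rsync -avz --partial --progress bigdump.tar remote-host:/staging/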

I rsync 1.x TB single files. Those are huge.

Your 7GB file is a big file, do not get me wrong. And your network speeds
seem fast.

The underlying filesystem plays a role in how fast checksums can be
calculated, though I do not know exactly how rsync computes them. I believe
it does not checksum the whole file unless it is instructed to; I could be
wrong. But it seems to me that it is instructed to in your case (see below).

Your strategy of over-truncating sounds solid to me. It is also nice to know
the byte count that was different. This is subjective at a whole different
level -- using this method triggers all kinds of paranoia in my head too.
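
If I understand the over-truncating idea correctly, it would look roughly
like this, though I have not tested it myself and the names and the 1M
figure are invented:

    # chop a bit more than the suspect tail off the destination copy, then
    # let rsync append the rest without rewriting what is already there
    ssh remote-host truncate -s -1M /staging/bigdump.tar
    rsync -avz --append bigdump.tar remote-host:/staging/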

Can you see if there are any other switches that better control the behaviour
of rsync in the calculation of checksums (verify mode)? It seems odd that it
would not allow for some kind of continuation from where it left off without
going through the trouble of calculating checksums. I am only guessing, but I
think rsync's verify option, the way you used it, searches for differences
in the file by segmenting it and calculating checksums. As such, it has no
knowledge of where the problem is, and it has to calculate checksums over
the whole length of the file. So the behaviour you describe does make sense
to me, at least.
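
The one switch I would look at first, assuming your rsync is new enough to
have it, is --append-verify: my understanding is that it resumes like
--append but folds the data already on the receiver into the final
whole-file checksum, so a bad resume gets caught and the file is resent.
A sketch, with made-up names as before:

    # resume from where the destination copy ends, with a whole-file check afterwards
    rsync -avz --append-verify bigdump.tar remote-host:/staging/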