> Rather than hundreds or thousands of "tiny" MB-sized extents.
> I wonder what the best mkfs.xfs parameters might be to encourage that?
You should use the mkfs.xfs defaults for any single-drive filesystem and trust
the allocator to do the right thing. XFS uses variable-size extents, and the
size is chosen dynamically--AFAIK you have no direct or indirect control over
the extent size chosen for a given file or set of files.
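You can, however, inspect the extents the allocator actually chose after the
fact. A quick sketch using xfsprogs (the file and mount point names here are
just examples):

```shell
# Show the extent map of a file on an XFS filesystem.
# Each output line is one extent; xfs_bmap is part of xfsprogs.
xfs_bmap -v /mnt/data/somefile

# Show filesystem geometry, including agcount and agsize:
xfs_info /mnt/data
```

A large file in good shape shows a handful of long extents; thousands of short
ones suggest fragmentation or allocator trouble.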
As Dave Chinner is fond of pointing out, it's those who don't know enough about
XFS yet choose custom settings who most often get themselves into trouble (as
you've already done once). :)
The defaults exist for a reason, and they weren't chosen willy-nilly. The vast
bulk of XFS's configurability exists for tuning maximum performance on large to
very large RAID arrays. There is little, if any, additional performance to be
gained from parameter tweaks on a single-drive XFS filesystem.
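Concretely, "use the defaults" means passing no geometry options at all and
letting mkfs.xfs probe the device itself (the device name is an example):

```shell
# Let mkfs.xfs pick agcount, log size, and the rest on its own:
mkfs.xfs /dev/sdb1

# The kind of over-tuned invocation that causes trouble would look like:
#   mkfs.xfs -d agcount=7000 /dev/sdb1    # don't do this on one spindle
```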
A brief explanation of agcount: the filesystem is divided into agcount regions,
called allocation groups, or AGs. The allocator writes to all AGs in parallel
to increase performance. With extremely fast storage (SSD, or a large high-RPM
RAID) this increases throughput, as the storage can often sink writes faster
than a serial writer can push data. In your case, you have a single slow
spindle with over 7,000 AGs. Thus the allocator is writing to over 7,000
locations on that single disk simultaneously--or at least it's trying to. So
the poor head on that drive is being whipped all over the place without
actually getting much writing done. To add insult to injury, this is one of
those low-RPM, low-head-performance "green" drives, correct?
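To put rough numbers on it (the 2 TB capacity is an assumption for
illustration; the 7,000 figure is from your filesystem):

```shell
# Back-of-the-envelope AG size: drive capacity divided by agcount.
drive_mb=$((2 * 1024 * 1024))   # assumed 2 TB drive, in MB
agcount=7000
echo "per-AG size: $((drive_mb / agcount)) MB"   # roughly 300 MB per AG
```

Each AG carries its own metadata (free space btrees, inode btrees), so you pay
that overhead 7,000 times over, on top of the seek storm.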
Trust the defaults. If they give you problems (unlikely), then we can talk. ;)