Using /dev/urandom directly, yes, that doesn't make sense: it
consistently returns non-uniformly random numbers when used to
generate more entropy than the blocking pool can source. But most
programs that use their own PRNGs seed them from /dev/urandom, and
I've seen cases of people running very large numbers (on the order
of millions) of short simulations (around 15 minutes of run time on
decent hardware) that do exactly that (a minimal sketch of the
pattern is below).

I'd almost say that making the partitioning level configurable at
build time might be useful. I can see possible value in being able
to partition at least down to physical cores (so, shared between
HyperThreads on Intel processors, and between Compute Module cores
on AMD processors), as that could potentially help people running
large numbers of simulations in parallel.
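For concreteness, the seeding pattern I have in mind is roughly the
following. This is just a minimal sketch, not anyone's actual code,
with error handling kept to a bare minimum:

    /* Seed a userspace PRNG once from /dev/urandom, then draw
     * all further randomness from the PRNG. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            unsigned int seed;
            FILE *f = fopen("/dev/urandom", "rb");

            if (!f || fread(&seed, sizeof(seed), 1, f) != 1)
                    return 1;              /* could not seed */
            fclose(f);

            srandom(seed);              /* one read from urandom */
            printf("%ld\n", random());  /* simulation draws from the PRNG */
            return 0;
    }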
I don't like build-time size configurations. It doesn't make sense
for simulations to use urandom. It may make sense to have some
run-time tunable, but for now that's too much complexity, so I'll
stay with the simpler per-node pool.
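To be clear about what I mean by a per-node pool, conceptually it is
just one pool per NUMA node, selected by the node the CPU belongs
to. The names below (entropy_pool, node_pools, get_local_pool) are
invented for illustration; this is not the actual implementation:

    #include <linux/spinlock.h>
    #include <linux/topology.h>
    #include <linux/types.h>

    /* Hypothetical per-NUMA-node pool: every CPU mixes into and
     * extracts from its own node's pool, so nodes do not contend
     * on a single global lock. */
    struct entropy_pool {
            spinlock_t lock;
            u32 data[128];
    };

    static struct entropy_pool *node_pools;  /* one per node */

    static struct entropy_pool *get_local_pool(void)
    {
            return &node_pools[numa_node_id()];
    }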