This is the core of rt_garbage_collect():
	for (i = 0; i < RT_HASH_DIVISOR; i++) {
		unsigned tmo;

		if (!rt_hash_table[i])
			continue;
		tmo = expire;
		for (rthp = &rt_hash_table[i]; (rth = *rthp); rthp = &rth->u.rt_next) {
			if (atomic_read(&rth->u.dst.use) ||
			    (now - rth->u.dst.lastuse < tmo && !rt_fast_clean(rth))) {
				tmo >>= 1;
				continue;
			}
			*rthp = rth->u.rt_next;
			rth->u.rt_next = NULL;
			rt_free(rth);
			break;
			^^^^^^ try deleting this line
		}
	}
You may want to try letting the kernel remove the whole cache. That is
probably excessive, but I think removing more than one entry per chain
would improve things (we could break based on the current value of
ops->entries). The point is that if the routes were all well distributed
across the hash table, we would remove 256 dst entries per call; but we
cannot be sure of that, so I think it is better to break out of the gc
based on the number of entries actually freed.
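The suggestion above can be sketched outside the kernel. This is a
simplified, hypothetical model (not the real route.c code): buckets of
singly linked chains, an entry freeable when its use count is zero, and a
"goal" parameter standing in for the role Andrea suggests for
ops->entries, so the sweep stops after a given number of entries have
actually been freed rather than after at most one per chain.

	#include <stddef.h>
	#include <stdlib.h>

	#define HASH_DIVISOR 4

	/* Simplified stand-in for a dst/route cache entry. */
	struct rtentry {
		struct rtentry *next;
		int use;		/* nonzero = in use, cannot be freed */
	};

	static struct rtentry *hash_table[HASH_DIVISOR];

	/* Free up to 'goal' unused entries, possibly several per chain;
	 * return how many were actually freed. */
	static int gc_sweep(int goal)
	{
		int freed = 0, i;

		for (i = 0; i < HASH_DIVISOR && freed < goal; i++) {
			struct rtentry **rthp = &hash_table[i], *rth;

			while ((rth = *rthp) != NULL) {
				if (rth->use) {
					/* busy: skip it, keep walking the chain */
					rthp = &rth->next;
					continue;
				}
				/* unlink and free, but do NOT break: continue
				 * down the same chain until the goal is met */
				*rthp = rth->next;
				free(rth);
				if (++freed >= goal)
					return freed;
			}
		}
		return freed;
	}

Note the pointer-to-pointer walk (rthp) is the same unlinking idiom as in
the quoted kernel loop; the only behavioral change is that the per-chain
break is replaced by a check on the count of entries really freed.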
Andrea Arcangeli
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/