Re: [PATCH V11] cgroup/rstat: Avoid flushing if there is an ongoing root flush

From: kernel test robot
Date: Fri Sep 13 2024 - 17:54:58 EST


Hi Jesper,

kernel test robot noticed the following build errors:

[auto build test ERROR on tj-cgroup/for-next]
[also build test ERROR on axboe-block/for-next linus/master v6.11-rc7]
[cannot apply to akpm-mm/mm-everything next-20240913]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Jesper-Dangaard-Brouer/cgroup-rstat-Avoid-flushing-if-there-is-an-ongoing-root-flush/20240913-010800
base: https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
patch link: https://lore.kernel.org/r/172616070094.2055617.17676042522679701515.stgit%40firesoul
patch subject: [PATCH V11] cgroup/rstat: Avoid flushing if there is an ongoing root flush
config: x86_64-allnoconfig (https://download.01.org/0day-ci/archive/20240914/202409140533.2vt8QPj8-lkp@xxxxxxxxx/config)
compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240914/202409140533.2vt8QPj8-lkp@xxxxxxxxx/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@xxxxxxxxx>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409140533.2vt8QPj8-lkp@xxxxxxxxx/

All errors (new ones prefixed by >>):

>> mm/vmscan.c:2265:2: error: call to undeclared function 'mem_cgroup_flush_stats_relaxed'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    2265 |         mem_cgroup_flush_stats_relaxed(sc->target_mem_cgroup);
         |         ^
   mm/vmscan.c:2265:2: note: did you mean 'mem_cgroup_flush_stats_ratelimited'?
   include/linux/memcontrol.h:1429:20: note: 'mem_cgroup_flush_stats_ratelimited' declared here
    1429 | static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
         |                    ^
   1 error generated.
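
The x86_64-allnoconfig build has CONFIG_MEMCG disabled, so mm/vmscan.c sees the
stub declarations from the !CONFIG_MEMCG branch of include/linux/memcontrol.h
(the clang note above points at the mem_cgroup_flush_stats_ratelimited() stub
there). The error suggests the patch adds the mem_cgroup_flush_stats_relaxed()
call site without a matching no-op stub for that configuration. A minimal
sketch of such a stub, mirroring the existing ratelimited stub (an assumption
about a possible fix, not code taken from the patch):

        /*
         * Hypothetical !CONFIG_MEMCG stub for include/linux/memcontrol.h:
         * keeps the new call in prepare_scan_control() compiling when
         * memory cgroups are configured out; the flush is a no-op then.
         */
        static inline void mem_cgroup_flush_stats_relaxed(struct mem_cgroup *memcg)
        {
        }

Alternatively, the call site could be guarded or switched to an existing
helper, but that would change behavior; the stub above only addresses the
build error under allnoconfig.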


vim +/mem_cgroup_flush_stats_relaxed +2265 mm/vmscan.c

  2250
  2251  static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
  2252  {
  2253          unsigned long file;
  2254          struct lruvec *target_lruvec;
  2255
  2256          if (lru_gen_enabled())
  2257                  return;
  2258
  2259          target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
  2260
  2261          /*
  2262           * Flush the memory cgroup stats, so that we read accurate per-memcg
  2263           * lruvec stats for heuristics.
  2264           */
> 2265          mem_cgroup_flush_stats_relaxed(sc->target_mem_cgroup);
  2266
  2267          /*
  2268           * Determine the scan balance between anon and file LRUs.
  2269           */
  2270          spin_lock_irq(&target_lruvec->lru_lock);
  2271          sc->anon_cost = target_lruvec->anon_cost;
  2272          sc->file_cost = target_lruvec->file_cost;
  2273          spin_unlock_irq(&target_lruvec->lru_lock);
  2274
  2275          /*
  2276           * Target desirable inactive:active list ratios for the anon
  2277           * and file LRU lists.
  2278           */
  2279          if (!sc->force_deactivate) {
  2280                  unsigned long refaults;
  2281
  2282                  /*
  2283                   * When refaults are being observed, it means a new
  2284                   * workingset is being established. Deactivate to get
  2285                   * rid of any stale active pages quickly.
  2286                   */
  2287                  refaults = lruvec_page_state(target_lruvec,
  2288                                  WORKINGSET_ACTIVATE_ANON);
  2289                  if (refaults != target_lruvec->refaults[WORKINGSET_ANON] ||
  2290                          inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
  2291                          sc->may_deactivate |= DEACTIVATE_ANON;
  2292                  else
  2293                          sc->may_deactivate &= ~DEACTIVATE_ANON;
  2294
  2295                  refaults = lruvec_page_state(target_lruvec,
  2296                                  WORKINGSET_ACTIVATE_FILE);
  2297                  if (refaults != target_lruvec->refaults[WORKINGSET_FILE] ||
  2298                          inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
  2299                          sc->may_deactivate |= DEACTIVATE_FILE;
  2300                  else
  2301                          sc->may_deactivate &= ~DEACTIVATE_FILE;
  2302          } else
  2303                  sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
  2304
  2305          /*
  2306           * If we have plenty of inactive file pages that aren't
  2307           * thrashing, try to reclaim those first before touching
  2308           * anonymous pages.
  2309           */
  2310          file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
  2311          if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
  2312              !sc->no_cache_trim_mode)
  2313                  sc->cache_trim_mode = 1;
  2314          else
  2315                  sc->cache_trim_mode = 0;
  2316
  2317          /*
  2318           * Prevent the reclaimer from falling into the cache trap: as
  2319           * cache pages start out inactive, every cache fault will tip
  2320           * the scan balance towards the file LRU. And as the file LRU
  2321           * shrinks, so does the window for rotation from references.
  2322           * This means we have a runaway feedback loop where a tiny
  2323           * thrashing file LRU becomes infinitely more attractive than
  2324           * anon pages. Try to detect this based on file LRU size.
  2325           */
  2326          if (!cgroup_reclaim(sc)) {
  2327                  unsigned long total_high_wmark = 0;
  2328                  unsigned long free, anon;
  2329                  int z;
  2330
  2331                  free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
  2332                  file = node_page_state(pgdat, NR_ACTIVE_FILE) +
  2333                             node_page_state(pgdat, NR_INACTIVE_FILE);
  2334
  2335                  for (z = 0; z < MAX_NR_ZONES; z++) {
  2336                          struct zone *zone = &pgdat->node_zones[z];
  2337
  2338                          if (!managed_zone(zone))
  2339                                  continue;
  2340
  2341                          total_high_wmark += high_wmark_pages(zone);
  2342                  }
  2343
  2344                  /*
  2345                   * Consider anon: if that's low too, this isn't a
  2346                   * runaway file reclaim problem, but rather just
  2347                   * extreme pressure. Reclaim as per usual then.
  2348                   */
  2349                  anon = node_page_state(pgdat, NR_INACTIVE_ANON);
  2350
  2351                  sc->file_is_tiny =
  2352                          file + free <= total_high_wmark &&
  2353                          !(sc->may_deactivate & DEACTIVATE_ANON) &&
  2354                          anon >> sc->priority;
  2355          }
  2356  }
  2357

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki