On Fri, 2011-09-23 at 19:20 -0300, Glauber Costa wrote:
> You're right. I had something else from another work I'm doing in mind
> and got confused.
>
> @@ -623,6 +624,9 @@ static inline struct task_group *task_group(struct task_struct *p)
>  	struct task_group *tg;
>  	struct cgroup_subsys_state *css;
>
> +	if (!p->mm)
> +		return &root_task_group;
> +
>  	css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
>  			lockdep_is_held(&p->pi_lock) ||
>  			lockdep_is_held(&task_rq(p)->lock));
Hmm, why is that? Aren't kthreads part of the cgroup muck just as much
as normal tasks are?