On Mon, Sep 13 2021 at 13:01, Sohil Mehta wrote:
> Currently, the task wait list is a global one. To make the implementation
> scalable there is a need to move to a distributed per-cpu wait list.

How are per cpu wait lists going to solve the problem?
> +/*
> + * Handler for UINTR_KERNEL_VECTOR.
> + */
> +DEFINE_IDTENTRY_SYSVEC(sysvec_uintr_kernel_notification)
> +{
> +	/* TODO: Add entry-exit tracepoints */
> +	ack_APIC_irq();
> +	inc_irq_stat(uintr_kernel_notifications);
> +
> +	uintr_wake_up_process();

So this interrupt happens for any of those notifications. How are they
differentiated?
Again. We have proper wait primitives.

> +	return -EINTR;
> +}
> +
> +/*
> + * Runs in interrupt context.
> + * Scan through all UPIDs to check if any interrupt is ongoing.
> + */
> +void uintr_wake_up_process(void)
> +{
> +	struct uintr_upid_ctx *upid_ctx, *tmp;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&uintr_wait_lock, flags);
> +	list_for_each_entry_safe(upid_ctx, tmp, &uintr_wait_list, node) {
> +		if (test_bit(UPID_ON, (unsigned long *)&upid_ctx->upid->nc.status)) {
> +			set_bit(UPID_SN, (unsigned long *)&upid_ctx->upid->nc.status);
> +			upid_ctx->upid->nc.nv = UINTR_NOTIFICATION_VECTOR;
> +			upid_ctx->waiting = false;
> +			wake_up_process(upid_ctx->task);
> +			list_del(&upid_ctx->node);

So any of these notification interrupts does a global mass wake up? How
does that make sense?
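To illustrate the point about proper wait primitives: instead of a global list that every notification interrupt walks, each UPID context could carry its own wait queue, so a notification wakes exactly the task that owns it. A minimal sketch only, not a patch — the `waitq` and `waiting` members of `struct uintr_upid_ctx` are assumed here, not taken from the posted code:

```c
/*
 * Sketch, assuming uintr_upid_ctx gains:
 *	wait_queue_head_t waitq;   (init_waitqueue_head() at setup)
 *	bool waiting;
 */
static int uintr_wait_on_upid(struct uintr_upid_ctx *upid_ctx)
{
	/* wait_event_interruptible() handles the sleep/wakeup race */
	return wait_event_interruptible(upid_ctx->waitq,
					!READ_ONCE(upid_ctx->waiting));
}

/* Called from the notification path for this UPID only */
static void uintr_notify_upid(struct uintr_upid_ctx *upid_ctx)
{
	WRITE_ONCE(upid_ctx->waiting, false);
	wake_up(&upid_ctx->waitq);
}
```

With per-UPID queues the interrupt handler never touches contexts that received no notification, which also answers the differentiation question above.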
> +/* Called when task is unregistering/exiting */
> +static void uintr_remove_task_wait(struct task_struct *task)
> +{
> +	struct uintr_upid_ctx *upid_ctx, *tmp;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&uintr_wait_lock, flags);
> +	list_for_each_entry_safe(upid_ctx, tmp, &uintr_wait_list, node) {
> +		if (upid_ctx->task == task) {
> +			pr_debug("wait: Removing task %d from wait\n",
> +				 upid_ctx->task->pid);
> +			upid_ctx->upid->nc.nv = UINTR_NOTIFICATION_VECTOR;
> +			upid_ctx->waiting = false;
> +			list_del(&upid_ctx->node);
> +		}

What? You have to do a global list walk to find the entry which you
added yourself?
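Since the task added the entry itself, it can keep a pointer to it and delete its own node in O(1) under the lock, with no walk at all. A hedged sketch — the `task->thread.ui_recv->upid_ctx` path follows the naming used elsewhere in the patch set but is assumed here:

```c
/*
 * Sketch: remove only our own node, no list scan.
 * Assumes the task's receiver state points at the uintr_upid_ctx
 * entry this task queued (names are assumed, not from the patch).
 */
static void uintr_remove_task_wait(struct task_struct *task)
{
	struct uintr_upid_ctx *upid_ctx = task->thread.ui_recv->upid_ctx;
	unsigned long flags;

	spin_lock_irqsave(&uintr_wait_lock, flags);
	if (upid_ctx->waiting) {
		upid_ctx->waiting = false;
		list_del(&upid_ctx->node);
	}
	spin_unlock_irqrestore(&uintr_wait_lock, flags);
}
```

The cost then no longer grows with the number of waiting tasks system-wide, which is the scalability problem the cover text attributes to the global list.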