GJ> Hello,
GJ> I'm writing a device driver for the PCI/Highway controller, a card that is
GJ> used here at the University for data acquisition. The card uses an mmio
GJ> region for controlling the measurement interfaces connected to it and can
GJ> do dma to transfer data from such an interface. The mmio region is made
GJ> available to the user via read()/write() and mmap().
GJ> The problem is this: when doing a dma transfer, the mmio region may not
GJ> be accessed. This is a hardware requirement. There's no problem with
GJ> read/write because the driver can check for a dma in progress and put
GJ> the process onto a wait queue or just fail. However, a process can still
GJ> access it via an mmap() mapping.
GJ> I'm already allowing only one process to open the device, but I do want
GJ> to support multi-threaded environments. Threads should be safe, because
GJ> they can take care not to access this memory while doing dma, but I
GJ> want some sort of protection.
GJ> Is it possible to temporarily disable this io memory and all its mappings
GJ> during dma? Preferably I would like to put offending threads on a wait
GJ> queue and restart the memory access when dma is done, but this is not
GJ> required. Any thoughts?
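For the read()/write() side, the busy-flag-plus-wait-queue pattern you
describe is straightforward. A minimal sketch, assuming a 2.2-style
kernel; the dma_busy flag and all the hwy_* names are invented here for
illustration:

/* Hypothetical driver state; all names invented for this sketch. */
static int dma_busy;			/* nonzero while a transfer runs */
static struct wait_queue *hwy_wait;	/* threads waiting for dma to end */

static ssize_t hwy_read(struct file *file, char *buf,
			size_t count, loff_t *ppos)
{
	/* Sleep (or fail, for non-blocking opens) while dma is running.
	 * On SMP you would want a lock around the flag as well. */
	while (dma_busy) {
		if (file->f_flags & O_NONBLOCK)
			return -EAGAIN;
		interruptible_sleep_on(&hwy_wait);
		if (signal_pending(current))
			return -ERESTARTSYS;
	}
	/* ... safe to touch the mmio region here ... */
	return count;
}

/* Called from the dma-completion interrupt handler. */
static void hwy_dma_done(void)
{
	dma_busy = 0;
	wake_up_interruptible(&hwy_wait);
}

The mmap() side is harder, because no driver code runs in the path of an
ordinary load or store.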
The following code (untested) should be sufficient to unmap a mapped
page. It may need the mm semaphore; I'm not certain and need to look at
that some more. It's from a generic dirty-page kernel facility I am
writing.
Then all you have to do is make certain the page isn't faulted back in
from the time the unmapping starts until the dma transfer is done. This
should be fairly straightforward if you implement the nopage handler
yourself; see the sketch after the unmap code below.
void unmap_page(struct page *page)
{
	struct inode *inode = page->inode;
	struct vm_area_struct *vma;
	unsigned long pg_offset = page->offset;

	if (!inode || !(vma = inode->i_mmap))
		return;	/* no mappings: this should be the normal case */

	/* Hold an extra reference so the zaps below can't free the
	 * page out from under us. */
	atomic_inc(&page->count);
	for (; vma; vma = vma->vm_next_share) {
		unsigned long address;

		/* Skip vmas that don't map this page. */
		if (pg_offset < vma->vm_offset)
			continue;
		address = vma->vm_start + (pg_offset - vma->vm_offset);
		if (address >= vma->vm_end)
			continue;

		flush_cache_page(vma, address);
		zap_page_range(vma->vm_mm, address, PAGE_SIZE);
		flush_tlb_page(vma, address);
	}
	/* Drop the extra reference; frees the page if it was the last. */
	__free_page(page);
}
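A nopage handler to go with it can then sleep while the transfer runs
and fault the page back in afterwards, which is the wait-and-restart
behaviour you asked for. A rough sketch only, reusing the made-up
dma_busy/hwy_wait names from above plus a hypothetical hwy_page pointer
to the driver's page, and assuming the 2.2 nopage signature that returns
the page's kernel address (0 means SIGBUS):

static struct page *hwy_page;	/* the driver's mmio page (hypothetical) */

static unsigned long hwy_nopage(struct vm_area_struct *vma,
				unsigned long address, int write_access)
{
	/* Refuse to fault the page back in while dma is running. */
	while (dma_busy) {
		interruptible_sleep_on(&hwy_wait);
		if (signal_pending(current))
			return 0;	/* interrupted: caller raises SIGBUS */
	}
	/* Return the page with a reference held, the way
	 * filemap_nopage does. */
	atomic_inc(&hwy_page->count);
	return page_address(hwy_page);
}

Starting a transfer is then roughly:

	dma_busy = 1;
	unmap_page(hwy_page);
	/* ... kick off the device-specific dma ... */

and when the completion handler clears dma_busy and wakes hwy_wait, the
blocked threads fault the page back in and restart their access.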
I hope this helps.
Eric