On Wed, Nov 29, 2017 at 09:55:23PM +0800, Wei Wang wrote:
+static void send_one_desc(struct virtio_balloon *vb,
+ struct virtqueue *vq,
+ uint64_t addr,
+ uint32_t len,
+ bool inbuf,
+ bool batch)
+{
+ int err;
+ unsigned int size;
+
+ /* Detach all the used buffers from the vq */
+ while (virtqueue_get_buf(vq, &size))
+ ;
+
+ err = virtqueue_add_one_desc(vq, addr, len, inbuf, vq);
+ /*
+ * This is expected to never fail: there is always at least 1 entry
+ * available on the vq, because when the vq is full the worker thread
+ * that adds the desc will be put into sleep until at least 1 entry is
+ * available to use.
+ */
+ BUG_ON(err);
+
+ /* If batching is requested, we batch till the vq is full */
+ if (!batch || !vq->num_free)
+ kick_and_wait(vq, vb->acked);
+}
+
This internal kick complicates callers. I suggest that instead,
you move this to callers, just return a "kick required" boolean.
This way callers do not need to play with num_free at all.
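Roughly something like the below (untested sketch, only to show the shape
I mean; it reuses virtqueue_add_one_desc and kick_and_wait from this patch,
and drops vb since the helper no longer kicks itself):

static bool send_one_desc(struct virtqueue *vq,
			  uint64_t addr,
			  uint32_t len,
			  bool inbuf,
			  bool batch)
{
	int err;
	unsigned int size;

	/* Detach all the used buffers from the vq */
	while (virtqueue_get_buf(vq, &size))
		;

	err = virtqueue_add_one_desc(vq, addr, len, inbuf, vq);
	/* Same reasoning as in this patch: the vq is never full here */
	BUG_ON(err);

	/* Report whether a kick is due: batching off, or the vq is now full */
	return !batch || !vq->num_free;
}

so that the caller side just does:

	if (send_one_desc(vq, addr, len, inbuf, batch))
		kick_and_wait(vq, vb->acked);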
+/*
+ * Send balloon pages in sgs to host. The balloon pages are recorded in the
+ * page xbitmap. Each bit in the bitmap corresponds to a page of PAGE_SIZE.
+ * The page xbitmap is searched for continuous "1" bits, which correspond
+ * to continuous pages, to chunk into sgs.
+ *
+ * @page_xb_start and @page_xb_end form the range of bits in the xbitmap that
+ * need to be searched.
+ */
+static void tell_host_sgs(struct virtio_balloon *vb,
+ struct virtqueue *vq,
+ unsigned long page_xb_start,
+ unsigned long page_xb_end)
+{
+ unsigned long pfn_start, pfn_end;
+ uint64_t addr;
+ uint32_t len, max_len = round_down(UINT_MAX, PAGE_SIZE);
+
+ pfn_start = page_xb_start;
+ while (pfn_start < page_xb_end) {
+ pfn_start = xb_find_next_set_bit(&vb->page_xb, pfn_start,
+ page_xb_end);
+ if (pfn_start == page_xb_end + 1)
+ break;
+ pfn_end = xb_find_next_zero_bit(&vb->page_xb,
+ pfn_start + 1,
+ page_xb_end);
+ addr = pfn_start << PAGE_SHIFT;
+ len = (pfn_end - pfn_start) << PAGE_SHIFT;
This assignment can overflow. Next line compares with UINT_MAX but by
that time it is too late. I think you should do all math in 64 bit to
avoid surprises, then truncate to max_len and then it's safe to assign
to sg.
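E.g. something along these lines inside the per-range loop (untested, just
to illustrate the idea; add_one_sg() below is only a placeholder for
whatever sg batching helper this loop actually calls):

	uint64_t len64 = (uint64_t)(pfn_end - pfn_start) << PAGE_SHIFT;

	while (len64) {
		/* max_len is PAGE_SIZE aligned and <= UINT_MAX, so this fits in 32 bit */
		uint32_t len = min_t(uint64_t, len64, max_len);

		add_one_sg(vb, vq, addr, len);	/* placeholder name */
		addr += len;
		len64 -= len;
	}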
+ xb_clear_bit_range(&vb->page_xb, page_xb_start, page_xb_end);
+}
+
+static inline int xb_set_page(struct virtio_balloon *vb,
+ struct page *page,
+ unsigned long *pfn_min,
+ unsigned long *pfn_max)
+{
+ unsigned long pfn = page_to_pfn(page);
+ int ret;
+
+ *pfn_min = min(pfn, *pfn_min);
+ *pfn_max = max(pfn, *pfn_max);
+
+ do {
+ ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
+ GFP_NOWAIT | __GFP_NOWARN);
+ } while (unlikely(ret == -EAGAIN));
what exactly does this loop do? Does this wait
forever until there is some free memory? why GFP_NOWAIT?
return num_freed_pages;
}
+/*
+ * The regular leak_balloon() with VIRTIO_BALLOON_F_SG needs memory allocation
+ * for xbitmap, which is not suitable for the oom case. This function does not
+ * use xbitmap to chunk pages, so it can be used by oom notifier to deflate
+ * pages when VIRTIO_BALLOON_F_SG is negotiated.
+ */
I guess we can live with this for now.
Two things to consider:
- adding support for pre-allocating indirect buffers
- sorting the internal page queue (how?)