This is a test

stavrou (stavrou@central.ntua.gr)
Fri, 05 Apr 1996 01:15:13 +0200



This is only a test!
By M.O.S


Newsgroups: linux.dev.kernel
Path: news.central.ntua.gr!news.ntua.gr!swidir.switch.ch!swsbe6.switch.ch!scsing.switch.ch!news.belwue.de!news.uni-stuttgart.de!comnets.rwth-aachen.de!ratatosk.yggdrasil.com!vger.gate.yggdrasil.com!not-for-mail
Approved: linux-kernel@vger.rutgers.edu
References: <199604031650.CAA00695@fuzzbox.gh.cs.usyd.edu.au>
X-Env-Sender: owner-linux-kernel-outgoing@vger.rutgers.edu
Date: Thu, 4 Apr 1996 07:37:03 +0300 (EET DST)
X-Hdr-Sender: torvalds@cs.helsinki.fi
From: Linus Torvalds <torvalds@cs.helsinki.fi>
Subject: Re: PPP and 1.3.83 - even WORSE!
Message-ID: <Pine.LNX.3.91.960404072452.9464A-100000@linux.cs.Helsinki.FI>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-linux-kernel@vger.rutgers.edu
Lines: 78

On Thu, 4 Apr 1996, David Monro wrote:
>
> I just tried upgrading to 1.3.83, and tried a couple of transfers. On a
> 14.4k link I should (and used to) get 1.5 kB/s. With 1.3.82 I get 0.9,
> with 1.3.83 I get 0.23! (This is on a 68Kb file). I think we are going
> to see a storm of complaints about this one ;-(

That's _very_ interesting, because 83 has only one very minimal patch to
the network code, and that patch shouldn't even trigger under normal
circumstances (it's only triggered when the receive queue allocation
grows beyond the allowed queue length, which shouldn't happen if the
sender sends sane packets within the window).
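
For illustration only, here is a minimal user-space sketch of the kind of
overflow check being described; the struct, field and function names below
are invented stand-ins, not the actual 1.3.83 structures:

----------
/* hypothetical model of "receive queue allocation grows beyond the
 * allowed queue length" - invented names, not kernel code */
#include <stdio.h>

struct fake_sock {
	unsigned long rmem_alloc;	/* bytes currently queued for receive */
	unsigned long rcvbuf;		/* allowed receive queue allocation */
};

static int needs_pruning(const struct fake_sock *sk)
{
	/* the new code path only runs when the queue outgrows its limit */
	return sk->rmem_alloc > sk->rcvbuf;
}

int main(void)
{
	struct fake_sock sk = { 70000, 65535 };

	printf("prune? %s\n", needs_pruning(&sk) ? "yes" : "no");
	return 0;
}
----------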

Even worse, the 83 patch actually has a bug that should probably crash
the machine if the new code is ever triggered ;-)

I'm including the fix for this bug, but I don't think it's responsible: I
really think you would see much worse problems than just bad throughput
if this bug were ever triggered. I'll take a closer look at the TCP dumps
soon; maybe that will give me some idea.

One thing that I'm starting to suspect is simply a timing issue: the new
code since 1.3.73 has been doing the ACKs from within the timer bottom
half handler, asynchronously from the actual network code. Now, if there
is a race within the slhc compression, it would be triggered a lot more
easily thanks to the more asynchronous network behaviour.
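
Purely as an illustration of the kind of race being suspected here, and
emphatically not the real slhc code: two contexts updating shared
compression state with no locking, with plain threads standing in for the
network code and the timer bottom half. Every name below is invented.

----------
#include <pthread.h>
#include <stdio.h>

struct fake_slhc_state {
	unsigned int last_seq;	/* last sequence number compressed against */
	unsigned int packets;	/* headers processed so far */
};

static struct fake_slhc_state state;

static void *touch_state(void *arg)
{
	int i;

	(void) arg;
	for (i = 0; i < 100000; i++) {
		/* unlocked read-modify-write: if the other context runs in
		 * between, updates interleave and the state goes inconsistent */
		unsigned int seq = state.last_seq;

		state.last_seq = seq + 1;
		state.packets++;
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, touch_state, NULL);
	pthread_create(&b, NULL, touch_state, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* with the race, the counters usually fall well short of 200000 */
	printf("last_seq=%u packets=%u (expected 200000)\n",
	       state.last_seq, state.packets);
	return 0;
}
----------

(Build with something like "gcc -pthread"; the point is only that making the
ACK path more asynchronous makes any such window much easier to hit.)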

Thanks,
Linus

----------
--- v1.3.83/linux/net/ipv4/tcp_input.c Wed Apr 3 16:06:57 1996
+++ linux/net/ipv4/tcp_input.c Wed Apr 3 22:00:35 1996
@@ -1487,6 +1487,7 @@
 		if (skb->acked)
 			break;
 		__skb_unlink(skb, list);
+		kfree_skb(skb, FREE_READ);
 	}
 }
 
@@ -1501,20 +1502,24 @@
 {
 	struct sk_buff * skb = list->next;
 
-	for (;;) {
-		struct sk_buff * next;
+	if (skb != (struct sk_buff *) list) {
+		for (;;) {
+			struct sk_buff * next;
 
-		if (skb == (struct sk_buff *) list)
-			break;
-		next = skb->next;
-		if (next->seq == skb->seq) {
+			next = skb->next;
+			if (next == (struct sk_buff *) list)
+				break;
 			if (before(next->end_seq, skb->end_seq)) {
 				__skb_unlink(next, list);
+				kfree_skb(next, FREE_READ);
 				continue;
 			}
-			__skb_unlink(skb, list);
+			if (next->seq == skb->seq) {
+				__skb_unlink(skb, list);
+				kfree_skb(skb, FREE_READ);
+			}
+			skb = next;
 		}
-		skb = next;
 	}
 }

----------
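
For anyone reading the second hunk without the tree at hand, here is a rough
user-space model of that loop as it looks after the fix: check 'next'
against the list head before touching it, drop the successor when it is
wholly contained in the current segment, drop the current segment when the
successor starts at the same sequence number and reaches at least as far,
and actually free whatever gets unlinked. struct seg, unlink_seg() and
free() are simplified stand-ins for struct sk_buff, __skb_unlink() and
kfree_skb(); this is an illustration, not the kernel code.

----------
#include <stdio.h>
#include <stdlib.h>

struct seg {
	struct seg *next, *prev;
	unsigned int seq, end_seq;
};

/* circular list head, playing the role of struct sk_buff_head */
static struct seg head = { &head, &head, 0, 0 };

static void insert_tail(unsigned int seq, unsigned int end_seq)
{
	struct seg *s = malloc(sizeof(*s));

	s->seq = seq;
	s->end_seq = end_seq;
	s->prev = head.prev;
	s->next = &head;
	head.prev->next = s;
	head.prev = s;
}

static void unlink_seg(struct seg *s)
{
	s->prev->next = s->next;
	s->next->prev = s->prev;
}

/* sequence-number comparison with wraparound, like the kernel's before() */
static int before(unsigned int a, unsigned int b)
{
	return (int) (a - b) < 0;
}

static void prune_duplicates(void)
{
	struct seg *skb = head.next;

	if (skb != &head) {
		for (;;) {
			struct seg *next = skb->next;

			if (next == &head)
				break;
			if (before(next->end_seq, skb->end_seq)) {
				/* successor entirely covered by skb: drop it */
				unlink_seg(next);
				free(next);
				continue;
			}
			if (next->seq == skb->seq) {
				/* same start, successor reaches at least as far: drop skb */
				unlink_seg(skb);
				free(skb);
			}
			skb = next;
		}
	}
}

int main(void)
{
	struct seg *s;

	insert_tail(100, 200);	/* dropped: next one starts at 100 too and reaches further */
	insert_tail(100, 300);	/* kept */
	insert_tail(100, 250);	/* dropped: wholly inside the 100-300 segment */
	insert_tail(300, 400);	/* kept */

	prune_duplicates();

	for (s = head.next; s != &head; s = s->next)
		printf("%u-%u\n", s->seq, s->end_seq);
	return 0;
}
----------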
