Re: Comments on Microsoft Open Source document

Khimenko Victor (khim@sch57.msk.ru)
Mon, 9 Nov 1998 23:53:51 +0300 (MSK)


9-Nov-98 10:47 you wrote:
>> Even HTTP/1.0 can do everything that FTP does and can do it better
>> (much easier to implement in certain regards), but this hasn't
>> stopped FTP from being used.

> One thing http/1.0 certainly does better than ftp is to freeze and
> drop a link in mid transfer - something that happens with such
> monotonous regularity for me that I've stopped even considering using
> http for file transfers...

This is not a problem at all if you use appropriate tools. For example,
with Apache on the server and wget on the client it is not a problem at
all. But HTTP/1.0 is FAR slower if you want to download a complex file
structure with a lot of small files, since every file costs a separate
TCP connection. HTTP/1.1 perhaps could solve this problem, but you would
need to add a yet-to-be-developed file format to handle directory indexes
(and no, HTML will not work -- it carries no file attributes, times,
symlinks, etc.). For FTP there is no such format either, but there are
tools with tricks for a lot of popular ftp servers, and these tools WORK!
Not always, of course, but most of the time... While HTTP/1.1 does not
work. For now.
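To make the connection-overhead point concrete, here is a minimal sketch
(hypothetical host and file names) of fetching several small files over
one persistent HTTP/1.1 connection with Python's http.client; under
HTTP/1.0 each of these files would need its own TCP connection:

# Sketch: several small files over a single HTTP/1.1 connection,
# versus one TCP connection per file under HTTP/1.0.
# Host and paths are made up for illustration.
import http.client

paths = ["/pics/a.jpeg", "/pics/b.jpeg", "/pics/c.jpeg"]

conn = http.client.HTTPConnection("example.org")  # one TCP connection
for path in paths:
    conn.request("GET", path)          # HTTP/1.1 keeps the connection alive
    resp = conn.getresponse()
    data = resp.read()                 # must drain the body before reuse
    print(path, resp.status, len(data), "bytes")
conn.close()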

P.S. For example, wget was unable to download a directory with filenames
like "Who is this?.jpeg" via HTTP but was able to download the same
directory via FTP ...
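The likely culprit is URL syntax rather than wget itself: in a URL the
"?" character starts the query string, so such a filename must be
percent-encoded before HTTP will fetch it, while FTP has no such reserved
character. A small sketch (hypothetical host):

# Sketch: why "Who is this?.jpeg" trips up HTTP clients -- the "?"
# must be percent-encoded or the server treats it as a query string.
from urllib.parse import quote

name = "Who is this?.jpeg"
print(quote(name))                             # Who%20is%20this%3F.jpeg
print("http://example.org/pics/" + quote(name))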

-- cut --
AC> FTP is dying, the main things that keep it alive are the fact http
AC> daemons are bad at handing out large files, and the fact http clients dont
AC> use byte ranges on broken file transfer retries.
-- cut --
In fact Apache handles big files reliably, and Netscape (4.05+ at least)
can use byte ranges to resume file downloads...
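For the curious, a minimal sketch of that byte-range retry, assuming a
hypothetical URL and using Python's urllib rather than any particular
browser's code:

# Sketch: resuming a broken download with an HTTP/1.1 Range request,
# the mechanism the quoted text refers to. URL is made up.
import os
import urllib.request

url = "http://example.org/big.tar.gz"
local = "big.tar.gz"

have = os.path.getsize(local) if os.path.exists(local) else 0
req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % have})
with urllib.request.urlopen(req) as resp, open(local, "ab") as out:
    # 206 Partial Content means the server honoured the range;
    # a plain 200 would mean it sent the whole file from the start.
    print("status:", resp.status)
    out.write(resp.read())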

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/