Re: unicode (char as abstract data type)

Albert D. Cahalan (acahalan@cs.uml.edu)
Fri, 17 Apr 1998 19:03:22 -0400 (EDT)


Alex Belits writes:
> On Fri, 17 Apr 1998, Alan Cox wrote:
>
>>> UNICODE is more than just irritating. The problem is that the
>>> programming language thinks in terms of char* text. You start
>>> using wchar_t and before you know it, you have a huge mess and
>>> you just can't seem to get the types quite right anymore.
>>
>> That is why UTF-8 is the right format to use in real situations.
>> UTF-8 works just like ASCII in memory-handling respects - it's just
>> that x++ no longer always moves on one character and strlen(x)
>> isn't the right answer.
>
> The problem is that, for handling data inside applications, UTF-8
> is the very worst format ever invented by a human.
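
To make Alan's point concrete, here is a minimal C sketch. It is my
own illustration, not code from either mail, and utf8_strlen is a
made-up helper, not part of libc. strlen() still counts bytes; a true
character count has to skip the 10xxxxxx continuation bytes:

#include <stdio.h>
#include <string.h>

/* Count characters rather than bytes in a UTF-8 string: count only
 * lead bytes, skipping continuation bytes of the form 10xxxxxx. */
static size_t utf8_strlen(const char *s)
{
	size_t n = 0;

	for (; *s; s++)
		if (((unsigned char)*s & 0xC0) != 0x80)
			n++;
	return n;
}

int main(void)
{
	const char *s = "caf\xc3\xa9";	/* "café": 5 bytes, 4 characters */

	printf("bytes: %lu, chars: %lu\n",
	       (unsigned long)strlen(s),
	       (unsigned long)utf8_strlen(s));
	return 0;
}

That per-string-walk bookkeeping is presumably the kind of mess Alex
has in mind.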

UTF-8 is also dead; the major vendors have all standardized on
16-bit Unicode instead.

I really don't think it is wise to fight Sun, Microsoft, and Apple
on this. We could get screwed much worse than EBCDIC users are.
Incompatibility with the rest of the world is just not cool.

The perfect time to switch is while adding 64-bit filesystem calls.
I certainly don't want to see 8-bit kernel calls on Merced.

Just think about it: WE WILL BE ALONE.
