Re: unicode (char as abstract data type)

Alex Belits (abelits@phobos.illtel.denver.co.us)
Sun, 19 Apr 1998 02:57:11 -0700 (PDT)


On Sat, 18 Apr 1998, Pavel Machek wrote:

> > And how will I name the encoding that I will get from that if I don't
> > use Unicode, but have it in local charset instead? People who write
> > "internationalization" standards should once in a while check, if Unicode
> > is now anything but the most worldwide hated charset.
>
> Hmm. Ok: You have some encoding on console. It's hardwired. It's
> US-ascii on my 386 and it's iso-latin-2 on my 486. You take unicode
> char. See if you can display it. If not, look if you can display some
> approximation (forget accent). If you can not, give up and print ?
> instead. [In forum we additionally try to approximate chars like 1/2
> with string "1/2".] It's doable and I'm doing that in forum. :-)

Now imagine what happens to vi (or anything built on curses) if it assumes
the console displays exactly one character cell for each character it
sends. For all the "we have unicode on console" talk, this is really a
situation where one is better off accepting the hardware's limitations
than stuffing Unicode conversion tables into the kernel in a poor attempt
to cover those limitations up. At least with a "dumb" console I can load
my own charset and be sure that whatever is in it will be shown correctly,
and everything else will at least appear as recognizable byte values.
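The breakage alluded to here can be made concrete: if the console silently expands one character into several cells (as the "1/2" substitution would), a screen-oriented program's idea of the cursor column drifts away from reality. A hypothetical sketch, with the expansion table standing in for an in-kernel conversion table:

```python
# Hypothetical sketch: a curses-style program counts one cell per
# character, while the console expands some characters behind its
# back (e.g. U+00BD -> "1/2"). The difference is cursor drift.
def rendered_width(text, expansions):
    """Cells the console actually consumes after substitution."""
    return sum(len(expansions.get(ch, ch)) for ch in text)

def assumed_width(text):
    """Cells the program believes it used: one per character."""
    return len(text)

expansions = {"\u00bd": "1/2"}
line = "x = \u00bd"
drift = rendered_width(line, expansions) - assumed_width(line)
# drift > 0 means the real cursor sits to the right of where the
# program thinks it is, so every subsequent redraw on that line
# lands in the wrong column.
```

This is exactly why a transparent "dumb" console is easier to reason about: one byte sent, one cell used, and the program's bookkeeping stays correct.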

--
Alex

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu