suzmccarth scripsit:

> This seems to me to be the only logical answer. I can't think why
> else Tamil couldn't have precomposed characters for aksharas since
> Canadian Syllabics works for 3 languages with fairly different
> syllable structure, Western Cree, Naskapi and Inuit.

Well, not using the abugida model for the Indic scripts would have
gobbled up immense numbers of codepoints, since CV, CCV, and even
CCCV syllables are possible in certain languages. (No Indic script is
used for only one language.) In addition, the question of which
consonants are ligatured and which are not depends on both the language
and the font. It's simpler, all things considered, to use so-called
"Brahmic encoding", i.e. to deal with the underlying abugida.
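
To make the arithmetic concrete, here is a rough back-of-the-envelope
sketch in Python, assuming an illustrative Devanagari-sized repertoire
of 35 consonants and 16 vowel forms; the exact figures are not the
point, only the order of magnitude:

    # Illustrative count of precomposed syllables for one Indic script.
    CONSONANTS = 35    # assumed number of consonant letters
    VOWELS = 16        # assumed vowel forms, incl. the inherent vowel

    cv = CONSONANTS * VOWELS            # simple CV aksharas
    ccv = CONSONANTS ** 2 * VOWELS      # two-consonant clusters
    cccv = CONSONANTS ** 3 * VOWELS     # three-consonant clusters

    print(f"CV:   {cv:>9,}")    # 560
    print(f"CCV:  {ccv:>9,}")   # 19,600
    print(f"CCCV: {cccv:>9,}")  # 686,000 -- and that is a single script

Even with much more conservative assumptions, the totals dwarf the
hundred-odd codepoints a Brahmic-style block actually uses.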

The original proposal for Ethiopic was likewise for an abugida-like
encoding, though with every vowel encoded explicitly, whether intrinsic
or not. This was shot down, however; although I don't know the details,
I suspect it was influential that Ethiopic-script users think of their
*fidel* as an N x 7 array of related syllabograms.

The Indic scripts, however, were encoded following the model of ISCII,
which in turn was modeled on the typewriter.

Unicode is a *practical* encoding above all.

--
Barry gules and argent of seven and six, John Cowan
on a canton azure fifty molets of the second. jcowan@...
--blazoning the U.S. flag http://www.ccil.org/~cowan