When you study how glyphs are rendered onto screens, you find
that quite a bit of effort has gone into creating acceptable
appearance and legibility. If you go back to earlier VGA CRT
displays, and even more to 5 x 7 or 5 x 9 dot-matrix displays and
printers, you become quite aware that when you can only make a
dot either on or off, you are quite restricted by the coarseness
of the matrix (the array of dot positions).

However, even on a modern CRT, with its much finer degree of
possible detail, text becomes acceptable only because of
considerable development work. (For some gruesome examples of how
it can go wrong with simplistic schemes, older screen text in
Linux has sometimes looked simply awful, and been hard to read
besides. The Linux folks have generously contributed a huge
amount, being repaid primarily by peer respect, but they seemed
to care little about ugliness. Nevertheless, I'm a Linux fan.)

At least two major related schemes, both sophisticated, are used
to render a given font properly on-screen (and also for printing,
although a good printer can do at least 300 dots/inch, about 120
dots/cm, making life somewhat easier). Keep in mind that serious
typesetting (such as for books and periodicals) is done at
roughly 2,500 dots/in, or about 1,000 dots/cm. I'm thinking of
the Linotronic, which might be exactly 1,000 dots/cm.
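
To put those resolutions in perspective, here's a bit of
arithmetic (a plain Python toy; the dpi figures are just typical
round numbers, not measurements of any particular device):

    # Rough arithmetic: 1 point = 1/72 inch, so a glyph of a given
    # point size spans (point_size / 72) * dpi dot rows vertically.
    def dot_rows(point_size, dpi):
        return point_size / 72.0 * dpi

    for label, dpi in [("CRT screen", 96), ("laser printer", 300),
                       ("imagesetter", 2500)]:
        print(f"{label:13s} {dpi:5d} dpi -> "
              f"{dot_rows(10, dpi):6.1f} rows for a 10-pt glyph")
    # About 13 rows on screen versus roughly 350 at typesetter
    # resolution, which is why on-screen rendering needs so much care.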

Probably the better known of the two schemes is TrueType, a
trademark (developed by Apple, and licensed to and heavily
promoted by Microsoft). However, quite its equal, and some would
say better, is the well-established PostScript, whose imaging
model also underlies Adobe's Portable Document Format (.pdf).
Both of these are called "scalable" font systems, meaning that
you can resize (scale) the basic font over (usually) quite a wide
range of sizes, yet have a very decent appearance over that
complete range. (Mac OS X uses a PDF-based imaging model, and by
implication much of PostScript's, for its screen display.)
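
The essence of "scalable" is that the glyph is stored once, in an
abstract coordinate space (font units on an "em square"), and is
multiplied out to device pixels at whatever size is wanted. A
minimal sketch of that scaling step, assuming the common
TrueType-style value of 2048 units per em (the coordinate below
is a made-up number, not from any real font):

    # Standard scaling from abstract font units to device pixels:
    #   pixels = funits * point_size * dpi / (72 * units_per_em)
    def funits_to_pixels(funits, point_size, dpi, units_per_em=2048):
        return funits * point_size * dpi / (72.0 * units_per_em)

    x = 1180   # hypothetical outline coordinate, in font units
    print(funits_to_pixels(x, 12, 96))  # ~9.2 px at 12 pt, 96 dpi
    print(funits_to_pixels(x, 72, 96))  # ~55.3 px at 72 pt: same shape, scaled up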

Both schemes define the basic character shape as outlines, I'm
reasonably certain. Almost always, rendering the glyph entails
filling in the outline to create a solid version. However, given
a fixed array of rows and columns of dots (the space the glyph
will occupy on-screen at a given size), you can't simplistically
use the outline to decide which dots are on and which are off;
as the good Linux folks demonstrated too often, the results are
likely to be unacceptable.
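
Here's a deliberately simplistic rasterizer of the sort I mean (a
toy: it samples the center of each cell in a coarse dot matrix
and turns the dot on if that center falls inside a polygonal
outline; the "stem" shape and the grid size are made up). At
small sizes, a thin stroke can miss every sample point and simply
vanish:

    # Naive "center sampling" rasterizer: a dot is on iff the cell's
    # center lies inside the outline (even-odd crossing test).
    def point_in_polygon(px, py, poly):
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                # x where this edge crosses the horizontal line y = py
                xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < xcross:
                    inside = not inside
        return inside

    def rasterize(poly, width, height):
        return ["".join("#" if point_in_polygon(gx + 0.5, gy + 0.5, poly)
                        else "." for gx in range(width))
                for gy in range(height)]

    # A thin vertical "stem" 0.4 cells wide: at this coarseness it can
    # slip entirely between the sample centers and disappear.
    stem = [(2.1, 1.0), (2.5, 1.0), (2.5, 7.0), (2.1, 7.0)]
    print("\n".join(rasterize(stem, 8, 8)))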

In TrueType, the way the outline falls onto the basic dot array
is adjusted by a process called "hinting", rather misleadingly
named, imho, because a "hint" sounds like something optional,
which the recipient is free to accept or reject. In TT, hinting
is a well-defined process, with no uncertainty.

Hinting consists of extra information in the font file that
serves to modify the basic filled-in shape so that the glyph will
look as good as it can be made to, when rendered to screen or
paper. It turns on some dots that would simplistically be off,
and turns off others that might likewise be on.
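
As a crude picture of what that accomplishes (this is not
TrueType's actual instruction mechanism, just a hand-rolled
illustration): imagine snapping the left edge of every vertical
stem to a whole-pixel boundary and forcing every stem to a whole
number of pixels wide, so that identical stems come out
identical, instead of some landing one dot wide and others two,
depending on where they happen to fall on the grid:

    # Toy "grid fitting": round stem edges to the pixel grid and give
    # every stem a whole-pixel width.  Not TrueType's real instruction
    # set, just the kind of adjustment that hints describe.
    def grid_fit_stems(stems, min_width_px=1):
        fitted = []
        for left, right in stems:   # stem edges in (fractional) pixels
            width = max(round(right - left), min_width_px)
            new_left = round(left)
            fitted.append((new_left, new_left + width))
        return fitted

    # Two stems of identical design width, landing differently on the grid:
    stems = [(2.30, 3.55), (6.80, 8.05)]
    print(grid_fit_stems(stems))   # both come out 1 px wide: [(2, 3), (7, 8)]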

I was studying the Microsoft Typography pages (almost no sales
pitch, "low-key", quite well written, interesting, and
informative) some time back. Recommended, to those interested in
typography, and not just computer typography. (Disclaimer: I
haven't been there for a few months, but I rather doubt that
Microsoft would make changes for the worse.)

I was quite surprised to learn that TrueType hinting is done by a
small computer language, no less, executed by an interpreter
built into the rasterizer. Apparently, it's called (at least
implicitly) the TrueType language, or TrueType instruction set.

Also of considerable interest was the comment (not necessarily at
Microsoft's site) that defining TrueType hints well is so
difficult that only about six people in the world are good at it.

Apparently, that is why only the popular, "mainstream" TrueType
computer fonts look good at all sizes. Not sure, but I think even
Hermann Zapf's superb fonts are not properly hinted for computer
use.

Microsoft has software to help do hinting of a font; its screens
are very graphical and memorably distinctive.

PostScript also defines outlines, afaik, and its counterpart to
hinting is apparently easier to do: the hints are declarative
(stem widths, alignment zones, and the like, iirc), so they take
less skill to define (indeed, sometimes hardly any are needed),
but (iirc) they require a much more-sophisticated font-rendering
engine to interpret them well.

Both PostScript and TrueType define the basic glyph shape in
great detail; the curve segments are quadratic Béziers in TT, and
cubic Béziers in PS.
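
For the curious, both kinds of curve are easy to evaluate; here
is a little sketch (the control points are arbitrary numbers,
chosen only for illustration):

    # Evaluate Bezier curve segments at parameter t in [0, 1].
    def quadratic_bezier(p0, p1, p2, t):
        # TrueType-style: one off-curve control point (p1).
        u = 1.0 - t
        return tuple(u*u*a + 2*u*t*b + t*t*c
                     for a, b, c in zip(p0, p1, p2))

    def cubic_bezier(p0, p1, p2, p3, t):
        # PostScript-style: two off-curve control points (p1, p2).
        u = 1.0 - t
        return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                     for a, b, c, d in zip(p0, p1, p2, p3))

    print(quadratic_bezier((0, 0), (50, 100), (100, 0), 0.5))        # (50.0, 50.0)
    print(cubic_bezier((0, 0), (0, 100), (100, 100), (100, 0), 0.5)) # (50.0, 75.0)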

Some faces such as Verdana were specifically designed for CRT
screens, and probably also good for flat-panel displays. Credit
goes to Matthew Carter, imho, a very talented type designer.
Apparently, he designed several basic shapes for different
on-screen sizes; although they are quite likely to be hinted,
Verdana still has plainly-visible differences as you change size
in a font selector, for example.

The setup screens, and KDE's default font, in recent Mandrake
Linux apparently use Adobe Helvetica; the face itself long
predates computer displays, though its screen versions have been
carefully adapted for them. This font, while not at all
"decorative" or "artistic" like, say, Bodoni or Palatino, is
nevertheless an unusually-handsome working font. It is actually
refreshing and a pleasant surprise to see it at first, and it
continues to be very agreeable and readable.

A compromise method of trying to improve the appearance of
basic, dumb, non-scalable coarse-matrix glyphs is called
"antialiasing". You all know the staircasing* approximation to
curves and diagonals in lesser-quality computer representations.
It is usually undesirable; when a finer matrix can't be had, but
the display can render a good gray scale, it's practical to
compute intermediate levels of gray (and even some color) at the
edge pixels to minimize the prominence of the "jaggies".

*Traditional rooflines in The Netherlands ☺
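
A small sketch of the idea (nothing here is any particular
system's method; it just supersamples each pixel and uses the
fraction of sub-samples falling inside the shape, a disc in this
toy example, as a gray level):

    # Coverage-based antialiasing by supersampling: the gray level of a
    # pixel is the fraction of its sub-sample points inside the shape.
    def coverage(px, py, inside, n=4):
        hits = 0
        for i in range(n):
            for j in range(n):
                if inside(px + (i + 0.5) / n, py + (j + 0.5) / n):
                    hits += 1
        return hits / (n * n)      # 0.0 = background, 1.0 = solid

    def in_disc(x, y, cx=6.0, cy=6.0, r=5.0):
        return (x - cx) ** 2 + (y - cy) ** 2 <= r * r

    shades = " .:-=+*#%@"          # 10 gray levels, light to dark
    for gy in range(12):
        print("".join(shades[round(coverage(gx, gy, in_disc) * 9)]
                      for gx in range(12)))
    # Edge pixels come out partly covered, so they print as intermediate
    # shades instead of a hard on/off staircase.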

When used to improve the appearance of type, especially if the
"raw" glyphs have large jaggies, anti-aliasing unfortunately
trades the jaggies for fuzzy edges, which is not desirable, in
general.

In general, human eyes really want an extremely-abrupt,
minimally-fuzzy transition between the body of a glyph and its
surrounding space. CRTs are only somewhat satisfactory, because
affordable and practical color tubes can't provide a transition
that is sharp enough, although the design engineers have worked
hard to do so. A modern computer monitor CRT is an amazing piece
of technology, nevertheless, just as it is.

A working value for typical modern CRT resolution is about 90 to
100 dots per inch (roughly 40 dots per cm), although it's usually
given the other way around, as dot pitch in mm. That is really
too coarse.
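
(The conversion is simple: divide 25.4 mm/inch by the pitch to
get dots per inch. The pitch values below are just typical
examples, and phosphor-triad pitch is not exactly the same thing
as addressable pixel pitch, but it gives the right ballpark.)

    # Dot pitch (mm between adjacent dots) to dots per inch.
    def pitch_mm_to_dpi(pitch_mm):
        return 25.4 / pitch_mm

    for pitch in (0.28, 0.26, 0.25):    # common CRT dot pitches
        print(f"{pitch:.2f} mm pitch -> {pitch_mm_to_dpi(pitch):5.1f} dpi")
    # 0.28 mm ~ 91 dpi, 0.25 mm ~ 102 dpi: right in that 90-100 range.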

Monochrome (green, amber, or white) CRTs for text were better,
with sharper edges, but the desirability of color made them
generally fade into obscurity.

Flat panels have very good borders on their pixels, and should be
easier to read; I don't have extensive experience, but did use a
small screen a few years ago, which I liked a lot. Some existing
technology for further improving flat-panel legibility
(ClearType, which exploits the separate red, green, and blue
sub-pixels of an LCD) was recently publicized by Microsoft.

Historically, computer-driven plotters moved pens over the
surface of the paper, making very neat line drawings, with larger
letters (afaik) often drawn as outline fonts. Back when NIST was
still NBS, the Hershey plotter fonts distributed from there were
highly regarded. At least one dedicated word processor (a
successor to the typewriter, that is) wrote its text by very fast
pen plotting.

Historically, among the first on-screen glyphs that were not
experimental, "one-off" efforts were those made with the
Charactron CRT, which contained a stencil inside a very long
electron gun. The defocused electron beam would be positioned
onto one glyph in the stencil, which extruded the beam into that
shape. By then deflecting the shaped beam to wherever it was
needed, and focusing the stencil image onto the screen, the tube
wrote a visible character. These were used in the US SAGE
air-defense network until fairly recently.

Tektronix, quite respected and very well-known to electronic
technicians for its test equipment, especially oscilloscopes,
used a plotter-like scheme to write characters onto 'scope
screens. Their CAD* CRT terminals were the electronic-image
counterpart to plotters. *computer-aided design

One of the very first electronic desktop calculators, the Friden
EC-130 (and the EC-132), wrote its numerals onto the screen,
plotter-style, but in the now-very-familiar stacked-parallelogram
seven-segment shape.

Less well-known is the alphanumeric 14-segment ("starburst")
pattern, like an x superimposed over a + in a rectangular box.
These are legible, but surely not typographical!

I find it hard to resist including Nixie [tm, Burroughs] tubes,
special-purpose neon lamps with back-to-front stacked metal
cathodes shaped like typographic numerals. Close up, you would
see the lit cathode surrounded by a pretty, transparent orange
neon glow fuzz. They used to be quite popular, and many older
electronic calculators used them. The numeral shapes were very
well designed.

I think Sol Sherr is the author of a book about electronic
displays; quite good.

Apologies for remaining typos; if they look bizarre, that's
because I use the Dvorak letter layout. (I like it a lot.)

Nicholas Bodley |@| Waltham, Mass.
Sent by Opera 6.05 e-mail via TheWorld
who also has Mandrake Linux 9.0 as dual-boot