On Sun, 11 Jul 2004 16:13:30 -0400, Peter T. Daniels
<grammatim@...> wrote:

> But computers don't think of characters as "small images," do they? They
> think of them as type.

I dare say that nobody involved in computer typography would refer to the
detailed data formats as "type". Fundamentally, *all* data in a computer
is binary numbers; how they are to be interpreted depends upon the
software.

As I understand it, TrueType font files contain quite a number of sections
(properly, tables), but among the most important are the numbers that
define the outline of each glyph, as coordinates of on-curve and off-curve
control points. The outlining is defined as if done by a hypothetical
moving very-thin-nib pen, and direction is important. Afaik, all outlines
need to be closed. Being closed, they can be filled; all space within
becomes defined as black. The outline is defined on quite a fine internal
grid; 2,048 units per em is the usual figure for TrueType.
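
For anyone here who programs, a toy sketch might make the fill step
concrete. This is nothing like real TrueType code; the triangle, the
12 x 12 grid, and all the coordinates are invented for illustration. It
fills one closed contour using the non-zero winding rule, which is exactly
why the direction of the outline matters:

  # Toy sketch, not real TrueType code: fill one closed contour with
  # the non-zero winding rule. (Triangle, grid size, and coordinates
  # are all invented for illustration.)
  def winding_number(x, y, contour):
      # Count signed crossings of the contour's edges around (x, y).
      wn = 0
      n = len(contour)
      for i in range(n):
          x0, y0 = contour[i]
          x1, y1 = contour[(i + 1) % n]   # closed: last point joins first
          is_left = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
          if y0 <= y < y1 and is_left > 0:
              wn += 1                     # upward edge, point on its left
          elif y1 <= y < y0 and is_left < 0:
              wn -= 1                     # downward edge, point on its right
      return wn

  # One closed, counterclockwise triangle on a 12 x 12 grid.
  outline = [(2, 2), (10, 2), (6, 10)]
  for row in range(11, -1, -1):           # print the top row first
      print("".join("o" if winding_number(col + 0.5, row + 0.5, outline)
                    else " " for col in range(12)))

A second, clockwise contour placed inside this one would wind the count
back to zero and cut a hole, which is how counters (the holes in "o" or
"A") are done.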

Once filled, in effect a grid of square boxes with a fixed pitch (spacing)
is superimposed upon the outline to start to create a bitmap. However, as
older Linux typography shows so painfully, you can't stop there.

What's a bitmap?
If an [o] represents a "set" bit, spaces represent background ("reset")
bits, and you read this in a monospaced face/font,
a small (condensed, maybe!) bitmapped capital A is
  o
 o o
o   o
ooooo
o   o
o   o
o   o

Essentially, somewhere else, the long string of bits 0 0 1 0 0  0 1 0 1 0
1 0 0 0 1 (etc.; I gave the top 3 rows) is cut into rows of five and
placed into a 5 x 7 box. Not hard to understand, just not familiar.
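
A minimal sketch, for the programmers, of that "somewhere else" (the
packing into rows of five is schematic; real formats pack bits into
bytes):

  # The flat bit string for the capital A above, cut into rows of 5
  # and printed as a 5 x 7 cell.
  bits = "00100" "01010" "10001" "11111" "10001" "10001" "10001"
  for row in range(7):
      chunk = bits[row * 5 : row * 5 + 5]
      print("".join("o" if b == "1" else " " for b in chunk))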

The Microsoft Typography Web pages show some excellent examples, as I
recall; just now, a friend needs some help with his computer and my time
is limited, else I'd research and give a URL. Imho, those pages are well
worth reading, commendably informative and free of marketing distortion
and annoyance.

One somewhat-makeshift solution to avoid Great Ugliness is to smooth the
effective outlines by creating gray-scale pixels that give the illusion of
smooth contours; one dramatically-visible example is the standard Windows
98 "splash screen" (think title), in which "Windows 98" has been processed
this way, clearly visible on a large (say, 17-inch) monitor. However,
fuzzy edges are to be avoided in typography, and this process is largely a
stopgap, imo. For technical and historical reasons not important here,
this process of creating the illusion of smooth contours is sometimes
called "antialiasing".

For sharp-edged typography, one can do worse than look at Verdana, which
Matthew Carter designed specifically for use on limited-resolution
computer screens. (Mine runs about 90 dots/inch, typical.)

However, for decent real-world computer typography, starting to create a
bitmap as described above is not enough, and gives very unsatisfactory
results, especially at smaller sizes. (See the Microsoft Typography pages;
there might be better references.) PostScript has its own way of modifying
the bitmap created by simply superimposing an outline onto a grid; I've
read more about the details of TrueType, though.

OK: I know I've lost some people. What we're concerned with is creating a
small bitmap image that can be simply and "dumbly" copied to the proper
place to create text to be printed. (Placement is according to tables in
the TrueType font file, I'm essentially certain.)
Consider this: you have a beautiful filled outline of your NovaFont "A", a
serif style. You now place a transparency with a grid of squares over it,
but there are only 16 squares across the full width of the A! The squares
are too big. Oh, yes, I forgot: for computer typography, remember that
all data is binary. Each square can only be totally filled (black) or
totally empty (white, or background color).

Another way to think of it is to take a transparent sheet with the exact
outline of your NovaFont A on it, place it over a coarse grid, and decide
which squares should be filled in to create the best version of the A. If
you think you have done decently, try an "m". Hah!
You'll be lucky if the three vertical strokes all come out the same width;
some will be double the width of others.
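
To make the trouble concrete, a tiny demonstration; the stem positions are
invented numbers, but the effect is the genuine one. With only on/off
pixels, each stem edge must snap to a whole pixel, so identical stems come
out different widths:

  # Three stems of the "m", each 1.4 pixels wide at this size, landing
  # at different fractional positions on the pixel grid. (Positions
  # are invented for illustration.)
  stems = [(1.2, 2.6), (5.0, 6.4), (8.8, 10.2)]   # (left, right) edges
  for left, right in stems:
      pixels = round(right) - round(left)         # naive edge snapping
      print(f"stem {left:>4} .. {right:>4}  ->  {pixels} pixel(s) wide")

Run it, and the first stem comes out 2 pixels wide while the other two
come out 1 pixel wide, from three stems that are identical in the outline.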

I'm writing this in a rush, but the situation in computer text rendering
is fundamentally the same: a limited-resolution grid has to represent a
subtle shape with lots of detail.

[Aside: back about 15 years ago, in the days of DOS and codepages,
monitors in effect allocated a small square (or nearly-square) cell to
each character; the text coming from the computer filled in those cells
with bitmaps obtained from the codepage currently in use. Monospacing, of
course. Printers either had their own collections of bitmaps internally,
or else the computer downloaded a font into the printer, again a bitmap
collection. Printers back then quite often handled one character at a
time.]
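
[Continuing the aside, for the programmers: a toy model of that
arrangement. The 3 x 5 cell size and the two-character "codepage" are
invented for readability; real adapters used cells such as 8 x 8 or
9 x 16. The codepage is simply a table from character code to a
fixed-size bitmap, and the screen is tiled with those cells:

  # Toy DOS-style text rendering: look each character code up in the
  # current "codepage" (a table of fixed-size bitmaps) and tile the
  # cells side by side, one row of every cell at a time.
  CODEPAGE = {
      ord("H"): ["o o", "o o", "ooo", "o o", "o o"],
      ord("I"): ["ooo", " o ", " o ", " o ", "ooo"],
  }
  def render(text):
      cells = [CODEPAGE[ord(ch)] for ch in text]
      for row in range(5):               # same row of every cell
          print(" ".join(cell[row] for cell in cells))

  render("HI")
]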

OK: We're trying to fit more detail into fewer pixels than can possibly
render anything decent. Enter hinting...

Hinting is not, as its name suggests, a matter of advice or choice. In
TrueType, it's a deterministic process that actually uses the TrueType
instruction language, a collection of very specialized computer
instructions, to modify the "raw" bitmap so that the ultimate glyph looks
at least acceptable, is consistent (to the extent possible) with the other
glyphs in the same face/font, and finally comes as close as it can to the
design created by the typographer.
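
A toy sketch of what such grid-fitting accomplishes, using the same
invented stems as in the "m" demonstration above; to be clear, real
TrueType hinting is carried out in that specialized instruction language,
glyph by glyph, not in Python:

  # Toy grid-fitting: choose one integer width for all three stems,
  # then snap each left edge to a whole pixel. Now the widths agree.
  stems = [(1.2, 2.6), (5.0, 6.4), (8.8, 10.2)]
  width = max(1, round(sum(r - l for l, r in stems) / len(stems)))
  for left, right in stems:
      snapped = round(left)
      print(f"stem {left:>4} .. {right:>4}  ->  "
            f"pixels {snapped} .. {snapped + width}  ({width} wide)")

One shared width is chosen and every stem snapped to it, so the three
strokes finally agree.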

TrueType needs an intelligent human with patience, a lot of ability, and
excellent esthetic judgment to tell it what to do. There are apparently
fewer than a dozen living capable TrueType "hint-designers", and they
deserve a lot of credit.

I've tried hard to explain this, but the background of some Qalamites is
such that more help might be needed. Please understand that I'm not an
expert in computer typography, only a dilettante who is quite interested
in the subject. Back in the late 1940s, I think, I was occupied with the
question of what is the minimum acceptable size for decent dot-matrix
characters (answer: 5 x 7 for caps only; 5 x 9 for caps with l.c.
descenders; for CJK, 24 x 24 (?) for Japanese*). If any experts are
reading this, please feel free to enlighten me! I do think I have the
basics down, reliably, though.
*I suspect that Japan pushed the development of 24-pin printers for this
reason.

To reply to Peter's original query, while conventional usage doesn't
regard individual glyph bitmaps as images, in practical fact, they are.

===

I do think that computer typography is quite as important as calligraphy
with pen or brush, as well as traditional "analog" typography. To the
extent that it influences perceived text, or glyphs in more detail, I
think it might be a fit topic.

Better terms, theoretically, would be "quantized" ("digital") versus
"continuous" ("analog") formats for rendered glyphs.

===

Apologies for leftover typos; apparent weirdness occurs because of the
Dvorak letter layout.

My regards to all,

--
Nicholas Bodley /*|*\ Waltham, Mass.
Opera 7.5 (Build 3778), using M2