Re: Stacking up on standard works

From: Brian M. Scott
Message: 69168
Date: 2012-04-01

At 10:05:21 PM on Saturday, March 31, 2012, Tavi wrote:

> --- In cybalist@yahoogroups.com, Piotr Gasiorowski
> <gpiotr@...> wrote:

>>> No, no. It's the sound correspondences which should be
>>> predictable

>> Warning: you are not using this word in its normal
>> meaning. Sound correspondences, once established, may
>> have some predictive power, but they are not predictable.

> That's right. I meant lexical correspondences should be
> (ideally) predictable from sound correspondences.

>>> (i.e. "regular" in the traditional IE-ist jargon).

>> Why IE-ist? Why "jargon"? Other linguists call them
>> "regular" as well. Regular not in some strictly technical
>> IE-ist sense of the word, but regular as everybody
>> understands this word: recurrent, systematic and
>> pervasive.

> But "regular" means it obeys a rule,

No, 'regular' means that its occurrence (in whatever
environment is specified) *is* a rule. An empirically
observed rule, but a rule none the less.

> i.e. what neogrammarians called a "sound law".

Exactly: that's precisely what a regular correspondence is:
a Lautgesetz.

> So I'd prefer "recurring" or "recurrent" instead.

Thereby downgrading or outright ignoring the important
requirements of systematicity and pervasiveness.

>>> You see a pattern here and there, then you make a
>>> hypothesis and test it, and if it works, voila!

>> You make it sound very simple, but it *isn't* that simple
>> at all. Patterns are only too easy to see. Any random
>> process may generate "patterns". Even the stars in the
>> sky form patterns.

> I disagree. Randomness is just the opposite of a pattern.

No, it isn't. There is in fact no really good general
definition of randomness, but any mathematician can tell you
that stochastic processes can generate very strong patterns;
see, for example, the chaos game method of using iterated
function systems to generate fractals. Another example is
the ubiquity of power laws describing the distributions of
random phenomena. See also the Bak-Tang-Wiesenfeld sandpile
model.

<http://en.wikipedia.org/wiki/Chaos_game>
<http://en.wikipedia.org/wiki/Power_law>
<http://en.wikipedia.org/wiki/Bak-Tang-Wiesenfeld_sandpile>
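To make the chaos game concrete, here is a minimal Python sketch (my own illustration, not from the thread): every step is a purely random choice of vertex, yet the orbit converges onto the Sierpinski triangle, a highly structured fractal.

```python
import random

def chaos_game(n_points=5000, burn_in=20, seed=42):
    """Play the 'chaos game': from the current point, jump
    halfway toward a randomly chosen vertex of a triangle.
    Although each step is random, the resulting cloud of
    points traces out the Sierpinski triangle."""
    rng = random.Random(seed)
    # Vertices of an equilateral triangle with unit base.
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    x, y = vertices[0]  # start at a vertex (any interior point works)
    points = []
    for i in range(burn_in + n_points):
        vx, vy = rng.choice(vertices)       # random vertex
        x, y = (x + vx) / 2, (y + vy) / 2   # halfway jump
        if i >= burn_in:                    # discard transient steps
            points.append((x, y))
    return points

points = chaos_game()  # 5000 (x, y) pairs on the attractor
```

Scatter-plotting `points` makes the pattern unmistakable; the point is simply that strong, reproducible structure can emerge from a stochastic process.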

Moreover, it is *very* well known that human beings are
extremely good at seeing patterns, whether those patterns
really exist or not.

>> How do you know that the patterns you see "here and
>> there" in two different languages are evidence of their
>> shared ancestry?

> IMHO all you can prove (to a reasonable degree of
> certainty) is that a set of words in language A and
> another set of words in language B have a shared *source*.

One can often do a great deal more, e.g., distinguish
inheritance and borrowing from the same source.

> The problem is that the lexicon of a given language is
> typically made up of several strata (multi-layered) due to
> language replacement and contact processes, and it isn't
> always easy to tell which is the "inherited" part.

This is a commonplace. It's also of limited relevance to
reconstruction of proto-languages. If F is a linguistic
taxon, proto-F is simply the most recent common ancestor of
F; its own history is largely irrelevant to its comparative
reconstruction from F. For that history we must resort to
internal reconstruction, and perhaps eventually to
comparative reconstruction of a bigger taxon at a deeper
historical level.

Brian