There is no "best" way to look at it
I've started reading Doppelganger by Naomi Klein, and in a true irony the key point seems to be illustrated by the "Klein bottle".
This is a variant of the Moebius strip, reminiscent of M.C. Escher's "Waterfall",
and of what Gödel, Escher, Bach describes as "strange loops".
Another favorite example is intransitive dice, for which the term "best"
does not exist: whichever die you pick, going first, I can always pick one
of the remaining three that will beat your choice 2/3 of the time.
A beats B, B beats C, C beats D, and D beats A (see Wikipedia).
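The cycle can be checked directly. A minimal sketch using Efron's dice, the standard four-die construction from the Wikipedia article (the face values below are that standard construction, not anything specific to this post):

```python
# Efron's intransitive dice: each die in the cycle A > B > C > D > A
# beats the next with probability exactly 2/3.
from fractions import Fraction
from itertools import product

dice = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [3, 3, 3, 3, 3, 3],
    "C": [6, 6, 2, 2, 2, 2],
    "D": [5, 5, 5, 1, 1, 1],
}

def p_beats(x, y):
    """Probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(dice[x], dice[y]) if a > b)
    return Fraction(wins, len(dice[x]) * len(dice[y]))

for x, y in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    print(f"P({x} beats {y}) = {p_beats(x, y)}")  # 2/3 each time
```

So no single die is "best": for every die there is another that dominates it.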
=========================
Standard Western education never even exposes us to such things that cannot be laid flat;
even PhD-level training in most fields does not help.
We seek "better and better" ways to grab hold of ("grasp") such things,
and keep failing, and assume that with just a little more work we will
surely achieve our (impossible) goal.
In fact, the whole familiar process of breaking something into parts and
looking at each separately is exactly the wrong thing to do. The problem is not with the parts; the parts are fine. The problem is the relationship between the parts.
The mountain will not come to Mohammed; we must go to it.
There is no "right way to look at it" that additional conversation and discourse will surely lead us to.
I suspect that all serial-string logic has this weakness, and "step-by-step" progress along the string remains open to such twisting.
I think we need to move to at least two-dimensional, "image-processing" techniques, where a huge amount of cross-structure prevents skew and twisting, like the diagonal braces that strengthen the very weak rectangular shapes in bridges, which otherwise could collapse sideways.
While I recognize that this concept makes me a heretic, I suspect as well that the "Word of God"
cannot be represented in any language using serial symbol strings, i.e., what we call "words".
Not least because the "meaning" of a "word" is highly context-dependent,
so the meaning of any sentence will change over time.
This is why, for example, the US Supreme Court is always stuck with the problem of
figuring out how to interpret prior decisions, especially those over 100 years old.
What was the intent? Do those "words" continue to express that intent? Should the
Court be true to the intent, or true to the words (and what those words mean today)?
So, sadly, even if, say, the words of the Koran are kept sacred and unchanging,
the meaning people take from them keeps changing over time.
It is really inconvenient that simple approaches to seeking truth and meaning don't work.
Partly for this reason, this blog is titled "tree-circles": a whole different way to find
meaning in signals, one that is neither deductive nor inductive.
Another problem with using 1-dimensional symbol-strings ("sentences of words")
is that this type of reasoning is incredibly noise-sensitive. In theory, for
a perfect "Turing Machine", various things are considered "computable",
but in the world our bodies live in, nothing is free of noise, nothing lasts
forever, and most things decay rapidly, so the infinite, immutable "tape" of
1's and 0's can never actually be implemented.
So even in the golden-haired child of Science, algebra and calculus,
a single error becomes a single point of failure. A 20-page computation
with an error on page 3 is meaningless and "wrong".
Theologians may argue forever over "the meaning" of a single word,
but their whole process is inescapably flawed, not to mention
context-sensitive over time.
On the other hand, images are very noise-tolerant: even with multiple "errors", the
end result may still be correct. You can "see through" "salt and pepper" noise and
recognize what an image represents.
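A toy sketch of that noise tolerance, my own illustration using the standard 3x3 median filter (nothing here beyond textbook image processing): corrupt a flat image with a few "salt" (white) and "pepper" (black) pixels, and the filter recovers every one, because the clean majority in each neighborhood outvotes the noise.

```python
# Salt-and-pepper noise flips individual pixels, but a 3x3 median
# filter restores them: the median of each 9-pixel neighborhood is
# still a clean value as long as most neighbors are uncorrupted.

def median_filter(img):
    """Apply a 3x3 median filter to interior pixels; edges copied as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighborhood = [img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(neighborhood)[4]  # median of 9 values
    return out

# A flat gray image with a few "salt" (255) and "pepper" (0) pixels.
img = [[100] * 7 for _ in range(7)]
img[2][3] = 255   # salt
img[4][2] = 0     # pepper
img[3][5] = 255   # salt

clean = median_filter(img)
print(all(v == 100 for row in clean for v in row))  # prints True
```

A single flipped bit kills a serial computation; here three corrupted pixels change nothing about what the image "says".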

I got into a great deal of trouble in my computer science classes for arguing that, if the founders had had image-processing chips and computers, the whole field would have been based on image processing, not symbol processing, and we would now have "context processing engines", not just "content processing engines". It turns out I made my arguments the week our department chair had just won the Turing Award. Bad timing on my part.
And meaning, or the ability to engage and be heard, depends so much on timing and fickle social context. People never finish reading even your sentences or arguments; they stop partway in, "auto-complete" your meaning into something they grew up with, and discard your work as wrong because that meaning is wrong.
At least computer science has finally come up with Kubernetes and similar technology to save the entire context along
with the application "code" (symbol strings).
Nevertheless, the current golden-haired child of AI is the "Large Language Model", by which is meant a model built on a huge amount of language.
It seems safe to predict that this will migrate and evolve towards Large Image Models, or other 2-dimensional and larger bases.
The symbols in any language, like even DNA, have epigenetic auras, more than just metadata, which alter the meaning of the symbols enough to matter: enough that if you ignore them, your answers will not be stable or reliable.