onintelligence.org Forum Index -> General Discussion On the Book -> On the Representation of Information


bkumnick
Joined: 26 Feb 2008
Posts: 14
Location: Sunnyvale CA, USA

Posted: 02-27-08, 12:33 am
Post subject: On the Representation of Information

Hi,

Here is some food for thought regarding limitations of the representation of information.

The representation of existence and the representation of thought are both closed, complete, consistent, and context dependent. The universe contains everything that exists, so its representation must be complete and closed in the mathematical sense. The behavior of the universe is consistent with the known laws of physics, so its representation must be at least as consistent as those laws. We think in context. Nature represents existence in context: the behavior of each thing that exists depends on the environmental context it exists in and on the behavior of the things it is composed of. Each thing is defined by its relations to the things it is composed of and by its relations to the things in its existential context or environment. Thus, the representation and encoding of existence must be context dependent. Any other option would be exponentially more complex.

The symbolic representation of information, by contrast, is incomplete, frequently inconsistent, and context independent. The representation of symbols does not depend on the context they are used in. Taken together, these factors make it impossible for the representation of existence and the representation of thought to be based on information.

We need to stop trying to model Nature in terms of information and figure out how to represent it using a non-symbolic, relative-encoded, context-dependent direct representation. Only when the capabilities of the representation we use to model existence or thought match those of existence and thought, respectively, will we fully understand either. Any other approach will be exponentially more complex and ultimately doomed to fail.

Just because information can be used to represent anything doesn't mean it is the only possible representation. Just because information can be used to represent anything doesn't mean it can be used to represent everything. Just because information can be used to represent anything doesn't mean it is the most efficient, least complex representation for representing everything. Just because we use information in computers and use information to communicate doesn't mean our brain uses information internally to think. The same is true of the representation of existence. Just because we can model parts of existence using information doesn't mean the representation of existence itself is based on information. In fact, there are exponentially less complex representations, both for the representation of existence and for the representation of thought.

AI practitioners who assume the brain must be too complex, too cluttered, or too inefficient to be worth studying are basing their conclusions on the mistaken assumption that neurons represent thought using information. Granted, if neurons did represent thought using information, they would be correct. But neurons don't. We communicate with information, but the representation of information is far too complex, incomplete, inconsistent, and domain limited to use for the representation of thought. The representation of thought is mathematically closed, complete, consistent, and completely domain independent. It is so powerful it can represent not only anything in the universe, but anything we can imagine. It has no domain limitations. It is not subject to the limitations of Gödel's incompleteness theorems, and it is exponentially less complex than the representation of information. It is not symbolic and it does not use a fixed encoding.

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 09-29-09, 05:48 am

You might wish to read Chris Langan's CTMU (Cognitive Theoretic Model of the Universe), or Gregory Chaitin's diatribe on mathematical philosophy.

bkumnick
Joined: 26 Feb 2008
Posts: 14
Location: Sunnyvale CA, USA

Posted: 10-01-09, 03:02 pm

I read the first 15 pages of Chris Langan's CTMU paper, critiqued it, and sent him the critique. The CTMU approach won't work because it is based on symbolic information processing, and it will run afoul of the limitations imposed by Gödel's incompleteness theorems.

More fundamentally, it is not possible to create a truly self-aware, sentient thinking machine based on symbolic information processing. The representation of information encodes syntax, but not semantics. The meaning of information is not represented, encoded, or contained in the symbolic representation of information. The meaning of information is interpreted and understood by the mind of an intelligent observer. The brain does not base its representation of thought or knowledge on the representation of information. Information does not have the "right stuff" to represent meaning. A completely new knowledge representation and a new computational model are required.

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 10-01-09, 04:52 pm

I very much agree (with the idea that a more fundamental language of sensing/processing/self-organizing must be implemented before symbolic language processing is interfaced, if we are trying to faithfully reproduce the function of the neocortex). Intelligence before symbols.

These are exciting ideas that Jeff and his associates banter about here.

bkumnick
Joined: 26 Feb 2008
Posts: 14
Location: Sunnyvale CA, USA

Posted: 10-01-09, 08:08 pm
Post subject: Deep problems in the representation of information

We cannot use symbolic information or symbolic computation as the basis for the development of sentient systems due to multiple DEEP PROBLEMS in the fundamental representation of information.

I consider these deep problems because all logic, mathematics, natural and artificial languages, communication, computation, and most of science are based on the concept of "information". In essence, all of humankind's written records are based on information of one kind or another. If we must replace the use of information as the substrate for the development of sentient computation, we need to dig deep indeed. Information is so ingrained in our education and communication that it is difficult for most people to even conceive of any alternative representation.

The concept of information suffers from the following deep problems as it relates to its use as the basis for sentient computation:

1) We think from the first person direct perspective. We represent information from the third person indirect perspective. Cogito ergo sum. I think, therefore I exist. This is fundamental. It is impossible for an indirect representation to represent anything directly. Information represents everything using reference semantics. Information is always a label or reference that represents something else. Information can never represent anything directly, in the sense of first person direct representation. Even if we write "I thought that", the "I" is really a proxy, substitute, or representation of the writer. The "I" is not the writer himself or herself. "I thought that" is a third person indirect representation of a first person direct statement. I don't think there is any way around this. This limitation is fundamental to the concept and formal definition of information. It is built into the foundation of sentential logic, set theory, mathematics, language, and all symbolic computation. This fundamental limitation prevents a computer from thinking for itself, from the first person direct perspective. There is no way an information processing system can have a true sense of self if all its computation is based on indirect representation.

2) When we think, our mind allows us to remember and understand the semantics or meaning of information or knowledge. We inherently understand the meaning of our own knowledge, and we can interpret and understand the meaning of information and convert the information into knowledge for subsequent storage and recall. Somehow, our brain must use an internal knowledge representation that can encode, represent, store and recall semantic meaning, not just the syntax of information. In contrast, information only encodes and represents syntax. Information is just a sequence of symbols with syntax, but no meaning outside the mind of an intelligent observer. The meaning of information is not encoded or represented by information. A book cannot understand the meaning of the writing contained within it. A computer cannot understand the meaning of the symbol sequences it manipulates. It can recognize symbols and symbol sequences and manipulate symbols based on preexisting instructions, but there is more to meaning than recognition of symbols and symbolic manipulation. A symbolic information processing system cannot represent, process, store, or recall that which information does not even encode or represent. I don't think there is any way any information based computational system can work around this fundamental limitation.

3) We think in context. It is reasonable to assume we utilize a context sensitive encoding, and/or a context sensitive representation of thought and knowledge. Doing so would be much less complex, and much more efficient, than using a context free encoding and context free representation, and then being forced to "simulate" the context using "higher-order" representational structures. Why represent, store, and process representation for context dependencies if it can be built into the underlying knowledge representation or encoding? On the other hand, information is encoded and represented in context free form. For example, we always use the same symbol to represent the letter "e" in the Latin alphabet. We always spell the same word the same way. We always use the same binary encoding to represent the same number. For example, we always represent the number 5 using the binary sequence 101 (see the short code sketch after this list). Information requires the use of context free encoding and representation to support efficient and effective communication between individuals. However, thought is private. Why should the same requirements apply to the representation of thought? The different parts of the brain have no need to send each other information, decode it, and interpret it. Why can't each brain use a unique private encoding specifically optimized for maximal compression of the unique knowledge that it stores and processes? Why can't the encoding encode the semantic meaning along with the syntax? While it may be possible to use a context free representation to represent context dependent thought, it would certainly be a lot more complex and much less efficient than it would be to move the context dependencies into the encoding or knowledge representation.

4) Gödel's Incompleteness Theorems. All fixed formal symbolic systems above a certain minimal complexity (that of Peano arithmetic) are either incomplete or inconsistent. Yet thought appears to be both complete and consistent. (I am using the terms "complete" and "consistent" in their formal mathematical sense). Thought outruns logic. We can think about things that we can't represent using logic or any logic based fixed formal system. We can use a multitude of different fixed formal systems to work around this problem, but if we do, then to represent the entire universe of knowledge, we must find a way to translate between each representation and ensure the mutual consistency of all the interdependent representations. This is all very complex, cumbersome, error prone, and inefficient. I can't see the brain using multiple representations to get around this problem if a single representation can avoid the problem altogether. It would be too much of a kludge, too complex, too slow, and too inefficient.
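To make the context-free encoding point (3 above) concrete, here is a minimal C# sketch. The example and its names are illustrative additions, not from the original posts; it simply demonstrates the property being described: a fixed encoding maps the same value to the same symbol sequence no matter what that value refers to.

```csharp
using System;

// Minimal illustration of a context-free encoding: the same value maps
// to the same symbol sequence regardless of what it refers to.
class ContextFreeEncodingDemo
{
    static void Main()
    {
        // 5 is always encoded as the binary string "101", whether it
        // counts apples or volts. Nothing about the surrounding context
        // can change the output of a fixed encoding.
        string appleCount = Convert.ToString(5, 2);  // "101"
        string voltLevel  = Convert.ToString(5, 2);  // "101"

        Console.WriteLine(appleCount);               // 101
        Console.WriteLine(appleCount == voltLevel);  // True
    }
}
```

Convert.ToString(value, 2) is the standard .NET base-2 conversion; the point of the sketch is only that no surrounding context can alter the string it produces, which is exactly the context independence the post attributes to information.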

I have invented a new knowledge representation that avoids the consistency and completeness limitations of Gödel's incompleteness theorems. The key to overcoming those limitations is to create a representation based on direct representation instead of indirect representation. Using a direct representation, one can create a fixed formal system of minimal complexity that is both consistent and complete. This can be done by creating a first person direct representation of abstraction that represents a non-extensible upper ontology. The upper ontology is based on the representation of abstraction: it is the first order abstraction of abstraction itself. Since anything can be represented as an abstraction, it is then possible to represent everything else in the universe of thought indirectly, in terms of an abstraction.

Simultaneously, in a single representation, this allows us to think both directly, from the first person direct perspective in context, using the representation of abstraction, and indirectly, by using abstraction to form an abstract representation of anything we can think about. In one shot, this solves problems 1, 2, 3, and 4 above. It will allow us to create sentient systems that can think and understand the meaning of knowledge from the first person direct perspective in context. Cogito ergo sum in a sentient machine.

BTW: biological neurons are direct physical implementations of the upper ontology of abstraction.

Best regards,

Barry Kumnick

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 10-02-09, 03:15 am

"biological neurons are direct physical implementations of the upper ontology of abstraction."

But this seems to say: "My work here is done." Is it?

Tell me then what your next step will be, now that you have reduced "cogito ergo sum" to a 4D vector equation? If it is impractical (now, forever?) to implement, other than to point to your head, you are finished then, right?

For a while, you were sounding a bit like Buckminster Fuller (who was great), but now this is beginning to remind me too much of Louis Savain (a crackpot who sometimes posts here), whom I loathe. The 4D vector paradigm smacks of Savain. He is also rampantly anti-Einstein and against all that Einstein represented. I don't know if this was intentional or not.

To convince me that you are not Louis, you'll have to tell me what you know about Einstein's Theory of Relativity (pick whichever one you are most comfortable with). If spacetime doesn't "move", maybe we have a problem. Savain never overcame his confusion over Zeno's paradox, which he apparently learned about in divinity school. Irwin Corey had more than one illegitimate comedic child. Good for laughs; not much else.

Have some respect. People are trying to do serious work here.

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 10-02-09, 06:37 am

Another hint: It should not be necessary to discuss Gödel's incompleteness theorem in relation to implementing an artificial intelligence.

We already knew there were big holes in symbolic representations (and logic) before Gödel, but the main issue with using symbolic representation to create an artificial intelligence is that we don't use it. We are intelligent first. Reading and writing symbols (and thinking and communicating with other people through them) come much later.

bkumnick
Joined: 26 Feb 2008
Posts: 14
Location: Sunnyvale CA, USA

Posted: 10-02-09, 10:01 am

Gödel's incompleteness theorems are important because they PROVED that formal systems (above a minimal expressive power) are either incomplete or inconsistent. The brain's underlying knowledge representation is complete AND consistent. If we don't find a practical and efficient way around incompleteness and inconsistency, then the underlying knowledge representation will not be complete enough to represent everything we can think, or it will be inconsistent (or both). Using a hodgepodge of multiple representations to work around the consistency and completeness issue is too complex and too inefficient. Doing so creates a combinatorial increase in the complexity of the solution. The same problem is a root cause of the complexity and brittleness problems found in large-scale software development today.
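For reference, here is a compact statement of the theorems being invoked. This is the standard textbook formulation, added for precision; it is not specific to anything in this thread.

```latex
% Goedel's incompleteness theorems (standard statement).
% Requires amsmath and amssymb.
% Let $F$ be a consistent, effectively axiomatized formal system that
% interprets enough arithmetic (e.g., Peano arithmetic).
\begin{align*}
&\text{First:}  && \text{there is a sentence } G_F \text{ such that }
                   F \nvdash G_F \text{ and } F \nvdash \neg G_F. \\
&\text{Second:} && F \nvdash \mathrm{Con}(F)
                   \quad \text{($F$ cannot prove its own consistency).}
\end{align*}
```

Note the hypotheses: consistency, effective axiomatization, and enough arithmetic. Dropping any of them voids the theorem, which is why the precise scope matters when applying it to a proposed knowledge representation.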

Yes, I agree biological intelligence is logically and physically prior to symbolic communication. Symbolic communication and logic are a product of intelligence. Biological intelligence is not a product of logic or symbolic communication.

My work is far from done. In this type of forum, there is limited space to write about a single topic, and discussing the detailed relationships between the representation of abstraction and neurobiology is too big a subject to cover in one post. All I intended was to point out that there is a direct relationship between the abstract theory of computation and an existing biological implementation. It is useful to use neurobiology as an exemplar of a working intelligence, and to learn what we can from it.

Ongoing developments include documentation of the knowledge representation and computational model and exploratory development of computer simulations for quantitative feasibility assessment.

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 10-02-09, 03:06 pm

OK; looks fine to me, but I had to check.

But our own (non-symbolic) internal representation is not entirely consistent either. Just as computers manipulate numbers without knowing what a number actually is, we somehow manipulate sensor data (lengths, durations, intensities, or whatever) without knowing what those are, either. If yours is consistent so far (or seems to be), it might just be a simple scope problem.

bkumnick
Joined: 26 Feb 2008
Posts: 14
Location: Sunnyvale CA, USA

Posted: 10-02-09, 07:57 pm

Thanks.

You appear to be using a different definition of "consistent".

By "consistent" I mean that the knowledge representation is guaranteed to be internally self-consistent regardless of what it is representing. Because the representation only uses a single fixed upper ontology to represent everything in the universe of discourse, there is only one "universal" domain of representation and one function used to represent everything. One immutable function cannot be inconsistent with itself (barring the occurrence of computer hardware errors). Even the brain can become inconsistent if it malfunctions due to disease. Not much can be done about that, beyond using transactional support in the database and backing up the system periodically.

One of the big problems in conventional software development is that we must use multiple "representations" to represent different problems. In other words, we must write distinct programs to handle each unique type of problem. In addition, it is very difficult, and often impossible, for conventional programs to handle unexpected input sequences, or unexpected data types if they occur in the input sequences. The representation I am using doesn't suffer from any of those limitations. It is a "universal computing machine". It doesn't suffer from "brittleness" because there is no predesigned program with preprogrammed designers' expectations to be violated. The system is designed to be able to represent anything that can be experienced within the limits of the frequency range and sensory modalities it processes.

Abstractions are fired or "instantiated" when their intensional conditions and relations are satisfied by their input fields. Therefore, if an abstraction occurs, its occurrence IS its meaning. We understand the meaning of something when the representation of the conditions and relations that define its "intentional meaning" are activated and cause the associated neuron or neurons to fire. The occurrence and satisfaction of the intensional conditions and relations in an abstraction causes the instantiation or firing of its extensional "instances". Hence it is inherently "aware" of its own meaning when it is active. That is the cause of low-level first person direct self-awareness. Higher-level self-awareness is caused by the same process, except at higher levels of abstraction. The lowest-level raw sensory inputs have no intensional definitions; they are just input signals, and at that level there is no real meaning. Meaning emerges as those signals are integrated with others at higher levels of abstraction.
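As a rough sketch of the firing mechanism described above, here is a reader's paraphrase in C#. The class names, the predicate form of the conditions, and the example values are all assumptions for illustration; this is not Barry's actual representation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch (not the actual representation): an "abstraction"
// fires, i.e. is instantiated, when every intensional condition in its
// definition is satisfied by the current input field.
class Abstraction
{
    public string Name;

    // Intensional definition: predicates over the input field that must
    // all hold for this abstraction to instantiate an extensional instance.
    public List<Func<Dictionary<string, double>, bool>> Conditions =
        new List<Func<Dictionary<string, double>, bool>>();

    public bool Fire(Dictionary<string, double> inputField)
    {
        return Conditions.All(c => c(inputField));
    }
}

class Demo
{
    static void Main()
    {
        // A toy "edge" abstraction that fires when two hypothetical
        // low-level inputs co-occur.
        var edge = new Abstraction { Name = "edge" };
        edge.Conditions.Add(f => f["contrast"] > 0.5);
        edge.Conditions.Add(f => f["orientation"] > 80 && f["orientation"] < 100);

        var field = new Dictionary<string, double>
        {
            { "contrast", 0.9 },
            { "orientation", 90.0 }
        };

        // In the post's phrase, its occurrence IS its meaning: the firing
        // itself signals that the defining conditions hold.
        Console.WriteLine(edge.Name + " fired: " + edge.Fire(field)); // True
    }
}
```

What the sketch tries to capture is that the abstraction carries its own applicability conditions, so its activation just is the satisfaction of its intensional definition, rather than a label that an outside observer must interpret.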

Barry

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 10-03-09, 09:29 am

"intentional"

Sounds like a great approach, Barry. The "consistency" of the model sounds like the same one we apply -- we believe our senses until and unless we later discover that we have been "tricked" (like the Road Runner painting a tunnel scene on the side of a wall so the coyote slams into it).

It also sounds as if you are using object-oriented programming tools to develop this ("instantiate", etc.). This suggests the right sort of implementation. You must have needed to tinker with the compiler a good deal, or are you simply using an interpreter instead? I have been away from this technology for some years. You needn't supply any details here that give away the whole approach.

I'm just trying to get a handle on how far the best approaches have gotten, to get ready for a presentation I need to do in February.

What sorts of tests have you been able to run that yielded the most surprising (good) results?

danshawen
Joined: 28 Sep 2009
Posts: 37

Posted: 10-03-09, 09:32 am

Be forewarned, everyone: Jeff's prize very likely depends on your system correctly identifying a cat wearing a "dog" mask!

bkumnick
Joined: 26 Feb 2008
Posts: 14
Location: Sunnyvale CA, USA

Posted: 10-03-09, 12:55 pm

I implemented a system that operated on similar principles (performing dendritic integration over a simulated 3D model of a biological neural network) about ten years ago in C++. I had not yet invented the mathematical algorithm required to efficiently represent dendritic integration, but I was able to use a "brute force" OO approach that worked functionally but was too slow for practical use. The network was able to learn context-dependent input patterns and sequences, and to perform abstraction and generalization, so I knew I was basically on the right track. I just needed to find the right mathematical equation to efficiently represent and perform dendritic integration. The brute force approach was just too inefficient and too slow.
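For readers unfamiliar with the term, here is a deliberately brute-force sketch of dendritic integration in C#. The structure, attenuation rule, and numbers are assumptions added for illustration; neither the original C++ system nor the newer closed-form algorithm appears in this thread. Each dendritic segment sums its weighted synaptic inputs, attenuated by distance from the soma, and the neuron fires when the integrated total crosses a threshold.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Brute-force sketch of dendritic integration (illustrative only).
class Synapse
{
    public double Weight;
    public double Input;   // presynaptic activity, 0..1
}

class DendriticSegment
{
    public double DistanceFromSoma;   // arbitrary units
    public List<Synapse> Synapses = new List<Synapse>();

    // Local sum of synaptic drive, attenuated by distance to the soma.
    public double Integrate()
    {
        double local = Synapses.Sum(s => s.Weight * s.Input);
        return local / (1.0 + DistanceFromSoma);
    }
}

class Neuron
{
    public double Threshold = 1.0;
    public List<DendriticSegment> Segments = new List<DendriticSegment>();

    // The "brute force" step: visit every segment and every synapse.
    public bool Fire()
    {
        double total = Segments.Sum(seg => seg.Integrate());
        return total >= Threshold;
    }
}

class Demo
{
    static void Main()
    {
        var neuron = new Neuron();
        var proximal = new DendriticSegment { DistanceFromSoma = 0.5 };
        proximal.Synapses.Add(new Synapse { Weight = 0.8, Input = 1.0 });
        proximal.Synapses.Add(new Synapse { Weight = 0.9, Input = 1.0 });
        neuron.Segments.Add(proximal);

        Console.WriteLine("fired: " + neuron.Fire()); // fired: True
        // Cost grows with (segments x synapses) per neuron per time step,
        // which is why a closed-form algorithm matters at scale.
    }
}
```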

Several of the software packages and tools I used for the original system are now obsolete and no longer work with current OSes. I am redesigning the system to use my new dendritic integration algorithm, and I am porting it to C#. Technologies involved in the new design include C#, .NET 3.5, LINQ to Entities, SQL Server, Mathematica, and Managed DirectX (for high-speed scientific visualization).