009d-Naming and testing representations



A simple way to see how our representations of things infect our awareness and our view of the world is to do an exercise.

Right now, stop what you are doing, get up, walk around, and point to objects you see. As you point to each thing, name it out loud with a made-up name. Instead of using a name that ordinarily applies to the object, make up some other name. Use whatever name you want. The only requirements are that you don't censor your naming and that you say the made-up names as fast as you can, without thinking. This is an improvisation exercise.

Get up and try this exercise right now. 

What happened when you tried the exercise?  Did your experience change?   

Many people report their experiences becoming more vivid. Some report that the world seems to expand: details such as textures become more prominent, colors seem brighter, things seem bigger. Different possibilities for what things are and what they can be used for present themselves. Simply by naming things differently, your awareness of things seems to change, and the things themselves seem to change too.

This sort of simple experiment with awareness should be something that an AI can do as well. A human-like AI should be able to have an experience that alters its consciousness just a little bit, by giving things alternate names.
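To make this concrete for a machine, here is a minimal sketch in Python of the renaming exercise. It is only an illustration under assumed names: the object labels, the syllable inventory, and the function names are all made up here, not anything specified in this text.

import random

# A toy sketch of the renaming exercise for a machine: take recognized
# object labels and bind each one to a fast, uncensored, made-up name.
SYLLABLES = ["ka", "lo", "mi", "zu", "ren", "ta", "vex", "bo"]

def made_up_name(rng: random.Random) -> str:
    # Generate a nonsense name "without thinking": no filtering, no censoring.
    return "".join(rng.choice(SYLLABLES) for _ in range(rng.randint(2, 3)))

def rename_scene(labels: list[str]) -> dict[str, str]:
    # Map each ordinary label to an alternate, made-up name.
    rng = random.Random()
    return {label: made_up_name(rng) for label in labels}

print(rename_scene(["lamp", "chair", "window", "cup"]))
# e.g. {'lamp': 'zuka', 'chair': 'renlo', 'window': 'tabomi', 'cup': 'vexka'}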

What is interesting about such an exercise is that you may start to see patterns emerge in the naming and the objects. For example, a naming theme may emerge. You may find the naming reveals something about your own thoughts. Or the names you use may indicate a new kind of pattern between objects previously hidden to you. It is not uncommon in improvisational exercises for things to take a sexual bent.

Improvising is something most human beings can do. Most children do it more easily than most adults. But it should also be something a good AI can do.

An example of this naming occurs when we say "your other left" while giving directions. We are engaging in the same sort of renaming. In that context, though, the renaming is more purposeful, and it is engaged in by both participants in the exchange. The shifting of awareness that occurs makes the renaming meaningful.

This naming ability appears in very young children. It would not surprise any of us to see a small child with a few pebbles which they have named. The child can easily make up a story with the named pebbles as characters. A child shows one pebble to a parent and says, "This pebble is you and this pebble is me and we are going on a hike," and then proceeds to walk the pebbles along the parent's arm as if the arm were a hillside. This ordinary childhood action is often accompanied by a narrative.

Supposedly, in "REALITY", none of the things the child says are true. But they are explainable using our symbolic expressions of awareness and representation.  That simple childhood act is representation in action. 

pebble = AW(pebble) ≈ AW(pebble;parent) = pebble;parent
or simply: pebble = pebble;parent

Of course, "parent" is shorthand for a much more complex set of representations the child must have already developed. Even as the representations become more and more complex, we can model the awareness of the child and the representations the child is making. We just have to increase the verbosity and relationships of objects and awareness to each other. 

We may make errors in how we do the exact modeling of complex representations and experiences. But we can model all the activity this way. We can correct our errors to construct more accurate models that express all the kinds of awareness and representational activity that occur. The awareness and representation functions give us a way to model all kinds of representation and awareness symbolically.

The pebble-as-parent example is just one kind of behavior that demonstrates strong representational activity. Drawings and pictures are another strong representation. Writing is another. The stories children and adults make up give us a straightforward insight into how representation and awareness work. The pebble;parent is a kind of substitution.

This representation activity is applied over time to ever more complex ideas and situations. We develop very complex ideas and narratives. One thing we also do is test our representations and narratives. The primary tests we use for representations and narratives are consistency and predictability. 

If a child's representations are inconsistent, the child will readily change the story or abandon the idea. If a child's representations are not predictive, the narrative description is abandoned or a new representation is created. When a person figures out something is wrong, the person usually changes their mind about what a representation is or how the representation works.*

[*note: When a person cannot change their mind about a representational construct, what is inhibiting the change? I suggest that altering our ideas, our representations of things and how things interact and work, is driven by a need for consistency and prediction. The inhibition against altering a representation is likely driven by hidden consistency and inconsistency desires.]
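To make the testing idea concrete, here is a minimal sketch in Python of after-the-fact testing, using the two tests described above (consistency and prediction) plus an assumed "openness" knob for the inhibition mentioned in the note. The data layout, threshold, and function names are illustrative assumptions, not a method given in this text.

# A representation is modeled here as a dict listing what it expects to
# see and which held beliefs it contradicts. Both tests run AFTER the
# representation already exists and has been experienced.

def predicts(rep: dict, observations: list) -> float:
    # Fraction of observations the representation anticipated.
    expected = set(rep.get("expects", []))
    if not observations:
        return 1.0
    return sum(1 for obs in observations if obs in expected) / len(observations)

def consistent(rep: dict, beliefs: list) -> bool:
    # True if the representation contradicts none of the held beliefs.
    return not any(b in rep.get("contradicts", []) for b in beliefs)

def test_representation(rep: dict, observations: list, beliefs: list,
                        openness: float = 0.5) -> dict:
    # openness is the willingness to modify: 0.0 is the zealot who never
    # revises; higher values revise more readily when a test fails.
    if consistent(rep, beliefs) and predicts(rep, observations) >= 0.5:
        return rep                 # the representation survives its tests
    if openness <= 0.0:
        return rep                 # rigid: counterexamples are excluded
    revised = dict(rep)            # open: change the story to fit experience
    revised["expects"] = sorted(set(rep.get("expects", [])) | set(observations))
    return revised

rep = {"expects": ["sunrise"], "contradicts": []}
print(test_representation(rep, ["rain", "snow"], beliefs=[]))
# failed prediction, open enough to revise:
# {'expects': ['rain', 'snow', 'sunrise'], 'contradicts': []}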

When does the testing of a representation happen? Testing representations always comes after the fact of representation, not before. Prediction and consistency are values that come after the act of representation, and certainly after the existence of awareness. Consistency, or any other condition we apply to representations, must come after the representation is experienced.

Because testing comes after the fact of representation, after the fact of the experience of a representation, some experiences may be so powerful as to override the ordinary desire to make representations consistent and predictable. Being open to changing representations and narratives will naturally lead to changing representations and narratives. Being closed to modifying representations leads to the opposite behavior: representations become fixed and unchangeable.

In daily experience, we see this with the exclusion of experience and data; e.g., the zealot simply refuses to believe counterexamples to a belief even exist, let alone that they are valid. This rigidity toward representation modification presents different problems and benefits. On the one hand, non-modifying representations are predictable because they do not change with new information. On the other, non-modification of representations may have survival consequences.

The non-adaptation of representations is a feature of many science fiction stories about AI and robots. It is the robot that cannot adapt its representations, that acts the most like a machine, which presents us with the danger of machines run amok. In human beings, we refer to some examples of this behavior as "trying the same thing over and over expecting different results." In both cases, the representations become mechanistic.

Mechanization of whatever kind is explicitly non-adaptive. Adaptation means changing what you do and what you think. For our purposes, AIs must be adaptive.


-----


The traditional approach to AI is to build systems that perform some task that a person can perform. The problem with this approach has always been that while computers can now play chess better than almost all human beings, they cannot do almost anything a child can do.

This is because ideas of "right" and "wrong" have been put before ideas of awareness and representation.   Said another way, we put our tests for representations before the representations themselves. 

If we put a one-year-old child in front of a chess board, the child can begin playing with the chess pieces. We have no computer capable of doing the same thing. Our chess-playing computers do not recognize that knights are really horsies and that the king is always the biggest piece on the board.

Winning in chess means capturing the king (or trapping the king, if you prefer). It's a powerful metaphor that has nothing to do with chess. The king is also a metaphor for the players: one player beats the other; one player is defeated. Chess-playing computers are never defeated. The computer has no concept of losing. The computer has no concept of running out of options. The computer does not lose its temper and destroy the board when it realizes it will "lose."

Before we can begin creating systems of strong AI, we need to have systems that appear to be aware and that engage in representation.

We can suspect that real awareness and representation are going on when we see actions that appear "wrong." Ideas of right and wrong come after the making of representations. After implementing a system that does representations, we can work on modifying it into one that seeks consistency and predictability. [Perhaps predictability and consistency are values that are reinforced by evolution.]

Currently, errors and mistakes for computers are a big issue. Errors or mistakes for human beings are misapplied or bad representations. Errors or mistakes for computers are really manifestations of errors or mistakes made by humans. To construct a computer that has a conscious AI, we must give it the ability to make mistakes. If it cannot make mistakes, it is only a machine.
