071a-Approaching a Basic Model - representation - all the way down. simulation 
I was reading "Predictive Reward Signal of Dopamine Neurons". It is really quite good. But it presents a whole host of problems that remain unresolved.
the first is: how does a representation become a dopamine response?
How, exactly, does dopamine reinforce synaptic connections? what is the physical mechanism of reinforcement?
the article talked about reinforcement in the same way a computer neural net can be reinforced... but obviously the neurons are not mathematical functions. so what does reinforcement mean? is it the neuron producing more receptor sites in the synapse?
it's clear that for neural learning to take place there must be an "about" signal that indicates the kind of learning. and there must be some differentiation in how signaling happens, to show how connections are made from the motor function (or idea) up the chain to the stimulus. The paper showed how this happens, and it was quite convincing in this regard.
but it did not show how a representation becomes an initiator of dopamine release, or excitation. For instance, the paper refers to an appetitive stimulus as a rewarding environmental stimulus, but it leaves open exactly how such a stimulus can even exist in the whole system without reference to its being an external representation. The stimulus is a representation, not a biological signal. what is a stimulus to the neuron that starts this whole learning process?
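For reference, the reinforcement story in this literature is usually modeled as temporal-difference (TD) learning, with the dopamine burst playing the role of a prediction error. A minimal sketch of that model follows; the step count, learning rate, and all variable names are my own assumptions, not taken from the paper.

```python
# Minimal TD(0) sketch: a cue at step 0, a reward at the last step.
# The prediction error (delta) is the "dopamine-like" teaching signal.
N_STEPS = 5      # time steps per trial (assumed)
ALPHA = 0.3      # learning rate (assumed)
REWARD = 1.0     # reward delivered at the final step

V = [0.0] * N_STEPS   # V[t]: learned prediction of future reward at step t

def run_trial(V):
    """Run one trial; return the prediction error at each step."""
    deltas = []
    for t in range(N_STEPS):
        r = REWARD if t == N_STEPS - 1 else 0.0
        v_next = V[t + 1] if t + 1 < N_STEPS else 0.0
        delta = r + v_next - V[t]   # TD error
        V[t] += ALPHA * delta       # "reinforcement": adjust the prediction
        deltas.append(delta)
    return deltas

first = run_trial(V)
for _ in range(200):
    last = run_trial(V)

# On the first trial the error fires at reward time; after training the
# reward is predicted (error near zero) and value has propagated back
# toward the cue.
print(first[-1], last[-1], V[0])
```

Note what the sketch leaves out, which is exactly the gap above: V[t] is indexed by an abstract "stimulus" that the modeler, not the neuron, supplies.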
And that is the deep problem. This paper describes the function, and shows to some degree how Parkinson's disease affects the dopamine reward/learning process, but it never gets us over the representational gap.
The representational gap is the connection from an idea all the way down to a neural function, such as strengthening a synaptic connection.
For instance, it doesn't help us jump the chasm of how a rationalization may produce a dopamine reward process... even though the rationalization is fictional! think of all the religious rationalizations that elicit behavior to conform to the religious group's norms, to get up early for church, etc., when the whole religious system is based on ideas and rationalizations. How do those religious ideas "produce" the neuronal changes necessary to impart the behavior of religious people?
And that brings us back to the basic problem of representation. To actually understand that function, we must understand how representation works, ALL THE WAY DOWN. At the lowest level of physical function in the brain we must see the representation (or some component of representation) in action. And we must see the reverse, how some configuration of neural and chemical functions leads to representation.
Think of it this way. If you were just told you won a million dollars versus a hundred dollars, how different are your feelings (and, correspondingly, the chemicals released in your brain)? What is the difference between a million and a hundred? A child wouldn't care, but once you comprehend the idea, that million dollars sounds pretty sweet. It's like Christmas. And Christmas is something learned, but entirely fictional! the extra zeroes in a million are pretty fictional too.
It must be that some neural configuration is tied to a million and Christmas that produces all these good feelings, produces a dopamine release and the release of other chemicals. For instance, Christmas causes adults to engage in all sorts of activities that they don't engage in at other times of year. So... there are Christmas neurons? And they sit dormant most of the year? That seems like an irrational proposition... maybe there are holiday neurons? or maybe special occasion neurons?
Do you see what happens with these questions? we start to map representations onto neurons AS IF the neurons were representations of other things, when the neurons cannot be those things. (it's a variation on Searle's Chinese Room.) Because in every model, the neurons look exactly like computational automata. Very sophisticated perhaps, but they are automata, perhaps even Turing machines.
We could instead propose that what the whole system is doing is simulation: the whole brain system is a simulation. and the objects in that simulation, the representations, are just simulation objects and have no neural correlates. Those simulation objects may be simulated by specific brain regions and functions, and doing certain kinds of simulation requires certain brain functions, signaling molecules, chemistry, etc.
In a simulation model we may see some correlation between brain function and simulation behavior, but it would never HAVE TO BE a situation of correlates, any more than a computer system has correlates during a video game simulation. The artwork is a correlate, but that is where we do the representation.
And that brings us to the computational representation problem. All computer programs are instantiations of human representations. The brain is producing representations of its own... of us, of the world, etc.
if we think of the brain as a simulator, then even the smallest brains or neural structures of organisms are doing simulation. it's not the brain function that is important at all, it's the simulation function. the brain function is the means of producing the simulation function.
to build the brain function, as we encounter in creating an automaton, we must be able to create structures that do simulation. and it's clear we must have structures and signals to create those structures. And we must have signals that indicate which structures are used... signals that originate in the simulation and correspond to the brain or computational process.
e.g. motions, inputs, imagination, etc., must have correlates in the automata/brain process.
what lets people share experiences is that their brains are doing the same simulations. they are making the same representations. but for this to happen, the representations must work, ALL THE WAY DOWN... or, as we see with people, they work down to the point that the representations between people diverge... is it that the brain/automata architectures and processes diverge? or is it that the representations diverge? and of course, it can be both.
What we see in small animals, like worms, is that neural signals represent some environmental fact or impulse. that is how we talk about it. moreover, that is how the worm acts. can we mess with its neural architecture and affect its behavior? yes. what are we affecting? how the simulation is generated.
so, my task is to build simple neural systems for computers that simulate their environments. starting with motor action and sensory inputs... the jellyfish?
all the automata problems I've been going over should continue to be issues. "about" signaling is going to look like plant signaling in many situations where signals/chemicals can pass from cell to cell. or it looks like environmental signals (again like plants) where dopamine or serotonin or noradrenaline is present in the organism. in the computer automata model, these are going to correspond to states of an actor or inputs to various automata of an actor. the basic actor rules apply, and an actor could set up reinforcement qualities for its message queue.
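A minimal sketch of that actor setup, with every name and constant my own assumption: a global "dopamine" level stands in for the environmental/about signal, and an actor reinforces per-sender connection weights as it drains its message queue.

```python
from collections import defaultdict

class Actor:
    """An actor with a message queue and per-sender connection weights."""
    def __init__(self, name):
        self.name = name
        self.queue = []                            # incoming messages
        self.weights = defaultdict(lambda: 0.1)    # connection strength per sender

    def send(self, other, signal):
        other.queue.append((self.name, signal))

    def step(self, dopamine):
        """Handle queued messages; reinforcement scales with the modulator level."""
        while self.queue:
            sender, signal = self.queue.pop(0)
            # the modulator is the "about" signal: it says THAT learning
            # happens; the sender identity says WHICH connection to strengthen.
            self.weights[sender] += 0.05 * dopamine * signal

sensor = Actor("sensor")
motor = Actor("motor")

# reward present: high dopamine strengthens the sensor->motor connection
sensor.send(motor, 1.0)
motor.step(dopamine=1.0)
# no reward: the same message arrives, but nothing is reinforced
sensor.send(motor, 1.0)
motor.step(dopamine=0.0)

print(motor.weights["sensor"])  # strengthened only by the rewarded step
```

The design choice here mirrors the plant-signaling point above: the modulator is a property of the whole actor's environment, not a message between two specific automata.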
all these actors must simulate a world.... that's the point.
a note on motor outputs. outputs must be "actions in the simulation". outputs, if they are representations, and learnable, are "in simulation" actions. they certainly have "out of simulation" effects, such as changing the environment to change input/sensory information. but if the outputs are treated as outside the organism's/AI's simulation, then it can never form a simulation of actions; instead it forms a simulation of input modulation (some actions are like this, such as eye focusing or automatic squinting). but very clearly, the idea of action in the simulation is just that. it is different from modifying sensory inputs.
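The distinction can be sketched in a toy agent of my own construction (every name here is assumed): an action is registered inside the agent's own world model and also applied to the environment, while input modulation only filters what is sensed.

```python
class SimAgent:
    """Toy agent with an internal simulation of its own position."""
    def __init__(self):
        self.sim_position = 0     # the agent's simulated self
        self.world_position = 0   # actual state of the environment

    def act(self, move):
        # an "in simulation" action: the model registers its own consequence...
        self.sim_position += move
        # ...and the same action has an "out of simulation" effect.
        self.world_position += move

    def squint(self, sensory_value, level):
        # input modulation: attenuates what is sensed; the simulated
        # world is left untouched.
        return sensory_value * (1.0 - level)

a = SimAgent()
a.act(2)
print(a.sim_position, a.world_position)  # the action exists in both
print(a.squint(10.0, 0.5))               # filtered input; no state change
```

If `act` only touched `world_position`, the agent could still learn how inputs change, but it would have no object in its simulation corresponding to the action itself, which is the point above.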
this simulation would have evolutionary advantages. there are likely organisms with nervous systems that do not have a simulation model, where the nervous system is a functional improvement over some other mechanism. But once we reach an organism that responds not to the world, but to its simulation of the world, then we must say that its nervous system is producing such a simulation.
such organisms will then have an evolutionary advantage if they produce ever more useful and reliable simulations. And organisms that have inferior simulations will of course be disadvantaged by evolutionary pressures. in a simple sense, a "better" simulation produces better "guesses" of behavior (when being chased by a predator), of threats, or of meals (if you are the predator). we should be able to see how optimum simulations produce optimum guesses of future simulation events.
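The "better simulation, better guesses" point can be made concrete with a toy example, entirely my own construction: two organisms simulate a predator moving at a constant speed and guess where it will be next; the one with the accurate speed estimate accumulates less prediction error.

```python
def guess(position, assumed_speed, steps=1):
    """Predict the predator's next position under an assumed speed model."""
    return position + assumed_speed * steps

TRUE_SPEED = 3   # the predator's actual speed (assumed for the toy)
predator = 0
errors_poor = 0.0   # organism whose simulation assumes speed 1
errors_good = 0.0   # organism whose simulation assumes speed 3

for _ in range(10):
    actual_next = predator + TRUE_SPEED
    errors_poor += abs(guess(predator, 1) - actual_next)
    errors_good += abs(guess(predator, 3) - actual_next)
    predator += TRUE_SPEED

print(errors_poor, errors_good)  # the better simulation accumulates less error
```

Nothing in the toy depends on the substrate doing the guessing, which is the thesis above: selection acts on the quality of the simulation, and the nervous system is just the means of producing it.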