None of the existing computational models will work, because the meaning of the computations exists with the programmer and not with the machine doing the computation.  The kinds of programs created today all rely on human meaning playing a role in the production of computations, in the outputs of those programs, and in the meaning of the interactions between machines.

This new model treats representations as valid for what they are: non-physical phenomena, and shows how representations have functional properties.  Certain representational functions occur in chemical interactions, particularly the catalytic property of proteins. Alone, chemical interactions are meaningless.  But by encapsulating and embodying chemical interactions, cells become agents that engage in representational actions, and groups of cells produce higher level representational functions and instantiate meaning and ideas.

By adhering to the ideas of systems biology and the representational functions manifest in cells, the approach here is to break computations down to their atomic components in multiple interfacing automata, producing sustained programs that interact in a way functionally similar to chemistry and representation, without relying on mathematics or simulation.  A fitness function for this process can then be employed to drive the development of a computational chemistry toward more complex program structures such as DNA, membranes, and eventually cells.  The goal of this effort is to produce machine-generated computational cells which instantiate meaning and ideas on their own.

One test of successful development of this model is computational cellular structures where access to interacting programs lies behind an address-space-managing program, which would be a membrane-like interface for the programs' internal environment.  A further clue to success is if cells of computations show impulsive and stigmergic behavior based on their interactions with the "outside" computational environment.  If groups of computational cells can auto-develop and multiply into new address/processor spaces by growing from simpler computational programs and dividing, this would be a good demonstration of auto-generated growth.

The primary argument against algorithms as a means of producing representations and consciousness is that it takes a conscious person to know which algorithms to choose and what the results of those algorithms are.  Meaning remains in the programmer and the user, not in the programs and computation.  However, computation itself need not be so constrained. The composition of algorithms can be done through cellular automata from a set of initial conditions, but even here we see the force of representation-making by programmers, where the rules of the cellular automata and the structure of the neighborhood (addresses) are determined by the programmer. And thus even the meaning of computation by cellular automata remains something extrinsic to the automata system.  To achieve machine representation-making and consciousness, the machine itself must form representations and conceive of their meaning without relying on a programmer to construct its programs.  In this way, the machine can construct intrinsic meaning that is used to generate its own cellular automata rules and structures.  [A New Kind of Science, Stephen Wolfram, 2002]

There is a second problem with causation and representation that is not obvious: the problem of bi-directional causation.  Representations have causal power, and thus push downward, making changes in the components of an organism, while the functions of the parts of an organism push outward from their intrinsic natures to produce physical effects.  This bi-directional causation must be captured in a machine consciousness.  The problem is, our existing computational models are all one-way computations.

The only way to achieve this kind of bi-directional causation is to have complexes of programs which change their structures and interactions based on the interactions of higher order program complexes.  Thus large complex action programs are actually complexes of thousands or millions of simple automata that all interact to produce the complex functionality.  In this way, changes to the complex functionality cause interactive changes to the component programs, all the way down to the simplest automata. 

A strong algorithmic approach simply cannot do backward causation.  It can do backward value signaling (back-propagation), but this is still uni-directional computation.  No modern programming model allows both the structure and the function of programs to change so that a single program can do opposite functions, unless that program becomes additively more and more monstrous: a giant mess of spaghetti code that can handle every conceivable condition, which is a recipe for brittleness, halting conditions, and loops.  But again, these programs can only exist if someone conceives them.  No, the machine must develop, must conceive its own programs, and it must alter its behavior and functions by altering its programs, to instantiate bi-directional causation.

-----

Computations consist of three primary features: the address of data, the value of data, and the function to be performed on the data. By removing all of these features from the programmer's selective control we can produce algorithm-agnostic programs.  Most of these programs will fail and will not execute.  But some of these programs will execute.  Like the universe of chemical reactions, we only care about the ones which can form a homeostatic system. Thus the task of the programmer is to create processes which produce these minimal program candidates and to select for groups of these candidates that demonstrate homeostatic properties.
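As a rough illustration of this selection step, here is a minimal sketch in Python.  Everything here is hypothetical (the store, the primitive set, the candidate format); it only shows the shape of the idea: assemble address/value/function candidates at random, run them, and keep the ones that actually execute.  The homeostasis test itself is left out.

import random

# a hypothetical shared store: address -> value
store = {i: random.randint(0, 3) for i in range(8)}

# a tiny pool of primitive transforms the generator can draw from (hypothetical)
PRIMITIVES = [
    lambda a, b: a + b,
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: a // b,   # fails when b == 0, so some candidates will not execute
]

def random_program():
    """Assemble a candidate from an address, a value, and a function,
    with none of the three chosen by the programmer."""
    return {
        "in_addr": random.randrange(8),
        "value": random.randint(0, 3),
        "func": random.choice(PRIMITIVES),
        "out_addr": random.randrange(8),
    }

def try_run(prog):
    """Return True only if the candidate actually executes."""
    try:
        store[prog["out_addr"]] = prog["func"](store[prog["in_addr"]], prog["value"])
        return True
    except Exception:
        return False

candidates = [random_program() for _ in range(100)]
survivors = [p for p in candidates if try_run(p)]
print(f"{len(survivors)} of {len(candidates)} candidates executed")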

Below are diagrams and a discussion to show how atomic components may be assembled to produce a computational homeostatic "chemistry":

-----
Note:  This is recent work, so please excuse the roughness :)


Computational species, outcomes derive functions

how are computations put together?  they are assembled according to the outcome... according to the imagined outcome. Procedures are connected or organized together to produce an output.  To have computations be self-assembling, self-managing, self-adapting, to have them form membrane spaces, fluid spaces, and connections, the computation system MUST care about the outcomes of computation, the output of each computation.

And these computations must then be the inputs to other computations that assemble or disassemble computations.  Computational results are outcomes, ergo there must be some other computation that cares about these outcomes and affects the process cyclically... and so on, and so on.  A web of assembling and disassembling computations.

in computational terms: there is a procedure that takes a and b and makes (a,b).  there is a procedure that takes (a,b) -> c.  there is a procedure that cares about (a,b), and if there are no (a,b) then it makes a procedure to produce (a,b).  and there is a procedure that cares if there are c's, and if there are no c's then it creates the procedure to make the program that does (a,b) -> c.

this is 4 programs so far.  how do they care?  they take some other signal.  there is some other positive signal that is the inverse of (a,b) and the inverse of c, a signal that says the programs to make (a,b) and to make c are not around.  This is exactly the same way that the proteins of a cell produce steady-state information that manages the levels of proteins and molecules in the cell.
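a minimal sketch of these four programs in Python (all names hypothetical; a plain dict stands in for the shared space, and counts stand in for the objects themselves): two programs produce (a,b) and c, and two "caring" programs watch for their absence and re-create the producers.

# a hypothetical shared pool: how many of each object currently exist
pool = {"a": 5, "b": 5, "(a,b)": 0, "c": 0}
to_run = []                                 # programs are read and run once

def make_pair():                            # takes a and b and makes (a,b)
    if pool["a"] > 0 and pool["b"] > 0:
        pool["a"] -= 1; pool["b"] -= 1; pool["(a,b)"] += 1

def pair_to_c():                            # takes (a,b) -> c
    if pool["(a,b)"] > 0:
        pool["(a,b)"] -= 1; pool["c"] += 1

def care_about_pairs():                     # no (a,b) is the signal to make the producer
    if pool["(a,b)"] == 0:
        to_run.append(make_pair)

def care_about_c():                         # no c is the signal to make the (a,b) -> c program
    if pool["c"] == 0:
        to_run.append(pair_to_c)

for _ in range(6):
    care_about_pairs(); care_about_c()      # the caring programs re-create the producers
    while to_run:
        to_run.pop(0)()                     # read and run once
    print(pool)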

because the programs operate at a representational level above their data, the programs must produce markers to indicate that they exist and are operating.  this is the problem with computation: it is representational (the hidden representation is the output).  therefore, to make programs self-aware and self-adaptive, the programs must be constantly making references to both of these levels.  there can be data in the upper level of programs that is processed by programs in the lower level, and vice versa.  the programs exist in two computational spaces and the data exists in the opposite computational space.  this way the programs and data, the functions and objects, can interact with each other adaptively.

f:(a,b);c ---> (a,b) -> c, f    

f then is a value in the space that can be used to indicate output, or c is the indicator of output.  if a function produces two outputs, a second function can use one output as an indicator of the function and the other output as the functional output.  in this way, the f output can be used by another function to make or unmake the function f:(a,b);c.  how are the functions unmade?  they resolve to smaller computational species, until the computational species is merely data.

all data are either combined into computational species or remain as data.

-----  

cellular automata are a way to structure the computational process using simple input and output program species.  but cellular automata are misleading.  the visual display of cellular automata looks like one thing, when it is actually another thing.  e.g. gliders are not really gliders at all, but a shifting group of values through connected automata.

so how, in a species model, do we build large programs and have species of computations interact with these programs?  we think of them like cellular automata objects with open and connected input surfaces.  a simple species will have a single input surface for a matching value.  a larger program may have many interior surfaces that are only accessible through a single surface program.

computational structures like cell walls, DNA-like structures, organelles, etc. will require large collections of computational species to form programs that have interactions with simpler program species.


the way to start making these species (and then hopefully large complexes or structures) is to have one that produces some output.  then another that does something with that output.  then another computation modifies the initial computation based on this second-tier output.  why the 3 tiers?  because the second and third tiers necessarily involve outside forces or have outside effects.

note that outcome regulation is driven by external factors as well as the initial transformation.    in computation, the program that does (a,b) -> c may be the regulated component, not the objects a, b


notice that programs and data are mixed, but constitute different stages of effect.  
a program uses data to produce data:  programC (a,b) -> c
data is combined into a program:  (c,d) -> programU.  Note this combination hides a program that performs the function.
all programH does is combine c and d, and those two objects are programU.
programU takes (e,f) to make u.
programU takes u and undoes programC (whatever programC is, it becomes separate parts).
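a rough, hypothetical Python sketch of this chain (a program here is just a record of parts plus a rule, so "undoing" a program means resolving it back into its parts; all names are invented for illustration):

# every program is a record of parts plus a rule
programC = {"parts": ("c1", "c2"), "rule": lambda a, b: "c"}       # programC: (a,b) -> c

def run(program, x, y):
    return program["rule"](x, y)

c = run(programC, "a", "b")                                        # produces the data c

def programH(c, d):
    # data is combined into a program: (c,d) -> programU
    return {"parts": (c, d), "rule": lambda e, f: "u"}             # programU: (e,f) -> u

programU = programH(c, "d")
u = run(programU, "e", "f")

def undo(program):
    # whatever the program is, it becomes separate parts again
    return program["parts"]

if u == "u":                     # programU takes u and undoes programC
    print(undo(programC))        # ('c1', 'c2')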

because computation is already representational, the data must affect the programs and not merely the other data.  In chemistry models, we use programs to manipulate the data according to the "rules" of chemistry.  Because the "rules" of chemistry are intrinsic to the atoms themselves, we must get around the problem of extrinsic effect by producing computational atoms.

In this computational quasi-chemistry model we must determine what the computational atoms are, and then how those atoms are put together and taken apart. how does the binding work?  and for computers, it's programs everywhere.  The programs are determined by their outputs.  So the outputs of a program must have an effect on whether the program itself exists, whether it works, or whether the program should stop.  thus there are two kinds of objects.  and to avoid the problems of programs producing programs directly (flooding the processor and memory space with useless programs), the interactions between programs and data are all regulated by other programs and data at alternating stages.

every object is a species that may play two roles.  role one is as a program.  role two is as data.  but all data is used by programs to regulate what a program does.  and a program may take another program as data and add something to it, or take something away.  and for this to work, the programs must consist of a constellation of simple computational species, like free cellular automata, where each stage of automata computation alternates between outputting data to the next stage and changing the automata program itself at the next stage (as in the actor model).

For a function (a,b) -> c to be a program, it must be drawn into the representational function realm, and out of the realm where representations take place in the mind of the programmer.  But because programs are representations, we are forced into a kind of two-step to regulate the creation and destruction of programs through auto-transformation of their data, which includes the address and the function or algorithm.

-----

Below is a diagram where the programs care about their outputs and produce inputs to other programs.  For example, no object "d" means no programU.  No data objects "e" or "f" means no object "u".  And that shows how we regulate the undoing of programC: either by restricting the objects "e" and "f" so that programU never runs, or by restricting object "d", so programU is never created from (c,d) by programH.

the reality is that this is a kind of loop.   in a real system, it is the side-effects which produce actions (such as movement, or making new structures etc)  but it is the core functionality that does regulation.   for instance, object "u" may be useful for making some other program too.  object u may also be a data element in many other programs either as input to the program (as it is here) or as an element of the program itself. 

this networking of interactions follows the same model we see in systems biology networks, where the molecular interactions form effective networks  [Introduction to Systems Biology, Uri Alon, 2007]




and what is a membrane or a structure?  it is a collection of addressed programs.    these addressed programs accept data elements and do something that passes on data elements or programs.   it is classic actor theory:  the data element may change the structure of the program, pass on a new data element, change an address. 

in structured programs (vs the unstructured or fluid programs shown above), the programs have address elements of other programs.  a data element may be the input to a program, or an addressed program may back-modify a program by treating it as an input.


What are the elements of a program?  an input value, a function to produce output, output addresses, and input addresses.
and remember: bit or byte conservation, and that programs and data exist in different spaces.  Different interpreter or process spaces.

I can't stress this enough: bit or byte conservation is CRITICAL to this system working.  if byte conservation does not happen, then everything is magic, everything is in the land of representation, and we are back to programmers writing programs.  there must be conservation of bits or bytes for this to work.
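one way to read the conservation requirement, sketched in hypothetical Python (the names and the byte-length check are illustrative assumptions, not a fixed design): a program is a record of input addresses, a function, and an output address, and the interpreter refuses any transformation whose output bytes do not equal the bytes consumed from the inputs.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Program:
    in_addrs: tuple        # input addresses
    func: Callable         # function to produce output
    out_addr: int          # output address

def run_conserving(prog, memory):
    """Run prog against memory (addr -> bytes), enforcing byte conservation:
    the inputs are consumed and the output must contain exactly their bytes."""
    inputs = [memory.pop(a) for a in prog.in_addrs]       # grab: the inputs disappear
    output = prog.func(*inputs)
    if len(output) != sum(len(i) for i in inputs):
        raise ValueError("byte conservation violated: nothing may be created or lost")
    memory[prog.out_addr] = output
    return memory

memory = {0: b"\x0a", 1: b"\x0b"}
bind = Program(in_addrs=(0, 1), func=lambda a, b: a + b, out_addr=2)   # (a,b) -> c
print(run_conserving(bind, memory))    # {2: b'\n\x0b'}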

 
A program does not create or undo another program; it creates or undoes data.  in theory a program can change itself, but how is that regulated?  instead, it's the sea of programs and data interacting.  when a program is read, it is read and run.  if a program creates or alters or undoes another program, then after it is run, that program is no longer available to be run.  that is, the register space it took up is now different.  All programs are single-function; this is a product of bit conservation.

thus the whole system is, at a minimum, two interpreters running programs in their own spaces.  the red space and the blue space are like linda spaces.  but the programs themselves only address data elements.  to the interpreter, some of these data elements are programs.

this is perfectly acceptable:



because from the program's perspective, this is all that is happening:


programC is producing output data.  that data c also happens to be programZ, but that is for another interpreter to determine and run. programC outputs the data object programZ, but programZ is a program in the red interpreter space, not the blue space:
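a hypothetical Python sketch of that relationship (names invented here): the blue interpreter runs programC, whose output is just data to blue, but the red interpreter reads that same object as programZ in its own space and runs it.

# hypothetical sketch: programC, run by the blue interpreter, outputs a data object;
# that same object is programZ when the red interpreter reads its own space
blue_space = {"c_in": (2, 3)}
red_space = {}

def blue_interpreter():
    # programC: (a,b) -> c ; to blue, the result is nothing but output data
    a, b = blue_space.pop("c_in")
    red_space["z"] = ("add", a, b)     # the data object that happens to be programZ

def red_interpreter():
    # to red, the entry at "z" is a program to determine and run
    op, a, b = red_space.pop("z")
    if op == "add":
        red_space["z_out"] = a + b

blue_interpreter()
red_interpreter()
print(red_space)    # {'z_out': 5}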

to manage this, programs are simply read and run once.  if the program stays around, it will be run again, but before it runs again it may be modified by another program (because it may be an input to another program).  note that inputs are transformed into outputs; that is all a program species can do, unless its outputs are associated with addressed programs.

this system must be bit/byte complete/conserving and only have one-way functions:  (a,b) -> c
Nothing is lost, nothing is created, everything is transformed.



for addressed programs, the programs exist in separate interpreter spaces.  


here we see the programs sharing the same interpreter space. double-circled data elements must share the same address space between programs regardless of what interpreter space they are in.  a double-circled data element is output to a computational address (register), and that same value must be input from that register by another program.  it is in this way that we can construct complex program trees.

what happens if some other data element takes that address space?  the program is changed.  in the larger sense, these addresses/registers are part of the program.  if some other data element occupies that space, then the larger program itself has changed.  And this is a feature, not a bug!  because in this way, a large program may be in more than one state, where its sub-component computational species get run only up to a point, where they wait until some blocking element/value in an address/register is removed and replaced with a valid value.  then this larger program will continue to function.

essentially, the program folds, and it gives a system of these kinds of programs a way to regulate in time. 
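a hypothetical sketch of that folding in Python (names and the "BLOCK" sentinel are invented): two small species share one register, and the larger program waits until the blocking value is removed and replaced with a valid value.

registers = {"shared": "BLOCK", "out": None}

def species_one(value):
    # writes its value into the shared register only if nothing is blocking it
    if registers["shared"] == "BLOCK":
        return False                       # the larger program is folded: it waits
    registers["shared"] = value
    return True

def species_two():
    # consumes the shared register and continues the larger program
    if registers["shared"] in (None, "BLOCK"):
        return False
    registers["out"] = registers["shared"] + "!"
    registers["shared"] = None
    return True

print(species_one("a"), species_two())    # False False -> the program waits
registers["shared"] = None                 # some other program removes the blocker
print(species_one("a"), species_two())    # True True  -> the program continues
print(registers)                           # {'shared': None, 'out': 'a!'}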

but what about the non-addressed elements?  the program must actually look at some register to see what value it has.  and this is true, but it can look at any register in its processor/interpreter (linda) space, i.e., randomly grab register spaces.





now, how are the species created?  (a,b) -> c is the function, but then it must also be a program.  how is that program created?  these programs are created in steps.  the basic step is where a data value is moved from one linda space to another linda space.  And this is PURELY a representational action, because the space is a representation.

e.g.  a;a in space1 becomes a;a in space2.  this is a move.  space2 is a single program's address/input value.

a program consists of valid inputs, an address/register or virtual address, and a function to change the inputs, producing an output in an address space.
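the basic move step might look like this (a hypothetical sketch; the value is taken from one linda space and placed in the other, never copied, to respect conservation):

# the basic step: a;a is moved from space1 into space2,
# where space2 is a single program's address/input value
space1 = {"addr0": ("a", "a")}
space2 = {}

def move(src, dst, src_addr, dst_addr):
    dst[dst_addr] = src.pop(src_addr)      # the value is taken, not copied (conservation)

move(space1, space2, "addr0", "program_input")
print(space1, space2)    # {} {'program_input': ('a', 'a')}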

what is the function itself?  how does the transformation itself get created?  initially they are all random.  what makes a species survive is that the system doesn't halt.  the functions themselves are data.

function, input value, address are all the data elements. they get combined into data elements in one address space, and are processed as programs by another interpreter.

EVERYTHING ABOUT A PROGRAM IS DATA.  to make a program, you just put data together.  then you run your data.  if it runs, great!  if it doesn't, it's still just data.  things get put together randomly (in data space) and run by an interpreter (in the interpreter's space).  things that run cause changes to data.  what matters is the ecological stability of these objects and the regulation of the making and unmaking of programs by the data and programs themselves.
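a hypothetical sketch of that, in Python (the symbol set and the interpreter rule are invented for illustration): data elements are put together at random, an interpreter tries to run the result, and whatever doesn't run simply stays data.

import random

# a pool of raw data elements
data_space = {i: random.choice(["a", "b", ";", "(", ")", 1, 2, 3]) for i in range(12)}

def assemble():
    # put three random data elements together: (input, function-name, input)
    return tuple(random.choice(list(data_space.values())) for _ in range(3))

def interpret(candidate):
    x, op, y = candidate
    if op == ";" and isinstance(x, int) and isinstance(y, int):
        return x + y          # it ran: this data is a program to this interpreter
    return None               # it didn't run: it is still just data

for _ in range(10):
    candidate = assemble()
    result = interpret(candidate)
    if result is None:
        print(candidate, "-> still just data")
    else:
        print(candidate, "-> ran, produced", result)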

**** you have to think about this the right way!  otherwise, it doesn't make any sense and won't work. it looks like perpetual motion, or a Rube Goldberg device. IT'S NOT.  the reason things work is again CO-OCCURRENCE.  the interpreter process and the combinatorial process work together to produce a steady state of function and interaction.


what is key is address.  a program can address a local register place or a virtual linda space (both input and output).  the program itself may reside in a local register place or it may reside in a virtual linda space.  in either space, a program is always treated as read-then-process.  data is always treated as grab-and-output (bit conservation).

all of these programs are computational species.  they are very small, the most minimal kind of program.  it is only by combining them that we get larger programs.  We combine them by sharing address spaces for data values.  these computational species are very simple, like cellular automata. 


here we have two programs that use the same address space, but only function in the presence of different values:


in this example, we see very much the model of cellular automata.  certain input values produce certain output values.  and the two automata share one address space.  if the value in that space is "a", the top automaton functions.  if the value in that address space is "d", the bottom automaton functions.
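a hypothetical Python sketch of these two automata: they share one address space, and each only functions in the presence of its own matching value.

shared = {"addr": "a"}

def top_automaton():
    if shared["addr"] == "a":          # certain input values produce certain outputs
        shared["addr"] = "b"
        return "top fired"

def bottom_automaton():
    if shared["addr"] == "d":
        shared["addr"] = "e"
        return "bottom fired"

print(top_automaton(), bottom_automaton())   # top fired None
shared["addr"] = "d"
print(top_automaton(), bottom_automaton())   # None bottom fired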


ƒ either binds or unbinds the input values, connects or disconnects the input values, and produces a combined output value or a split output value. If the function only tests a value in a particular address or cell, we can think of it almost as a traditional cellular automaton.  if the function responds irrespective of address or cell, the function behaves in a fluid space vs a fixed structure.  In a fluid space, it is the interpreter which fetches a value to give to the function, because the computation does not specify an address.

although, the computation may specify an interpreter space.   

so that one interpreter function may only fetch values from another interpreter's address space.  It is this mechanism that lets us span different computational spaces of action.  the analog in biology is membranes and the spaces membranes enclose.  membrane spaces are addressed computations sharing address spaces.  but where data goes into the membrane and out of the membrane, these are different interpreter spaces: either addresses on other machines, or in other interpreter memory spaces.

when a single program structure contains or regulates all the data elements that may enter or exit such a space, then we would call it a membrane, because it is the regulator or arbiter of what data goes into and out of the isolated space.   Inside that space there may be other programs and even other membranes, but the data/address space is only accessible by going through a larger program of automata (a membrane).

Here we see a set of automata that interconnect themselves and regulate all of the interactions of address spaces for a group of programs running in different interpreters. the programs surrounding these programs' addresses and data spaces are the only means to interact with that set of internal programs.

a second membrane structure may form when one interpreter's programs capture all of another interpreter's space.  that is, the only way for the data inside one interpreter's address space to get sent around is for it to pass into another interpreter via a single program (of automata connected by shared addresses), as shown below in pink.  The programs in the pink circle are all the programs, data, and addresses that interpreter has and can run.  An actual cell would likely contain both of these kinds of "membranes".






so how do automata access address spaces to produce fixed places or dynamical addressing in interpreter spaces?  how would a membrane program develop to access all of an interpreter's space?



address spaces are data.  
inputs are data
accepted input values are data
the function to connect and disconnect input values is data. 

1110 1110 ; 11100111    the function is merely ; but the program function is (1110, 1110) ; 11100111  
A full program includes the addresses.  if the addresses are virtual addresses, the address is assumed to be in the interpreter space.


the trick to not getting stuck back in the loop of what a [ ; ] is and what [ ( ) and , ] are is by putting the code in different spaces!  so that code and data do not interact in the same interpreter!  the interpreters know about each other's addresses, and they know about them because of programs!

(1110:a10110, 1110:b110111) ; 11100111:a10110     this program takes two elements in different address spaces (a and b), combines them, and returns them back into the same "a" address space.   where does this bit of data reside?  it could be in a "c" address space or in either "a" or "b".

the question then is, how do we make a computation?  and how do we undo a computation?  


a program in one interpreter space is not a program in another interpreter space.  there is a kind of virtual program space.



there is a space that has 3 addresses, and in those addresses are values.  this is the atomic computation.



the function space is the interpreter functions.   the interpreter is a pushdown automaton (or greater); it has a rule space and a data space.  the rule space is some other interpreter's (or interpreters') data space. thus the output c of (a,b) -> c in the ƒ space must be a valid transformation of a and b.  this is a function.  it is like saying "add" or "divide" or "substring" in a programming language.   as an automaton, it looks like a single rule of a cellular automaton: two values produce a third value.  where this ƒ space differs from a cellular automaton is that the ƒ is included in a program.  and for the program to run, the program's address spaces must contain the values of the function ƒ.



a program is not just the function.  it is also the addressing of inputs that the function processes, and the output address the function populates from the elements that are processed/transformed.    by itself a function ƒ is just a piece of data, like those below.  these are functions which do nothing, e.g. (a,_) -> a, or simply (a) -> a, or an invalid function, (a,Null) -> Null, as shown below.




 
is an x value in an address space.  this is one constituent part of a working program.  it is a function ƒ.  the value x and the address are transformed into x@addr.  This is where we see the importance of byte conservation come into play.  this address must be atomic.  For structures, this address is fixed; it has a fixed value in some interpreter.  the function (x,addr) -> x@addr sets the address with x.

some other function, say (y,addr) -> y@addr, would set y into that space, if that function exists.  if both functions exist, then it's first come, first served.  and what happens?  the addr is no longer available to be set.  so (y,addr) is a condition under which the function can never execute, because it must get y and addr from some linda space, and if addr has already been grabbed by the x function, then addr is no longer in the linda space accessed by the function (y,addr) -> y@addr.
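a hypothetical Python sketch of that first come, first served behavior: both functions need "addr" from the linda space, and whichever runs first removes it for the other.

linda = {"x": "x", "y": "y", "addr": "addr0"}
registers = {}

def set_into_addr(value_key):
    # (value, addr) -> value@addr : both operands must still be in the linda space
    if value_key in linda and "addr" in linda:
        value = linda.pop(value_key)
        addr = linda.pop("addr")           # addr is no longer available to be grabbed
        registers[addr] = value
        return True
    return False                           # the condition can never execute

print(set_into_addr("x"))   # True  : x@addr0 is set
print(set_into_addr("y"))   # False : addr was already taken by the x function
print(registers)            # {'addr0': 'x'}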





here we see all the functions needed to make a program.   an internal function of transformation.  the functions of values in addresses.  and the value output to an address space following the transformation.  

the three elements of x, y and ƒ(x,y);z must be in place for the interpreter to output the z@addr space.  the x and y addresses lose their values.  the program (function and addresses) remains in place.

so then another program has to come along to add x and another program to add y values into this program for this program to run again.  

the above program works because the function (x,addr) -> x@addr happens, and the function (y,addr) -> y@addr happens, and those two functions are accessed by the function (x@addr,y@addr) -> z@addr.  x and y disappear from their address spaces and are combined into the new address space as z (following byte conservation).

if there is no data (function) (y,addr) -> y@addr, then the function (x@addr,y@addr) -> z@addr will never happen, because (y,addr) -> y@addr is the data the other function needs to run.  of course there must be some other function that responds to missing (y,addr) -> y@addr data and then goes about initiating a process to produce it.  the programs care about their outputs; they care that they all work together.  output-oriented.
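a hypothetical sketch of the whole composed program in Python: the two setter functions fill the input addresses, and only then can the combining function produce z@addr.  if the y-setting function never existed, the combiner would never fire.

linda = {"x": b"x", "y": b"y"}
addrs = {"x@addr": None, "y@addr": None, "z@addr": None}

def set_x():
    # (x,addr) -> x@addr
    if "x" in linda and addrs["x@addr"] is None:
        addrs["x@addr"] = linda.pop("x")

def set_y():
    # (y,addr) -> y@addr : if this function is missing, combine() never fires
    if "y" in linda and addrs["y@addr"] is None:
        addrs["y@addr"] = linda.pop("y")

def combine():
    # (x@addr, y@addr) -> z@addr : x and y disappear and are combined as z
    if addrs["x@addr"] is not None and addrs["y@addr"] is not None:
        z = addrs["x@addr"] + addrs["y@addr"]      # byte conservation: z is exactly x + y
        addrs["x@addr"] = addrs["y@addr"] = None   # the input addresses lose their values
        addrs["z@addr"] = z

set_x()
set_y()
combine()
print(addrs)    # {'x@addr': None, 'y@addr': None, 'z@addr': b'xy'}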

work product is a side-effect of this output-responsive programming.  that is, the work product produces more reliable, more enduring, more homeostatic structures, and that is why they exist (no halts, no loops).


the program object, processed by the interpreter is this:

  
the program must give two receiver addresses and one output address for the binding program to process the function.   The unbinding function (on the right) takes a single input and outputs two addresses.   THE FUNCTION IS DIFFERENT FROM THE PROGRAM; IT IS THE RULE THE PROGRAM FOLLOWS.  The program is the function + the addresses.

All programs must be only a single function!   No program can do more than one binding or one combination (or a movement of data from address to address).   For programs that are bound together, the function occupies a fixed place BETWEEN THE ADDRESSES.  These bound functions form larger programs (not unlike how automata can be put together to become larger programs).   For free functions, the function moves around and is bound to addresses given by the interpreter (we can think of the address space as fluid).

below are examples of bound programs:



the key is to understand that the programs and data may reside in different spaces, in different interpreter spaces.  And a program in one interpreter space may simply be data in another interpreter space.


because of this, it seems like a good idea that what makes up a program, in terms of form, is merely data in another space.

eg.  addr:addr:value,value;value:addr    is where two values are combined into a third value and the values associated. 

but in another interpreter the form is addr:value, addr:value ; addr:value

this satisfies the basic need where the rule space and the data space are kept separate, and then rules and data can be developed in both spaces by random behavior of the interpreters and random survival of these spaces.   that is, we can select for survival of spaces that produce more long-lasting programs as a simple requirement.

[ *** note:  I have a kind of hunch that the different kinds of bonding that take place in cells are a correlate of the kinds of interpreter spaces (I think 3 is the minimum number) that process information.   hydrogen, covalent, and ionic bonds all serve different bonding/unbonding roles in the cell, and all these kinds of bond relationships are occurring all the time.  Structure plays a huge role of course, but so does bonding.  Structure in this model is handled by the function and the addresses; the trick in terms of computation is to seed the interpreter spaces with functions that get bound to addresses to produce cycles of bonding and unbonding.   how binding happens is computation in the computer, it is a binding function.  The binding function in physics is chemistry.  physics is not programmatic, it is not representational. ]


in a true binding model, the function goes away, because it is transformed.  that must be a feature of the fluid space.  in membrane/structure space, there is no transformation, it's data passing.

how to handle transformation?  we do it through a few steps in computation, because doing the transformation requires a program.  because what matters in transformations is structure.  and structure in computation is values and addresses.  and we still have the problem of functions which modify values and addresses.  and functions must be values in addresses that get run by a processor.

the real problem here is to remember it's computation. how does the transformation know to happen? that is a computation.  in chemistry, the transformation happens as if by magic.  but really it's produced reductively from the electro-chemical bonding or shell states of electrons, etc.  the computational analog is that physics is a giant interpreter running each chemical process, all the hydrogen, ionic, covalent (and other) bondings.  in the computer we are stuck with computation, so to have the computations adapt, we have to have a way for the computations to care about each other's outputs.  we have to implement transformations in a few steps vs directly.

to understand how to create a non-representational space with computation to give rise to representation making, you have to think of the functions, addresses, and data values in the right way.  

the funny thing is, figuring out how something like this has to work is part of the problem.  the other part is believing it MUST be this way, against the convention of traditional thought (this is a stateless, dataflow, actor model), and then creating an implementation.  This is auto-representation.  half the problem is trusting the theory, because the theory leads inexorably to the fact that states are illusory (they are representations), and so much of our thinking is state-oriented.

in this case, we have to develop a computational chemistry, which relies on computational "atoms" to create molecules that have interactions with other molecules and produce transformations, etc.   These molecules are programs.  and once you get this fluid problem solved, that leads to structures and membranes, and the solutions to the chemistry have space/address issues that are the opposite of the structures.   how do large structures, which are large programs, large computational structures, get made and unmade?


everything is a data element at an address.  some of these elements are functions, some are programs, some are address values, program parts that interact with other parts.  but these different data elements must be processed differently by different interpreters; if only one interpreter is used, then structures can't really be made, because everything interacts with everything else.   (this is why cellular automata are fixed and have fixed addresses: you can't have cellular automata that move their rules and address connections around and still create stable structures.  the address, the value, and the transformation comprise the cellular automaton program; the value and the transformation comprise the rule.)




How do we know something like this cannot work?  because we can't do the transformation.   There is a hidden function that the computer must perform to cause a transformation to happen.  where is that function?   For a computer to make its own representations it must manage and make its own functions; it must care about the outcomes of its representations, its functions, and the kind of data it needs and produces for those functions.   to care means to have some other function that looks at those outputs.   this is why functions, data, and address must all exist explicitly, but operate in different spaces, different processor or interpreter spaces.

the interpreters do only two things.  Add data to addresses (random addresses for fluid function-programs), and process the programs (functions + data values) in their own address spaces which have been put there by other interpreters.  The programs produce outputs when run, which are placed in the programs' output address spaces.

If we think of these programs as deterministic finite automata, there are many DFAs, and the rule tape and the data tape are shared as data and rule tapes with other DFAs.  Most of these rule/data-mixing DFAs will simply not work, but some of them will form interactive complexes that can further share rule and data space with other DFA-like complexes.  These interacting DFA-like complexes are functional representational analogs of molecular interactions.
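a closing hypothetical sketch of one interpreter's loop in Python (the space size, the "add" entry format, and the data choices are all invented for illustration), doing only those two things: dropping data into random addresses of another space, and running whatever programs other interpreters have placed in its own space.

import random

def make_interpreter(own_space, other_space):
    def step():
        # (1) add data to a random address in the other interpreter's space
        other_space[random.randrange(16)] = random.choice(["a", "b", ("add", 1, 2)])
        # (2) process anything in its own space that reads as a program
        for addr, entry in list(own_space.items()):
            if isinstance(entry, tuple) and entry[0] == "add":
                own_space.pop(addr)                                   # read and run once
                own_space[random.randrange(16)] = entry[1] + entry[2] # output to an address
    return step

space_a, space_b = {}, {}
run_a = make_interpreter(space_a, space_b)
run_b = make_interpreter(space_b, space_a)
for _ in range(20):
    run_a(); run_b()
print(space_a, space_b)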