001-What are we trying to do? How can we do it better?
Given the 50-year history of artificial intelligence, it seems reasonable to believe that many smart people should have developed machine consciousness by this point. But no one has. Nor has anyone come up with a robust model for what consciousness is and how consciousness functions. A model for consciousness is a prerequisite for the development of machine consciousness.
For centuries, there have been many proposals regarding what consciousness and mind are, but these proposals all have theoretical problems and none of them provide a framework for HOW to create a machine consciousness. If we understood what mind was, then we would have a basis to create an artificial mind. But all of the approaches we have taken to date do not lead us to artificial minds. They may lead us to interesting technologies and to interesting research, but not to new minds.
All of the existing propositions for mind and consciousness are problematic in various ways. They are incomplete, and are often contradicted by elements of experience or empirical evidence. Psychology, biology, and computer science have all expanded what we know about the world, and about how consciousness may and may not work in it. We have made many advances in understanding certain features that consciousness and minds demonstrate, especially our own. But there is no workable theory of consciousness itself and its function, apart from the fact that it arises in our biology and that manipulating our biology alters our experiences, cognitive abilities, and consciousness.
Brilliant people and organizations encounter the need for a robust model of consciousness, but the problem itself is often treated as so complex that easier, more "solvable" problems are tackled instead. In a world where we have limited time to work on difficult problems, people often choose not just problems they are interested in, but problems they think are solvable. This is a rational thing to do. But this approach has not led to a model for consciousness and how consciousness functions. Some argue that this is because the problem is so complex that we cannot understand consciousness. Alternatively, it could be that we are not looking at the problem in the correct way. [*note: the discovery of zero was a necessary component of the development of algebra and other mathematics. We may simply be in the same position as the Romans: to advance our technology we need to advance our ability to think about certain kinds of problems.]
The consciousness problem spreads across many areas of inquiry, which by itself deters many researchers from exploring the problem in a complete way. Additionally, there are few to no academic or economic opportunities to explore areas outside of trained expertise. For instance, cell biologists do not spend time understanding how the art-making process happens. Computer scientists do not spend time studying meditation. Social scientists do not engage in linguistics research. Philosophers do not spend time acting, nor time writing computer programs. Drug researchers don't spend time exploring the effect of hallucinogens on art making and creativity. Painters don't spend time studying cellular biology to understand how synaptic connections develop and how axons grow. Chemists don't spend time studying the basic principles of mathematics and representation. Linguists don't spend time thinking about the social behavior of ants. Mathematicians don't spend time thinking about the process of microtubule polymerization. Developmental psychologists don't spend time figuring out algorithms to describe human behavior, or exploring the deeper problem of whether algorithms can or cannot describe behavior.
I'm sure readers of this list will think of individuals who are unique counterexamples to what I have mentioned here. And that is the problem: any counterexample, such as a cell biologist who also paints, is rare. Yet, just looking over this list, we see that consciousness is tied to every one of these fields.
Philosophy seems the only field that could attack this problem, but having read philosophy, it's obvious philosophy is entrenched academically and is so encumbered with terminology and ideological complexity that no philosopher could propose a straightforward metaphysical and epistemological explanation for the following kinds of problems: how lovers can know when they are being called on the telephone by their beloved, or how poets know which word is the right word, or how painters know which color is the wrong color, or how dancers know where their partner is going to be, or how performers know where the crowd is emotionally and take a crowd to a rousing performance. These could be great philosophical research problems, but they are too "simple" to address in philosophy. As a field, philosophy is too complicated and too conflicted by its own history to produce work products that can be applicable to research. In short, philosophy is simply too academic and too blinded by its dependence on wordiness and on thinking in words to produce explanations that can be tested empirically or that can form the basis for a technology. Of course there are exceptions, but they are few, while the issues of consciousness are immediate and long-standing.
In practical terms, people get paid to do what they know and are trained to do. Philosophers are trained to do philosophy, not systems biology. People are rewarded in our social and economic system for doing reasonably predictable things. A robust theory of consciousness, let alone machine consciousness, is described as an impossible goal, and thus the problem space is broken down into smaller pieces. Yet we know our current theories and reasoning are inadequate to address the problem of consciousness. Thus no one who benefits from current theories and piecemeal approaches, especially if they have benefited economically, is likely to fund research that looks both fruitless and scattershot. The synthetic development of ideas from different fields is an obvious threat, in the academic world, to those who invest themselves in deep and specialized learning.
New ideas produce change, and existing players do not benefit from having the world of ideas change on them. If new ideas accrete in existing models, that is acceptable, but throwing out the existing models, or abandoning existing models because they are inadequate, is too risky and too rebellious.
So let's acknowledge these academic problems and propose that if the current approaches were going to work, they should have worked by now. As existing approaches have produced neither strong theories of consciousness nor a machine consciousness, there is no reason to think further accretion of existing knowledge will lead us to those results either.
Let's move on to what viewpoints seem to cause our continuing ignorance regarding what consciousness is and how consciousness functions.
1) Consciousness is truly magical and cannot be created via our technology; i.e., machine consciousness is impossible.
2) Consciousness is only possible biologically. If this is true, the hope is that simulation of biology would produce consciousness through that simulation. But this is just a guess; if the guess is wrong, this position collapses into #1.
3) As human beings, we are not capable of understanding what consciousness actually is, and therefore we cannot create machine consciousness. This is a variation on #1.
4) All of the approaches to date are good efforts, but ultimately wrong, and we need novel approaches. That is, machine consciousness is possible, the problem lies in figuring out how to do it and all existing approaches are inadequate in some way, otherwise we should have seen some successes.
[David Chalmers's essays on artificial intelligence make compelling arguments: "The Singularity: A Philosophical Analysis" and "A Computational Foundation for the Study of Cognition"]
There are many people who work in artificial intelligence and computation who do not believe machine consciousness is possible. They believe that something is happening in the biology that we do not, even cannot, understand. For these people, biological simulation is, at best, the only worthwhile idea to explore in the hope of producing machine consciousness. The theory goes that by emulating or simulating a human brain, consciousness will emerge from that complex simulation. Let's be clear, though, that this view is a belief that consciousness is some magic property that emerges from a certain kind of complexity.
However, if we can understand what consciousness is, and how it works, then we should be able to understand when we create a consciousness through a biological simulation. If we don't understand what consciousness is and how it works, then how could we ever know when we create a consciousness with a biological simulation? This is a variant of the zombie problem. We may create a perfect simulation, but how do we know that simulation is conscious? How do we know it has internal experiences; that it has qualia?
A robust theory could be tested against a biological simulation, but also against a computational system that is not a biological simulation. Such a test should provide experimental results to verify the theory in conjunction with our own intuitions and experiences. Such a theory should match our predictions about the behavior of machine consciousness whether it simulates biology or not. We should see some kind of equivalence of consciousness function between biological, simulated biological, and purely computational machine consciousness.
Biological simulation has become the primary research direction of machine consciousness. But without a theory of how consciousness works, we can never know if a simulation produces consciousness. We need a theory for biological consciousness and for simulations of biological consciousness. Of course, if we have such a theory, it should point the way to create machine consciousness directly, without needing to simulate biology, or, the theory should categorically exclude the possibility of machine consciousness.
Which means the primary option for a researcher is to believe that #4 is true. We need new approaches to understand what consciousness is and how it works. Any theory of how consciousness exists and functions should categorically include or exclude machine consciousness.
The problem with the simulation approach is that it implies there is something special about the functioning of molecules, the chemistry which makes up biology. And that by simulating the chemistry, we should be able to make a simulated brain that makes consciousness. But we know that the brain is a complex structure of molecules and chemical interactions (and possibly quantum interactions). So, what is it that the chemistry IS DOING that leads to consciousness? That is the question asked from the physics end of the brain. We know the brain is made up of molecules, so what is it about the molecules interacting that leads to cells, that leads to brains, that leads to consciousness? What does the arrangement of the chemistry have to do with consciousness? That is the fundamental question, starting from the molecular level and working up.
The simulation approaches we are undertaking now never answer this question. Instead we have a Frankenstein approach: if we put the pieces together in the right way, our machines will come alive, without our understanding why or even how the consciousness happens in the first place. This voodoo approach will not work. And if it does work, we will not know why it worked.
In this series of pages, we are going to describe what consciousness is, its features and processes, and provide a model for how to instantiate consciousness in a machine. [ya know, reading this, I can't help but start laughing. That proposition sounds ludicrous! but that is exactly what we are going to do.]
We begin this journey by going back to fundamental problems of knowledge, trying to avoid the mistakes and assumptions about what consciousness is and how consciousness functions that have already been made by others. We want to avoid other approaches and theories because we know those do not work. Instead we will rely on first principles in the hope that these will lead us to understand how mind and consciousness work.
Simply by avoiding the solutions that others have accepted (which we know do not lead to workable outcomes), we will find it easier to develop new ideas for how consciousness exists and works. We know our existing theories are inadequate, so we should develop new ones. Basically, we need to stop believing in the group of ideas and predispositions we currently have so we can find some new ideas that might work better. This means we need to start over, at the beginning, and figure out what is really going on in the world, and in ourselves.
This also means that the results of a new theory for minds and consciousness should describe how other theories make sense and how they do not make sense. New ideas are needed that subsume other theories. We rely upon the current crop of ideas and theories because they have explanatory power. But existing theories do not have enough explanatory power to explain how consciousness happens, nor how to create machine consciousness. Thus any new ideas must, in some way, surpass the explanatory power of existing theories.
We must be able to understand which foundational principles are correct to be able to understand what consciousness is. To do this we should recognize how fundamental consciousness itself is.
We cannot create a mind if we cannot understand what actually constitutes a mind. Nor can we create a mind without understanding what needs to be assembled to create one. We know our biology is critical to our mind and consciousness. But what parts of our biology are the physical requirements or processes which produce a mind? Which of those physical processes can be applied to creating machine consciousness? How can we create a mind if we don't know what is happening to produce consciousness at the physical level, regardless of it being a biological or machine consciousness?
What do we mean by a machine when we say "machine consciousness"? We mean a machine that performs computations. Or more generally, we mean a machine that is programmable and executes programs. Computer, computation, and machine are all interchangeable phrasings here.
In this process, we should come to understand more about our own minds. We should be able to distinguish between facts about our own bodies, about our brains, about our minds, and about the physical processes that underlie our consciousness, as distinct from facts about any kind of mind and consciousness. We have three different categories of problems surrounding consciousness. There are specific problems related to machine consciousness. There are separate issues that are unique to biology, to human (and animal) consciousness. And there are basic consciousness problems. A good theory must delineate which facts apply to biological versus computational consciousness, and which requirements are specific to any kind of consciousness.
How can we disentangle all these different problems of consciousness to create computational minds if we do not have a framework that addresses the fundamental issues that arise with consciousness? What problems are critical to creating an explanatory framework, and what problems are specific to the instance where consciousness is found, i.e., biology?
Thankfully, somebody already did some of this work for us...