Wednesday, February 4, 2004

Bracketing qualia

More shameless self-plagiarism, from the KurzweilAI.net thread:

Two general but opposing points about qualia, again:

1. Contrary to the opinions of some, qualia are not phenomena from another, "mental" dimension -- they're not magical, not mystical, not ontologically distinct from the physical or natural world (however that's interpreted). But, even though they're the most familiar and immediate phenomena available to us, there is a puzzling and often frustrating aspect to qualia when they're brought into conjunction with our usual modes of scientific or rational explanation -- "blue" just doesn't seem to have anything at all in common with "a particular frequency of electromagnetic radiation", for example. And one reason for that, I think, is just that qualia are "primordial" -- they come before all concepts, and in fact before language itself -- whereas scientific explanation is a very sophisticated structure of abstract concepts that has been built *out of* qualia. Little wonder, then, that things get a bit knotted when we try to turn that sort of complex conceptual apparatus back on itself to "explain" the very phenomena that constitute its raw material.

2. But, that said, I also don't agree with others who might be thought of as the opposite extreme from the qualia mystics: the reducers or dismissers of qualia, who seem to hope that the puzzle and frustration they present to scientific or even philosophic explanation will go away if we just ignore them. At best, this side sees qualia as an inexplicable and apparently quite pointless side-effect of mental activity (the epiphenomenalists); at worst, this group might hope to dismiss the concept of qualia altogether as a kind of illusion or philosophical mistake (Dennett and co.). But I think that "qualia" -- in the sense of concrete, specific, qualitative conscious experience, as in tastes and feels, colors and sounds -- are quite real, even though they're quite distinct from chemical reactions, compression waves, and the like. I think that qualia are actual structures of neuronal activity, generated in one part of the apparatus of consciousness and received by another part that is specialized to apprehend these structures as primary, irreducible tokens. And far from being a pointless side-effect, these kinds of tokens are, first, logically necessary for consciousness itself, and, second, practically necessary for the sort of complex and autonomous behavioral control that a conscious control system provides.

Saturday, January 31, 2004

The privacy of consciousness

See the note to the following post for the origins of this material.

We can in fact "measure" consciousness, at least in principle, as indicated in operations that directly stimulate the brain of waking patients, who report resulting conscious experience, or qualia (to actually measure a conscious experience, we would need to run those operations in reverse, in a sense: instead of noting experiences as we apply electrical current, we would note current changes as patients were presented with experiences). But of course this is just needles on dials, ink on graph paper, or whatever, not consciousness itself. We're pretty sure, in such a situation, that we are measuring consciousness because we believe that other people, including this patient, are conscious, and the patient is telling us his or her experience. But still, all we have is just the report and the recording instruments – we can't seem to observe consciousness itself. It seems intuitively obvious that consciousness is, in a very fundamental sense, private ... but why, exactly?

Well, again, let's suppose that consciousness were a mechanism that consisted of two major components, fitted to one another -- call them World and Actor, or W and A. And then consider two different conscious mechanisms, C1 and C2:

C1 = W1-A1
C2 = W2-A2

Any observation of any form – measurement, direct perception, whatever – reaches the awareness of the observer only through W. But that awareness itself is only present in the connection between W and A. So if C1 wanted to observe the consciousness of C2, say, even if C1 were to take apart or dissect C2, all that C1 would be able to observe would be what came to it via W1 – through that portal, so to speak, it could observe C2 as a whole, or observe the separate parts W2 and A2, but it could not *observe* C2's consciousness, because that's a phenomenon that only occurs at the Actor (or maybe Awareness) level, i.e., at the A1 or A2 level. For C1 really to become aware of C2's consciousness, it would need to get its A1 component in direct connection with C2's World component – that is, it would have to dissect not just C2 but also itself, and then re-wire its own A1 to C2's World:

C1-2 = W2-A1

Which would make an interesting basis for a science-fiction plot (though no doubt Philip K. Dick has already done something like that), but is otherwise unlikely.
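To make the W-A picture a bit more concrete, here's a toy sketch in Python -- purely illustrative, of course: the class names and the "rendering" mechanics are my own inventions for the occasion, not a claim about how real Worlds and Actors are built:

    # Two-component conscious mechanisms: C = W-A.
    # Everything here is illustrative; nothing below "is" consciousness.

    class World:
        """First component: maps whatever it's given into a private rendering."""
        def __init__(self, name):
            self.name = name

        def represent(self, signal):
            return f"{self.name}'s rendering of {signal}"

    class Actor:
        """Second component: receives only what its own World presents."""
        def __init__(self, name, world):
            self.name = name
            self.world = world          # the Actor's one portal to anything

        def observe(self, thing):
            return self.world.represent(thing)

    w1, w2 = World("W1"), World("W2")
    a1, a2 = Actor("A1", w1), Actor("A2", w2)   # C1 = W1-A1, C2 = W2-A2

    # C1 can observe C2, or C2's dissected parts, but only via W1; the
    # W2-A2 connection itself -- where C2's awareness lives -- never
    # comes through that portal.
    print(a1.observe("C2, dissected into W2 and A2"))

    # The science-fiction case, C1-2 = W2-A1: rewire A1 to C2's World.
    a1.world = w2
    print(a1.observe("the very same signal"))   # now rendered by W2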

Qualia again ("blue is not a number")

NOTE: This post, slightly edited, is taken from what to me was/is an interesting thread on Ray Kurzweil's "Mind.X" forum. The title refers to the handle of one of the more active participants in that thread, whose questioning regarding this project has been a spur to get me to think not just about how to refine the ideas here, but also how to improve the communication of them. What follows here and in the post above are some of the results.

Take the color blue -- not the concept "blue" which gathers together all the particular perceptions of blueness, but a specific perception of blue itself, the quale or experience of blue. It's not a number, precisely, it's a color. But what exactly does it mean to say that it's a "color"? What else could it be? Could it be a number, for example? The problem with that is that number itself is a concept not a perception, not a quale. We humans understand numbers because we invented the concept (and we tend to think that computers deal in numbers or "computations" because that's what we invented computers for), but how would number work in place of color in a conscious but non-linguistic organism? Not well, I think, because it's an abstraction, and what we need is a concrete experience. Similarly, even though we believe that blue "represents" a certain frequency of light, it's hard to see (so to speak) how there could be a direct perception of frequency, because "frequency" too is just a concept. If we had the problem of designing an artifact that perceived light frequencies (as opposed to simply being affected by frequencies, as is a digital camera, say, or for that matter, anything that's not transparent), we would need to come up with a way of presenting to it a simple, immediate, irreducible, and distinguishable quality for each frequency that we wanted the artifact to be able to distinguish. And that, for evolved organisms, is just what color does and is.
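If we were actually to sketch such a design, it might look something like the following Python fragment. The frequency bands are rough figures for visible light, and the whole table is of course hypothetical -- the point is only that the artifact receives tokens it can tell apart, never numbers:

    # Presenting a simple, immediate, irreducible, distinguishable
    # quality for each frequency band the artifact should distinguish.

    class Quale:
        """An opaque token: distinguishable by identity, but with no parts
        the perceiver could decompose into a frequency or a numeral."""
        pass

    # The designer's table -- frequency bands in THz (very roughly, visible
    # light runs from about 400 THz at the red end to 750 THz at the violet
    # end). Only the designer ever sees these numbers; the artifact doesn't.
    BANDS = {
        (400, 480): Quale(),    # what we'd call "red"
        (480, 510): Quale(),    # "orange"
        (600, 670): Quale(),    # "blue"
    }

    def perceive(freq_thz):
        """What the artifact receives: not the frequency, just its quality."""
        for (lo, hi), quale in BANDS.items():
            if lo <= freq_thz < hi:
                return quale
        return None

    assert perceive(430) is perceive(450)       # same quality ("red")
    assert perceive(430) is not perceive(620)   # a different quality ("blue")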


But there's another part of the story to come, and it may be the hardest part -- this involves explaining the distinction between perception and simple effect, because that gets to the core of the idea of consciousness as such. What we need, here, is a mechanism that is able to "apprehend" the various qualities presented to it, or to which it turns its "attention", but isn't simply determined by them -- "apprehension" meaning that the perceiving mechanism is affected by the qualities that it perceives but has them available as input for further processing. Ultimately, of course, such a mechanism is as determinate as any other system in nature, but insofar as we look just at the connection between the "perceiving system" and the "world" of qualities, on all channels, that it's able to perceive, then this sort of apprehension opens up a certain free play, so to speak, a looseness of connection that isn't present in other kinds of mechanism -- the world affects the perceiver but doesn't determine it.
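The contrast might be caricatured in code like this -- again just a sketch, with the "heat" example, the memory rule, and the thresholds entirely made up:

    # Simple effect vs. "apprehension". Both are deterministic overall, but
    # in the second the world affects the system without fixing its response.

    def reflex_arc(stimulus):
        # Simple effect: input determines output directly.
        return "withdraw" if stimulus == "heat" else "rest"

    class ApprehendingSystem:
        def __init__(self):
            self.presented = None   # a quality made available, not yet acted on
            self.memory = []

        def present(self, quality):
            # Stage one: the quality affects the system only by becoming
            # available as input -- nothing is determined yet.
            self.presented = quality

        def act(self):
            # Stage two: the response depends on the presented quality *and*
            # on internal state, so the world alone doesn't settle it.
            self.memory.append(self.presented)
            if self.presented == "heat" and self.memory.count("heat") > 2:
                return "withdraw"   # only after repeated presentations
            return "investigate"

    s = ApprehendingSystem()
    for stimulus in ("heat", "heat", "heat"):
        s.present(stimulus)
        print(s.act())              # investigate, investigate, withdraw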

Tuesday, January 27, 2004

Dialogue with the Dualist

Dualist: You see, the question is why should this specific sensation, this "redness", accompany the whole chain of events that follow from the impact of photons of a certain energy on the retina?


Me: Well, any specific sensation is probably arbitrary -- but it must be some sensation --


D: But why "some"? Why should there be any sensation? Not to mention, where does such a thing come from ? how, out of simple matter and energy, do we get sensation, this redness?


M: How else could information be conveyed?


D: Well, lots of ways, but let's take one: as a bit stream.


M: So are you asking why we don't perceive a string of 1's and 0's instead of the sensation red?


D: Yes.


M: Okay, but now you've just broken the problem into smaller chunks -- how would we perceive the first bit in the stream, say a "1"? As an actual numeral?


D: As any sort of token. As a certain voltage level, say.


M: And how would we perceive that voltage level? We'll assume not as a needle pointing at some number on an internal volt-meter -- but how else? As a little shock?


D: Well, maybe.


M: But then wouldn't that be a sensation?


D: Alright, but we've actually dodged around the main point -- why does there need to be this inner perception at all? This is just the old homunculus answer, isn't it -- our eyes send a signal to a little television screen inside our skulls, where a little man watches?


M: Yes, and then how does he (or she) see? But it's precisely sensation that puts a stop to that infinite regress. I think you're right if you're just objecting to my use of the word "perceive" in this regard -- I really should have used "experience". So my question should have been: how do we experience the first bit in the stream? Or how do we experience any information?


D: Well, information is just difference --


M: Exactly! So then it hardly matters whether we experience this "difference" a bit at a time (so to speak), or in 24-bit chunks -- millions of colors! -- the simple point is that the tokens of information must be different, and that's all that sensations, or qualia, are.


D: No! That's not all they are! They're actual feelings -- that is, they actually feel like something, they're not mere abstract differences, which is the whole point here.


M: But my point is that they must feel like something for that information to be actually experienced --


D: Alright, fine, but then it's experience that's the issue here -- why and whence does this come about? Why should there be experience at all?


M: Okay, now we're getting to the heart of the matter -- or the heart of the heart. You'll grant that without experience there is no consciousness?


D: Yes, fine.


M: So really you're asking why should there be consciousness, yes?


D: Yes. I'm not happy playing the anti-Socrates, by the way, but I'll put up with it a while longer.


M: Thank you. (I'm sure I'll return the favor.) But what kind of answer to that question would satisfy you? If I could show that consciousness was functional, would that do it?


D: Umm --


M: Maybe not. Perhaps you're not really asking why there should be consciousness, but how there can be consciousness?


D: Look, it just comes down to the fact that you don't find red or blue in nature, nor in our brains, but only in our minds.


M: Yes, that far we can go together. But for you, I think, that's pretty much the end of the road. Whereas I would like to take at least another step or two, or try to, by saying, first of all, what that word "mind" means, and then saying what it means for something to be "in the mind".


D (laughing): Those are ... giant steps, wouldn't you say?


M (tentatively): Umm ... maybe.

(Possibly to be continued.)

Sunday, January 25, 2004

"Observation", "experience" and "self-observation"

We can use the two-part structure of consciousness to (re)define some terms:
  • "observation", in its broad sense, refers to the first stage of consciousness, in which environmental signals are mapped to a fabricated "world";
  • "experience" refers to the second stage of consciousness, in which a behavior-determining system accesses that "world", as well as other sources of information such as memory and imagination.

Thus, observation is not experience, nor experience observation. One consequence must be that "self-observation" isn't true observation, but rather a form of experience. That is, one can't really observe oneself, one can only experience oneself.
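In pseudo-code terms, the asymmetry looks something like this -- a purely illustrative sketch, with the signal format and the toy "attention" rule being my own assumptions:

    # The two stages as (re)defined above.

    class Observation:
        """Stage one: maps environmental signals to a fabricated 'world'."""
        def build_world(self, signals):
            return {name: f"token:{value}" for name, value in signals.items()}

    class Experience:
        """Stage two: a behavior-determining system that accesses that world,
        along with other sources such as memory (and, in principle,
        imagination)."""
        def __init__(self):
            self.memory = []

        def attend(self, world):
            self.memory.append(world)   # experience draws on memory too
            return sorted(world)[0]     # a toy 'attention' rule

    signals = {"light": 0.8, "temperature": 21.5}
    world = Observation().build_world(signals)
    focus = Experience().attend(world)  # -> "light"

    # Note that Experience never touches 'signals' directly -- which is why,
    # on this view, "self-observation" can only ever be a form of experience.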

This last point -- that one can only experience oneself -- takes us beyond the stated limits of this project (which is focused on consciousness, as opposed to self-consciousness) -- but it also relates to the way in which we investigate, or even just think about, consciousness per se, and so it's worth pursuing a little. Why is it that self-awareness seems at once to be like observation, but also to be different (especially so, perhaps, in the peculiarly slippery, "glassy", protean qualities of its object)? Let's make a quick hypothesis: the advent of language, or of a token-based communication system, allowed the development of a third "layer" or stage of consciousness, and, in particular, the formation of a "self" which represented the whole of consciousness -- an inherently recursive information structure. From the vantage point of this third stage, this self, consciousness itself can appear as something observed, even though this construct is entirely within experience, and so has none of the "hardness" or durability of true observation -- its self-referential nature making it especially unstable.
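The recursion here can be suggested with a toy sketch -- nothing more than an illustration of the structure, with an arbitrary cut-off standing in for the instability:

    # A "self" as a representation of the whole of consciousness -- which
    # must include the self-model, which must include the self-model...

    def self_model(depth):
        if depth == 0:
            return "..."        # the model always bottoms out unfinished
        return {"world": "W", "actor": "A", "self": self_model(depth - 1)}

    print(self_model(3))        # each level holds a coarser copy of the whole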

Saturday, January 24, 2004

Consciousness as disconnection

Comparing the two simple entities below -- earthworm and thermostat -- and the alternative "wiring" options in each case leads to an imprecise but suggestive formula for the structure that is at the basis of conscious awareness:

Consciousness is a disconnection from the environment -- a disconnection that allows attention to the world.

In other words, the binary or two-stage model of consciousness introduces a gap, and in the space of that gap there arises information or phenomenal awareness as a new kind of re-connection.

The earthworm and the thermostat

An earthworm is an example of a simple entity that seems clearly to demonstrate purposeful behavior and some form and degree of sensitivity -- we might balk at ascribing consciousness to it, but it doesn't seem like an impossible stretch to imagine that it actually "feels" in some way. Whether or not this is actually the case depends, according to the argument here, on its neurophysiology, or on how its simple behavioral options are connected to its simple sensory inputs. If input is directly connected to output – that is, if its behavior consists just of reflex arcs – then we can at least say that it has no need of feeling. But if its neural inputs, however crude (sensitivity to temperature, nutrients, light perhaps) are instead connected to an intermediate neural structure that modulates and "represents" the external stimuli, and if there is another neural structure that is able to use these representations, however simply, in order to determine behavior, then we might very well say that the earthworm does indeed "feel", because some level of phenomenal awareness would then be required as the means of connecting the two intermediate neural structures.

This is just saying again what was said below, but relying upon at least a plausible intuition that even a very simple mobile organism can feel. But now let's consider a thermostat, a control device that also exhibits what might be called "purposive behavior", though only with a certain amount of metaphorical licence since in this case the mechanism involved is very obvious: a thermometer reading falling below a set level triggers a switch to start a furnace. Nevertheless, here we have another case, like the earthworm, of a simple entity that receives environmental input and determines its behavior in light of that input. Suppose that we complicate the thermostat a bit by adding another input channel besides temperature – perhaps a clock, say, or a light-meter, so that the triggering temperature can be set lower after a certain time, or with the onset of darkness. How would these two "sensory" inputs work together? One way – perhaps the more likely way – would be to connect both directly to a more complicated switch that relied upon a double trigger to start the furnace. But another way – perhaps a more flexible way – might be to connect both inputs not directly to the switch but to an intermediate layer, where their signals could be represented as qualitatively distinct "tokens" on a linear scale – and then to make a second layer out of a simple processing chip that could accept these tokens as input and determine upon its "behavior" – whether or not to start the furnace – based upon its processing of these inputs. In this case, would we be as tempted to say of the thermostat what we were of the worm – that, in some fashion and to some degree, it "feels"?

Well, perhaps not, and for a number of reasons, some of them good ones – such as the fact that the thermostat is a special-purpose device whereas the worm is an autonomous entity, and much more complex than even this artificially complicated thermostat. But this is also a test of our intuition in this entire area, and, just as the intuition that the sun revolves around the earth misled us in the past, so the sense that even quite simple evolved organisms can feel, while even quite sophisticated designed ones cannot, may be a mere prejudice.
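For concreteness, here are the two wirings in rough Python -- the setpoints and token names are invented, and the "processing chip" is of course just a function; the point is the contrast in structure, not the behavior:

    # Wiring 1: both inputs connected directly to a double-trigger switch.
    def direct_switch(temp_c, is_dark):
        return temp_c < (16 if is_dark else 20)    # True = start the furnace

    # Wiring 2: inputs first mapped to qualitatively distinct "tokens" in an
    # intermediate layer; a second layer accepts the tokens as input and
    # determines upon its "behavior".
    def intermediate_layer(temp_c, is_dark):
        temp = "cold" if temp_c < 16 else ("cool" if temp_c < 20 else "warm")
        light = "dark" if is_dark else "light"
        return (temp, light)

    def processing_layer(tokens):
        temp, light = tokens
        if temp == "cold":
            return True                 # always heat when genuinely cold
        if temp == "cool" and light == "light":
            return True                 # the daytime setpoint is higher
        return False                    # after dark, "cool" is tolerated

    # The two wirings can agree in behavior while differing in structure:
    for temp, dark in [(14, True), (18, True), (18, False)]:
        tokens = intermediate_layer(temp, dark)
        assert processing_layer(tokens) == direct_switch(temp, dark)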

Wednesday, January 21, 2004

"What is it like to be" X?

Let's stipulate that it isn't like anything to be a rock, or a plant, or, just by itself, a running computer. It may be like something, however simple, to be a worm, and it's very probably like something to be a turtle, say, or a bat, or a dog. That is, it can only be "like something" to be something if the thing we're imagining has a so-called "inner world" – or, in the theory developed here, has a certain internal control structure that makes use of an inner world. So in this sense it would be entirely appropriate to say that it would be "like something" to be a machine that possessed such a control structure as well, a binary control system with attention.

Tuesday, January 20, 2004

Qualia: primordial information tokens

In explanations of Shannon's theory, information is typically represented simply by tokens, each of which is distinct or unique, but otherwise quite arbitrary. But such a "token" is really a derivative concept, a kind of abstraction of the notion of qualitative difference. Before any such abstractions, before there were names of things, before there were objects, even, or things themselves, there were qualia. That is, qualia are logically prior to any other form of information - tokens, cyphers, names or numbers or quantities are all, in one way or another, derived from the primary form of information, qualia. So a "quale", by itself, is neither "hot" nor "cold", but simply a unique, distinctive bearer of information - e.g., what conveys the information "red", before anything else, is just redness. And that's all that redness does.
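The arbitrariness of the tokens is easy to demonstrate. Here's a small calculation (the probabilities are invented for the example) showing that the information in a source is unchanged when numerals are swapped for nameless, structureless objects:

    from math import log2

    def entropy(probabilities):
        # Shannon entropy: depends only on the tokens' probabilities and
        # on their being distinguishable -- never on what the tokens "are".
        return -sum(p * log2(p) for p in probabilities if p > 0)

    numerals = {"0": 1/3, "1": 1/3, "2": 1/3}               # tokens as numerals
    qualia = {object(): 1/3, object(): 1/3, object(): 1/3}  # opaque tokens

    assert entropy(numerals.values()) == entropy(qualia.values())
    # both sources carry about 1.585 bits per token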

So now let's bring this back to the idea of consciousness as a binary control system. We're accustomed to think of awareness as centered in, and originating from, a point - as in a "point of view". But suppose we assume, instead, that "awareness" is actually a function of a fairly complex subsystem of consciousness - a "behavior-determining" subsystem – with a number of parts and processes, all of which are necessarily outside of awareness (below, before, or just, in any case, beyond). And just as its own component parts and processes are inherently out of reach of the behavior-determining system that is able to manifest awareness, so the processes - neural pathways, silicon-etched circuits, or anything else - that underlie the world-making system are out of reach of its "awareness". All that is present to that awareness is the information space so formed, the space that we call the "world". And all that a particular state of that information space - the particular state upon which attention is focused at a particular time - is, or can be, made out of is qualia.

It's in this sense that qualia are not just functional, but are logically necessary to any system, like consciousness, which presents an information space for the attention of a decision-making process.

Friday, January 16, 2004

Why zombies lurch

Even without the FX, Hollywood zombies are more realistic than philosophical zombies because they usually lurch when they stumble toward you - Hollywood seems to understand that trying to get by without a consciousness makes things like walking difficult, not to mention talking. Philosophical zombies, on the other hand, are supposed to be indistinguishable from us, by any known test, and yet lack so-called "inner states" altogether. And from the apparent fact that such a chimera is a "conceptual possibility" (see David Chalmers, "Self-Ascription Without Qualia: A Case-Study") we can apparently deduce that these inner states - consciousness, qualia, whatever - are entirely dispensable, "epiphenomenal", or just "along for the ride". Now, maybe it's just me, but, with apologies to David Chalmers, this looks very much like a case of assuming that which you set out to prove - i.e., of begging the question. The alternative view - namely, that inner states, including qualia, are highly desirable if not indispensable for any kind of complex behavior control - is no more defeated by a mere conceptual possibility than a theory of gravity would be by imagining that someone could levitate.

But what Chalmers and others are doing in making this sort of argument, of course, is appealing to the same sort of deeply rooted intuition that has anchored philosophical debate in this area for a very long time - the simple idea that we can, in principle, follow the so-called "physical processes" underlying any given mental state to completion and never encounter the actual state itself - hence that state (which we can hardly deny, though some try to) is simply some mysterious extra, a superfluity, an "epiphenomenon". Zombies are just one more kick at that can (as are, for some strange reason, examples from things Chinese - e.g., Block's "Chinese Nation" or Searle's "Chinese Room"). Thus, goes the argument, in all its forms, the ineradicable gulf between mental states and physical processes.

Like many things that won't go away, this reasoning has something at once both superficial (the reason we'd like it to go away) and deep (the reason it won't) about it. On a superficial level, it sometimes seems like this kind of argument is just some sort of level mistake, like someone saying they followed every process occurring in City Hall but never encountered the city government itself - hence, city government must be either an illusion or an "epiphenomenon". But no, obviously: it's simply an abstraction, a way of grouping a set of concrete activities and entities. The problem with trying to explain away dualism in the same way, though, is that the situation is somehow reversed: unlike a city government, the mental states under investigation seem to be the very essence of concrete, primary, irreducible experience, and are the raw material out of which any concept of "physical process" must be made. And this is only part of what makes the issue a deep one - there are also matters involving the very nature of "explanation" and of "awareness".

For this and other reasons, qualia remain at the crux of the issue of consciousness itself. Still, the clumsiness of movie zombies ought to give us a clue about the functional efficacy of an "inner life".

On applying the right intuition

One very important source of confusion in this area, I find, is the erasure of the distinction between consciousness per se and what might be called "linguistic consciousness", or what I refer to below as self-consciousness. I do believe that self-consciousness can only arise as a result of language -- in large part because the notion of a "self" is a creation of language -- but that's no doubt an issue for another time. The important point here is simply that there can be awareness - qualia, feels, experience, etc. - without language or even "thoughts", and certainly without awareness of the awareness. Of course, whenever we think about this kind of thing, we're necessarily being self-aware -- i.e., meta-aware -- and so there's an understandable tendency to merge these two quite distinct levels, or even kinds, of awareness -- but we should resist that. One technique for helping to do so, and for keeping our intuitions focused on the right level, is to imagine the world from an animal's perspective, say a dog's or a cat's. (There may be some who would deny conscious experience to animals, but if so the best argument against them would be to recommend they get a pet.)

An exercise in Applied Philosophy

-- which is how I've described this project elsewhere.

If it were actually implemented, I think, as I say in the Prefatory note, that it might provide us with a concrete platform with which finally to go beyond thought-experiments and actually try out some conjectures. My idea, in other words, is not so much to build functioning robots as to use this project, with its concrete, practical focus, as a means to help clarify some of the very confused and confusing notions that swirl around this whole area. For this reason, there will likely be an ongoing focus in this blog on philosophical issues, and on how they're affected by even a proposal for a project of this sort. So the project, even without implementation, might function as a kind of extended thought-experiment.