(To be precise, self-consciousness is an observed fact; the consciousness of others is a theory to explain their behavior, one that gets its explanatory value from the inference from ourselves to others.) 

Finally, I will just reiterate that if we didn’t have firsthand knowledge of our own consciousness, and all we had to go on was the external behavior of our and others’ bodies, it would be a lot simpler and apparently sufficient to explain that behavior according to unconscious mechanistic processes. But since we do have such awareness, the mechanistic explanation just doesn’t cut it.

Sure, self-consciousness is a reason to infer consciousness to others. But the only reason?

Imagine a being observing human civilization from the outside, with no similarity of structure or organization to humans suggesting that it should infer consciousness to humans from its own self-consciousness. But it can observe and understand the products of human civilization, including, inter alia, first person biographical accounts that describe conscious experience and long philosophical arguments about “the hard problem of consciousness”. Would it be reasonable for such a being to ascribe those things to “unconscious mechanistic processes”?

Since this being knows that consciousness exists from its own example, I think that kind of behavior would be compelling evidence that humans are conscious. Particularly since the humans in this case are not intended as imitations of that being’s species or developed by them, raised in their culture, etc. Though I would say that the evidence is less than completely dispositive: after all, this being can point to reductive materialists like Daniel Dennett arguing that the “hard problem of consciousness” and so on is just a big conceptual misunderstanding.

On the other hand, imagine some kind of AI that is (just suppose) not conscious and never heard of the idea, but it’s also tasked in a similar way with cataloguing human civilization. Would this behavior of having philosophic arguments about the hard problem of consciousness be so baffling, so defiant of all explanation, that it would have to reject all priors about everything being reducible to physics and rewrite its explanation of human behavior in terms of consciousness being an independent explanatory force? I think not.

At least not if what it was able to discern was approximately equivalent to the state of our own science. I’m not an epiphenomenalist (in fact, I agree with all of Yudkowsky’s arguments about why it is arbitrary and can’t explain anything). I believe that human consciousness exercises causal influence on the physical world in a way that must be, in principle, detectable. And while as human beings we’re able to identify this influence subjectively, we are not (yet) at the point where we can objectively confirm it in others. (By the way, I do regard this as a problem for the interactionist theory; I just think it’s more plausible than the alternatives.)

If you can suppose that some thus-far undetectable consciousness-stuff adheres to humans, why not suppose that it adheres to computers as well?

It seems to me that you’re trying to have it both ways. You’re saying that one could, in principle, build a computer that has all the externally-observable mental abilities of a human being, but that it would not be conscious. And you’re also saying that human consciousness exercises causal influence on the universe. But influence on what? If it influences human behavior, then human behavior must be different from computer behavior, but this violates the hypothesis that a computer can demonstrate human-like behavior.

From where I’m sitting, it sure feels like my consciousness influences my behavior. If so, how can a computer exhibit human-like behavior if it doesn’t have the same sort of influence over its behavior?

With good enough cameras and a high quality display, you could build something that has all the externally-observable “reflective” capabilities we normally associate with a mirror, but doesn’t actually, say, reverse the polarization of “reflected” circularly polarized light. I could describe the system to you and you would know this without having to actually run the experiment, because you understand that the underlying causes leading to the same experimental behavior are different in relevant ways.

With consciousness, we don’t know what the relevant ways are, but we do know enough about causal properties underlying the functioning of computers to explain all of their behavior without invoking it. Why then would we assume it when we have no evidence it’s there? I genuinely don’t understand how this argument is meaningfully different from saying that if you have an android showing all the high level external behaviors of a human, including cognitive abilities and all, that if you cracked its skull open you should expect to find a tasty meal that will give you kuru.

Things that would make the story different:

  • If a computer intelligence were built by trying to faithfully copy e.g. the cellular architecture of brains, and it behaved the same at the macro scale
  • If a computer intelligence independently (without getting it from interaction with humans or having it programmed in or anything) started talking about having subjective first-person experiences

In those cases, I’d lean toward some kind of pan-psychism where (proto?)consciousness is some kind of ever-present partner to physical processes, with the nature of the partnership depending on information flows in some way (basically a more sophisticated IIT). Notably though, even in that case, or true epiphenomenalism, it would still be true that our modern scientific systems suffer from the hard problem.

Exactly.

When I talk about the “same” or similar behavior, I mean the high-level, externally visible behavior. Not the same behavior all the way down to the subatomic level.

Obviously, if consciousness is causally efficacious (and if it weren’t, how would our tongues be flapping about it?), there has got to be some difference between the properties of a conscious intellect and an unconscious intellect. And there are obvious physical differences between computer architecture and the architecture of the human brain. I am saying that these differences leave ample space for the one to involve the influence of consciousness and the other not to, even though the high-level behavior may be similar.

Again: a “human calculator” and a pocket calculator can exhibit similar calculating ability (in some domains), but you wouldn’t expect that if you cracked open the pocket calculator, you would find meat inside. The high-level similarity leaves open ample possibility for differences in low-level implementation.

And I agree with @shlevy that we have a pretty thorough understanding of how the logic gates that make up a computer physically work. Their operation is completely predictable on the basis of known physical laws that don’t involve any kind of irreducibly top-down emergent forces. Of course, we don’t have advanced AI yet, but if one were built from essentially the same kinds of parts, just with faster processing and more complex code, there wouldn’t be any reason to posit the introduction of unknown forces at work. If the AI were built with some weird poorly-understood quantum-mechanical CPU, then perhaps it would be a different story.
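(To make the “completely predictable” point concrete, here’s a toy sketch in Python, purely illustrative: every higher-level operation of a digital computer bottoms out in compositions of a few deterministic gates, with no freedom left over once the inputs are fixed.)

```python
# A NAND gate: deterministic, exhausted by a four-row truth table.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other Boolean function is a composition of NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# A one-bit full adder built from nothing but those gates.
def full_adder(a, b, carry_in):
    partial = xor_(a, b)
    total = xor_(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

# Exhaustive check over all inputs: same inputs, same outputs, every time.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            s, cout = full_adder(a, b, c)
            assert (s, cout) == (a ^ b ^ c, (a + b + c) >= 2)
```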

I feel like you’re undermining yourself when you have both the statements (paraphrased) “Consciousness is causally efficacious” and “Conscious and unconscious systems can have similar high-level behavior.”

Honestly I feel as though the entire game was up when Turing Completeness became a thing.

How so?

When I say consciousness is causally efficacious, I mean two things:

  • That when I have thoughts and have words coming out of my mouth, the thoughts are actually the cause of the mouth movements, rather than there being two separate “tracks” in some Spinozistic sense, or the thoughts and the mouth movements being independently caused by bottom-up physical processes.
  • That I have free will, meaning that my mind is the uncaused cause of at least some of my thoughts and actions. That is, that the mind acts not only as a passive cause (a causal “middleman” whose actions are the result of prior causes) but also as an active cause (an initiator of independent causal power).

I don’t mean:

  • Every human action is caused directly and unmediatedly by consciousness.
  • I have total free will over every aspect of every conscious action I take.
  • I am not influenced by outside circumstances in the options and choices available to me.

I don’t take any of these as incompatible with the notion that unconscious systems could exhibit similar high-level behavior.

Take talking. On the most obvious level, you have recordings. In many contexts, you could fool me with a simple audio recording into thinking that a human being was exercising conscious thought to speak to me. I don’t think you would say that the phonograph record, a piece of 19th-century tech, exhibits consciousness just because it passes this extremely basic version of a “Turing test”.

On a more sophisticated level, you have a chatbot. A pretty simple chatbot can fool people into thinking that they’re conversing with a human being, when it’s confined to a limited context. Again, I don’t think you’d say the chatbot is conscious.
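(For concreteness, here’s roughly the level of machinery involved, a minimal ELIZA-style sketch in Python; the patterns are invented for illustration. Nothing in it looks remotely like an inner life, yet in a narrow enough context this kind of keyword reflection has fooled people.)

```python
import re

# ELIZA-style chatbot: canned reflections triggered by keyword patterns.
# The rules below are illustrative, not taken from any real system.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Tell me more."

def reply(utterance: str) -> str:
    # Return the first matching reflection, or a stock fallback.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I feel anxious about the interview."))
# -> "Why do you feel anxious about the interview?"
```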

And I don’t see why, even if you had a much more sophisticated version, one able to walk around, do some shopping, and even work a menial job in a way that outwardly passed as human (perhaps as a bit odd), this would require the robot to have a non-physical mind and an inner world of subjective experience, or require that mind to cause the robot to speak these words or undertake these actions. Rather, that speech and action could be caused by the inexorable operation of the subatomic particles making up the robot, and talk of the robot having conscious mental processes would just be a framing we use to describe that low-level activity in a more human-comprehensible way, not something different from the low-level processes.

In fact, my suspicion is that we’re in agreement about this in regard to the robot. You just think that a human being is the same kind of robot. And what we really disagree about is the nature of human consciousness. We are using the word “consciousness” in a different sense.


The issue of free will is even more open-and-shut. As a matter of high-level behavior, I don’t know what difference you would even expect to find between a being that could freely choose its behavior and a being that operated on the basis of deterministic drives. At each point in time, you might have the ability to act one way or another, but you can’t roll back the clock to see if things sometimes go differently.

We observe free will in ourselves on the basis of introspection. But with other people, we only observe their external behavior. We can’t observe their process of willing. How am I supposed to know by external observation, when someone commits a robbery, whether he had the mental ability at that moment to do otherwise, or whether his genes and environment dictated mechanically that he take that course, or even whether it was the result of random fluctuations?

If I don’t have access to the low-level substrate, all of them seem like reasonable explanations for the high-level behavior.

Honestly I feel as though the entire game was up when Turing Completeness became a thing.

That seems to me rather like proof that a non-conscious entity, given a sufficiently long set of instructions, can mimic any high-level behavior you like. Exactly the opposite of what you’re saying.

That seems to me rather like proof that a non-conscious entity, given a sufficiently long set of instructions, can mimic any high-level behavior you like. Exactly the opposite of what you’re saying.

That’s question begging.

It’s proof that an entity, given a sufficiently long set of instructions, can mimic any high level behavior.

Which includes the behavior of humans.

Which means either the human consciousness is not causally efficacious, or the entity is conscious.

Compare the Chinese Room – I hold that the Room+Dictionary+Reader system speaks Chinese, even if you cannot point to any single sub-component of it that speaks Chinese.
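(On the “sufficiently long set of instructions” point, here’s a minimal Turing machine interpreter in Python, just to make it concrete. The executing loop is fixed and mindless; all the behavioral variety lives in the instruction table. The binary-increment table is a stock textbook example.)

```python
# A minimal Turing machine: the loop below never changes; only the
# instruction table does. table[(state, symbol)] = (write, move, next_state)
def run(table, tape, state="start", halt="halt", pos=0, max_steps=10_000):
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")            # "_" is the blank symbol
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Stock example: increment a binary number, head starting at its left end.
increment = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: new leading 1
}

print(run(increment, "1011"))  # -> "1100"
```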

It’s proof that an entity, given a sufficiently long set of instructions, can mimic any high level behavior.

Which includes the behavior of humans.

Which means either the human consciousness is not causally efficacious, or the entity is conscious.

… or that there are multiple potential underlying causal mechanisms that can produce a given high level behavior??? I don’t understand why this part of the discussion gets people caught up so often.

(also you’re assuming the strong Church-Turing thesis, but whatever)

you’re assuming the strong Church-Turing thesis

I kind of am, yeah. It feels like way less of a bullet to bite once you’re already materialist.

there are multiple potential underlying causal mechanisms that can produce a given high level behavior

I don’t buy that.

I mean, for sufficiently simple high-level behaviors, yes, and for sufficiently similar “multiple mechanisms”, also yes. But if you’re doing a deep interrogation of how somebody experiences qualia and they give replies that indicate they have qualia, I don’t buy that there’s something else in there that just perfectly simulates what you’d answer if you had qualia.
