(To be precise, self-consciousness is an observed fact; the consciousness of others is a theory to explain their behavior, one that gets its explanatory value from the inference from ourselves to others.) 

Finally, I will just reiterate that if we didn’t have firsthand knowledge of our own consciousness, and all we had to go on was the external behavior of our and others’ bodies, it would be a lot simpler and apparently sufficient to explain that behavior according to unconscious mechanistic processes. But since we do have such awareness, the mechanistic explanation just doesn’t cut it.

Sure, self-consciousness is a reason to infer consciousness to others. But the only reason?

Imagine a being observing human civilization from the outside, with no similarity of structure or organization to humans suggesting that it should infer consciousness to humans from its own self-consciousness. But it can observe and understand the products of human civilization, including, inter alia, first person biographical accounts that describe conscious experience and long philosophical arguments about “the hard problem of consciousness”. Would it be reasonable for such a being to ascribe those things to “unconscious mechanistic processes”?

Since this being knows that consciousness exists from its own example, I think that kind of behavior would be compelling evidence that humans are conscious. Particularly since the humans in this case are not intended as imitations of that being’s species or developed by them, raised in their culture, etc. Though I would say that the evidence is less than completely dispositive: after all, this being can point to reductive materialists like Daniel Dennett arguing that the “hard problem of consciousness” and so on is just a big conceptual misunderstanding.

On the other hand, imagine some kind of AI that is (just suppose) not conscious and has never heard of the idea, but is also tasked in a similar way with cataloguing human civilization. Would this behavior of having philosophical arguments about the hard problem of consciousness be so baffling, so defiant of all explanation, that it would have to reject all priors about everything being reducible to physics and rewrite its explanation of human behavior in terms of consciousness being an independent explanatory force? I think not.

At least not if what it was able to discern was approximately equivalent to the state of our own science. I’m not an epiphenomenalist (in fact, I agree with all of Yudkowsky’s arguments about why it is arbitrary and can’t explain anything). I believe that human consciousness exercises causal influence on the physical world in a way that must be, in principle, detectable. And while as human beings we’re able to identify this influence subjectively, we are not (yet) at the point where we can objectively confirm it in others. (By the way, I do regard this as a problem for the interactionist theory; I just think it’s more plausible than the alternatives.)

If you can suppose that some thus-far undetectable consciousness-stuff adheres to humans, why not suppose that it adheres to computers as well?

It seems to me that you’re trying to have it both ways. You’re saying that one could, in principle, build a computer that has all the externally-observable mental abilities of a human being, but that it would not be conscious. And you’re also saying that human consciousness exercises causal influence on the universe. But influence on what? If it influences human behavior, then human behavior must be different from computer behavior, but this violates the hypothesis that a computer can demonstrate human-like behavior.

From where I’m sitting, it sure feels like my consciousness influences my behavior. If so, how can a computer exhibit human-like behavior if it doesn’t have the same sort of influence over its behavior?

With good enough cameras and a high-quality display, you could build something that has all the externally-observable “reflective” capabilities we normally associate with a mirror, but didn’t actually, say, reverse the polarization of “reflected” circularly polarized light. I could describe the system to you, and you would know this without having to run the experiment, because you understand that the underlying causes leading to the same experimental behavior are different in relevant ways.

With consciousness, we don’t know what the relevant ways are, but we do know enough about the causal properties underlying the functioning of computers to explain all of their behavior without invoking it. Why then would we assume it when we have no evidence it’s there? I genuinely don’t understand how this argument is meaningfully different from saying that if you have an android showing all the high-level external behaviors of a human, cognitive abilities and all, then if you cracked its skull open you should expect to find a tasty meal that will give you kuru.

Things that would make the story different:

  • If a computer intelligence were built by trying to faithfully copy, e.g., the cellular architecture of brains, and it behaved the same at the macro scale
  • If a computer intelligence independently (without getting it from interaction with humans or having it programmed in or anything) started talking about having subjective first-person experiences

In those cases, I’d lean toward some kind of panpsychism where (proto?)consciousness is an ever-present partner to physical processes, with the nature of the partnership depending on information flows in some way (basically a more sophisticated IIT). Notably, though, even in that case, or under true epiphenomenalism, it would still be true that our modern scientific systems suffer from the hard problem.

Exactly.

When I talk about the “same” or similar behavior, I mean the high-level, externally visible behavior. Not the same behavior all the way down to the subatomic level.

Obviously, if consciousness is causally efficacious (and if it weren’t, how would our tongues be flapping about it?), there has got to be some difference between the properties of a conscious intellect and an unconscious intellect. And there are obvious physical differences between computer architecture and the architecture of the human brain. I am saying that these differences leave ample space for the one to involve the influence of consciousness and the other not to, even though the high-level behavior may be similar.

Again, just as a “human calculator” and a pocket calculator can exhibit similar calculating ability (in some domains), without you expecting that if you cracked open the pocket calculator, you would find meat inside. The high-level similarity leaves open ample possibility for differences in low-level implementation.
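
To make the calculator analogy concrete, here is a minimal sketch (in Python; the function names are mine, purely for illustration) of two implementations with identical externally-observable behavior and entirely different internals:

```python
# Two ways to add non-negative integers: one delegates to the built-in
# adder, the other carries bits one step at a time. The function names
# are illustrative, not drawn from the discussion above.

def add_builtin(a: int, b: int) -> int:
    """The 'pocket calculator': uses the machine adder directly."""
    return a + b

def add_carry(a: int, b: int) -> int:
    """The 'human calculator': propagates carries explicitly,
    using only bitwise operations (non-negative inputs)."""
    while b:
        carry = a & b      # positions where a carry is generated
        a = a ^ b          # sum, ignoring carries
        b = carry << 1     # shift carries into place for the next pass
    return a

# Externally indistinguishable on this domain:
assert all(add_builtin(x, y) == add_carry(x, y)
           for x in range(64) for y in range(64))
```

Same input-output behavior; completely different causal story inside.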

And I agree with @shlevy that we have a pretty thorough understanding of how the logic gates that make up a computer physically work. Their operation is completely predictable on the basis of known physical laws that don’t involve any kind of irreducibly top-down emergent forces. Of course, we don’t have advanced AI yet, but if one were built from essentially the same kinds of parts, just with faster processing and more complex code, there wouldn’t be any reason to posit the introduction of unknown forces at work. If the AI were built with some weird poorly-understood quantum-mechanical CPU, then perhaps it would be a different story.
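
For what it’s worth, the reductionist point about gates can be made concrete with a toy sketch (Python; NAND is the standard universal gate, and the decompositions below are the textbook ones):

```python
# One deterministic primitive...
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# ...from which the usual gates follow (standard textbook identities):
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# A half adder: the first rung on the ladder from gates to arithmetic.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)   # (sum bit, carry bit)

# The behavior is fixed entirely by the primitive: enumerate every
# input, and nothing in the outputs calls for positing extra forces.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Everything a conventional CPU does is stacked compositions of operations like these, which is what makes its behavior predictable from known physics alone.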

I feel like you’re undermining yourself when you have both the statements (paraphrased) “Consciousness is causally efficacious” and “Conscious and unconscious systems can have similar high-level behavior.”

Honestly I feel as though the entire game was up when Turing Completeness became a thing.
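
To spell out why (with a toy Turing machine in Python; this particular machine, a binary incrementer, is invented for the example): once a substrate is universal, any computable input-output behavior can be hosted on it, so high-level behavior alone cannot tell you what the substrate is.

```python
# A minimal Turing machine runner. The rules table below implements
# binary increment, with the head starting on the rightmost digit;
# both the machine and its state names are made up for illustration.

def run_tm(tape, rules, state="start", head=0, blank="_"):
    cells = dict(enumerate(tape))            # sparse, two-way-infinite tape
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Flip trailing 1s to 0 until a 0 (or blank) flips to 1: binary +1.
rules = {
    ("start", "1"): ("0", "L", "start"),
    ("start", "0"): ("1", "L", "halt"),
    ("start", "_"): ("1", "L", "halt"),
}

print(run_tm("1011", rules, head=3))  # prints 1100, i.e. 11 + 1 = 12
```

The same table could be executed by silicon, vacuum tubes, or a clerk with pencil and paper; the input-output behavior would be identical in each case.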
